Tag: Single

The Single Page App Morality Play

Baldur Bjarnason brings some baby bear porridge to the discussion of Single Page App (SPA) vs. Multi Page App (MPA).

Single-Page-Apps can be fantastic. Most teams will mess them up because most teams operate in dysfunctional organisations. Multi-Page-Apps can also be fantastic, both in highly functional organisations that can apply them when and where they are appropriate and in dysfunctional ones, as they enforce a limit to project scope.

Both approaches can be good and bad. Baldur makes the point that management plays the biggest role in ensuring either outcome.

Another truth: there are an awful lot of projects out there that aren’t all-in on either approach, but are a mixture. I feel that. I also feel like there is a strong desire to declare a winner or have a simple checklist to decide which approach to go for, and that just ain’t gonna happen because the landscape is too shifty and there are just too many factors. Technology: it’s complicated.

A recent episode of Web Rush went into all this as well.





Developers and Designers Work on a Single Source of Truth with UXPin

(This is a sponsored post.)

There is a conversation that has been percolating for as long as I’ve been in the web design and development industry. It’s centered around the conflict between design tools and development tools. The final product of web design is often a mockup. The old joke was that web developers make websites and web designers make paintings of websites. That disconnect is a source of immense friction. Which is the source of truth?

What if there really could be a single source of truth? What if the design tool worked on the same exact code as the production website? The latest chapter in this epic conversation is UXPin.

Let’s set up the facts so you can see this all play out.

UXPin is an in-browser design tool.

UXPin is a powerful design tool with all the features you’d expect, particularly focused on digital screen-based design.

The fact that it is in-browser is extra great here. Designing websites… on a website is an obvious and natural fit. It means what you are looking at is how it’s going to look. This is particularly important with typography! The implementer of a component can see the exact colors (in the right formats) being used, as well as the exact pixel dimensions, etc.

This is laid out nicely by Ania Kubów in a video about UXPin.

Over a decade ago, Jason Santa Maria thought a lot about what a next-gen design tool would look like. Could we just use the browser directly?

I don’t think the browser is enough. A web designer jumping into the browser before tackling the creative and messaging problems is akin to an architect hammering pieces of wood together and then measuring afterwards. The imaginative process is cut short by the tools at hand; and it’s that imagination—or spark—at the beginning of a design that lays the path for everything that follows.

Jason Santa Maria, “A Real Web Design Application”

Perhaps not the browser directly, but a design tool within a browser, that could be the best of both worlds:

An application like this could change the process of web design considerably. Most importantly, it wouldn’t be a proxy application that we use to simulate the way webpages look—it would already speak the language of the web. It would truly be designing in the browser.

It’s so cool to see this play out in a way that aligns with what great designers envisioned before it was possible, and with new aspects that meld with today’s technological possibilities.

You can work on your own React components within UXPin.

This is where the source of truth magic can happen. It’s one thing if a design tool can output a React (or any other framework) component. That’s a neat trick. But it’s likely to be a one-way trip. Components in real-world projects are full of other things that aren’t entirely the domain of design. Perhaps a component uses a hook to return the current user’s permissions and disable a button if they don’t have access. The disabled button has an element of design to it, but most of that code does not.
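To make that concrete, here’s a rough sketch of that kind of component; the useUserPermissions hook, its return value, and the prop names are all hypothetical, not something from UXPin or the original post:

import React from "react";
// Hypothetical hook returning the current user's permission strings
import { useUserPermissions } from "./useUserPermissions";

// The color and label are design concerns a design tool could own;
// the permission check is application logic it should leave alone.
function PublishButton({ color = "green", label = "Publish" }) {
  const permissions = useUserPermissions();
  const canPublish = permissions.includes("publish");

  return (
    <button style={{ background: color }} disabled={!canPublish}>
      {label}
    </button>
  );
}

export default PublishButton;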

It’s impractical to have a design tool that can’t respect the other code in a component and just leave it alone. Essentially, it’s not really a design tool if it exports components but can’t import them.

This is where UXPin Merge comes in.

Now, fair is fair, this is going to take a little work to set up. It might just be a couple of hours, or it might take a few days for a complete design system. UXPin Merge only works with React and uses a webpack configuration to integrate it.

Once you’ve got it going, the components you use in UXPin are very literally the components you use to build your production website.

It’s pretty impressive really, to see a design tool digest pre-built components and allow them to be used on an entirely new canvas for prototyping.

They’ve got lots of help for you on getting this going on your project.

As it should, this is likely to influence how you build components.

Components tend to have props, and props control things like design and content inside. UXPin gives you a UI for the props, meaning you have total control over the component.

<LineChart
  barColor="green"
  height="200"
  width="500"
  showXAxis="false"
  showYAxis="true"
  data={[ ... ]}
/>

Knowing that, you might give yourself a prop interface for your components that provides you with lots of design control. For example, integrating theme switching.
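For example, a component could expose a theme prop that maps to a small set of design tokens, so the theme can be flipped right from the props UI. This is only a sketch; the token values and prop names here are made up, not from UXPin’s docs:

import React from "react";

// Hypothetical design tokens keyed by theme name
const themes = {
  light: { background: "#fff", color: "#111" },
  dark: { background: "#111", color: "#fff" },
};

// Exposing `theme` as a prop gives a designer a design-level control
// without touching the component's internals.
function Card({ theme = "light", title, children }) {
  return (
    <div style={themes[theme]}>
      <h2>{title}</h2>
      {children}
    </div>
  );
}

export default Card;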

This is all even faster with Storybook.

Another awfully popular tool in JavaScript-components-land to view your components is Storybook. It’s not a design tool like UXPin—it’s more like a zoo for your components. You might already have it set up, or you might find value in using Storybook as well.

The great news? UXPin Merge works together awesomely with Storybook. It makes integration super quick and easy. Plus, it then supports any framework, like Angular, Svelte, and Vue, in addition to React.


UXPin CEO Marcin Treder had a strong vision:

What if designers could use the very same components used by engineers and they’re all stored in a shared design system (with accurate documentation and tests)? Many of the frustrating and expensive misunderstandings between designers and engineers would stop happening.

And a plan:

  1. Connect to any Git repo.
  2. Learn about all the components in that repo.
  3. Make those components available to the UXPin visual editor.
  4. Watch for any changes to the repo and reflect those changes in the visual editor.
  5. Let designers design and deliver accurate specs to developers for using those components.

And that’s what they’ve pulled off here.




Implementing a single GraphQL across multiple data sources

(This is a sponsored post.)

In this article, we will discuss how we can apply schema stitching across multiple Fauna instances. We will also discuss how to combine other GraphQL services and data sources with Fauna in one graph.

What is Schema Stitching?

Schema stitching is the process of creating a single GraphQL API from multiple underlying GraphQL APIs.

Where is it useful?

While building large-scale applications, we often break down various functionalities and business logic into microservices. This ensures separation of concerns. However, there will be times when our client applications need to query data from multiple sources. The best practice is to expose one unified graph to all your client applications. However, this can be challenging, as we do not want to end up with a tightly coupled, monolithic GraphQL server. If you are using Fauna, each database has its own native GraphQL API. Ideally, we would want to leverage Fauna’s native GraphQL as much as possible and avoid writing application-layer code. However, if we are using multiple databases, our front-end application would have to connect to multiple GraphQL instances. Such an arrangement creates tight coupling. We want to avoid this in favor of one unified GraphQL server.

To remedy these problems, we can use schema stitching. Schema stitching will allow us to combine multiple GraphQL services into one unified schema. In this article, we will discuss:

  1. Combining multiple Fauna instances into one GraphQL service
  2. Combining Fauna with other GraphQL APIs and data sources
  3. Building a serverless GraphQL gateway with AWS Lambda

Combining multiple Fauna instances into one GraphQL service

First, let’s take a look at how we can combine multiple Fauna instances into one GraphQL service. Imagine we have three Fauna database instances: Product, Inventory, and Review. Each is independent of the others. Each has its own graph (we will refer to them as subgraphs). We want to create a unified graph interface and expose it to the client applications. Clients will be able to query any combination of the downstream data sources.

We will call the unified graph interface our gateway service. Let’s go ahead and write this service.

We’ll start with a fresh Node project. We will create a new folder, then navigate inside it and initialize a new Node app with the following commands.

mkdir my-gateway
cd my-gateway
npm init --yes

Next, we will create a simple express GraphQL server. So let’s go ahead and install the express, express-graphql, and graphql packages with the following command.

npm i express express-graphql graphql --save

Creating the gateway server

We will create a file called gateway.js. This is our main entry point to the application. We will start by creating a very simple GraphQL server.

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
  hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP((req) => ({
    schema,
    rootValue,
    graphiql: true,
  })),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

In the code above, we created a bare-bones express-graphql server with a sample query and a resolver. Let’s test our app by running the following command.

node gateway.js

Navigate to http://localhost:4000/graphql and you will be able to interact with the GraphQL playground.

Creating Fauna instances

Next, we will create three Fauna databases. Each of them will act as a GraphQL service. Let’s head over to fauna.com and create our databases. I will name them Product, Inventory, and Review.

Once the databases are created we will generate admin keys for them. These keys are required to connect to our GraphQL APIs.

Let’s create three distinct GraphQL schemas and upload them to the respective databases. Here’s how our schemas will look.

# Schema for Inventory database
type Inventory {
  name: String
  description: String
  sku: Float
  availableLocation: [String]
}

# Schema for Product database
type Product {
  name: String
  description: String
  price: Float
}

# Schema for Review database
type Review {
  email: String
  comment: String
  rating: Float
}

Head over to the respective databases, select GraphQL from the sidebar, and import the schema for each database.

Now we have three GraphQL services running on Fauna. We can go ahead and interact with these services through the GraphQL playground inside Fauna. Feel free to enter some dummy data if you are following along. It will come in handy later while querying multiple data sources.

Setting up the gateway service

Next, we will combine these into one graph with schema stitching. To do so, we need a gateway server. We will continue working in our gateway.js file, using a couple of libraries from graphql-tools to stitch the graphs.

Let’s go ahead and install these dependencies on our gateway server.

npm i @graphql-tools/schema @graphql-tools/stitch @graphql-tools/wrap cross-fetch --save 

In our gateway, we are going to create a new generic function called makeRemoteExecutor. This function is a factory function that returns another function. The returned asynchronous function will make the GraphQL query API call.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
// print and fetch are used by makeRemoteExecutor below
const { buildSchema, print } = require('graphql');
const { fetch } = require('cross-fetch');

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  }
}

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
  hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    return {
      schema,
      rootValue,
      graphiql: true,
    }
  }),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

As you can see above, makeRemoteExecutor takes two arguments. The url argument specifies the remote GraphQL URL, and the token argument specifies the authorization token.

We will create another function called makeGatewaySchema. In this function, we will make the proxy calls to the remote GraphQL APIs using the previously created makeRemoteExecutor function.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { introspectSchema } = require('@graphql-tools/wrap');
const { stitchSchemas } = require('@graphql-tools/stitch');
const { fetch } = require('cross-fetch');
const { print } = require('graphql');

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  }
}

async function makeGatewaySchema() {

  const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQZPUejACQ2xuvfi50APAJ397hlGrTjhdXVta');
  const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');
  const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');

  return stitchSchemas({
    subschemas: [
      {
        schema: await introspectSchema(reviewExecutor),
        executor: reviewExecutor,
      },
      {
        schema: await introspectSchema(productExecutor),
        executor: productExecutor
      },
      {
        schema: await introspectSchema(inventoryExecutor),
        executor: inventoryExecutor
      }
    ],
    typeDefs: 'type Query { heartbeat: String! }',
    resolvers: {
      Query: {
        heartbeat: () => 'OK'
      }
    }
  });
}

// ...

We are using the makeRemoteExecutor function to make our remote GraphQL executors. We have three remote executors here, one each pointing to the Product, Inventory, and Review services. As this is a demo application, I have hardcoded the admin API keys from Fauna directly in the code. Avoid doing this in a real application; these secrets should not be exposed in code at any time. Please use environment variables or a secrets manager to pull in these values at runtime.
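For example, in Node the keys could be read from environment variables instead; the variable names below are placeholders, not from the original demo:

// Read the Fauna admin keys from the environment rather than hardcoding them.
// PRODUCT_DB_KEY, INVENTORY_DB_KEY, and REVIEW_DB_KEY are placeholder names.
const productExecutor = makeRemoteExecutor('https://graphql.fauna.com/graphql', process.env.PRODUCT_DB_KEY);
const inventoryExecutor = makeRemoteExecutor('https://graphql.fauna.com/graphql', process.env.INVENTORY_DB_KEY);
const reviewExecutor = makeRemoteExecutor('https://graphql.fauna.com/graphql', process.env.REVIEW_DB_KEY);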

As you can see in the code above, we are returning the output of the stitchSchemas function from @graphql-tools. The function accepts an options object with a subschemas property, where we can pass in an array of all the subgraphs we want to fetch and combine. We are also using a function called introspectSchema from graphql-tools, which uses the executor to run an introspection query against each downstream service and build its schema.

You can learn more about these functions on the graphql-tools documentation site.

Finally, we need to call the makeGatewaySchema function. We can remove the previously hardcoded schema from our code and replace it with the stitched schema.

// gateway.js

// ...

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return {
      schema,
      context: { authHeader: req.headers.authorization },
      graphiql: true,
    }
  }),
);

// ...

When we restart our server and go back to localhost we will see that queries and mutations from all Fauna instances are available in our GraphQL playground.

Let’s write a simple query that will fetch data from all Fauna instances simultaneously.
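The exact fields depend on the schemas Fauna generates from what we imported; assuming Fauna’s auto-generated findXByID queries and placeholder document IDs, such a query might look roughly like this:

# Field names are auto-generated by Fauna from the imported schemas;
# the IDs below are placeholders.
query ProductWithInventoryAndReview {
  findProductByID(id: "336049109680849088") {
    name
    price
  }
  findInventoryByID(id: "336049241182568640") {
    sku
    availableLocation
  }
  findReviewByID(id: "336049366700720320") {
    rating
    comment
  }
}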

Stitch third party GraphQL APIs

We can stitch third-party GraphQL APIs into our gateway as well. For this demo, we are going to stitch the SpaceX open GraphQL API with our services.

The process is the same as above. We create a new executor and add it to our sub-graph array.

// ...

async function makeGatewaySchema() {

  const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdRZVpACRMEEM1GKKYQxH2Qa4TzLKusTW2gN');
  const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdSdXiACRGmgJgAEgmF_ZfO7iobiXGVP2NzT');
  const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdR0kYACRWKJJUUwWIYoZuD6cJDTvXI0_Y70');

  const spacexExecutor = await makeRemoteExecutor('https://api.spacex.land/graphql/')

  return stitchSchemas({
    subschemas: [
      {
        schema: await introspectSchema(reviewExecutor),
        executor: reviewExecutor,
      },
      {
        schema: await introspectSchema(productExecutor),
        executor: productExecutor
      },
      {
        schema: await introspectSchema(inventoryExecutor),
        executor: inventoryExecutor
      },
      {
        schema: await introspectSchema(spacexExecutor),
        executor: spacexExecutor
      }
    ],
    typeDefs: 'type Query { heartbeat: String! }',
    resolvers: {
      Query: {
        heartbeat: () => 'OK'
      }
    }
  });
}

// ...

Deploying the gateway

To make this a true serverless solution, we should deploy our gateway to a serverless function. For this demo, I am going to deploy the gateway into an AWS Lambda function. Netlify and Vercel are two other alternatives to AWS Lambda.

I am going to use the serverless framework to deploy the code to AWS. Let’s install the dependencies for it.

npm i -g serverless # if you don't have the serverless framework installed already
npm i serverless-http body-parser --save

Next, we need to make a configuration file called serverless.yaml:

# serverless.yaml

service: my-graphql-gateway

provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1

functions:
  app:
    handler: gateway.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

Inside serverless.yaml, we define information such as the cloud provider, runtime, and the path to our Lambda function. Feel free to take a look at the official documentation for the serverless framework for more in-depth information.

We will need to make some minor changes to our code before we can deploy it to AWS.

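The original post highlights the modified gateway code at this point. A minimal sketch of what those changes likely look like, reusing the makeGatewaySchema function from earlier, is:

// gateway.js (adapted for AWS Lambda)
const express = require('express');
const bodyParser = require('body-parser');
const serverless = require('serverless-http');
const { graphqlHTTP } = require('express-graphql');

const app = express();
app.use(bodyParser.json());

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return {
      schema,
      context: { authHeader: req.headers.authorization },
      graphiql: true,
    };
  }),
);

// Instead of calling app.listen(), export a Lambda-compatible handler
module.exports.handler = serverless(app);

The exported handler is what the gateway.handler entry in serverless.yaml points to.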

Notice the code above. We added the body-parser library to parse JSON request bodies. We also added the serverless-http library; wrapping the Express app instance with its serverless function takes care of all the underlying Lambda configuration.

We can run the following command to deploy this to AWS Lambda.

serverless deploy

This will take a minute or two to deploy. Once the deployment is complete we will see the API URL in our terminal.

Make sure you put /graphql at the end of the generated URL. (i.e. https://gy06ffhe00.execute-api.us-east-1.amazonaws.com/dev/graphql).

There you have it. We have achieved complete serverless nirvana 😉. We are now running three Fauna instances independent of each other stitched together with a GraphQL gateway.

Feel free to check out the code for this article here.


Schema stitching is one of the most popular solutions for breaking down monoliths and achieving separation of concerns between data sources. However, there are other solutions, such as Apollo Federation, which works in much the same way. If you would like to see an article like this about Apollo Federation, please let us know in the comment section. That’s it for today, see you next time.




From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo

I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.

As the code was reformatted time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”

Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.

Stage 1: Single repo

The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.

It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.

Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.

I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.

A single repository hosting all our code.

Issues with the single repo

While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.

So it was time to start coding PHP the right way.

Stage 2: Multirepo

Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.

Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we are unable to have multiple PHP packages, each with its own composer.json, hosted on the same repo.

As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.

In the multirepo, each package is hosted on its own repo.

Issues with the multirepo

Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.

The reason the project was split into so many packages is that I also decoupled the code from WordPress (so that it could also be used with other CMSs), which requires every package to be very granular, dealing with a single goal.

Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.

I ended up not using feature branches at all and simply pointed every package to the dev-master version of its dependencies (i.e. I was not versioning packages). I wouldn’t be surprised to learn that this is common practice.
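In practice, that meant each package’s composer.json carried requirements along these lines (using package names mentioned earlier):

{
  "require": {
    "getpop/root": "dev-master",
    "getpop/component-model": "dev-master"
  }
}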

There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos, and running git commit -m "some message" on the project executes that same git commit -m "some message" command on every repo, keeping them in sync with each other.

However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.

So, it was time to bring all packages to the same repo.

Stage 3: Monorepo

The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.

The monorepo hosts multiple packages.

As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (at one repo per package) to publish them to Packagist for distribution and consumption.

The monorepo hosts the source code, multiple repos distribute it.

Switching to the Monorepo

Switching to the monorepo approach involved the following steps:

First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.

The monorepo layers indicate the broader project.

Then, I copied all source code from all repos (getpop/engine, getpop/component-model, etc.) to the corresponding location for that package in the monorepo (i.e. layers/Engine/packages/engine, layers/Engine/packages/component-model, etc).

I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.

Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.

The downstream repo’s “READ ONLY” label is located in the repo description.

I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:

{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}

…which returned a list like this:

{   "data": {     "repositoryOwner": {       "repositories": {         "nodes": [           {             "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=",             "name": "hooks",             "description": "Contracts to implement hooks (filters and actions) for PoP"           },           {             "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=",             "name": "root",             "description": "Declaration of dependencies shared by all PoP components"           },           {             "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=",             "name": "engine",             "description": "Engine for PoP"           }         ]       }     }   } }

From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk="
      description: "[READ ONLY] Engine for PoP"
    }
  ) {
    repository {
      description
    }
  }
}

Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.
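For reference, the manual route would look roughly like this for each package; the path and branch names here are illustrative:

# Extract the package's history into its own branch...
git subtree split --prefix=layers/Engine/packages/engine -b engine-only

# ...then push that branch to the package's downstream repo
git push git@github.com:getpop/engine.git engine-only:master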

I chose Monorepo builder, which is written in PHP. I like this tool because I can customize it with my own functionality. Other popular tools are the Git Subtree Splitter (written in Go) and Git Subsplit (bash script).

What I like about the Monorepo

I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.

The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.

To generate a WordPress plugin for distribution, I had created a generate_plugins.yml workflow that triggers when creating a release. With the monorepo, I have adapted it to generate not just one, but multiple plugins, configured via PHP through a custom command in plugin-config-entries-json, and invoked like this in GitHub Actions:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

This way, I can generate my GraphQL API plugin and other plugins hosted in the monorepo all at once. The configuration defined via PHP is this one.

class PluginDataSource {
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API for WordPress
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
        'zip_file' => 'graphql-api.zip',
        'main_file' => 'graphql-api.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'graphql-api-for-wp-dist',
      ],
      // GraphQL API - Extension Demo
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
        'zip_file' => 'graphql-api-extension-demo.zip',
        'main_file' => 'graphql-api-extension-demo.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'extension-demo-dist',
      ],
    ];
  }
}

When creating a release, the plugins are generated via GitHub Actions.

Plugins generated when a release is created.

If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.

Issues with the Monorepo

I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.

That said, there are downsides to the monorepo:

  • We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries using MIT and the plugin using GPLv2. So, I decided to simplify it using the lowest common denominator between them, which is GPLv2.
  • There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors that were attracted to a specific project can easily get confused.
  • When tagging the code, all packages are versioned with that tag, whether or not their particular code was updated. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
  • The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
The issues board can become chaotic without labels that are associated with projects.

All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.

I’m planning to create a “PRO” version of my plugin, which I plan to host in a private repo. However, a repo is either public or private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the ability to configure it via PHP. And I want to keep it DRY, avoiding copy/pastes.

It was time to switch to the multi-monorepo.

Stage 4: Multi-monorepo

The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:

The upstream monorepo is contained within the downstream monorepo.

This approach satisfies my requirements by:

  • having the public repo leoloso/PoP be the upstream monorepo, and
  • creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
A private monorepo can access the files from a public monorepo.

leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):

This figure shows how the public monorepo is embedded within the private monorepo in the GitHub project.

Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.

In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:

class PluginDataSource extends UpstreamPluginDataSource {
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API PRO
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
        'zip_file' => 'graphql-api-pro.zip',
        'main_file' => 'graphql-api-pro.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-pro-dist',
      ],
      // GraphQL API Extensions
      // Google Translate
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
        'zip_file' => 'graphql-api-google-translate.zip',
        'main_file' => 'graphql-api-google-translate.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-google-translate-dist',
      ],
      // Events Manager
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
        'zip_file' => 'graphql-api-events-manager.zip',
        'main_file' => 'graphql-api-events-manager.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-events-manager-dist',
      ],
    ];
  }
}

GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.

To copy the workflows over, a simple Composer script can do:

{   "scripts": {     "copy-workflows": [       "php -r "copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');"",       "php -r "copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');""     ]   } }

Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:

composer copy-workflows

Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:

This figure shows the PRO plugins generated in GitHub Actions.

I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.

Issues with the multi-monorepo

First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.

When doing git checkout, the downstream monorepo must pass the --recurse-submodules option so as to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

As a result, we have to input submodules: recursive to the downstream workflow, but not to the upstream one even though they both use the same source file.

To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP receive the value for submodules via an environment variable, CHECKOUT_SUBMODULES, like this:

env:   CHECKOUT_SUBMODULES: "";  jobs:   provide_data:     steps:       - uses: actions/checkout@v2         with:           submodules: $ {{ env.CHECKOUT_SUBMODULES }} 

The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:

env:   CHECKOUT_SUBMODULES: "recursive" 

(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)
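A hedged sketch of that sed-based alternative, appended to the copy-workflows script (GNU sed syntax shown; macOS sed needs -i ''):

{
  "scripts": {
    "copy-workflows": [
      "php -r \"copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');\"",
      "sed -i 's/CHECKOUT_SUBMODULES: \"\"/CHECKOUT_SUBMODULES: \"recursive\"/' .github/workflows/generate_plugins.yml"
    ]
  }
}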

This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. The contributors would also be completely unaware of those leakages (and they couldn’t be blamed for that). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full setup. While I share access to the public repo, I am the only one accessing the private one.

As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.

I guess I need to add a warning!

env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""

Wrapping up

My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:

  • It started as a single repo, hosting a monolithic app.
  • It became a multirepo when splitting the app into packages.
  • It was switched to a monorepo to better manage all the packages.
  • It was upgraded to a multi-monorepo to share files with a private monorepo.

Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.

Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all private plugins together. If I ever need to grant contractors access to a specific private plugin, while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.

I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.




How We Improved the Accessibility of Our Single Page App Menu

I recently started working on a Progressive Web App (PWA) for a client with my team. We’re using React with client-side routing via React Router, and one of the first elements that we made was the main menu. Menus are a key component of any site or app. That’s really how folks get around, so making it accessible was a super high priority for the team.

But in the process, we learned that making an accessible main menu in a PWA isn’t as obvious as it might sound. I thought I’d share some of those lessons with you and how we overcame them.

As far as requirements go, we wanted a menu that users could not only navigate using a mouse, but using a keyboard as well, the acceptance criteria being that a user should be able to tab through the top-level menu items, and the sub-menu items that would otherwise only be visible if a user with a mouse hovered over a top-level menu item. And, of course, we wanted a focus ring to follow the elements that have focus.

The first thing we had to do was update the existing CSS that was set up to reveal a sub-menu when a top-level menu item is hovered. We were previously using the visibility property, changing between visible and hidden on the parent container’s hovered state. This works fine for mouse users, but for keyboard users, focus doesn’t automatically move to an element that is set to visibility: hidden (the same applies for elements that are given display: none). So we removed the visibility property, and instead used a very large negative position value:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px; /* Kicking it off the page instead of hiding visibility */
}

.menu-item:hover .sub-menu {
  left: 0;
}

This works perfectly fine for mouse users. But for keyboard users, the sub menu still wasn’t visible even though focus was within that sub menu! In order to make the sub-menu visible when an element within it has focus, we needed to make use of :focus and :focus-within on the parent container:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px;
}

.menu-item:hover .sub-menu,
.menu-item:focus .sub-menu,
.menu-item:focus-within .sub-menu {
  left: 0;
}

This updated code allows the sub-menus to appear as each of the links within that menu gets focus. As soon as focus moves to the next sub-menu, the first one hides, and the second becomes visible. Perfect! We considered this task complete, so a pull request was created and it was merged into the main branch.

But then we used the menu ourselves the next day in staging to create another page and ran into a problem. Upon selecting a menu item—regardless of whether it’s a click or a tab—the menu itself wouldn’t hide. Mouse users would have to click off to the side in some white space to clear the focus, and keyboard users were completely stuck! They couldn’t hit the esc key to clear focus, nor any other key combination. Instead, keyboard users would have to press the tab key enough times to move the focus through the menu and onto another element that didn’t cause a large drop down to obscure their view.

The reason the menu would stay visible is because the selected menu item retained focus. Client-side routing in a Single Page Application (SPA) means that only a part of the page will update; there isn’t a full page reload.

There was another issue we noticed: it was difficult for a keyboard user to use our “Jump to Content” link. Web users typically expect that pressing the tab key once will highlight a “Jump to Content” link, but our menu implementation broke that. We had to come up with a pattern to effectively replicate the “focus clearing” that browsers would otherwise give us for free on a full page reload.

The first option we tried was the easiest: Add an onClick prop to React Router’s Link component, calling document.activeElement.blur() when a link in the menu is selected:

const Menu = () => {
  const clearFocus = () => {
    document.activeElement.blur();
  }

  return (
    <ul className="menu">
      <li className="menu-item">
        <Link to="/" onClick={clearFocus}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={clearFocus}>Products</Link>
        <ul className="sub-menu">
          <li>
            <Link to="/products/tops" onClick={clearFocus}>Tops</Link>
          </li>
          <li>
            <Link to="/products/bottoms" onClick={clearFocus}>Bottoms</Link>
          </li>
          <li>
            <Link to="/products/accessories" onClick={clearFocus}>Accessories</Link>
          </li>
        </ul>
      </li>
    </ul>
  );
}

This approach worked well for “closing” the menu after an item is clicked. However, if a keyboard user pressed the tab key after selecting one of the menu links, then the next link would become focused. As mentioned earlier, pressing the tab key after a navigation event would ideally focus on the “Jump to Content” link first.

At this point, we knew we were going to have to programmatically force focus to another element, preferably one that’s high up in the DOM. That way, when a user starts tabbing after a navigation event, they’ll arrive at or near the top of the page, similar to a full page reload, making it much easier to access the jump link.

We initially tried to force focus on the <body> element itself, but this didn’t work as the body isn’t something the user can interact with. There wasn’t a way for it to receive focus.

The next idea was to force focus on the logo in the header, as this itself is just a link back to the home page and can receive focus. However, in this particular case, the logo was below the “Jump To Content” link in the DOM, which means that a user would have to shift + tab to get to it. No good.

We finally decided that we had to render an interactable element (for example, an anchor element) in the DOM, at a point that’s above the “Jump to Content” link. This new anchor element would be styled so that it’s invisible and users are unable to focus on it using “normal” web interactions (i.e. it’s taken out of the normal tab flow). When a user selects a menu item, focus would be programmatically forced to this new anchor element, which means that pressing tab again would focus directly on the “Jump to Content” link. It also meant that the sub-menu would immediately hide itself once a menu item is selected.

const App = () => {
  const focusResetRef = React.useRef();

  const handleResetFocus = () => {
    focusResetRef.current.focus();
  };

  return (
    <Fragment>
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      <a href="#main" className="jump-to-content-a11y-styles">Jump To Content</a>
      <Menu onSelectMenuItem={handleResetFocus} />
      ...
    </Fragment>
  )
}

Some notes on this new “Focus Reset” anchor element:

  • href is set to javascript:void(0) so that if a user manages to interact with the element, nothing actually happens. For example, if a user presses the return key immediately after selecting a menu item, that will trigger the interaction. In that instance, we don’t want the page to do anything, or the URL to change.
  • tabIndex is set to -1 so that a user can’t “normally” move focus to this element. It also means that the first time a user presses the tab key upon loading a page, this element won’t be focused, but the “Jump To Content” link instead.
  • style simply moves the element out of the viewport. Setting position: fixed ensures it’s taken out of the document flow, so there isn’t any vertical space allocated to the element.
  • aria-hidden tells screen readers that this element isn’t important, so don’t announce it to users

But we figured we could improve this even further! Let’s imagine we have a mega menu, and the menu doesn’t hide automatically when a mouse user clicks a link. That’s going to cause frustration. A user will have to precisely move their mouse to a section of the page that doesn’t contain the menu in order to clear the :hover state, and therefore allow the menu to close.

What we need is to “force clear” the hover state. We can do that with the help of React and a clearHover class:

// Menu.jsx
const Menu = (props) => {
  const { onSelectMenuItem } = props;
  const [clearHover, setClearHover] = React.useState(false);

  const closeMenu = () => {
    onSelectMenuItem();
    setClearHover(true);
  }

  React.useEffect(() => {
    let timeout;
    if (clearHover) {
      timeout = setTimeout(() => {
        setClearHover(false);
      }, 0); // Adjust this timeout to suit the application's needs
    }
    return () => clearTimeout(timeout);
  }, [clearHover]);

  return (
    <ul className={`menu ${clearHover ? "clearHover" : ""}`}>
      <li className="menu-item">
        <Link to="/" onClick={closeMenu}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={closeMenu}>Products</Link>
        <ul className="sub-menu">
          {/* Sub Menu Items */}
        </ul>
      </li>
    </ul>
  );
}

This updated code hides the menu immediately when a menu item is clicked. It also hides immediately when a keyboard user selects a menu item. Pressing the tab key after selecting a navigation link moves the focus to the “Jump to Content” link.

At this point, our team had updated the menu component to a point where we were super happy. Both keyboard and mouse users get a consistent experience, and that experience follows what a browser does by default for a full page reload.

Our actual implementation is slightly different than the example here so we could use the pattern on other projects. We put it into a React Context, with the Provider set to wrap the Header component, and the Focus Reset element being automatically added just before the Provider’s children. That way, the element is placed before the “Jump to Content” link in the DOM hierarchy. It also allows us to access the focus reset function with a simple hook, instead of having to prop drill it.
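Roughly, that context-based version might look like this; the names are ours, not from a published library:

import React from "react";

const FocusResetContext = React.createContext(() => {});

// Any descendant can grab the focus-reset function with this hook,
// so there's no need to prop drill it.
export const useFocusReset = () => React.useContext(FocusResetContext);

export const FocusResetProvider = ({ children }) => {
  const focusResetRef = React.useRef(null);

  const resetFocus = React.useCallback(() => {
    if (focusResetRef.current) focusResetRef.current.focus();
  }, []);

  return (
    <FocusResetContext.Provider value={resetFocus}>
      {/* Rendered before the "Jump to Content" link in the DOM */}
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >
        Focus Reset
      </a>
      {children}
    </FocusResetContext.Provider>
  );
};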

We have created a Code Sandbox that allows you to play with the three different solutions we covered here. You’ll definitely see the pain points of the earlier implementation, and then see how much better the end result feels!

We would love to hear feedback on this implementation! We think it’s going to work well, but it hasn’t been released into the wild yet, so we don’t have definitive data or user feedback. We’re certainly not a11y experts, just doing our best with what we do know, and are very open and willing to learn more on the topic.




A Whole Website in a Single HTML File

I can’t stop thinking about this site. It looks like pretty standard fare: a website with links to different pages. Nothing to write home about except that… the whole website is contained within a single HTML file.

What about clicking the navigation links, you ask? Each link merely shows and hides certain parts of the HTML.

<section id="home">   <!-- home content goes here --> </section> <section id="about">   <!-- about page goes here --> </section>

Each <section> is hidden with CSS:

section { display: none; }

Each link in the main navigation points to an anchor on the page:

<a href="#home">Home</a> <a href="#about">About</a>

And once you click a link, the <section> for that particular link is displayed via:

section:target { display: block; }

See that :target pseudo selector? That’s the magic! Sure, it’s been around for years, but this is a clever way to use it for sure. Most times, it’s used to highlight the anchor on the page once an anchor link to it has been clicked. That’s a handy way to help the user know where they’ve just jumped to.

Anyway, using :target like this is super smart stuff! It ends up looking like just a regular website when you click around:





React Single File Components Are Here

Shawn Wang is talking about RedwoodJS here:

…  it is the first time React components are being expressed in a single file format with explicit conventions.

Which is the RedwoodJS idea of Cells. To me, it feels like a slightly cleaner version of how Apollo wants you to do it with useQuery. Shawn makes that same connection and I know RedwoodJS uses Apollo, so I’m thinking it’s some nice semantic sugar.

There is a lot of cool stuff going on in RedwoodJS. “A highly opinionated stack,” if it’s helpful to think of it that way, but Tom made clear in our last episode of ShopTalk that it’s not like Rails. Not that Rails is bad (it isn’t), but that this new world can do things in new and better ways that make for long-term healthy software.




A Whole Bunch of Places to Consider Contrast in a Single Paragraph

When we’re thinking about choosing colors in design, we’re always thinking about accessibility. Whenever colors touch, there is contrast and, if we’re talking about the color contrast of text, it needs to be high enough to be readable. This benefits people with a variety of visual disabilities, but also everyone appreciates easily readable text.

Let’s look at the color contrast considerations of just a single paragraph!

A link in a paragraph is probably set in another color to distinguish it as a link.

This link color has insufficient color contrast.

Focusable elements, like links, need to have focus styles as well. By default, we’ll probably get a fuzzy blue outline, so we’ll need to check that that has sufficient color contrast. And if we customize it, that customization will need good color contrast as well.

We’ll also need a hover state for mouse users. Here, I’ll use a background effect that covers about half the text. When you do something like that, the effect needs to be strong enough to be noticeable on hover, and the contrast needs to work on both backgrounds.

It’s not ultra common, but you can style :visited links as well. The styling is somewhat limited (you can’t change opacity or background, for example, because it’s a security concern), but you can change the color. If you do that, it also needs proper contrast. Perhaps we could yank the color differentiation away with color: inherit; to somewhat de-emphasize the link but still indicate it.

Another thing to consider is people highlighting text. On my machine, I have it set up to highlight in an orange color by default (it’s normally a light blue on macOS).

Hopefully, the link colors hold up under any highlight color choice a user might have. They tend to be very light shades.

The text selection colors on macOS Catalina

You can control text selection color through CSS as well with ::selection { background: purple; }. If you’ve done that, you have to check all your colors again to make sure you’ve gotten sufficient contrast.

You can try to take measures into your own hands by changing both color and background.

But note that the link inside the text has lost some of its contrast here. We could keep going down the rabbit hole by setting new colors on a::selection and every other element that has its own colorations within the text.

I’d say a good rule of thumb is to go very light if you’re going to do a custom highlighting color and then not do any further customizations specifically for the selected text.

The idea for this post came from some bug reports I’ve gotten from people reporting that links are hard to read on this site when highlighting text, then focusing a link. I tried fixing it by getting more elaborate, but there are limits to what you can do.

::selection a:focus { /* nope */ }
a:focus::selection { /* nope */ }
a::selection:focus { /* nope */ }

It’s better handled by doing less than more.

I recently read Ethan Muller’s Designing for Design Systems post and he makes a similar point about contrast, but across a color spectrum rather than across contexts. For example, your blue link color might work just fine on a few of your design system colors, but not so much on others.

There are a couple things that might help with this.

  • The generic advice of “every time you declare a background-color, declare a color” might help.
  • I’ve used parent-level classes like on-light to change colors of things broadly inside light-colored containers.
  • I’ve used a Sass mixin to help make it easier to change links and the link states, so I’m more apt to do it when changing backgrounds.
@mixin linkStyleText($linkColor) {
  a {
    // Update generic link styles with $linkColor
    &:hover,
    &:focus {
      // Also change the hover/focus styles, probably by altering $linkColor programmatically
    }
  }
}

.variation {
  background: $someNewBGColor;
  @include linkStyleText($someNewLinkColor);
}

Well, thanks for coming on this journey with me. It just occurred to me that color contrast isn’t quite as simple as checking a single link color on a background to make sure it passes. There might be a dozen or more things to check depending on how many things you colorize in your body copy.

The font in these screenshots is Ibarra Real Nova, which was just recently released on Google Fonts.

The post A Whole Bunch of Places to Consider Contrast in a Single Paragraph appeared first on CSS-Tricks.



Recipes for Performance Testing Single Page Applications in WebPageTest

WebPageTest is an online tool and an Open Source project to help developers audit the performance of their websites. As a Web Performance Evangelist at Theodo, I use it every single day. I am constantly amazed at what it offers to the web development community at large and the web performance folks particularly — for free.

But things can get difficult pretty quickly when dealing with Single Page Applications — usually written with React, Vue, Svelte or any other front-end framework. How can you get past a login page? How can you test the performance of your users’ flows when most of the action happens client-side and doesn’t have a specific URL to point to?

Throughout this article, we are going to find out how to solve these problems (and many more), and you’ll be ready to test the performance of your Single Page Application with WebPageTest!

Note: This article requires an intermediate understanding of some of WebPageTest’s advanced features.

If you are curious about web performance and want a good introduction to WebPageTest, I would highly recommend the following resources:

The problem with testing Single Page Applications

Single Page Applications (SPAs) radically changed the way websites work. Instead of letting the back end (e.g. Django, Rails and Laravel) do most of the grunt work and delivering “ready-to-use” HTML to the browser, SPAs rely heavily on JavaScript to have the browser compute HTML. Such front-end frameworks include React, Vue, Angular or Svelte.

The simplicity of WebPageTest is a big part of its appeal to developers: head to http://webpagetest.org/easy, enter your URL, wait a little, and voilà! Your performance audit is ready.

If you are building an SPA and want to measure its performance, you could rely on end-to-end testing tools like Selenium, Cypress or Puppeteer. However, I have found that none of these has the amount of performance-related information and easy-to-use tooling that WebPageTest offers.

But testing SPAs with WebPageTest can be complex.

In many SPAs, most of the site is protected behind a log in form. I often use Netlify for hosting my sites (including my personal blog), and most of the time I spend in the application is on authenticated pages, like the dashboard listing all my websites. As the information on my dashboard is specific to me, any other user trying to access https://app.netlify.com/teams/phacks/sites is not going to see my dashboard, but will instead be redirected to either a login or 404 page.

The same goes for WebPageTest. If I enter my dashboard URL into http://webpagetest.org/easy, the audit will be performed against the login page.

Moreover, testing and monitoring the performance of dynamic interactions in SPAs cannot be achieved with simple WebPageTest audits.

Here’s an example. Nuage is a domain name registrar with fancy animations and a beautiful, dynamic interface. When you search for domain names to buy, an asynchronous call fetches the results of the request and the results are displayed as they are retrieved.

As you might have noticed in the video above, the URL of the page does not change as I type my search terms. As a consequence, it is not possible to test the performance of the search experience using a simple WebPageTest audit as we do not have a proper URL to point to the action of searching something — only to an empty search page.

Some other problems can arise from the SPA paradigm shift when using WebPageTest:

  • Clicking around to navigate a webpage is usually harder than merely heading to a new URL, but it is sometimes the only option in SPAs.
  • Authentication in SPAs is usually implemented using JSON Web Tokens instead of good ol’ cookies, which rules out the option of setting authentication cookies directly in WebPageTest (as described here).
  • Using React and Redux (or other application state management libraries) for your SPA can mean that forms are harder to fill out programmatically, since setting a field’s value with .value or .innerText may not forward it to the application store.
  • As API calls are often asynchronous and various loaders can be used to indicate a loading state, those can “trick” WebPageTest into believing the page has actually finished loading when it has, in fact, not. I have seen it happen with longer-than-usual API calls (5+ seconds).

As I have faced these problems on several projects, I have come up with a range of tips and techniques to counter them.

The many ways of selecting an element

Selecting DOM elements is a key part of doing all sorts of automated testing, be it for end-to-end testing with Selenium or Cypress or for performance testing with WebPageTest. Selecting DOM elements allows us to click on links and buttons, fill in forms and more generally interact with the application.

There are several ways of selecting a particular DOM element using native browser APIs, ranging from the straightforward document.getElementsByClassName to the thorny but really powerful XPath selectors. In this section, we will see three different possibilities, ordered by increasing complexity.

Get an element by id, className or tagName

If the element you want to select (say, an “Empty Cart” button) has a specific and unique id (e.g. #empty-cart) or class name, or is the only button on the page, you can click on it using the getElementById and getElementsBy* methods:

const emptyCartButton = document.getElementById("empty-cart");
// or document.getElementsByClassName("empty-cart-button")[0]
// or document.getElementsByTagName("button")[0]
emptyCartButton.click();

If you have several buttons on the same page, you can filter the resulting list before interacting with the element:

const buttons = document.getElementsByTagName("button");
// HTMLCollections don't have .filter(), so convert to an array first
const emptyCartButton = Array.from(buttons).filter(button =>
  button.innerText.includes("Empty Cart")
)[0];
emptyCartButton.click();

Use complex CSS selectors

Sometimes, the particular element you want to interact with has nothing unique about its ID, class or tag.

One way to circumvent this issue is to add that uniqueness manually, for testing purposes only. Adding an id like #perf-test-empty-cart-button to a specific button is harmless to your website’s markup and can dramatically simplify your testing setup.

However, this solution can sometimes be out of reach: you may not have access to the source code of the application, or may not be able to deploy new versions quickly. In those situations, it is useful to know about document.querySelector (and its variant document.querySelectorAll) and using complex CSS selectors.

Here are a few examples of what can be achieved with document.querySelector:

// Select the first input with the `name="username"` property
document.querySelector("input[name='username']");

// Select all number inputs
document.querySelectorAll("input[type='number']");

// Select the first h1 inside the <section>
document.querySelector("section h1");

// Select the first direct descendant of a <nav> which is of type <img>
document.querySelector("nav > img");

What’s interesting here is that you have the full power of CSS selectors at hand. I encourage you to have a look at MDN’s always-useful reference table of selectors!

Going nuclear: XPath selectors

XML Path Language (XPath), albeit really powerful, is harder to grasp and maintain than the CSS solutions above. I rarely have to use it, but it is definitely useful to know that it exists.

One such instance is when you want to select a node by its text value, and can’t resort to CSS selectors. Here’s a handy snippet to use in those cases:

// Returns the <span> that has the exact content 'Sep 16, 2015'
document.evaluate(
  "//span[text()='Sep 16, 2015']",
  document,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
).singleNodeValue;

I will not go into detail on how to use it, as that would wander away from the goal of this article. To be fair, I don’t even know what many of the parameters above mean. However, I can definitely recommend the MDN documentation should you want to read up on the topic.

Recipes for common use cases

In the following section, we will see how to test the performance in common use cases of Single Page Applications. I call these my testing recipes.

In order to illustrate those recipes, I will use the React Admin demo website as an example. React Admin is an open source project aimed at building admin applications and back offices.

It is a typical example of an SPA because it uses React (as the name suggests), calls remote APIs, has a login interface and many forms, and relies on client-side routing. I encourage you to go take a quick look at the website (the demo account is demo/demo) in order to have an idea of what we will be trying to achieve.

Authentication and forms

The authentication page of React Admin requires the user to input a username and a password:

The authentication screen of React Admin

Intuitively, one could take the following approach to filling in and submitting the form:

const [usernameInput, passwordInput] = document.getElementsByTagName("input");
usernameInput.value = "demo"; // innerText could also be used here
passwordInput.value = "demo";
document.getElementsByTagName("button")[0].click();

If you run these commands sequentially in a DevTools console on the login page, you will see that all the fields are reset and the login request fails upon submission. The problem comes from the fact that the new values we set with .value (or .innerText) are not kicked back to the Redux store, and thus not “processed” by the application.

What we need to do then is explicitly tell React that the value has changed so that it will update internal bookkeeping accordingly. This can be achieved using the Event interface.

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

Note: this solution is pretty hacky (even according to its own author), however it works well for our purposes here.

Our updated script becomes:

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

const [usernameInput, passwordInput] = document.getElementsByTagName("input");

updateInputValue(usernameInput, "demo");
updateInputValue(passwordInput, "demo");

document.getElementsByTagName("button")[0].click();

Hurrah! You can try it in your browser’s console — it works like a charm.

Translating this to an actual WebPageTest script (with scripting keywords, single line commands and tab-separated parameters) would look like this:

setEventName    Go to Login

navigate    https://marmelab.com/react-admin-demo/

setEventName    Login

exec    const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};

exec    const [usernameInput, passwordInput] = document.getElementsByTagName("input")

exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")

execAndWait    document.getElementsByTagName("button")[0].click()

Note that clicking on the submit button leads us to a new page and triggers API calls, which means we need to use the execAndWait command.

You can see the full results of the test at this address. (Note: the results may have been archived by WebPageTest — you can, however, run the test again yourself!)

Here is a short video (captured by WebPageTest) in which you can see that we indeed passed the authentication step:

Navigating between pages

For traditional Server Rendered pages, navigating from one URL to the next in WebPageTest scripting is done via the navigate <url> command.

However, for SPAs, this does not reflect the experience of the user, as client-side routing means that the server has no role in navigation. Thus, hitting a URL directly would significantly slow down the measured performance (because of the time it takes for the JavaScript framework to be compiled, parsed and executed), a slowdown that the user does not experience when changing pages. As it is crucial to simulate the user flow the best we can, we need to handle the navigation on the client as well.

Thankfully, this is a lot simpler to do than filling out forms. We only need to select the link (or button) that will take us to the new page, and .click() on it! Let’s continue with our previous example, except now we want to test the performance of the Reviews list and of a single Review page.

A user would typically click on the Reviews item on the left-hand navigation menu, then on any item in the list. Inspecting the elements in DevTools may lead us to a selection strategy as follows:

document.querySelector("a[href='#reviews']"); // select the Reviews link in the menu document.querySelector("table tr"); // select the first item in the Reviews list

As both clicks lead to page transition and API calls (to fetch the reviews), we need to use the execAndWait keyword for the script:

setEventName    Go to Login

navigate    https://marmelab.com/react-admin-demo/

setEventName    Login

exec    const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};

exec    const [usernameInput, passwordInput] = document.getElementsByTagName("input")

exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")

execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews

execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review

execAndWait    document.querySelector("table tbody tr").click()

Here’s the video of the complete script running on WebPageTest:

The audit result from WebPageTest shows the performance metrics and waterfall graphs for each step of the script, allowing us to monitor the performance of each API call and interaction:

What about Internet Explorer 11 compatibility?

WebPageTest allows us to select which location, browser and network conditions the test will use. Internet Explorer 11 (IE11) is among the available browser options, and if you try the previous scripts on IE11, they will fail.

This is due to two reasons:

  • The scripts use ES6 syntax (arrow functions, let and const, array destructuring), which IE11 does not support.
  • IE11 does not support creating events with the new Event() constructor, which our updateInputValue helper relies on.

The ES6 syntax problem can be overcome by translating our scripts to ES5 syntax (no arrow functions, no let and const, no array destructuring), which might look like this:

setEventName    Go to Login

navigate    https://marmelab.com/react-admin-demo/

setEventName    Login

exec    var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new Event("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};

exec    var usernameInput = document.getElementsByTagName("input")[0]
exec    var passwordInput = document.getElementsByTagName("input")[1]

exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")

execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews

execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review

execAndWait    document.querySelector("table tbody tr").click()

In order to bypass the absence of CustomEvent support, we can turn to polyfills and add one manually at the top of the script. This polyfill is available on MDN:

(function() {
  if (typeof window.CustomEvent === "function") return false;

  function CustomEvent(event, params) {
    params = params || { bubbles: false, cancelable: false, detail: undefined };
    var evt = document.createEvent("CustomEvent");
    evt.initCustomEvent(
      event,
      params.bubbles,
      params.cancelable,
      params.detail
    );
    return evt;
  }

  CustomEvent.prototype = window.Event.prototype;
  window.CustomEvent = CustomEvent;
})();

We can then replace all mentions of Event with CustomEvent, squeeze the polyfill onto a single line, and we are good to go!

setEventName    Go to Login

navigate    https://marmelab.com/react-admin-demo/

exec    (function(){if(typeof window.CustomEvent==="function")return false;function CustomEvent(event,params){params=params||{bubbles:false,cancelable:false,detail:undefined};var evt=document.createEvent("CustomEvent");evt.initCustomEvent(event,params.bubbles,params.cancelable,params.detail);return evt}CustomEvent.prototype=window.Event.prototype;window.CustomEvent=CustomEvent})();

setEventName    Login

exec    var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new CustomEvent("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};

exec    var usernameInput = document.getElementsByTagName("input")[0]
exec    var passwordInput = document.getElementsByTagName("input")[1]

exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")

execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews

execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review

execAndWait    document.querySelector("table tbody tr").click()

Et voilà!

General tips and tricks for WebPageTest scripting

One last thing I want to do is provide a few tips and tricks that make writing WebPageTest scripts easier. Feel free to DM me on Twitter if you have any suggestions!

Security first!

Remember to tick both privacy checkboxes if your script includes sensitive data, like credentials!

WebPageTest security controls

Browse the docs

The WebPageTest Scripting docs are full of features that I didn’t cover in this article, ranging from DNS Overriding to iPhone Spoofing and even if/else conditionals.

When you plan on writing a new script, I recommend having a look at the available parameters first to see if any of them can make your scripting easier or more robust.

Long loading states

Sometimes, a remote API call (say, for fetching the reviews) will take a long time. A loading indicator, such as a spinner, can be used to tell the user to wait a bit as something is happening.

WebPageTest tries to detect when a page has finished loading by figuring out if things are changing on the screen. If your loading indicator lasts a long time, WebPageTest might mistake it for an integral part of your page design and cut the audit short before the API call returns — thus truncating your measurements.

A way to circumvent this issue is to tell WebPageTest to wait at least a certain duration before stopping the test. This is a parameter available under the Advanced tab:

WebPageTest minimum test duration

Keeping your script (and results) human-readable

  • Use blank lines and comments (//) generously because single-line JavaScript commands can sometimes be hard to grasp.
  • Keep a multi-line version somewhere as your reference, and single-line everything as you are about to test. This helps readability. Like, a lot.
  • Use setEventName to name your different “steps.” This makes for more readable tests, as it spells out the sequence of pages the audit goes through, and the names also appear in the WebPageTest results.

Iterating on your scripts

  • First, make sure that your script works in the browser. To do so, strip the WebPageTest keywords (the first word of every line of your script), then copy and paste each line in the browser console to verify that everything is working as expected at every step of the way.
  • Once you are ready to submit your test to WebPageTest, do it first with very light settings: only one run, a fast browser (cough — not IE11 — cough), no network throttling, no repeat view, a well-dimensioned instance (Dulles, VA, usually has good response times). This will help you detect and correct errors way faster.

Automating your scripts

Your test script is running smoothly, and you start getting performance reports for your Single Page App. As you ship new features, it is important to monitor its performance regularly to catch regressions as early as possible.

To address this problem, I am currently working on Falco, a soon-to-be-open-sourced WebPageTest test runner. Falco takes care of automating your audits, then presents the results in an easy-to-read interface while letting you dig into the full reports when you need them. You can follow me on Twitter to know when it goes open source, and to learn more about web performance and WebPageTest!
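In the meantime, if you want to wire scripted audits into your own tooling, one option (purely as an illustration, and not necessarily how Falco works under the hood) is the webpagetest npm package, which can submit a script and poll for the results:

// A rough sketch using the `webpagetest` npm package; the location, API key
// and script below are assumptions, so adapt them to your own setup.
const WebPageTest = require("webpagetest");

const wpt = new WebPageTest("www.webpagetest.org", process.env.WPT_API_KEY);

// WebPageTest scripts are passed as a string: one command per line,
// keyword and parameter separated by a tab
const script = [
  "navigate\thttps://marmelab.com/react-admin-demo/",
  'execAndWait\tdocument.getElementsByTagName("button")[0].click()'
].join("\n");

wpt.runTest(script, { location: "Dulles:Chrome", pollResults: 5 }, (err, result) => {
  if (err) throw err;
  console.log("Median first view load time:", result.data.median.firstView.loadTime);
});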

The post Recipes for Performance Testing Single Page Applications in WebPageTest appeared first on CSS-Tricks.

