DebugBear takes just a few seconds to start using. You literally point it at a URL you want to watch, and it’ll start watching it. You install nothing.
It’ll start running tests, and you’ve immediately got performance charts you can start looking at that are tracked over time, not just one-offs.
Five minutes after signing up, I’ve got great data to look at, including Core Web Vitals.
I’ve got full Lighthouse reports right in there, showing me stuff I really need to work on.
Because all these changes are tracked over time, you can see exactly what changed and when. That’s pretty big. Did your JavaScript bundle jump up in size? When? Why? Did your SEO score take a dip? When? Why?
Now I have an exact idea of what’s causing an issue and how it’s affected performance over time.
The best part is being able to see how the site’s Core Web Vitals have performed over time.
Another great thing: DebugBear will email you (or send a Slack message) when you have regressions. So, even though the charts are amazing, it’s not like you have to log in every time to see what’s up. You can also set very clear performance budgets to test against:
Break a performance budget? You’ll be notified:
An email alert of an exceeded performance budget.
A Slack alert warning of HTML validation errors.
Want to compare across different areas/pages of your site? (You can log in to them before you test them, by the way.) Or compare yourself to competitors to make sure you aren’t falling behind? No problem, monitor those too:
Testing production is a very good thing. It’s measuring reality, and you can get started quickly. But it can also be a very good thing to measure things before they become a problem. You know how you get deploy previews on services like Netlify and Vercel? Well, DebugBear has integrations built just for services like Netlify and Vercel.
Now, when you’ve got a Pull Request with a deploy preview, you can see right on GitHub if the metrics are in line.
That’s an awful lot of value for very little work. But don’t be fooled by the simplicity — there are all kinds of advanced things you can do. You can warm the cache. You can test from different geolocations. You can write a script for a login that takes the CSS selectors for inputs and the values to put in them. You can even have it execute your own JavaScript to do special things to get the page ready for testing, like open modals, inject performance.mark metrics, or do other navigation. 🎉
Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.
If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.
All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing into the continuous integration process, so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!
It is a suite of tools that help you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how they have changed over time.
In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer not to use GitHub Actions.
Setting up the Lighthouse CI locally
In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:
$ npm install -g @lhci/cli
Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.
$ lhci --help

lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file
At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environmental variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.
Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environmental variables instead.
touch lighthouserc.js
Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.
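A minimal lighthouserc.js along those lines might look something like this (assuming your static site builds into a public directory — adjust staticDistDir to match your own build output):

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      // Where the built static HTML files live
      staticDistDir: './public',
    },
    upload: {
      // Push reports to Google's temporary public storage
      target: 'temporary-public-storage',
    },
  },
};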
The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.
If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. These URLs should be reachable through the custom server that you specified.
When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine if the server has started. If it does not detect this string after 10 seconds, it assumes the server has started and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.
You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.
// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.
At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.
lhci autorun
The output should be similar to what is shown below:
✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.
The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub Action. You can open the Lighthouse report link in your browser to view the median test results for each URL.
Configuring assertions
Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not match certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:
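The sample might look something like the snippet below — here the recommended preset is extended with two category-level assertions, which is the setup discussed in the rest of this section (minScore uses Lighthouse’s 0–1 scale, so 0.9 corresponds to a score of 90):

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      // ...
    },
    assert: {
      preset: 'lighthouse:recommended',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
    upload: {
      // ...
    },
  },
};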
The preset option is a quick way to configure Lighthouse assertions. There are three options:
lighthouse:all: Asserts that every audit received a perfect score
lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits
You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that failure in the former will result in a non-zero exit code while the latter will not. The result of any audit in Lighthouse can be asserted, so there’s a lot you can do here. Be sure to consult the documentation to discover all of the available options.
You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:
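Assuming the budget.json file sits at the root of the project, the configuration might look something like this:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      // ...
    },
    assert: {
      budgetsFile: 'budget.json',
    },
    upload: {
      // ...
    },
  },
};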
A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.
To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.
mkdir -p .github/workflows
Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.
cd .github/workflows
touch lighthouse-ci.yaml
The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I’ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:
# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true

      - name: Build site
        run: hugo

      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x

      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun
The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:
Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secrets. Without this token, this step will fail if it encounters a private repo.
Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
Build the site to a public folder through the hugo command.
Install and configure Node.js on the virtual machine through the setup-node action.
Install the Lighthouse CI tool and execute the lhci autorun command.
Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.
If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.
Setting up GitHub status checks
At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.
You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.
The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.
Historical reporting and comparisons through the Lighthouse CI Server
Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.
To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.
Conclusion
When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that are representative of the type of content on the site, and any other important pages.
Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!
A web font workflow is simple, right? Choose a few nice-looking web-ready fonts, get the HTML or CSS code snippet, plop it in the project, and check if they display properly. People do this with Google Fonts a zillion times a day, dropping its <link> tag into the <head>.
Let’s see what Lighthouse has to say about this workflow.
The stylesheets in the <head> have been flagged by Lighthouse as render-blocking resources, and they add a one-second delay to render? Not great.
We’ve done everything by the book, documentation, and HTML standards, so why is Lighthouse telling us everything is wrong?
Let’s talk about eliminating font stylesheets as a render-blocking resource, and walk through an optimal setup that not only makes Lighthouse happy, but also overcomes the dreaded flash of unstyled text (FOUT) that usually comes with loading fonts. We’ll do all that with vanilla HTML, CSS, and JavaScript, so it can be applied to any tech stack. As a bonus, we’ll also look at a Gatsby implementation as well as a plugin that I’ve developed as a simple drop-in solution for it.
What we mean by “render-blocking” fonts
When the browser loads a website, it creates a render tree from the DOM, i.e. an object model for the HTML, and the CSSOM, i.e. a map of all CSS selectors. A render tree is part of the critical render path, which represents the steps the browser goes through to render a page. For the browser to render a page, it needs to load and parse the HTML document and every CSS file that is linked in that HTML.
Here’s a fairly typical font stylesheet pulled directly from Google Fonts:
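It’s usually little more than a handful of @font-face rules along these lines (abridged — the real response includes several unicode-range subsets and versioned file URLs):

/* latin */
@font-face {
  font-family: 'Merriweather';
  font-style: normal;
  font-weight: 400;
  font-display: swap;
  src: url(https://fonts.gstatic.com/s/merriweather/...) format('woff2');
}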
You might be thinking that font stylesheets are tiny in terms of file size because they usually contain, at most, a few @font-face definitions. They shouldn’t have any noticeable effect on rendering, right?
Let’s say we’re loading a CSS font file from an external CDN. When our website loads, the browser needs to wait for that file to load from the CDN and be included in the render tree. Not only that, but it also needs to wait for the font file that is referenced as a URL value in the CSS @font-face definition to be requested and loaded.
Bottom line: The font file becomes a part of the critical render path and it increases the page render delay.
What is the most vital part of any website to the average user? It’s the content, of course. That is why content needs to be displayed to the user as soon as possible in a website loading process. To achieve that, the critical render path needs to be reduced to critical resources (e.g. HTML and critical CSS), with everything else loaded after the page has been rendered, fonts included.
If a user is browsing an unoptimized website on a slow, unreliable connection, they will get annoyed sitting on a blank screen that’s waiting for font files and other critical resources to finish loading. The result? Unless that user is super patient, chances are they’ll just give up and close the window, thinking that the page is not loading at all.
However, if non-critical resources are deferred and the content is displayed as soon as possible, the user will be able to browse the website and ignore any missing presentational styles (like fonts) — that is, if they don’t get in the way of the content.
Optimized websites render content with critical CSS as soon as possible with non-critical resources deferred. A font switch occurs between 0.5s and 1.0s on the second timeline, indicating the time when presentational styles start rendering.
The optimal way to load fonts
There’s no point in reinventing the wheel here. Harry Roberts has already done a great job describing an optimal way to load web fonts. He goes into great detail with thorough research and data from Google Fonts, boiling it all down into a four-step process:
Preconnect to the font file origin.
Preload the font stylesheet asynchronously with low priority.
Asynchronously load the font stylesheet and font file after the content has been rendered with JavaScript.
Provide a fallback font for users with JavaScript turned off.
Let’s implement our font using Harry’s approach:
<!-- https://fonts.gstatic.com is the font file origin -->
<!-- It may not have the same origin as the CSS file (https://fonts.googleapis.com) -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />

<!-- We use the full link to the CSS file in the rest of the tags -->
<link rel="preload" as="style" href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap" />

<link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap" media="print" onload="this.media='all'" />

<noscript>
  <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Merriweather&display=swap" />
</noscript>
Notice the media="print" on the font stylesheet link. Browsers automatically give print stylesheets a low priority and exclude them as a part of the critical render path. After the print stylesheet has been loaded, an onload event is fired, the media is switched to a default all value, and the font is applied to all media types (screen, print, and speech).
Lighthouse is happy with this approach!
It’s important to note that self-hosting the fonts might also help fix render-blocking issues, but that is not always an option. Using a CDN, for example, might be unavoidable. In some cases, it’s beneficial to let a CDN do the heavy lifting when it comes to serving static resources.
Even though we’re now loading the font stylesheet and font files in the optimal non-render-blocking way, we’ve introduced a minor UX issue…
Flash of unstyled text (FOUT)
This is what we call FOUT:
Why does that happen? To eliminate a render-blocking resource, we have to load it after the page content has rendered (i.e. displayed on the screen). In the case of a low-priority font stylesheet that is loaded asynchronously after critical resources, the user can see the moment the font changes from the fallback font to the downloaded font. Not only that, the page layout might shift, resulting in some elements looking broken until the web font loads.
The best way to deal with FOUT is to make the transition between the fallback font and web font smooth. To achieve that we need to:
Choose a suitable fallback system font that matches the asynchronously loaded font as closely as possible.
Adjust the font styles (font-size, line-height, letter-spacing, etc.) of the fallback font to match the characteristics of the asynchronously loaded font, again, as closely as possible.
Clear the styles for the fallback font once the asynchronously loaded font has rendered, and apply the styles intended for the newly loaded font.
We can use Font Style Matcher to find optimal fallback system fonts and configure them for any given web font we plan to use. Once we have styles for both the fallback font and web font ready, we can move on to the next step.
Merriweather is the font and Georgia is the fallback system font in this example. Once the Merriweather styles are applied, there should be minimal layout shifting and the switch between fonts should be less noticeable.
We can use the CSS Font Loading API to detect when our web font has loaded. Why this API? Typekit’s web font loader was once one of the more popular ways to do it and, while it’s tempting to continue using it or similar libraries, we need to consider the following:
It hasn’t been updated for over four years, meaning that if anything breaks on the plugin side or new features are required, it’s likely no one will implement and maintain them.
We are already handling async loading efficiently using Harry Roberts’ snippet and we don’t need to rely on JavaScript to load the font.
If you ask me, using a Typekit-like library is just too much JavaScript for a simple task like this. I want to avoid using any third-party libraries and dependencies, so let’s implement the solution ourselves and try to make it as simple and straightforward as possible, without over-engineering it.
Although the CSS Font Loading API is considered experimental technology, it has roughly 95% browser support. But regardless, we should provide a fallback if the API changes or is deprecated in the future. The risk of losing a font isn’t worth the trouble.
The CSS Font Loading API can be used to load fonts dynamically and asynchronously. We’ve already decided not to rely on JavaScript for something as simple as font loading, and we’ve solved it in an optimal way using plain HTML with preload and preconnect. We will use a single function from the API that will help us check if the font is loaded and available.
document.fonts.check("12px 'Merriweather'");
The check() function returns true or false depending on whether the font specified in the function argument is available or not. The font size parameter value is not important for our use case and it can be set to any value. Still, we need to make sure that:
We have at least one HTML element on the page that contains at least one character with the web font declaration applied to it. In the examples, we’ll use a non-breaking space, but any character can do the job as long as it’s hidden (without using display: none;) from both sighted and non-sighted users. The API tracks DOM elements that have font styles applied to them, so if there are no matching elements on the page, the API isn’t able to determine if the font has loaded or not.
The specified font in the check() function argument is exactly what the font is called in the CSS.
I’ve implemented the font loading listener using the CSS Font Loading API in the following demo. For example purposes, the font loading and the listener for it are initiated by clicking the button to simulate a page load so you can see the change occur. On regular projects, this should happen soon after the website has loaded and rendered.
Isn’t that awesome? It took us less than 30 lines of JavaScript to implement a simple font loading listener, thanks to a well-supported function from the CSS Font Loading API. We’ve also handled two possible edge cases in the process:
Something goes wrong with the API, or some error occurs preventing the web font from loading.
The user is browsing the website with JavaScript turned off.
Now that we have a way to detect when the font file has finished loading, we need to add styles to our fallback font to match the web font and see how to handle FOUT more effectively.
The transition between the fallback font and the web font looks smooth and we’ve managed to achieve a much less noticeable FOUT! On a complex site, this change would result in fewer layout shifts, and elements that depend on the content size wouldn’t look broken or out of place.
What’s happening under the hood
Let’s take a closer look at the code from the previous example, starting with the HTML. We have the snippet in the <head> element, allowing us to load the font asynchronously with preload, preconnect, and fallback.
<body class="no-js">
  <!-- ... Website content ... -->
  <div aria-hidden="true" class="hidden" style="font-family: '[web-font-name]'">
    <!-- There is a non-breaking space here -->
  </div>
  <script>
    document.getElementsByTagName("body")[0].classList.remove("no-js");
  </script>
</body>
Notice that we have a hardcoded .no-js class on the <body> element, which is removed by the inline script as soon as the HTML document has been parsed. If JavaScript is disabled, the class stays in place, and the CSS can use it to apply the web font styles directly for those users.
Secondly, remember how the CSS Font Loading API requires at least one HTML element with a single character to track the font and apply its styles? We added a <div> with a character that we are hiding from both sighted and non-sighted users in an accessible way, since we cannot use display: none;. This element has an inlined font-family: 'Merriweather' style. This allows us to smoothly switch between the fallback styles and loaded font styles, and make sure that all font files are properly tracked, regardless of whether they are used on the page or not.
Note that the character is not showing up in the code snippet but it is there!
The CSS is the most straightforward part. We can utilize the CSS classes that are hardcoded in the HTML or applied conditionally with JavaScript to handle various font loading states.
JavaScript is where the magic happens. As described previously, we are checking if the font has been loaded by using the CSS Font Loading API’s check() function. Again, the font size parameter can be any value (in pixels); it’s the font family value that needs to match the name of the font that we’re loading.
var interval = null;

function fontLoadListener() {
  var hasLoaded = false;

  try {
    hasLoaded = document.fonts.check('12px "[web-font-name]"');
  } catch (error) {
    console.info("CSS font loading API error", error);
    fontLoadedSuccess();
    return;
  }

  if (hasLoaded) {
    fontLoadedSuccess();
  }
}

function fontLoadedSuccess() {
  if (interval) {
    clearInterval(interval);
  }
  /* Apply class names */
}

interval = setInterval(fontLoadListener, 500);
What’s happening here is we’re setting up our listener with fontLoadListener(), which runs at regular intervals. This function should be as simple as possible so it runs efficiently within the interval. We’re using a try-catch block to handle any errors, so that the web font styles are still applied in the case of a JavaScript error and the user doesn’t experience any UI issues.
Next, we’re accounting for when the font successfully loads with fontLoadedSuccess(). We need to make sure to first clear the interval so the check doesn’t unnecessarily run after it. Here we can add class names that we need in order to apply the web font styles.
And, finally, we are initiating the interval. In this example, we’ve set it to 500ms, so the function runs twice per second.
Here’s a Gatsby implementation
Gatsby does a few things that are different compared to vanilla web development (and even the regular create-react-app tech stack) which makes implementing what we’ve covered here a bit tricky.
To make this easy, we’ll develop a local Gatsby plugin, so all code that is relevant to our font loader is located at plugins/gatsby-font-loader in the example below.
Our font loader code and config will be split across the three main Gatsby files:
Plugin configuration (gatsby-config.js): We’ll include the local plugin in our project, list all local and external fonts and their properties (including the font name, and the CSS file URL), and include all preconnect URLs.
Server-side code (gatsby-ssr.js): We’ll use the config to generate and include preload and preconnect tags in the HTML <head> using the setHeadComponents function from Gatsby’s API. Then, we’ll generate the HTML snippets that hide the font and include them in the HTML using setPostBodyComponents.
Client-side code (gatsby-browser.js): Since this code runs after the page has loaded and after React starts up, it is already asynchronous. That means we can inject the font stylesheet links using react-helmet. We’ll also start a font loading listener to deal with FOUT.
You can check out the Gatsby implementation in the following CodeSandbox example.
I know, some of this stuff is complex. If you just want a simple drop-in solution for performant, asynchronous font loading and FOUT busting, I’ve developed a gatsby-omni-font-loader plugin just for that. It uses the code from this article and I am actively maintaining it. If you have any suggestions, bug reports, or code contributions, feel free to submit them on GitHub.
Conclusion
Content is perhaps the most important component of a user’s experience on a website. We need to make sure content gets top priority and loads as quickly as possible. That means using the bare minimum of presentational styles (i.e. inlined critical CSS) in the loading process. That is also why web fonts are considered non-critical in most cases — the user can still consume the content without them — so it’s perfectly fine for them to load after the page has rendered.
But that might lead to FOUT and layout shifts, so the font loading listener is needed to make a smooth switch between the fallback system font and the web font.
I’d like to hear your thoughts! Let me know in the comments how you’re tackling the issue of web font loading, render-blocking resources, and FOUT on your projects.
In this tutorial, I’ll show you step by step how to create a simple tool in Node.js to run Google Lighthouse audits via the command line, save the reports they generate in JSON format and then compare them so web performance can be monitored as the website grows and develops.
I’m hopeful this can serve as a good introduction for any developer interested in learning about how to work with Google Lighthouse programmatically.
But first, for the uninitiated…
What is Google Lighthouse?
Google Lighthouse is one of the best automated tools available on a web developer’s utility belt. It allows you to quickly audit a website in a number of key areas which together can form a measure of its overall quality. These are:
Performance
Accessibility
Best Practices
SEO
Progressive Web App
Once the audit is complete, a report is then generated on what your website does well… and not so well, with the latter intended to serve as an indicator for what your next steps should be to improve the page.
Here’s what a full report looks like.
Along with other general diagnostics and web performance metrics, a really useful feature of the report is that each of the key areas is aggregated into color-coded scores between 0-100.
Not only does this allow developers to quickly gauge the quality of a website without further analysis, but it also allows non-technical folk such as stakeholders or clients to understand as well.
This means, for example, that it’s much easier to share the win with Heather from marketing after spending time improving website accessibility, as she’s more able to appreciate the effort after seeing the Lighthouse accessibility score go up 50 points into the green.
But equally, Simon the project manager may not understand what Speed Index or First Contentful Paint means, but when he sees the Lighthouse report showing website performance score knee deep in the red, he knows you still have work to do.
If you’re in Chrome or the latest version of Edge, you can run a Lighthouse audit for yourself right now using DevTools, exactly as described earlier.
You can also run a Lighthouse audit online via PageSpeed Insights or through popular performance tools, such as WebPageTest.
However, today, we’re only interested in Lighthouse as a Node module, as this allows us to use the tool programmatically to audit, record and compare web performance metrics.
Let’s find out how.
Setup
First off, if you don’t already have it, you’re going to need Node.js. There are a million different ways to install it. I use the Homebrew package manager, but you can also download an installer straight from the Node.js website if you prefer. This tutorial was written with Node.js v10.17.0 in mind, but will very likely work just fine on most versions released in the last few years.
You’re also going to need Chrome installed, as that’s how we’ll be running the Lighthouse audits.
Next, create a new directory for the project and then cd into it in the console. Then run npm init to begin to create a package.json file. At this point, I’d recommend just bashing the Enter key over and over to skip as much of this as possible until the file is created.
Now, let’s create a new file in the project directory. I called mine lh.js, but feel free to call it whatever you want. This will contain all of the JavaScript for the tool. Open it in your text editor of choice, and for now, write a console.log statement.
console.log('Hello world');
Then in the console, make sure your CWD (current working directory) is your project directory and run node lh.js, substituting my file name for whatever you’ve used.
You should see:
$ node lh.js Hello world
If not, then check your Node installation is working and you’re definitely in the correct project directory.
Now that’s out of the way, we can move on to developing the tool itself.
Opening Chrome with Node.js
Let’s install our project’s first dependency: Lighthouse itself.
npm install lighthouse --save-dev
This creates a node_modules directory that contains all of the package’s files. If you’re using Git, the only thing you’ll want to do with this is add it to your .gitignore file.
In lh.js, you’ll next want to delete the test console.log() and import the Lighthouse module so you can use it in your code. Like so:
const lighthouse = require('lighthouse');
Below it, you’ll also need to import a module called chrome-launcher, which is one of Lighthouse’s dependencies and allows Node to launch Chrome by itself so the audit can be run.
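That import can sit right below the Lighthouse one:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');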
Now that we have access to these two modules, let’s create a simple script which just opens Chrome, runs a Lighthouse audit, and then prints the report to the console.
Create a new function that accepts a URL as a parameter. Because we’ll be running this using Node.js, we’re able to safely use ES6 syntax as we don’t have to worry about those pesky Internet Explorer users.
const launchChrome = (url) => { }
Within the function, the first thing we need to do is open Chrome using the chrome-launcher module we imported and send it to whatever argument is passed through the url parameter.
We can do this using its launch() method and its startingUrl option.
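A first pass at the function body might look something like this:

const launchChrome = (url) => {
  chromeLauncher.launch({
    startingUrl: url
  });
};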
Calling the function as shown below and passing a URL of your choice results in Chrome being opened at that URL when the Node script is run.
launchChrome('https://www.lukeharrison.dev');
The launch function actually returns a promise, which allows us to access an object containing a few useful methods and properties.
For example, using the code below, we can open Chrome, print the object to the console, and then close Chrome three seconds later using its kill() method.
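Here’s a rough sketch of that idea:

const launchChrome = (url) => {
  chromeLauncher.launch({
    startingUrl: url
  }).then(chrome => {
    // The resolved object exposes the debugging port, PID, kill(), etc.
    console.log(chrome);

    // Close the browser after three seconds
    setTimeout(() => chrome.kill(), 3000);
  });
};

launchChrome('https://www.lukeharrison.dev');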
Now that we’ve got Chrome figured out, let’s move on to Lighthouse.
Running Lighthouse programmatically
First off, let’s rename our launchChrome() function to something more reflective of its final functionality: launchChromeAndRunLighthouse(). With the hard part out of the way, we can now use the Lighthouse module we imported earlier in the tutorial.
In the Chrome launcher’s then function, which only executes once the browser is open, we’ll pass Lighthouse the function’s url argument and trigger an audit of this website.
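Lighthouse needs to know which port the freshly launched Chrome instance is listening on, so the call might look something like this:

const launchChromeAndRunLighthouse = (url) => {
  chromeLauncher.launch({
    startingUrl: url
  }).then(chrome => {
    const opts = {
      // Tell Lighthouse which Chrome instance to drive
      port: chrome.port
    };
    lighthouse(url, opts);
  });
};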
If you were to execute this code, you’ll notice that something definitely seems to be happening. We just aren’t getting any feedback in the console to confirm the Lighthouse audit has definitely run, nor is the Chrome instance closing by itself like before.
Thankfully, the lighthouse() function returns a promise which lets us access the audit results.
Let’s kill Chrome and then print those results to the terminal in JSON format via the report property of the results object.
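Still inside the Chrome launcher’s then handler, that could look like:

lighthouse(url, opts).then(results => {
  chrome.kill();
  console.log(results.report);
});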
While the console isn’t the best way to display these results, if you were to copy them to your clipboard and visit the Lighthouse Report Viewer, pasting them there will show the report in all of its glory.
At this point, it’s important to tidy up the code a little to make the launchChromeAndRunLighthouse() function return the report once it’s finished executing. This allows us to process the report later without resulting in a messy pyramid of JavaScript.
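One way to restructure it — returning the promise chain all the way up so the caller can consume the report — might look like this:

const launchChromeAndRunLighthouse = url => {
  return chromeLauncher.launch({
    startingUrl: url
  }).then(chrome => {
    const opts = {
      port: chrome.port
    };
    return lighthouse(url, opts).then(results => {
      // Close Chrome, then resolve with the report
      return chrome.kill().then(() => results.report);
    });
  });
};

launchChromeAndRunLighthouse('https://www.lukeharrison.dev').then(report => {
  console.log(report);
});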
One thing you may have noticed is that our tool is only able to audit a single website at the moment. Let’s change this so you can pass the URL as an argument via the command line.
To take the pain out of working with command-line arguments, we’ll handle them with a package called yargs.
npm install --save-dev yargs
Then import it at the top of your script along with Chrome Launcher and Lighthouse. We only need its argv function here.
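In practice that’s just:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const argv = require('yargs').argv;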
This means if you were to pass a command line argument in the terminal like so:
node lh.js --url https://www.google.co.uk
…you can access the argument in the script like so:
const url = argv.url // https://www.google.co.uk
Let’s edit our script to pass the command line URL argument to the function’s url parameter. It’s important to add a little safety net via the if statement and error message in case no argument is passed.
if (argv.url) {
  launchChromeAndRunLighthouse(argv.url).then(results => {
    console.log(results);
  });
} else {
  throw "You haven't passed a URL to Lighthouse";
}
Tada! We have a tool that launches Chrome and runs a Lighthouse audit programmatically before printing the report to the terminal in JSON format.
Saving Lighthouse reports
Having the report printed to the console isn’t very useful as you can’t easily read its contents, nor is it saved for future use. In this section of the tutorial, we’ll change this behavior so each report is saved into its own JSON file.
To stop reports from different websites getting mixed up, we’ll organize them like so:
lukeharrison.dev
  2020-01-31T18:18:12.648Z.json
  2020-01-31T19:10:24.110Z.json
cnn.com
  2020-01-14T22:15:10.396Z.json
lh.js
We’ll name the reports with a timestamp indicating the date/time the report was generated. This means no two report file names will ever be the same, and it’ll help us easily distinguish between reports.
There is one issue with Windows that requires our attention: the colon (:) is an illegal character for file names. To mitigate this issue, we’ll replace any colons with underscores (_), so a typical report filename will look like:
2020-01-31T18_18_12.648Z.json
Creating the directory
First, we need to manipulate the command line URL argument so we can use it for the directory name.
This involves more than just removing the www, as it needs to account for audits run on web pages which don’t sit at the root (e.g. www.foo.com/bar), since the slashes are invalid characters for directory names.
For these URLs, we’ll replace the invalid characters with underscores again. That way, if you run an audit on https://www.foo.com/bar, the resulting directory name containing the report would be foo.com_bar.
To make dealing with URLs easier, we’ll use a native Node.js module called url. This can be imported like any other package, without having to add it to the package.json and pull it in via npm.
Create a new variable called dirName, and use the string replace() method on the host property of our URL object to get rid of the www. (The host property already excludes the protocol.)
const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace('www.','');
We’ve used let here, which unlike const can be reassigned, as we’ll need to update the reference if the URL has a pathname, to replace slashes with underscores. This can be done with a regular expression pattern, and looks like this:
const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace("www.", "");

if (urlObj.pathname !== "/") {
  dirName = dirName + urlObj.pathname.replace(/\//g, "_");
}
Now we can create the directory itself. This can be done through the use of another native Node.js module called fs (short for “file system”).
We can use its mkdir() method to create a directory, but first have to use its existsSync() method to check if the directory already exists, as Node.js would otherwise throw an error:
const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace("www.", "");

if (urlObj.pathname !== "/") {
  dirName = dirName + urlObj.pathname.replace(/\//g, "_");
}

if (!fs.existsSync(dirName)) {
  fs.mkdirSync(dirName);
}
Testing the script at this point should result in a new directory being created. Passing https://www.bbc.co.uk/news as the URL argument would result in a directory named bbc.co.uk_news.
Saving the report
In the then function for launchChromeAndRunLighthouse(), we want to replace the existing console.log with logic to write the report to disk. This can be done using the fs module’s writeFile() method.
The first parameter represents the file name, the second is the content of the file, and the third is a callback containing an error object should something go wrong during the write process. This would create a new file called report.json containing the returned Lighthouse report JSON.
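As a rough sketch (assuming fs has been required at the top of the script alongside the other modules):

const fs = require('fs');

// ...

launchChromeAndRunLighthouse(argv.url).then(results => {
  fs.writeFile("report.json", results, err => {
    if (err) throw err;
  });
});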
We still need to send it to the correct directory, with a timestamp as its file name. The former is simple — we just prepend the dirName variable we created earlier to the file path.
The latter though requires us to somehow retrieve a timestamp of when the report was generated. Thankfully, the report itself captures this as a data point, and is stored as the fetchTime property.
We just need to remember to swap any colons (:) for underscores (_) so it plays nice with the Windows file system.
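Putting those two pieces together, the call might look something like this:

launchChromeAndRunLighthouse(argv.url).then(results => {
  fs.writeFile(
    `${dirName}/${results["fetchTime"].replace(/:/g, "_")}.json`,
    results,
    err => {
      if (err) throw err;
    }
  );
});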
If you were to run this now, rather than a timestamped .json file, you would likely see an error similar to:
UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'replace' of undefined
This is happening because Lighthouse is currently returning the report in JSON format, rather than an object consumable by JavaScript.
Thankfully, instead of parsing the JSON ourselves, we can just ask Lighthouse to return the report as a regular JavaScript object instead.
This requires editing the below line from:
return chrome.kill().then(() => results.report);
…to:
return chrome.kill().then(() => results.lhr);
Now, if you rerun the script, the file will be named correctly. However, when opened, its only content will unfortunately be…
[object Object]
This is because we’ve now got the opposite problem as before. We’re trying to render a JavaScript object without stringifying it into JSON first.
The solution is simple. To avoid having to waste resources on parsing or stringifying this huge object, we can return both types from Lighthouse:
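One way to do that is to have launchChromeAndRunLighthouse() resolve with an object holding both versions of the report — the js and json property names below are just illustrative:

return chrome.kill().then(() => {
  return {
    js: results.lhr,      // the report as a JavaScript object
    json: results.report  // the report as a JSON string
  };
});

The file-writing call can then pull the timestamp from results.js and write results.json to disk:

fs.writeFile(
  `${dirName}/${results.js["fetchTime"].replace(/:/g, "_")}.json`,
  results.json,
  err => {
    if (err) throw err;
  }
);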
Sorted! On completion of the Lighthouse audit, our tool should now save the report to a file with a unique timestamped filename in a directory named after the website URL.
This means reports are now much more efficiently organized and won’t overwrite each other no matter how many reports are saved.
Comparing Lighthouse reports
During everyday development, when I’m focused on improving performance, the ability to very quickly compare reports directly in the console and see if I’m headed in the right direction could be extremely useful. With this in mind, the requirements of this compare functionality ought to be:
If a previous report already exists for the same website when a Lighthouse audit is complete, automatically perform a comparison against it and show any changes to key performance metrics.
I should also be able to compare key performance metrics from any two reports, from any two websites, without having to generate a new Lighthouse report which I may not need.
What parts of a report should be compared? These are the numerical key performance metrics collected as part of any Lighthouse report. They provide insight into the objective and perceived performance of a website.
In addition, Lighthouse also collects other metrics that aren’t listed in this part of the report but are still in an appropriate format to be included in the comparison. These are:
Time to first byte – Time To First Byte identifies the time at which your server sends a response.
Total blocking time – Sum of all time periods between FCP and Time to Interactive, when task length exceeded 50ms, expressed in milliseconds.
Estimated input latency – Estimated Input Latency is an estimate of how long your app takes to respond to user input, in milliseconds, during the busiest 5s window of page load. If your latency is higher than 50ms, users may perceive your app as laggy.
How should the metric comparison be output to the console? We’ll create a simple percentage-based comparison using the old and new metrics to see how they’ve changed from report to report.
To allow for quick scanning, we’ll also color-code individual metrics depending on if they’re faster, slower or unchanged.
That’s the output we’ll aim for.
Compare the new report against the previous report
Let’s get started by creating a new function called compareReports() just below our launchChromeAndRunLighthouse() function, which will contain all the comparison logic. We’ll give it two parameters — from and to — to accept the two reports used for the comparison.
For now, as a placeholder, we’ll just print out some data from each report to the console to validate that it’s receiving them correctly.
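A placeholder version might simply log the URL and generation time of each report — finalUrl and fetchTime are both top-level properties of a Lighthouse report:

const compareReports = (from, to) => {
  // Print the URL and generation time of each report for now
  console.log(from["finalUrl"] + " " + from["fetchTime"]);
  console.log(to["finalUrl"] + " " + to["fetchTime"]);
};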
As this comparison would begin after the creation of a new report, the logic to execute this function should sit in the then function for launchChromeAndRunLighthouse().
If, for example, you have 30 reports sitting in a directory, we need to determine which one is the most recent and set it as the previous report which the new one will be compared against. Thankfully, we already decided to use a timestamp as the filename for a report, so this gives us something to work with.
First off, we need to collect any existing reports. To make this process easy, we’ll install a new dependency called glob, which allows for pattern matching when searching for files. This is critical because we can’t predict how many reports will exist or what they’ll be called.
Install it like any other dependency:
npm install glob --save-dev
Then import it at the top of the file the same way as usual:
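const glob = require('glob');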
We’ll use glob to collect all of the reports in the directory, which we already know the name of via the dirName variable. It’s important to set its sync option to true as we don’t want JavaScript execution to continue until we know how many other reports exist.
launchChromeAndRunLighthouse(argv.url).then(results => {
  const prevReports = glob(`${dirName}/*.json`, {
    sync: true
  });

  // et al
});
This process returns an array of paths. So if the report directory contained a couple of reports for lukeharrison.dev, the resulting array would look something like this:
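[
  'lukeharrison.dev/2020-01-31T18_18_12.648Z.json',
  'lukeharrison.dev/2020-01-31T19_10_24.110Z.json'
]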
We have a list of report file paths and we need to compare their timestamped filenames to determine which one is the most recent.
This means we first need to collect a list of all the file names, trim any irrelevant data such as directory names, and take care to replace the underscores (_) back with colons (:) to turn them back into valid dates. The easiest way to do this is using path, another native Node.js module.
const path = require('path');
Passing the path as an argument to its parse() method returns an object describing the path’s parts, with the extension-free file name available under the name property. For example:
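path.parse('lukeharrison.dev/2020-01-31T18_18_12.648Z.json');

// {
//   root: '',
//   dir: 'lukeharrison.dev',
//   base: '2020-01-31T18_18_12.648Z.json',
//   ext: '.json',
//   name: '2020-01-31T18_18_12.648Z'
// }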
Therefore, to get a list of all the timestamp file names, we can do this:
if (prevReports.length) {
  dates = [];
  for (report in prevReports) {
    dates.push(
      new Date(path.parse(prevReports[report]).name.replace(/_/g, ":"))
    );
  }
}
A useful thing about dates is that they’re inherently comparable by default:
const alpha = new Date('2020-01-31');
const bravo = new Date('2020-02-15');

console.log(alpha > bravo); // false
console.log(bravo > alpha); // true
So by using a reduce function, we can reduce our array of dates down until only the most recent remains:
dates = [];

for (report in prevReports) {
  dates.push(new Date(path.parse(prevReports[report]).name.replace(/_/g, ":")));
}

const max = dates.reduce(function(a, b) {
  return Math.max(a, b);
});
If you were to print the contents of max to the console, it would throw up a UNIX timestamp, so now, we just have to add another line to convert our most recent date back into the correct ISO format:
const max = dates.reduce(function(a, b) {
  return Math.max(a, b);
});

const recentReport = new Date(max).toISOString();
Assuming this is the list of reports:
2020-01-31T23_24_41.786Z.json
2020-01-31T23_25_36.827Z.json
2020-01-31T23_37_56.856Z.json
2020-01-31T23_39_20.459Z.json
2020-01-31T23_56_50.959Z.json
The value of recentReport would be 2020-01-31T23:56:50.959Z.
Now that we know the most recent report, we next need to extract its contents. Create a new variable called recentReportContents beneath the recentReport variable and assign it an empty function.
As we know this function will always need to execute, rather than manually calling it, it makes sense to turn it into an IIFE (immediately invoked function expression), which will run by itself when the JavaScript parser reaches it. This is signified by the extra parentheses:
const recentReportContents = (() => { })();
In this function, we can return the contents of the most recent report using the readFileSync() method of the native fs module. Because this will be in JSON format, it’s important to parse it into a regular JavaScript object.
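A sketch of that IIFE might look like this — note the colons in the ISO string have to be swapped back to underscores to rebuild the file name, and the parsed previous report is then handed to compareReports() along with the fresh one (results.js):

const recentReportContents = (() => {
  const output = fs.readFileSync(
    `${dirName}/${recentReport.replace(/:/g, "_")}.json`,
    "utf8"
  );
  return JSON.parse(output);
})();

compareReports(recentReportContents, results.js);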
If you’re getting any errors at this point, try deleting any report.json files or reports without valid content from earlier in the tutorial.
Compare any two reports
The remaining key requirement was the ability to compare any two reports from any two websites. The easiest way to implement this would be to allow the user to pass the full report file paths as command line arguments which we’ll then send to the compareReports() function.
Achieving this requires editing the conditional if statement which checks for the presence of a URL command line argument. We’ll add an additional check to see if the user has just passed a from and to path; otherwise, check for the URL as before. This way, we’ll prevent a new Lighthouse audit from being triggered unnecessarily.
if (argv.from && argv.to) { } else if (argv.url) { // et al }
Let’s extract the contents of these JSON files, parse them into JavaScript objects, and then pass them along to the compareReports() function.
We’ve already parsed JSON before when retrieving the most recent report. We can just extract this functionality into its own helper function and use it in both locations.
Using the recentReportContents() function as a base, create a new function called getContents() which accepts a file path as an argument. Make sure this is just a regular function, rather than an IFFE, as we don’t want it executing as soon as the JavaScript parser finds it.
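Under those constraints, the helper and the comparison branch might look something like this:

const getContents = pathStr => {
  const output = fs.readFileSync(pathStr, "utf8");
  return JSON.parse(output);
};

if (argv.from && argv.to) {
  compareReports(getContents(argv.from), getContents(argv.to));
} else if (argv.url) {
  // launch Chrome and run a new Lighthouse audit as before
}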
This part of development involves building comparison logic to compare the two reports received by the compareReports() function.
Within the object which Lighthouse returns, there’s a property called audits that contains another object listing performance metrics, opportunities, and information. There’s a lot of information here, much of which we aren’t interested in for the purposes of this tool.
Here’s the entry for First Contentful Paint, one of the nine performance metrics we wish to compare:
"first-contentful-paint": { "id": "first-contentful-paint", "title": "First Contentful Paint", "description": "First Contentful Paint marks the time at which the first text or image is painted. [Learn more](https://web.dev/first-contentful-paint).", "score": 1, "scoreDisplayMode": "numeric", "numericValue": 1081.661, "displayValue": "1.1 s" }
Create an array listing the keys of these nine performance metrics. We can use this to filter the audit object:
Then we’ll loop through one of the report’s audits object and then cross-reference its name against our filter list. (It doesn’t matter which audit object, as they both have the same content structure.)
If it’s in there, then brilliant, we want to use it.
const metricFilter = [
  "first-contentful-paint",
  "first-meaningful-paint",
  "speed-index",
  "estimated-input-latency",
  "total-blocking-time",
  "max-potential-fid",
  "time-to-first-byte",
  "first-cpu-idle",
  "interactive"
];

for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    console.log(auditObj);
  }
}
This console.log() would print each of the nine keys from the metricFilter array to the console.
Which means we would use from['audits'][auditObj].numericValue and to['audits'][auditObj].numericValue respectively in this loop to access the metrics themselves.
If we were to print these to the console with the key, it would result in output like this:
We have all the data we need now. We just need to calculate the percentage difference between these two values and then log it to the console using the color-coded format outlined earlier.
Do you know how to calculate the percentage change between two values? Me neither. Thankfully, everybody’s favorite monolith search engine came to the rescue.
The formula is:
((From - To) / From) x 100
So, let’s say we have a Speed Index of 5.7s for the first report (from), and then a value of 2.1s for the second (to). The calculation would be:
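((5.7 - 2.1) / 5.7) x 100 = 63.16%

Wrapped up as a small helper function, it might look like this — note that the operands are flipped to (to - from) so that a positive result means the metric got slower and a negative result means it got faster, which matches the output rules described below:

const calcPercentageDiff = (from, to) => {
  const per = ((to - from) / from) * 100;
  return Math.round(per * 100) / 100; // round to two decimal places
};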
Back in our auditObj conditional, we can begin to put together the final report comparison output.
First off, use the helper function to generate the percentage difference for each metric.
for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    const percentageDiff = calcPercentageDiff(
      from["audits"][auditObj].numericValue,
      to["audits"][auditObj].numericValue
    );
  }
}
Next, we need to output these values to the console in the color-coded, human-readable format outlined earlier.
This requires adding color to the console output. In Node.js, this can be done by passing a color code as an argument to the console.log() function like so:
console.log('\x1b[36m', 'hello') // Would print 'hello' in cyan
You can get a full reference of color codes in this Stack Overflow question. We need green and red, so that’s \x1b[32m and \x1b[31m respectively. For metrics where the value remains unchanged, we’ll just use white. This would be \x1b[37m.
Depending on if the percentage increase is a positive or negative number, the following things need to happen:
Log color needs to change (Green for negative, red for positive, white for unchanged)
Log text contents change.
‘[Name] is X% slower for positive numbers
‘[Name] is X% faster’ for negative numbers
‘[Name] is unchanged’ for numbers with no percentage difference.
If the number is negative, we want to remove the minus/negative symbol, as otherwise, you’d have a sentence like ‘Speed Index is -92.95% faster’ which doesn’t make sense.
There are many ways this could be done. Here, we’ll use the Math.sign() function, which returns 1 if its argument is positive, 0 if well… 0, and -1 if the number is negative. That’ll do.
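Putting those rules together inside the loop, a sketch of the final output logic might look like this (title and numericValue are the report properties we saw earlier):

for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    const percentageDiff = calcPercentageDiff(
      from["audits"][auditObj].numericValue,
      to["audits"][auditObj].numericValue
    );

    const metricName = from["audits"][auditObj].title;
    // Strip the minus sign so "faster" results read naturally
    const diff = percentageDiff.toString().replace("-", "");

    if (Math.sign(percentageDiff) === 1) {
      console.log("\x1b[31m", `${metricName} is ${diff}% slower`);
    } else if (Math.sign(percentageDiff) === 0) {
      console.log("\x1b[37m", `${metricName} is unchanged`);
    } else {
      console.log("\x1b[32m", `${metricName} is ${diff}% faster`);
    }
  }
}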
With the completion of this basic Google Lighthouse tool, there are plenty of ways to develop it further. For example:
Some kind of simple online dashboard that allows non-technical users to run Lighthouse audits and view metrics develop over time. Getting stakeholders behind web performance can be challenging, so something tangible they can interact with themselves could pique their interest.
Build support for performance budgets, so if a report is generated and performance metrics are slower than they should be, then the tool outputs useful advice on how to improve them (or calls you names).
Ire Aderinokun writes about a new way to set a performance budget (and stick to it) with Lighthouse, Google’s suite of tools that help developers see how performant and accessible their websites are:
Until recently, I also hadn’t setup an official performance budget and enforced it. This isn’t to say that I never did performance audits. I frequently use tools like PageSpeed Insights and take the feedback to make improvements. But what I had never done was set a list of metrics that I needed the site to meet, and enforce them using some automated tool.
The reasons for this were a combination of not knowing what exact numbers I should use for budgets as well as there being a disconnect between setting a budget and testing/enforcing it. This is why I was really excited when, at Google I/O this year, the Lighthouse team announced support for performance budgets that can be integrated with Lighthouse. We can now define a simple performance budget in a JSON file, which will be tested as part of the lighthouse audit!
I completely agree with Ire, and much in the same way I’ve tended to neglect sticking to a performance budget simply because the process of testing was so manual and tedious. But no more! As Ire shows in this post, you can even set Lighthouse up to test your budget with every PR in GitHub. That tool is called lighthousebot and it’s just what I’ve been looking for – an automated and predictable way to integrate a performance budget into every change that I make to a codebase.
Today lighthousebot will comment on your PR after a test is complete and it will show you the before and after score:
How neat is that? This reminds me of Gareth Clubb’s recent post about improving web performance and building a culture around budgets in an organization. What better way to remind everyone about performance than right in GitHub after each and every change that they make?