Tag: Performance

Links on Performance V


Quickly Get Alerted to Front-End Errors and Performance Issues

(This is a sponsored post.)

Measuring things is great. They say you only fix what you measure. Raygun is great at measuring websites: measuring performance, measuring errors and crashes, measuring code problems.

You know what’s even better than measuring? Having a system in place to notify you when anything significant happens with those measurements. That’s why Raygun now has powerful alerting.

Let’s look at some of the possibilities of alerts you can set up on your website so that you’re alerted when things go wrong.

Alert 1) Spike in Errors

In my experience, when you see a spike in errors being thrown in your app, it’s likely because a new release has gone to production, and it’s not behaving how you expected it to.

You need to know now, because errors like this can be tricky. Maybe it worked just fine in development, so you need as much time as you can get to root out what the problem is.

Creating a customized alert situation like this in Raygun is very straightforward! Here’s a quick video:

Alert 2) Critical Error

You likely want to be keeping an eye on all errors, but some errors are more critical than others. If a user throws an error trying to update their biography to a string that contains an emoji, well, that’s unfortunate and you want to know about it so you can fix it. But if they can’t sign up, add to cart, or check out — well, that’s extra bad, and you need to know about it instantly so you can fix it as soon as possible. If your users can’t do the main thing they came to your website to do, you’re seriously jeopardizing your business.

With Raygun Alerting, there are actually a couple ways to set this up.

  1. Set up the alert to watch for an Error Message containing any particular text
  2. (and/or) Set up the alert to watch for a particular tag

Error Message text is a nice catch-all as you should be able to catch anything with that. But tagging is more targeted. These tags are of your own design, as you send them over yourself from your own app. For example, in JavaScript, say you performed some mission-critical operation in a try/catch block. Should the catch happen, you could send Raygun an event like:

rg4js('send', {
  error: e,
  tags: ['signup', 'mission_critical']
});

Then create alerts based on those tags as needed.
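To put that in context, here’s a minimal sketch of how that send might sit inside application code. It assumes the Raygun4JS (rg4js) snippet shown above is already loaded on the page; signUpUser() is a hypothetical stand-in for your own mission-critical operation.

// Minimal sketch: report a tagged error to Raygun from a critical code path.
// Assumes rg4js (Raygun4JS) is already loaded; signUpUser() is hypothetical.
async function handleSignup(formData) {
  try {
    await signUpUser(formData); // the mission-critical operation
  } catch (e) {
    rg4js('send', {
      error: e,
      tags: ['signup', 'mission_critical']
    });
    throw e; // still surface the failure to the rest of the app
  }
}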

Alert 3) Slow Load Time

I’m not sure most people think of website performance tracking as something you tie real-time alerting to, but you should! There is no reason a website’s load time would all of a sudden nosedive (e.g. change from, say, 2 seconds to 5 seconds) unless something has changed. So if it does nosedive, you should be alerted right away so you can examine recent changes and fix it.

With Raygun, an alert like this is extremely simple to set up. Here’s an example alert set up to watch for a certain load time threshold and email if there is ever a 10-minute period in which loading times exceed that.

Setting up the alert in Raygun
Email notification of slowness

If you don’t want to be that aggressive to start with loading time, try 4 seconds. That’s the industry standard for slow loading. If you never get any alerts, slowly notch it down over time, giving you and your team progressively more impressive loading times to stay vigilant about.

Aside from alerts, you’ll also get weekly emails giving you an overview of performance issues.

Alert 4) Core Web Vitals

The new gold-standard web performance metrics are Core Web Vitals (CWV), which we’ve written about before, including how Raygun helps with them. They measure things that really matter to users and are an SEO ranking factor for Google. Those are two big reasons to be extra careful with them and set up alerts if your website breaks the acceptable thresholds you’ve set.

For example, CLS is Cumulative Layout Shift. Google tells us CLS under 0.1 is good and above 0.25 is bad. So why don’t we shoot for staying under 0.1?

Here we’ve got an alert set up so that if CLS creeps over 0.1, we’ll know about it. Maybe we accidentally added some new content to the site (ads?) that arrives after the page loads and pushes content around. Perhaps we’ve adjusted a layout in a way that makes things more shifty than they were. Perhaps we’ve updated our custom fonts such that when they load they cause shifting. If we’re alerted, we can fix it the moment we’re aware of it so the negative consequences don’t stick around.
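For reference, CLS is something you can observe in the browser yourself with a few lines of JavaScript, independent of any monitoring tool. A rough sketch using the Layout Instability API (this is essentially the data a RUM tool aggregates for you):

// Rough sketch: accumulate layout shift scores as they happen.
// Shifts that follow recent user input are excluded, per the CLS definition.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log('Current CLS:', cls.toFixed(3));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });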

Conclusion

For literally everything you measure that you know is important to you, there should be an alerting mechanism in place. For anything related to website performance or error tracking, Raygun has a perfect solution.



ct.css — Performance Hints via Injected Stylesheet Alone

This is some bona fide CSS trickery from Harry that gives you some generic performance advice based on what it sees in your <head> element.

First, it’s possible to make a <style> block visible like any other element by changing the display away from the default of none. It’s a nice little trick. You can even do that for things in the <head>, for example…

head,
head style,
head script {
  display: block;
}

From there, Harry gets very clever with selectors, determining problematic situations from the usage and placement of certain tags. For example, say there is a <script> that comes after some styles…

<head>
  <link rel="stylesheet" href="...">
  <script src="..."></script>
  <title>Page Title</title>
  <!-- ... -->

Well, that’s bad, because the script is blocked by CSS likely unnecessarily. Perhaps some sophisticated performance tooling software could tell you that. But you know what else can? A CSS selector!

head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script, head style:not(:empty) ~ script {  }

That’s kinda like saying head link ~ script, but a little fancier in that it only selects actual stylesheets or style blocks that are truly blocking (and not itself). Harry then applies styling and pseudo-content to the blocks so you can use the stylesheet as a visual performance debugging tool.

That’s just darn clever, that. The stylesheet has loads of little things you can test for, like attributes you don’t need, blocking resources, and elements that are out of order.
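If you want to kick the tires on a page you’re working on, one low-effort way is to inject the stylesheet from the DevTools console. A sketch — the URL here is an assumption, so grab the real stylesheet (or bookmarklet) from Harry’s project page:

// Sketch: inject ct.css into the current page for a quick visual audit.
// The href below is assumed — use the stylesheet from the ct.css project.
const ct = document.createElement('link');
ct.rel = 'stylesheet';
ct.className = 'ct'; // the selectors exclude the tool's own stylesheet via :not(.ct)
ct.href = 'https://csswizardry.com/ct/ct.css';
document.head.appendChild(ct);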


Links on Performance IV



Links on Performance

  • Making GitHub’s new homepage fast and performant — Tobias Ahlin describes how the scrolling effects are done more performantly thanks to IntersectionObserver (there’s a minimal sketch of that approach after this list) and the fact that it avoids methods that trigger reflows, like getBoundingClientRect. Also, WebP + SVG masks!
  • Everything we know about Core Web Vitals and SEO — Simon Hearne covers why everyone is so obsessed with CWV right now: SEO. Simon says something I’ve heard a couple of times: The Page Experience Update is more of a carrot approach than stick — there is no direct penalty for failing to meet Google’s goals. That is, you aren’t penalized for poor CWV, but are given a bonus for good numbers. But if everyone around you is getting that bonus except you, isn’t that the same as a penalty?
  • Setting up Cloudflare Workers for web performance optimisation and testing — Matt Hobbs starts with a 101 intro on setting up a Cloudflare Worker, using it to intercept a CSS file and replace all the font-family declarations with Comic Sans. Maybe that will open your eyes to the possibilities: if you can manipulate all assets like HTML, CSS, and JavaScript, you can force those things into doing more performant things.
  • Now THAT’S What I Call Service Worker! — Jeremy Wagner sets up a “Streaming” Service Worker that caches common partials on a website (e.g. the header and footer) such that the people of Waushara County, Wisconsin, who have slow internet can load the site somewhere in the vicinity of twice as fast. This is building upon Philip Walton’s “Smaller HTML Payloads with Service Workers” article.
  • Who has the fastest F1 website in 2021? — Jake Archibald’s epic going-on-10-part series analyzing the performance of F1 racing websites (oh, the irony). Looks like Red Bull is in the lead so far with Ferrari trailing. There is a lot to learn in all these, and it’s somewhat cathartic seeing funny bits like, Their site was slow because of a 1.8MB blocking script, but 1.7MB of that was an inlined 2300×2300 PNG of a horse that was only ever displayed at 20×20. Also, I don’t think I knew that Jake was the original builder of Sprite Cow! (Don’t use that because it turns out that sprites are bad.)
  • Real-world CSS vs. CSS-in-JS performance comparison — Tomas Pustelnik looks at the performance implications of CSS-in-JS. Or, as I like to point out: CSS-in-React, as that’s always what it is since all the other big JavaScript frameworks have their own blessed styling solutions. Tomas didn’t compare styled-components to hand-written vanilla CSS, but to Linaria, which I would think most people still think of as CSS-in-JS — except that instead of bundling the styles in JavaScript, it outputs CSS. I agree that, whatever a styling library does for DX, producing CSS seems like the way to go for production. Yet another reason I like css-modules. Newer-fancier libs are doing it too.
  • The Case of the 50ms request — Julia Evans put together this interactive puzzle for trying to figure out why a server request is taking longer than it should. More of a back-end thing than front-end, but the troubleshooting steps feel familiar. Try it on your machine, try it on my machine, see what the server is doing, etc.
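As promised above, here’s a minimal sketch of the IntersectionObserver approach from the first link: rather than reading getBoundingClientRect() in a scroll handler (which forces layout), you let the browser tell you when elements enter the viewport. The .js-animate selector and is-visible class are hypothetical names for your own markup and CSS.

// Minimal sketch: reveal elements as they scroll into view without layout thrash.
// '.js-animate' and 'is-visible' are hypothetical names.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('is-visible');
      observer.unobserve(entry.target); // reveal once, then stop watching
    }
  }
}, { threshold: 0.2 });

document.querySelectorAll('.js-animate').forEach((el) => observer.observe(el));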


How to Improve CSS Performance

There is no doubt that CSS plays a huge role in web performance. Milica Mihajlija puts a point on exactly why:

When there is CSS available for a page, whether it’s inline or an external stylesheet, the browser delays rendering until the CSS is parsed. This is because pages without CSS are often unusable.

The browser has to wait until the CSS is both downloaded and parsed to show us that first rendering of the page; otherwise, browsing the web would be a terribly jerky visual experience. We’d probably write JavaScript to delay page rendering on purpose if that’s how the native web worked.

So how do you improve it? The classics like caching, minification, and compression help. But also, shipping less of it, and only loading the bit you need and the rest after the first render.

It’s entirely about how and how much CSS you load, and has very little to do with the contents of the CSS.
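As a concrete illustration of “loading the bit you need and the rest after the first render,” here’s one common pattern sketched in JavaScript: inline or preload the critical CSS, then append the rest once the page has loaded. The file path is hypothetical.

// Sketch: append non-critical CSS after first render.
// '/css/non-critical.css' is a hypothetical path; critical styles are assumed
// to be inlined in the <head> so the first paint isn't blocked.
window.addEventListener('load', () => {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = '/css/non-critical.css';
  document.head.appendChild(link);
});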


The Mobile Performance Inequality Gap

Alex Russell made some interesting notes about performance and how it impacts folks on mobile:

[…] CPUs are not improving fast enough to cope with frontend engineers’ rosy resource assumptions. If there is unambiguously good news on the tooling front, multiple popular tools now include options to prevent sending first-party JS in the first place (Next.js, Gatsby), though the JS community remains in stubborn denial about the costs of client-side script. Hopefully, toolchain progress of this sort can provide a more accessible bridge as we transition costs to a reduced-script-emissions world.

A lot of the stuff I read when it comes to performance is focused on America, but what I like about Russell’s take here is that he looks at a host of other countries such as India, too. But how does the rollout of 5G networks impact performance around the world? Well, we should be skeptical of how improved networks impact our work. Alex argues:

5G looks set to continue a bumpy rollout for the next half-decade. Carriers make different frequency band choices in different geographies, and 5G performance is heavily sensitive to mast density, which will add confusion for years to come. Suffice to say, 5G isn’t here yet, even if wealthy users in a few geographies come to think of it as “normal” far ahead of worldwide deployment.

This is something I try to keep in mind whenever I’m thinking about performance: how I’m viewing my website is most likely not how other folks are viewing it.


Some Performance Blog Posts I’ve Bookmarked and Read Lately

  • Back/forward cache — I always assumed browsers just do fancy stuff with the back/forward buttons and we developers had very little control. Philip Walton tells us it’s critical that we understand “what makes pages eligible (and ineligible) for bfcache to maximize their cache-hit rates.” For example, if you use the unload event, the page is instantly disqualified from the cache (see the sketch after this list).
  • Big picture performance analysis using Lighthouse Parade — Lighthouse only tests one page of your site. Lighthouse Parade tests all the URLs of a site, and aggregates the results.
  • Beyond fast with new performance features — Jake Archibald gets into the CSS content-visibility property (and a few other things) and how it can lead to incredible performance boosts (you use it to tell the browser that it’s straight-up OK not to render things). Right this minute, content-visibility makes me nervous as it has issues with scrollbar jankiness and accessibility problems. I found it a smidge confusing at first glance, and Tim Kadlec has reservations.
  • Image Decode & Visual Timings — Image performance isn’t only about the size of the image. Different formats take different amounts of time to decode and render. GIF never wins.
  • How to increase CSS-in-JS performance by 175x — The trick, readers, is shipping CSS. You can still use CSS-in-JS as you author, and have the build process create the CSS. They call that “Zero-Runtime” like Linaria.
  • Testing Performance — Kelly Sutton: “The best approach that I have found to preventing performance regressions is to employ qualitative assessments of the code.” Performance is such an unwieldy beast, that only in production will you truly know what happens.
  • Front-End Performance Checklist 2021 — If you’re going to get serious about performance this year, you’d do well to dig into Vitaly’s guide.
  • We rendered a million web pages to find out what makes the web slow — HTTP/2 is a huge indicator of good performance.
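Here’s the bfcache sketch mentioned in the first bullet: prefer pagehide/pageshow over unload so the page stays eligible for the back/forward cache. The analytics endpoint is hypothetical.

// Sketch: bfcache-friendly lifecycle handling. Using 'unload' would make the
// page ineligible; 'pagehide' and 'pageshow' keep it cacheable.
// The '/analytics' endpoint is hypothetical.
window.addEventListener('pagehide', (event) => {
  // event.persisted is true when the page may be restored from the bfcache
  navigator.sendBeacon('/analytics', JSON.stringify({ event: 'pagehide', persisted: event.persisted }));
});

window.addEventListener('pageshow', (event) => {
  if (event.persisted) {
    // Restored from the bfcache — refresh anything time-sensitive here
    console.log('Restored from back/forward cache');
  }
});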


Continuous Performance Analysis with Lighthouse CI and GitHub Actions

Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.

If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.

All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing into the continuous integration process so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!

It is a suite of tools that helps you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how it has changed over time.

In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer not to use GitHub Actions.

Setting up the Lighthouse CI locally

In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:

$  npm install -g @lhci/cli

Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.

$ lhci --help
lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file

At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environmental variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.

Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environmental variables instead.

touch lighthouserc.js

Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.

If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. These URLs should be servable by the custom server that you specified.

module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine if the server has started. If it does not detect this string after 10 seconds, it assumes the server has started and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.

You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.

At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.

lhci autorun

The output should be similar to what is shown below:

✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.

The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub action. You can open the Lighthouse report link in your browser to view the median test results for each URL.

Configuring assertions

Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not meet certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:

// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};

The preset option is a quick way to configure Lighthouse assertions. There are three options:

  • lighthouse:all: Asserts that every audit received a perfect score
  • lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
  • lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits

You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that failure in the former will result in a non-zero exit code while the latter will not. The result of any audit in Lighthouse can be asserted, so there’s a lot you can do here. Be sure to consult the documentation to discover all of the available options.
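For example, layering a couple of audit-level assertions on top of a preset might look something like the snippet below. The specific audits and numbers are illustrative; check the assertion documentation for what’s available.

// lighthouserc.js — illustrative assertions layered on top of a preset
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'first-contentful-paint': ['warn', { maxNumericValue: 2000 }], // milliseconds
        'uses-responsive-images': 'off', // silence an audit you've decided to ignore
      },
    },
  },
};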

You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
      url: ['/'],
    },
    assert: {
      budgetFile: './budget.json',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
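For reference, a budget.json file follows Lighthouse’s budgets format: an array of budgets, each scoped to a URL path pattern, with timing and resource-size limits. A minimal, illustrative example — the numbers are arbitrary; sizes are in kilobytes and timings in milliseconds:

[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ]
  }
]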

Running Lighthouse CI with GitHub Actions

A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.

To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.

mkdir -p .github/workflows

Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.

cd .github/workflows
touch lighthouse-ci.yaml

The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I’ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:

# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true

      - name: Build site
        run: hugo

      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x

      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun

The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:

  • Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secrets. Without this token, this step will fail if it encounters a private repo.
  • Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
  • Build the site to a public folder through the hugo command.
  • Install and configure Node.js on the virtual machine through the setup-node action
  • Install the Lighthouse CI tool and execute the lhci autorun command.

Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.

If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.

Setting up GitHub status checks

At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.

You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.

The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.

Historical reporting and comparisons through the Lighthouse CI Server

Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.

To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.
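Once a server instance is up, pointing the CLI at it is mostly a matter of switching the upload target in lighthouserc.js. A sketch — the server URL is a placeholder for your own deployment, and the build token is the one the lhci wizard command creates for the project:

// lighthouserc.js — upload results to a self-hosted Lighthouse CI server
// instead of temporary public storage. The serverBaseUrl is a placeholder.
module.exports = {
  ci: {
    upload: {
      target: 'lhci',
      serverBaseUrl: 'https://lhci.example.com',
      token: process.env.LHCI_BUILD_TOKEN, // build token from `lhci wizard`
    },
  },
};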

Conclusion

When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that are representative of the type of content on the site, and any other important pages.

Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!



Web Performance Calendar

The Web Performance Calendar just started up again this year. The first two posts so far are about, well, performance! First up, Rick Viscomi writes about the mythical “fast” web page:

How you approach measuring a web page’s performance can tell you whether it’s built for speed or whether it feels fast. We call them lab and field tools. Lab tools are the microscopes that inspect a page for all possible points of friction. Field tools are the binoculars that give you an overview of how users are experiencing the page.

This to me suggests that field tools are the future of performance monitoring. But Rick’s post goes into a lot more depth beyond that.

Secondly, Matt Hobbs wrote about the font-display CSS property and how it affects the performance of our sites:

If you’re purely talking about perceived performance and connection speed isn’t an issue then I’d say font-display: swap is best. When used, this setting renders the fallback font almost instantly. Allowing a user to start reading page content while the webfont continues to load in the background. Once loaded the font is swapped in accordingly. On a fast, stable connection this usually happens very quickly.

My recommendation here would be to care deeply about the layout shift if you use this property. Once the font loads and is swapped in, you could get a big shift, with text moving all over the place. I once shipped a change to a site with this property without minding the layout shift, and users complained a lot.

It was a good lesson learned: folks sure care about performance even if they don’t say that out loud. They care deeply about “cumulative layout shift” too.
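If you do reach for font-display: swap, one way to keep the swap from shoving text around is the classic font-loading-class pattern, sketched here with the CSS Font Loading API. The font name and class are hypothetical, and your CSS would only apply the webfont when the class is present.

// Sketch: switch to the webfont only once it has actually loaded, so the swap
// happens in one controlled step. "Example Sans" and 'fonts-loaded' are
// hypothetical; scope your webfont rules to .fonts-loaded in the CSS.
if ('fonts' in document) {
  document.fonts.load('1em "Example Sans"').then(() => {
    document.documentElement.classList.add('fonts-loaded');
  });
}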
