Scott then jots down some thoughts on how we might do that. I think this is always so very useful to keep in mind: what we experience on our site, and what we measure too, might not be the full picture.
Puppeteer is a Node library for spinning up a copy of Chrome “headlessly” (i.e. no UI) and controlling it. People use it for stuff like taking a screenshot of a website or running integration tests. You can even run it in a Lambda.
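The canonical getting-started example is only a few lines. Here's a minimal sketch (the URL and file name are just placeholders):

```js
const puppeteer = require('puppeteer');

(async () => {
  // Launch headless Chrome, take a screenshot, then shut it down
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();
```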
Another use case is running synthetic (i.e. not based on real users) performance tests, like some of the new Core Web Vitals.
Addy Osmani lists out a bunch of these “recipes” for measuring certain performance things in Puppeteer. These would be super useful as part of a build process alongside other tests. Did the unit tests pass? Did the integration tests pass? Did the accessibility tests pass? Did the performance metrics tests pass?
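Any of those checks could be a small script in CI. Here's a rough sketch in the spirit of those recipes (not copied from them) that pulls First Contentful Paint out of the page and fails the build if it blows a made-up budget:

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // First Contentful Paint shows up in the Performance Timeline as a "paint" entry
  const fcp = await page.evaluate(() => {
    const [entry] = performance.getEntriesByName('first-contentful-paint');
    return entry ? entry.startTime : null;
  });

  // The 2000ms budget is arbitrary, just for the sake of the example
  if (fcp === null || fcp > 2000) {
    console.error(`FCP budget blown: ${fcp}ms`);
    process.exitCode = 1;
  } else {
    console.log(`First Contentful Paint: ${Math.round(fcp)}ms`);
  }

  await browser.close();
})();
```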
BrowserStack released a thing to measure your site and give you a performance score.
You get the results back super quick, which is cool. I can see how tools like this are good for starting conversations with teams about improving performance.
But… that number seems a little weird. They don’t exactly document how it’s calculated, but it seems to be based on stuff like Time to First Byte (TTFB) and the page load event, which aren’t particularly useful performance metrics.
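For context, those are the sort of numbers you can read straight off the Navigation Timing API after a page loads, which is part of why they're so easy to report and so limited on their own:

```js
// Run in the console after the load event has fired
const [nav] = performance.getEntriesByType('navigation');
console.log(`TTFB: ${Math.round(nav.responseStart)}ms`);
console.log(`Load event finished at: ${Math.round(nav.loadEventEnd)}ms`);
```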
It’s not bad that this tool exists or anything, but I don’t think it’s for practitioners doing performance work.
Karolina Szczur from Calibre documents some common team struggles, like being able to tell real issues apart from variability noise.
Many people from different backgrounds can view performance dashboards. Not knowing what constitutes a meaningful change that needs investigation can result in false positives, lack of trust in monitoring and cycles spent looking for reasons for performance regressions or upgrades that aren’t there.
When the browser’s main thread hits max CPU for more than 50ms, a user starts to notice that their clicks are delayed and that scrolling the page has become janky and unresponsive. Batteries drain faster. People rage click or go elsewhere.
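If you want to see those 50ms+ stalls on your own site, the Long Tasks API will surface them. A minimal sketch you could paste into the console (the logging is just for illustration):

```js
// Report main-thread tasks that run longer than 50ms
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
});
observer.observe({ type: 'longtask', buffered: true });
```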
I love that this piece just doesn’t feel like dunking on the Notion team or bragging about how Ivan might do things better. There’s always room for improvement, and constructive feedback is better than guilting someone into it. Yay for making things fast while being nice about it!
The other day, I realized that web performance is an enormous topic covering so very much — from minimizing assets to using certain file formats, it can be an awful lot to keep in mind while building a website. It’s certainly far too much for me to remember!
So I made a web performance checklist. It’s a Notion doc that I can fork and use to mark completed items whenever I start a new project. It also contains a bunch of links for references.
This doc is still a work in progress. Any recommendations or links? Feel free to suggest something in the comments below!
I, Dave Rupert, a person who cares about web performance, a person who reads web performance blogs, a person who spends lots of hours trying to keep up on best practices, a person who co-hosts a weekly podcast about making websites and speaks with web performance professionals… somehow goofed and added 33 SECONDS to their page load.
This stuff is hard even when you care a lot. The 33 seconds came from font preloading rather than the one-line wonder of font-display.
I also care about making fast websites, but mine aren’t winning any speed awards because I’ll take practical and maintainable over peak performance any day. (Sorry, world)
Yes, I know, it depends. 543 KB isn’t always bad, but on that specific page there’s only a single image (the logo, ~20 KB) and a single paragraph. So why is the page still relatively large? Where are the remaining 523 KB coming from?
Bundle splitting will result in more requests being made to load your app. But as long as the requests are made in parallel, that’s not a big problem, especially if your site is served over HTTP/2.
Here’s a neat idea from Tim Kadlec. He uses the ModHeader extension to toggle custom headers in his browser, and one of those headers lets him see when images are too big and need to be optimized. It’s a great way to catch issues like this in a local environment, because the browser throws an error and won’t display the offending images at all!
As Tim mentions, the trick is the Feature-Policy header with the oversized-images policy, which he toggles on like this:
Feature-Policy: oversized-images 'none';
By default, if you provide the browser an image in a format it supports, it will display it. It even helpfully scales those images so they look great, even if you’ve provided a massive file. Because of this, it’s not immediately obvious when you’ve provided an image that is larger than the site needs.
The oversized-images policy tells the browser not to allow any images that are more than some predefined factor of their container size. The recommended default threshold is 2x, but you can override that if you’d like.
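If you’d rather not rely on a browser extension, you could also have your local dev server send the header on every response. Here’s a minimal sketch using Express (the setup and port are assumptions for illustration, not something from Tim’s post):

```js
const express = require('express');
const app = express();

// Send the Feature-Policy header on every response so oversized images fail loudly
app.use((req, res, next) => {
  res.setHeader('Feature-Policy', "oversized-images 'none'");
  next();
});

app.use(express.static('public'));

app.listen(3000, () => {
  console.log('Dev server with image guard rails at http://localhost:3000');
});
```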
I love this idea of using the browser to do linting work for us! I wonder what other ways we could use the browser to place guard rails around our work to prevent future mistakes…
This article is full of a bunch of data from Aggelos Arvanitakis. But lemme just focus on his final bit of advice:
Investigate whether a zero-runtime CSS-in-JS library can work for your project. Sometimes we tend to prefer writing CSS in JS for the DX (developer experience) it offers, without a need to have access to an extended JS API. If your app doesn’t need support for theming and doesn’t make heavy and complex use of the css prop, then a zero-runtime CSS-in-JS library might be a good candidate.
“Zero-runtime” meaning you author your styles in a CSS-in-JS syntax, but what comes out is .css files, just like any other CSS preprocessor would produce. That shifts the tool into a totally different category: it’s a developer tool only, rather than a tool where the user of the website pays the price of using it.
The flagship zero-runtime CSS-in-JS library is Linaria. I think the syntax looks really nice.
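For a taste of that syntax, here’s roughly what authoring with Linaria looks like (a hedged sketch; the exact import paths differ between Linaria versions, so check the docs for the one you’re using):

```js
import { css } from 'linaria';
import { styled } from 'linaria/react';

// Authored like CSS-in-JS, but extracted to a real .css file at build time
const title = css`
  font-size: 2rem;
  font-weight: 700;
`;

const Button = styled.button`
  padding: 0.5rem 1rem;
  background: papayawhip;
`;

export { title, Button };
```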
Advice from David Calhoun on how many scripts to load on a page for the best performance:
[…] some of your vendor dependencies probably change slower than others. react and react-dom probably change the slowest, and their versions are always paired together, so they both form a logical chunk that can be kept separate from other faster-changing vendor code:
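In webpack terms, that usually translates to a cacheGroups entry along these lines (a rough sketch, not David’s actual config):

```js
// webpack.config.js (partial): give react/react-dom their own long-cached chunk
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        react: {
          test: /[\\/]node_modules[\\/](react|react-dom)[\\/]/,
          name: 'react',
          chunks: 'all',
        },
      },
    },
  },
};
```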
Funny how times haven’t changed that much! Me, in 2012, talking about how many CSS files need to be loaded on any given page: One, Two, or Three. I split it into global, section-specific, and page-specific, so it was less about third-party code, although that could certainly apply, too.