Tag: Performance

We need more inclusive web performance metrics

Scott Jehl argues that performance metrics such as First Contentful Paint and Largest Contentful Paint don’t really capture the full picture of everyone’s experience with websites:

These metrics are often touted as measures of usability or meaning, but they are not necessarily meaningful for everyone. In particular, users relying on assistive technology (such as a screenreader) may not perceive steps in the page loading process until after the DOM is complete, or even later depending on how JavaScript may block that process. Also, a page may not be usable to A.T. until it becomes fully interactive, since many applications often deliver accessible interactivity via external JavaScript.

Scott then jots down some thoughts on how we might do that. I think this is always so very useful to keep in mind: what we experience on our own sites, and what we measure, might not be the full picture.


Some Performance Links

Just had a couple of good performance links burning a hole in my pocket, so blogging them like a good little blogger.

Web Performance Recipes With Puppeteer

Puppeteer is a Node library for spinning up a copy of Chrome “headlessly” (i.e. no UI) and controlling it. People use it for stuff like taking a screenshot of a website or running integration tests. You can even run it in a Lambda.

Another use case is running synthetic (i.e. not based on real users) performance tests, like some of the new Core Web Vitals.

Addy Osmani lists out a bunch of these “recipes” for measuring certain performance things in Puppeteer. These would be super useful as part of a build process alongside other tests. Did the unit tests pass? Did the integration tests pass? Did the accessibility tests pass? Did the performance metrics tests pass?
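For a taste of what these look like, here’s a minimal sketch in the same spirit (not one of Addy’s recipes verbatim, and the URL is a placeholder): spin up headless Chrome, load a page, and pull First Contentful Paint off the Paint Timing API.

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // Paint Timing entries live on the page's performance timeline.
  const fcp = await page.evaluate(() => {
    const [entry] = performance.getEntriesByName('first-contentful-paint');
    return entry ? entry.startTime : null;
  });

  console.log(`First Contentful Paint: ${fcp} ms`);
  await browser.close();
})();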


BrowserStack SpeedLab

BrowserStack released a thing to measure your site and give you a performance score.

You get the tests back super quick, which is cool. I can see how tools like this are good for starting conversations with teams about improving performance.

But… that number seems a little weird. They don’t exactly document how it’s calculated, but it seems to be based on stuff like Time to First Byte (TTFB) and the page load event, which aren’t particularly useful performance metrics.

It’s not bad that this tool exists or anything, but I don’t think it’s for practitioners doing performance work.


5 Common Mistakes Teams Make When Tracking Performance

Karolina Szczur from Calibre documents some common team struggles, like being able to tell real issues apart from variability noise.

Many people from different backgrounds can view performance dashboards. Not knowing what constitutes a meaningful change that needs investigation can result in false positives, lack of trust in monitoring and cycles spent looking for reasons for performance regressions or upgrades that aren’t there.
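To illustrate the problem (my own sketch, not from Calibre’s post): one simple guardrail is to only flag a build when its metric falls outside the normal variability of recent builds.

// Flag a regression only when the latest measurement sits more than two
// standard deviations above the mean of recent builds.
function isRegression(history, latest) {
  const mean = history.reduce((sum, v) => sum + v, 0) / history.length;
  const variance = history.reduce((sum, v) => sum + (v - mean) ** 2, 0) / history.length;
  return latest > mean + 2 * Math.sqrt(variance);
}

// e.g. LCP samples (in ms) from the last ten builds:
isRegression([2100, 2180, 2050, 2230, 2120, 2090, 2170, 2060, 2140, 2110], 2600); // true
isRegression([2100, 2180, 2050, 2230, 2120, 2090, 2170, 2060, 2140, 2110], 2200); // false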


Are your JavaScript long tasks frustrating users?

50ms. That’s how long a JavaScript task can run before it starts affecting the user experience. Might as well track and (ideally) fix them.

When the browser’s main thread hits max CPU for more than 50ms, a user starts to notice that their clicks are delayed and that scrolling the page has become janky and unresponsive. Batteries drain faster. People rage click or go elsewhere.
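Tracking them is pretty approachable these days via the Long Tasks API. A minimal sketch:

// Log every main-thread task that blocked for longer than 50ms so it can
// be reported to analytics and investigated later.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)} ms`, entry);
  }
});
observer.observe({ entryTypes: ['longtask'] });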


Analyzing Notion app performance

Here’s a fantastic case study where Ivan Akulov looks at the rather popular writing app Notion and how the team might improve its performance in a variety of ways: code splitting, removing unused vendor code, module concatenation, and deferring JavaScript execution. Not so long ago, we made a list for getting started with web performance, but this article goes so much further into the app side of things: making sure users load only the JavaScript they need, as quickly as possible.
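As one tiny example of that last idea, deferring execution can be as simple as moving non-critical setup work off the loading path. This is a sketch of the general technique, not Ivan’s actual code (initAnalytics is a made-up name):

// Run non-critical initialization when the main thread is idle, so it
// doesn't compete with rendering and interactivity during page load.
function initAnalytics() {
  // ...non-critical work, like wiring up tracking
}

if ('requestIdleCallback' in window) {
  requestIdleCallback(initAnalytics);
} else {
  setTimeout(initAnalytics, 2000); // crude fallback
}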

I love that this piece doesn’t feel like dunking on the Notion team or bragging about how Ivan might do things better. There’s always room for improvement, and constructive feedback is better than guilting someone into it. Yay for making things fast while being nice about it!


Web Performance Checklist

The other day, I realized that web performance is an enormous topic covering so very much — from minimizing assets to using certain file formats, it can be an awful lot to keep in mind while building a website. It’s certainly far too much for me to remember!

So I made a web performance checklist. It’s a Notion doc that I can fork and use to mark completed items whenever I start a new project. It also contains a bunch of links for references.

This doc is still a work in progress. Any recommendations or links? Feel free to suggest something in the comments below!


Maintaining Performance

Real talk from Dave:

I, Dave Rupert, a person who cares about web performance, a person who reads web performance blogs, a person who spends lots of hours trying to keep up on best practices, a person who co-hosts a weekly podcast about making websites and speak with web performance professionals… somehow goofed and added 33 SECONDS to their page load.

This stuff is hard even when you care a lot. The 33 seconds came from font preloading rather than the one-line wonder of font-display.
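For reference, the one-liner lives inside an @font-face rule (a generic sketch; the font name and path are placeholders):

@font-face {
  font-family: "Example Sans"; /* placeholder */
  src: url("/fonts/example-sans.woff2") format("woff2");
  /* Show fallback text immediately, then swap in the web font when it loads */
  font-display: swap;
}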

I also care about making fast websites, but mine aren’t winning any speed awards because I’ll take practical and maintainable over peak performance any day. (Sorry, world)


Performance Links

I’ve had a number of browser tabs open to articles all related to web performance and gosh darn it, blogging them is a way for me to get some closure. They are all good!

Manuel Matuzovic, Why 543 KB keep me up at night:

Yes, I know, it depends. 543 KB aren’t always bad, but on that specific page there’s only a single image (the logo ~20 KB) and a single paragraph. So why then is the page still relatively large, where are the remaining 523 KB coming from?

Spoiler: it was the JavaScript. Also, I had no idea Google has a recommended ideal DOM (there’s a quick console check sketched after this list) that:

  • has less than 1500 nodes total.
  • has a maximum depth of 32 nodes.
  • has no parent node with more than 60 child nodes.
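Here’s that quick DevTools console sketch (mine, not Google’s tooling) for checking a page against those numbers:

const nodes = [...document.querySelectorAll('*')];
const depth = (el) => {
  let d = 0;
  while ((el = el.parentElement)) d++;
  return d;
};
const maxOf = (values) => values.reduce((max, v) => Math.max(max, v), 0);

console.log('Total nodes:', nodes.length); // ideally under 1500
console.log('Max depth:', maxOf(nodes.map(depth))); // ideally 32 or less
console.log('Max children:', maxOf(nodes.map((n) => n.children.length))); // ideally 60 or less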

Next up, Performant front-end architecture (no byline):

Bundle splitting will result in more requests being made to load your app. But as long as the requests are made in parallel that’s not a big problem, especially if your site is served over HTTP/2.

This is all assuming the app is largely a client-side JavaScript site. I think there is a huge pile of low-hanging performance fruit, but it’s almost like a different list when talking about client-side JavaScript sites. It makes code-splitting one of the top priorities.
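A minimal sketch of that kind of code-splitting (generic, not from the article; the chart module is made up): a dynamic import() tells the bundler to carve a dependency into its own chunk, which the browser only fetches when it’s actually needed.

// chart.js becomes its own chunk; the browser requests it (in parallel
// with anything else, nicely over HTTP/2) the first time this runs.
async function openChart(container) {
  const { drawChart } = await import('./chart.js');
  drawChart(container);
}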


Jeremy Keith, Telling the story of performance:

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

WPT outputs video of the site loading. Put it side by side with a competitor and show it to the client.


CP Clermont, The Impact of Web Performance:

In this post, I’ll discuss what I did at ALDO to measure the revenue impact of web performance without having to spend time making performance improvements.

Not surprising that users with faster experiences generate more revenue. What is surprising is that it’s a lot more. Over 3x more on mobile and nearly 6x more on desktop.


In-Browser Performance Linting With Feature Policies

Here’s a neat idea from Tim Kadlec. He uses the ModHeader extension to toggle custom headers in his browser, which lets him see when images are too big and need to be optimized. This is a great way to catch issues like that in a local environment, because the browser will throw an error and refuse to display the offending images at all!

As Tim mentions, the trick is the Feature-Policy header with the oversized-images policy, which he toggles on like this:

Feature-Policy: oversized-images 'none';

Tim writes:

By default, if you provide the browser an image in a format it supports, it will display it. It even helpfully scales those images so they look great, even if you’ve provided a massive file. Because of this, it’s not immediately obvious when you’ve provided an image that is larger than the site needs.

The oversized-images policy tells the browser not to allow any images that are more than some predefined factor of their container size. The recommended default threshold is 2x, but you are able to override that if you would like.
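If you’d rather bake that into a local dev server instead of toggling a browser extension, here’s a minimal Express sketch (my assumption, not from Tim’s post):

const express = require('express');
const app = express();

// Send the Feature-Policy header on every response so the browser refuses
// to render oversized images during local development.
app.use((req, res, next) => {
  res.setHeader('Feature-Policy', "oversized-images 'none'");
  next();
});

app.use(express.static('public'));
app.listen(3000);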

I love this idea of using the browser to do linting work for us! I wonder what other ways we could use the browser to place guard rails around our work to prevent future mistakes…


The Unseen Performance Costs of Modern CSS-in-JS Libraries

This article is full of a bunch of data from Aggelos Arvanitakis. But lemme just focus on his final bit of advice:

Investigate whether a zero-runtime CSS-in-JS library can work for your project. Sometimes we tend to prefer writing CSS in JS for the DX (developer experience) it offers, without a need to have access to an extended JS API. If your app doesn’t need support for theming and doesn’t make heavy and complex use of the css prop, then a zero-runtime CSS-in-JS library might be a good candidate.

“Zero-runtime” meaning you author your styles in a CSS-in-JS syntax, but what comes out is .css files, like any other CSS preprocessor would produce. This shifts the tool into a totally different category. It’s a developer tool only, rather than a tool where the user of the website pays the price of using it.

The flagship zero-runtime CSS-in-JS library is Linaria. I think the syntax looks really nice.

import { css } from 'linaria';
import fonts from './fonts';

const header = css`
  text-transform: uppercase;
  font-family: ${fonts.heading};
`;

<h1 className={header}>Hello world</h1>;

I’m also a fan of the “just do scoping for me” ability that CSS Modules brings, which can be done zero-runtime style.
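A minimal sketch of how that looks in practice (generic; the file names are made up): the bundler scopes the class names at build time and hands them to you as an object, with zero runtime cost.

// button.module.css contains:  .primary { background: rebeccapurple; }
// The bundler compiles it into a plain .css file with a hashed class name.
import styles from './button.module.css';

document.querySelector('button').className = styles.primary;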



Bundling JavaScript for Performance: Best Practices

Advice from David Calhoun on how many script files to load on a page for best performance:

[…] some of your vendor dependencies probably change slower than others. react and react-dom probably change the slowest, and their versions are always paired together, so they both form a logical chunk that can be kept separate from other faster-changing vendor code:

<!-- index.html -->
<script src="vendor.react.[hash].min.js"></script>
<script src="vendor.others.[hash].min.js"></script>
<script src="index.[hash].min.js"></script>
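One way to get chunks like those out of webpack is with splitChunks cache groups. This is a sketch under assumptions, not David’s actual config:

// webpack.config.js (excerpt): put react/react-dom in their own long-lived
// vendor chunk, and the rest of node_modules in another.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        react: {
          test: /[\\/]node_modules[\\/](react|react-dom)[\\/]/,
          name: 'vendor.react',
        },
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendor.others',
          priority: -10,
        },
      },
    },
  },
};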

Funny how times haven’t changed that much! Me, in 2012, talking about how many CSS files need to be loaded on any given page: One, Two, or Three. I split it into global, section-specific, and page-specific, so it was less about third-party code, although that could certainly apply, too.
