Tag: Performance

The truth about CSS selector performance

Geez, leave it to Patrick Brosset to talk CSS performance in the most approachable and practical way possible. Not that CSS is always what’s gunking up the speed, or even the lowest hanging fruit when it comes to improving performance.

But if you’re looking for gains on the CSS side of things, Patrick has a nice way of sniffing out your most expensive selectors using Edge DevTools:

  • Crack open DevTools.
  • Head to the Performance Tab.
  • Make sure you have the “Enable advanced rendering instrumentation” option enabled. This tripped me up in the process.
  • Record a page load.
  • Open up the “Bottom-Up” tab in the report.
  • Check out the size of your recalculated styles.

DevTools with Performance tab open and a summary of events.

From here, click on one of the Recalculated Style events in the Main waterfall view and you’ll get a new “Selector Stats” tab. Look at all that gooey goodness!

Now you see all of the selectors that were processed, and they can be sorted by how long they took, how many times they matched, the number of matching attempts, and something called “fast reject count,” which I learned is the number of elements that were easy and quick to eliminate from matching.
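To get a feel for why “fast reject” matters, here’s a toy sketch (not actual engine code — the data shapes and names are made up for illustration) of how right-to-left selector matching can cheaply discard elements before doing any expensive ancestor walking:

```javascript
// Illustrative sketch of matching a selector like ".sidebar .item":
// engines check the element's own (rightmost) part first, and only
// walk ancestors for elements that survive that cheap check.
function runSelector(elements, keyClass, ancestorClass) {
  let matched = 0;
  let fastRejected = 0;
  for (const el of elements) {
    if (!el.classes.includes(keyClass)) {
      fastRejected++; // cheap test on the element itself; no tree walk
      continue;
    }
    // the expensive part: walk up looking for a matching ancestor
    let node = el.parent;
    while (node && !node.classes.includes(ancestorClass)) node = node.parent;
    if (node) matched++;
  }
  return { matched, fastRejected };
}

const sidebar = { classes: ["sidebar"], parent: null };
const elements = [
  { classes: ["item"], parent: sidebar },
  { classes: ["nav"], parent: sidebar },
  { classes: ["item"], parent: null },
];
console.log(runSelector(elements, "item", "sidebar")); // { matched: 1, fastRejected: 1 }
```

A selector whose rightmost part matches lots of elements forces many of those expensive walks, which is roughly what a high match-attempt count with a low fast-reject count is telling you.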

A lot of insights here if CSS is really the bottleneck that needs investigating. But read Patrick’s full post over on the Microsoft Edge Blog because he goes much deeper into the why’s and how’s, and walks through an entire case study.



The truth about CSS selector performance originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


7 Fresh Links on Performance For March 2022

I have a handful of good links to articles about performance that are burning a hole in my bookmarks folder, and wanna drop them here to share.

Screenshot of the new WebPageTest homepage, a tool for testing performance metrics.
The new WebPageTest website design


Subsetting Font Awesome to Improve Performance

Font Awesome is an incredibly popular icon library. Unfortunately, it’s somewhat easy to use in a way that results in less-than-ideal performance. By subsetting Font Awesome, we can remove any unused glyphs from the font files it provides. This will reduce the number of bytes transmitted over the wire, and improve performance.

Let’s subset fonts together in a Font Awesome project to see the difference it makes. As we go, I’ll assume you’re importing the CSS file Font Awesome provides, and using its web fonts to display icons.

Let’s set things up

For the sake of demonstration, I have nothing but an HTML file that imports Font Awesome’s base CSS file. To get a reasonable sample of icons, I’ve listed out each one that I use on one of my side projects.

Here’s what our HTML file looks like in the browser before subsetting fonts:

Screenshot of a webpage with 54 various icons in a single row.

Here’s a look at DevTools’ Network tab to see what’s coming down.

Screenshot of DevTools Network tab showing a stylesheet without font subsetting that weighs 33.4 kilobytes.

Now let’s see how many bytes our font files take to render all that.

Here’s our base case

We want to see what the most straightforward, least performant use of Font Awesome looks like. In other words, we want the slowest possible implementation with no optimization. I’m importing the all.min.css file Font Awesome provides.

As we saw above, the gzipped file weighs in at 33.4KB, which isn’t bad at all. Unfortunately, when we peek into DevTools’ Font tab, things get a little worse.

Screenshot of DevTools Font tab showing five loaded woff-2 files, ranging in size from 138 kilobytes to 185.
Yikes. 757KB just for font files. For 54 icons.

While font files are not as expensive a resource for your browser to handle as JavaScript, those are still bytes your browser needs to pull down, just for some little icons. Consider that some of your users might be browsing your site on mobile, away from a strong or fast internet connection.

First attempt using PurifyCSS

Font Awesome’s main stylesheet contains definitions for literally thousands of icons. But what if we only need a few dozen at most? Surely we could trim out the unneeded stuff?

There are many tools out there that will analyze your code, and remove unused styles from a stylesheet. I happen to be using PurifyCSS. While this library isn’t actively maintained anymore, the idea is the same, and in the end, this isn’t the solution we’re looking for. But let’s see what happens when we trim our CSS down to only what’s needed, which we can do with this script:

const purify = require("purify-css");

const content = ["./dist/**/*.js"]; // Vite-built content

purify(content, ["./css/fontawesome/css/all.css"], {
  minify: true,
  output: "./css/fontawesome/css/font-awesome-minimal-build.css"
});

And when we load that newly built CSS file, our CSS bytes over the wire drop quite a bit, from 33KB to just 7.1KB!

Screenshot of the DevTools Network tab showing a loaded stylesheet that is 7.1 kilobytes, thanks to removing unused CSS.

But unfortunately, our other Font Awesome font files are unchanged.

Screenshot of the DevTools Font tab showing five loaded font files.

What happened? PurifyCSS did its job. It indeed removed the CSS rules for all the unused icons. Unfortunately, it wasn’t capable of reaching into the actual font files to trim down the glyphs, in addition to the CSS rules.
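Conceptually, what we need is something that works at the glyph level: figure out which codepoints the page actually uses, then rebuild the font with only those. A rough sketch of that first step (illustrative names, not a real subsetter API):

```javascript
// Collect the unicode codepoints used by icon rules — e.g. CSS `content`
// values like "\f015", one glyph each. A subsetter keeps these glyphs
// and drops every other one from the font file.
function collectCodepoints(contentValues) {
  const codepoints = new Set();
  for (const value of contentValues) {
    for (const char of value) {
      codepoints.add(char.codePointAt(0).toString(16));
    }
  }
  return [...codepoints].sort();
}

// e.g. two icon `content` values pulled from the rules we actually use
console.log(collectCodepoints(["\uf015", "\uf073"])); // [ 'f015', 'f073' ]
```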

If only there was a tool like PurifyCSS that handles font files…

Subsetters to the rescue!

There are, of course, tools that are capable of removing unused content from font files, and they’re called subsetters. A subsetter analyzes your webpage, looks at your font files, and trims out the unused characters. There are a bunch of tools for subsetting fonts out there, like Zach Leatherman’s Glyphhanger. As it turns out, subsetting Font Awesome is pretty straightforward because it ships its own built-in subsetters. Let’s take a look.

Subsetting fonts automatically

The auto subsetting and manual subsetting tools I’m about to show you require a paid Font Awesome Pro subscription.

Font Awesome allows you to set up what it calls kits, which are described in the Font Awesome docs as a “knapsack that carries all the icons and awesomeness you need in a neat little lightweight bundle you can sling on the back of your project with ease.” So, rather than importing any and every CSS file, a kit gives you a single script tag you can add to your HTML file’s <head>, and from there, the kit only sends down the font glyphs you actually need from the font file.

Creating a kit takes about a minute. You’re handed a script tag that looks something like this:

<script src="https://kit.fontawesome.com/xyzabc.js" crossorigin="anonymous"></script>

When the script loads, we now have no CSS files at all, and the JavaScript file is a mere 4KB. Let’s look again at the DevTools Fonts tab to see which font files are loaded now that we’ve done some subsetting.

Screenshot of DevTools Font tab showing 24 loaded font files from subsetting Font Awesome with its auto subsetter.

We’ve gone from 757KB down to 331KB. That’s a more than 50% reduction. But we can still do better than that, especially if all we’re rendering is 54 icons. That’s where Font Awesome’s manual font subsetter comes into play.

Subsetting fonts manually

Wouldn’t it be nice if Font Awesome gave us a tool to literally pick the exact icons we wanted, and then provide a custom build for that? Well, they do. They don’t advertise this too loudly for some reason, but they actually have a desktop application exactly for subsetting fonts manually. The app is available to download from their site — but, like the automatic subsetter, this app requires a paid Font Awesome subscription to actually use.

Screenshot of the Font Awesome desktop app. Icons are displayed as tiles in a grid layout.

Search the icons, choose the family, add what you want, and then click the big blue Build button. That’s really all it takes to generate a custom subset of Font Awesome icons.

Once you hit the button, Font Awesome will ask where it should save your custom build, then it dumps a ZIP file that contains everything you need. In fact, the structure you’ll get is exactly the same as the normal Font Awesome download, which makes things especially simple. And naturally, it lets you save the custom build as a project file so you can open it back up later to add or remove icons as needed.

We’ll open up DevTools to see the final size of the icons we’re loading, but first, let’s look at the actual font files themselves. The custom build creates many different file formats, depending on what your browser uses. Let’s focus on the .woff2 files, which is what Chrome loads. The same light, regular, duotone, solid, and brand files that were there before are still in place, except this time no file is larger than 5KB… and that’s before they’re gzipped!

Screenshot of the various font files in a project directory.

And what about the CSS file? It slims down to just 8KB. With gzip, it’s only 2KB!

Here’s the final tally in DevTools:

Screenshot of the DevTools Network tab showing five loaded fonts with Base64 encoding from font subsetting.

Before we go, take a quick peek at those font filenames. The fa-light-300.woff2 font file is still there, but the others look different. That’s because I’m using Vite here, and it decided to automatically inline the font files into the CSS, since they’re so tiny.

Screenshot of the inlined Base64 encoding in the @font-face declaration of a CSS file.

That’s why our CSS file looks a little bigger in the DevTools Network tab than the 2KB we saw before on disk. The tradeoff is that most of those font “files” from above aren’t files at all, but rather Base64-encoded strings embedded right in this CSS file, saving us additional network requests.

Screenshot of the DevTools Network tab showing a single CSS file that weighs 20.7 kilobytes.

All that said, Vite is inlining many different font formats that the browser will never use. But overall it’s a pretty small number of bytes, especially compared to what we were seeing before.

Before leaving, if you’re wondering whether that desktop font subsetting GUI has a CLI counterpart that could integrate with CI/CD to generate these files at build time, the answer is… not yet. I emailed the Font Awesome folks, and they said something is planned. That’ll let users streamline their build process if and when it ships.


As you’ve seen, using something like Font Awesome for icons is super cool. But the default usage might not always be the best approach for your project. To get the smallest file size possible, subsetting fonts is something we can do to trim what we don’t need, and only serve what we do. That’s the kind of performance we want, especially when it comes to loading fonts, which have traditionally been tough to wrangle.



Links on Performance V


Quickly Get Alerted to Front-End Errors and Performance Issues

(This is a sponsored post.)

Measuring things is great. They say you only fix what you measure. Raygun is great at measuring websites: measuring performance, measuring errors and crashes, measuring code problems.

You know what’s even better than measuring? Having a system in place to notify you when anything significant happens with those measurements. That’s why Raygun now has powerful alerting.

Let’s look at some of the possibilities of alerts you can set up on your website so that you’re alerted when things go wrong.

Alert 1) Spike in Errors

In my experience, when you see a spike in errors being thrown in your app, it’s likely because a new release has gone to production, and it’s not behaving how you expected it to.

You need to know now, because errors like this can be tricky. Maybe it worked just fine in development, so you need as much time as you can get to root out what the problem is.
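The idea behind a spike alert boils down to a few lines. Here’s a toy version of the concept (Raygun’s real rule engine is configurable; this function and its threshold factor are purely illustrative):

```javascript
// Toy "spike in errors" rule: compare the current error count against
// a baseline built from recent counts, and flag big jumps.
function isSpike(recentCounts, currentCount, factor = 3) {
  const baseline =
    recentCounts.reduce((sum, n) => sum + n, 0) / recentCounts.length;
  return currentCount > baseline * factor;
}

console.log(isSpike([2, 3, 2, 3], 30)); // true — smells like a bad release
console.log(isSpike([2, 3, 2, 3], 4));  // false — normal noise
```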

Creating a customized alert like this in Raygun is very straightforward! Here’s a quick video:

Alert 2) Critical Error

You likely want to keep an eye on all errors, but some errors are more critical than others. If a user triggers an error trying to update their biography to a string that contains an emoji, well, that’s unfortunate, and you want to know about it so you can fix it. But if they can’t sign up, add to cart, or check out — well, that’s extra bad, and you need to know about it instantly so you can fix it as immediately as possible. If your users can’t do the main thing they are on your website to do, you’re seriously jeopardizing your business.

With Raygun Alerting, there are actually a couple ways to set this up.

  1. Set up the alert to watch for an Error Message containing any particular text
  2. (and/or) Set up the alert to watch for a particular tag

Error Message text is a nice catch-all, as you should be able to catch anything with that. But tagging is more targeted. These tags are of your own design, as you send them over yourself from your own app. For example, in JavaScript, say you performed some mission-critical operation in a try/catch block. Should the catch happen, you could send Raygun an event like:

rg4js('send', {
  error: e,
  tags: ['signup', 'mission_critical']
});

Then create alerts based on those tags as needed.
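Either way, an alert rule ends up being a simple predicate over the error report. A sketch of the idea (illustrative shapes, not Raygun’s actual internals):

```javascript
// An alert rule can watch the error message text, a tag, or both;
// the alert fires if any watched condition matches the report.
function alertFires(report, rule) {
  const textHit = rule.messageContains
    ? report.message.includes(rule.messageContains)
    : false;
  const tagHit = rule.tag ? report.tags.includes(rule.tag) : false;
  return textHit || tagHit;
}

const report = {
  message: "Checkout failed: card declined",
  tags: ["mission_critical"],
};
console.log(alertFires(report, { tag: "mission_critical" }));      // true
console.log(alertFires(report, { messageContains: "biography" })); // false
```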

Alert 3) Slow Load Time

I’m not sure most people think about website performance tracking as something you tie real-time alerting to, but you should! There is no reason a website’s load time would all of a sudden nosedive (e.g. change from, say, 2 seconds to 5 seconds) unless something has changed. So if it does nosedive, you should be alerted right away so you can examine recent changes and fix it.

With Raygun, an alert like this is extremely simple to set up. Here’s an example alert set up to watch for a certain load time threshold and email if there is ever a 10-minute period in which loading times exceed it.

Setting up the alert in Raygun
Email notification of slowness

If you don’t want to start that aggressively with load time, try 4 seconds. That’s the industry standard for slow loading. If you never get any alerts, slowly notch it down over time, giving you and your team progressively more impressive loading times to stay vigilant about.
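One reasonable reading of the rule described above — alert when a whole window of load-time samples exceeds the threshold — can be sketched like this (purely illustrative, not Raygun’s evaluation logic):

```javascript
// Fire when every load-time sample in the window is over the threshold,
// so a single slow outlier doesn't page anyone at 3am.
function shouldAlert(windowSamplesMs, thresholdMs) {
  return (
    windowSamplesMs.length > 0 &&
    windowSamplesMs.every((t) => t > thresholdMs)
  );
}

console.log(shouldAlert([4200, 5100, 4800], 4000)); // true
console.log(shouldAlert([4200, 3100, 4800], 4000)); // false
```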

Aside from alerts, you’ll also get weekly emails giving you an overview of performance issues.

Alert 4) Core Web Vitals

The new gold-standard web performance metrics are Core Web Vitals (CWV), which we’ve written about how Raygun helps with before. They measure things that really matter to users, and they’re an SEO ranking factor for Google. Those are two big reasons to be extra careful with them and set up alerts if your website breaks acceptable thresholds you set up.

For example, CLS is Cumulative Layout Shift. Google tells us CLS under 0.1 is good and above 0.25 is bad. So why don’t we shoot for staying under 0.1?
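Those thresholds map to a tiny classification, which makes the alert boundary concrete (the bucket names here are the commonly used ones; the cutoffs are the ones quoted above):

```javascript
// CLS buckets per the thresholds above: under 0.1 good, above 0.25 poor.
function classifyCLS(cls) {
  if (cls < 0.1) return "good";
  if (cls <= 0.25) return "needs improvement";
  return "poor";
}

console.log(classifyCLS(0.05)); // "good"
console.log(classifyCLS(0.3));  // "poor"
```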

Here we’ve got an alert where if the CLS creeps up over 0.1, we’ll be alerted. Maybe we accidentally added some new content to the site (ads?) that arrives after the page loads and pushes content around. Perhaps we’ve adjusted a layout in a way that makes things more shifty than they were. Perhaps we’ve updated our custom fonts such that when they load they cause shifting. If we’re alerted, we can fix it the moment we’re aware of it so the negative consequences don’t stick around.

Conclusion

For literally everything you measure that you know is important to you, there should be an alerting mechanism in place. For anything related to website performance or error tracking, Raygun has a perfect solution.



ct.css — Performance Hints via Injected Stylesheet Alone

This is some bonafide CSS trickery from Harry that gives you some generic performance advice based on what it sees in your <head> element.

First, it’s possible to make a <style> block visible like any other element by changing the display away from the default of none. It’s a nice little trick. You can even do that for things in the <head>, for example…

head,
head style,
head script {
  display: block;
}

From there, Harry gets very clever with selectors, determining problematic situations from the usage and placement of certain tags. For example, say there is a <script> that comes after some styles…

<head>
  <link rel="stylesheet" href="...">
  <script src="..."></script>
  <title>Page Title</title>
  <!-- ... -->

Well, that’s bad, because the script is blocked by the CSS, likely unnecessarily. Perhaps some sophisticated performance tooling could tell you that. But you know what else can? A CSS selector!

head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script,
head style:not(:empty) ~ script {
}

That’s kinda like saying head link ~ script, but a little fancier in that it only selects actual stylesheets or style blocks that are truly blocking (and not itself). Harry then applies styling and pseudo-content to the blocks so you can use the stylesheet as a visual performance debugging tool.

That’s just darn clever, that. The stylesheet has loads of little things you can test for, like attributes you don’t need, blocking resources, and elements that are out of order.
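The same check the selector performs can be expressed imperatively, which may make the logic easier to follow — a sketch of walking the children of <head> and flagging a synchronous script that follows blocking CSS (illustrative data shapes, not ct.css itself):

```javascript
// Flag a sync <script> that appears after render-blocking CSS in <head>,
// mirroring what Harry's selector detects declaratively.
function hasBlockedScript(headChildren) {
  let sawBlockingCSS = false;
  for (const el of headChildren) {
    const blockingLink =
      el.tag === "link" && el.rel === "stylesheet" && el.media !== "print";
    if (blockingLink || el.tag === "style") sawBlockingCSS = true;
    if (el.tag === "script" && !el.async && !el.defer && sawBlockingCSS) {
      return true; // this script is blocked by earlier CSS
    }
  }
  return false;
}

console.log(
  hasBlockedScript([{ tag: "link", rel: "stylesheet" }, { tag: "script" }])
); // true
```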


Links on Performance IV



Links on Performance

  • Making GitHub’s new homepage fast and performant — Tobias Ahlin describes how the scrolling effects are done more performantly thanks to IntersectionObserver and the fact that it avoids the use of methods that trigger reflows, like getBoundingClientRect. Also, WebP + SVG masks!
  • Everything we know about Core Web Vitals and SEO — Simon Hearne covers why everyone is so obsessed with CWV right now: SEO. Simon says something I’ve heard a couple of times: The Page Experience Update is more of a carrot approach than stick — there is no direct penalty for failing to meet Google’s goals. That is, you aren’t penalized for poor CWV, but are given a bonus for good numbers. But if everyone around you is getting that bonus except you, isn’t that the same as a penalty?
  • Setting up Cloudflare Workers for web performance optimisation and testing — Matt Hobbs starts with a 101 intro on setting up a Cloudflare Worker, using it to intercept a CSS file and replace all the font-family declarations with Comic Sans. Maybe that will open your eyes to the possibilities: if you can manipulate all assets like HTML, CSS, and JavaScript, you can force those things into doing more performant things.
  • Now THAT’S What I Call Service Worker! — Jeremy Wagner sets up a “Streaming” Service Worker that caches common partials on a website (e.g. the header and footer) such that the people of Waushara County, Wisconsin, who have slow internet can load the site somewhere in the vicinity of twice as fast. This is building upon Philip Walton’s “Smaller HTML Payloads with Service Workers” article.
  • Who has the fastest F1 website in 2021? — Jake Archibald’s epic going-on-10-part series analyzing the performance of F1 racing websites (oh, the irony). Looks like Red Bull is in the lead so far with Ferrari trailing. There is a lot to learn in all these, and it’s somewhat cathartic seeing funny bits like, Their site was slow because of a 1.8MB blocking script, but 1.7MB of that was an inlined 2300×2300 PNG of a horse that was only ever displayed at 20×20. Also, I don’t think I knew that Jake was the original builder of Sprite Cow! (Don’t use that because it turns out that sprites are bad.)
  • Real-world CSS vs. CSS-in-JS performance comparison — Tomas Pustelnik looks at the performance implications of CSS-in-JS. Or, as I like to point out: CSS-in-React, as that’s always what it is since all the other big JavaScript frameworks have their own blessed styling solutions. Tomas didn’t compare styled-components to hand-written vanilla CSS, but to Linaria, which I would think most people still think of as CSS-in-JS — except that instead of bundling the styles in JavaScript, it outputs CSS. I agree that, whatever a styling library does for DX, producing CSS seems like the way to go for production. Yet another reason I like css-modules. Newer-fancier libs are doing it too.
  • The Case of the 50ms request — Julia Evans put together this interactive puzzle for trying to figure out why a server request is taking longer than it should. More of a back-end thing than front-end, but the troubleshooting steps feel familiar. Try it on your machine, try it on my machine, see what the server is doing, etc.


How to Improve CSS Performance

There is no doubt that CSS plays a huge role in web performance. Milica Mihajlija puts a point on exactly why:

When there is CSS available for a page, whether it’s inline or an external stylesheet, the browser delays rendering until the CSS is parsed. This is because pages without CSS are often unusable.

The browser has to wait until the CSS is both downloaded and parsed before showing us that first rendering of the page; otherwise, browsing the web would be a terribly jerky visual experience. We’d probably write JavaScript to delay page rendering on purpose if that’s how the native web worked.

So how do you improve it? The classics like caching, minification, and compression help. But also, shipping less of it, and only loading the bit you need and the rest after the first render.

It’s entirely about how and how much CSS you load, and has very little to do with the contents of the CSS.


The Mobile Performance Inequality Gap

Alex Russell made some interesting notes about performance and how it impacts folks on mobile:

[…] CPUs are not improving fast enough to cope with frontend engineers’ rosy resource assumptions. If there is unambiguously good news on the tooling front, multiple popular tools now include options to prevent sending first-party JS in the first place (Next.js, Gatsby), though the JS community remains in stubborn denial about the costs of client-side script. Hopefully, toolchain progress of this sort can provide a more accessible bridge as we transition costs to a reduced-script-emissions world.

A lot of the stuff I read when it comes to performance is focused on America, but what I like about Russell’s take here is that he looks at a host of other countries such as India, too. But how does the rollout of 5G networks impact performance around the world? Well, we should be skeptical of how improved networks impact our work. Alex argues:

5G looks set to continue a bumpy rollout for the next half-decade. Carriers make different frequency band choices in different geographies, and 5G performance is heavily sensitive to mast density, which will add confusion for years to come. Suffice to say, 5G isn’t here yet, even if wealthy users in a few geographies come to think of it as “normal” far ahead of worldwide deployment.

This is something I try to keep in mind whenever I’m thinking about performance: how I’m viewing my website is most likely not how other folks are viewing it.
