Tag: Performance

Enforcing performance budgets with webpack

As you probably know, a single monolithic JavaScript bundle — once a best practice — is no longer the way to go for modern web applications. Research has shown that larger bundles increase memory usage and CPU costs, especially on mid-range and low-end mobile devices.

webpack has a lot of features to help you achieve smaller bundles and control the loading priority of resources. The most compelling of them is code splitting, which provides a way to split your code into various bundles that can then be loaded on demand or in parallel. Another is performance hints, which indicate at build time when emitted bundle sizes cross a specified threshold so that you can make optimizations or remove unnecessary code.

The default behavior for production builds in webpack is to show a warning when an asset or entry point is over 250KB (244KiB) in size, but you can configure how performance hints are shown and set size thresholds through the performance object in your webpack.config.js file.

Production builds will emit a warning by default for assets over 250KB in size

We will walk through this feature and how to leverage it as a first line of defense against performance regressions.

First, we need to set a custom budget

The default size thresholds for assets and entry points (where webpack looks to start building the bundle) may not always fit your requirements, but they can be configured.

For example, my blog is pretty minimal and my budget size is a modest 50KB (48.8KiB) for both assets and entry points. Here’s the relevant setting in my webpack.config.js:

module.exports = {
  performance: {
    maxAssetSize: 50000,
    maxEntrypointSize: 50000,
  }
};

The maxAssetSize and maxEntrypointSize properties control the threshold sizes for assets and entry points, respectively, and they are both set in bytes. The latter ensures that bundles created from the files listed in the entry object (usually JavaScript or Sass files) do not exceed the specified threshold while the former enforces the same restrictions on other assets emitted by webpack (e.g. images, fonts, etc.).
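For context, here is a minimal sketch of how those thresholds relate to the entry object. The file names are placeholders, and the 50,000-byte figures simply mirror the example above:

module.exports = {
  // Each entry point below produces a bundle that is checked
  // against maxEntrypointSize.
  entry: {
    main: './src/index.js',
    styles: './src/styles.scss',
  },
  performance: {
    maxAssetSize: 50000,      // any single file webpack emits
    maxEntrypointSize: 50000, // the combined size of the files needed by one entry
  },
};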

Let’s show an error if thresholds are exceeded

webpack emits its default warning when budget thresholds are exceeded. That’s good enough for development environments but insufficient when building for production. We can trigger an error instead by adding the hints property to the performance object and setting it to 'error':

module.exports = {
  performance: {
    maxAssetSize: 50000,
    maxEntrypointSize: 50000,
    hints: 'error',
  }
};
An error is now displayed instead of a warning

There are other valid values for the hints property, including 'warning' and false, where false completely disables warnings, even when the specified limits are exceeded. I wouldn’t recommend using false in production mode.
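One handy pattern, sketched here under the assumption that NODE_ENV is set by your build scripts, is to warn during development and only fail hard for production builds:

module.exports = {
  performance: {
    maxAssetSize: 50000,
    maxEntrypointSize: 50000,
    // Warn locally, break the build for production.
    hints: process.env.NODE_ENV === 'production' ? 'error' : 'warning',
  },
};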

We can exclude certain assets from the budget

webpack enforces size thresholds for every type of asset that it emits. This isn’t always a good thing because an error will be thrown if any of the emitted assets go above the specified limit. For example, if we set webpack to process images, we’ll get an error if just one of them crosses the threshold.

webpack’s performance budgets and asset size limit errors also apply to images

The assetFilter property can be used to control the files used to calculate performance hints:

module.exports = {
  performance: {
    maxAssetSize: 50000,
    maxEntrypointSize: 50000,
    hints: 'error',
    assetFilter: function(assetFilename) {
      return !assetFilename.endsWith('.jpg');
    },
  }
};

This tells webpack to exclude any file that ends with a .jpg extension when it runs the calculations for performance hints. It’s capable of more complex logic to meet all kinds of conditions for environments, file types, and other resources.
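For example, here is a sketch (the extensions are arbitrary) that counts only JavaScript and CSS toward the budget and ignores images and fonts altogether:

module.exports = {
  performance: {
    maxAssetSize: 50000,
    maxEntrypointSize: 50000,
    hints: 'error',
    // Only .js and .css files count toward the budget.
    assetFilter: function(assetFilename) {
      return /\.(js|css)$/.test(assetFilename);
    },
  },
};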

The build is now successful but you may need to look for a different way to control your image sizes.

Limitations

While this has been a good working solution for me, a limitation that I’ve come across is that the same budget thresholds are applied to all assets and entry points. In other words, it isn’t yet possible to set multiple budgets as needed, such as different limits for JavaScript, CSS, and image files.

That said, there is an open pull request that should remove this limitation but it is not merged yet. Definitely something to keep an eye on.

Conclusion

Setting a performance budget is so useful, and enforcing one with webpack is worth considering at the start of any project. It will draw attention to the size of your dependencies and encourage you to look for lighter alternatives where possible to avoid exceeding the budget.

That said, performance budgeting does not end here! Asset size is just one thing of many that affect performance, so there’s still more work to be done to ensure you are delivering an optimal experience. Running a Lighthouse test is a great first step to learn about other metrics you can use as well as suggestions for improvements.

Thanks for reading, and happy coding!



content-visibility: the new CSS property that boosts your rendering performance

Una Kravets and Vladimir Levin:

[…] you can use another CSS property called content-visibility to apply the needed containment automatically. content-visibility ensures that you get the largest performance gains the browser can provide with minimal effort from you as a developer.

The content-visibility property accepts several values, but auto is the one that provides immediate performance improvements.

The perf benefits seem pretty big:

In our example, we see a boost from a 232ms rendering time to a 30ms rendering time. That’s a 7x performance boost.

It’s manual work though. You have to “section” large vertical chunks of the page yourself, apply content-visibility: auto; to them, then take a stab at how tall they are, something like contain-intrinsic-size: 1000px;. That part seems super weird to me. Just guess at a height? What if I’m wrong? Can I hurt performance? Can (or should) I change that value at different viewports if the height difference between small and large screens is drastic?

Seems like you’d have to be a pretty skilled perf nerd to get this right, and know how to look at and compare rendering profiles in DevTools. All the more proof that web perf is its own vocation.


radEventListener: a Tale of Client-side Framework Performance

React is popular, popular enough that it receives its fair share of criticism. Yet, this criticism of React isn’t completely unwarranted: React and ReactDOM total about 120 KiB of minified JavaScript, which definitely contributes to slow startup time. When client-side rendering in React is relied upon entirely, it churns. Even if you render components on the server and hydrate them on the client, it still churns because component hydration is computationally expensive.

React certainly has its place when it comes to applications requiring complex state management, but in my professional experience, it doesn’t belong in most scenarios I see it used. When even a bit of React can be a problem on devices slow and fast alike, using it is an intentional choice that effectively excludes people with low-end hardware.

If it sounds like I have a grudge against React, then I must confess that I really like its componentization model. It makes organizing code easier. I think JSX is great. Server rendering is also cool—even if that’s just how we say “send HTML over the network” these days.

Still, even though I happily use React components on the server (or Preact, as is my preference), figuring out when it’s appropriate to use on the client is a bit challenging. What follows are my findings on React performance as I’ve tried to meet this challenge in a way that’s best for users.

Setting the scene

Lately, I’ve been chipping away at an RSS feed app side project called bylines.fyi. This app uses JavaScript on both the back and front end. I don’t think client-side frameworks are horrid things, but I’ve frequently observed two things about the client-side framework implementations I tend to run into in my day-to-day work and research:

  1. Frameworks have the potential to inhibit a deeper understanding of the things they abstract, which is the web platform. Without knowing at least some of the lower level APIs that frameworks rely on, we can’t know what projects benefit from a framework, and which projects are better off without one.
  2. Frameworks don’t always provide a clear path toward good user experiences.

You may be able to argue the validity of my first point, but the second point is becoming more difficult to refute. You might remember a little while ago when Tim Kadlec did some research on HTTPArchive about web framework performance, and came to the conclusion that React wasn’t exactly a stellar performer.

Still, I wanted to see if it was possible to use what I thought was best about React on the server while mitigating its ill effects on the client. To me, it makes sense to simultaneously want to use a framework to help to organize my code, but also restrict that framework’s negative impact on the user experience. That required a little experimentation to see what approach would be best for my app.

The experiment

I make sure to render every component I use on the server because I believe that the burden of providing markup should be assumed by the web app’s server, not the user’s device. However, I needed some JavaScript in my RSS feed app in order to get a toggleable mobile nav to work.

The mobile nav toggle functionality. At left, the mobile nav is in the closed state. On the right, it’s in the open state, which overlays the entire screen with the navigation.

This scenario aptly describes what I refer to as simple state. In my experience, a prime example of simple state is a linear A-to-B interaction. We toggle a thing on, and then we toggle it off. Stateful, but simple.

Unfortunately, I often see stateful React components used to manage simple state, which is a trade-off that’s problematic for performance. Though that may be a vague utterance for the moment, you’ll come to find out as you read on. That said, it’s important to emphasize that this is a trivial example, but it’s also a canary. Most developers—I hope—aren’t going to rely solely on React to drive such simple behavior for just one thing on their website. So it’s vital to understand that the results you’re going to see are intended to inform you on how you architect your applications, and how the effects of your framework choices could scale when it comes to runtime performance.

The conditions

My RSS feed app is still in development. It contains no third party code, which makes for easy testing in a quiet environment. The experiment I conducted compared the mobile nav toggle behavior across three implementations:

  1. A stateful React component (React.Component) rendered on the server and hydrated on the client (sketched just after this list).
  2. A stateful Preact component, also server-rendered and hydrated on the client.
  3. A server-rendered stateless Preact component which was not hydrated. Instead, regular ol’ event listeners provide the mobile nav functionality on the client.
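The post links to the real code for each implementation further down; as a rough illustration of the first scenario, here is a minimal sketch of what a stateful toggle component of this kind might look like. The class name, markup, and CSS class names are placeholders, not the author’s actual code:

import React from 'react';

class MobileNav extends React.Component {
  constructor(props) {
    super(props);
    this.state = { open: false };
    this.toggle = this.toggle.bind(this);
  }

  toggle() {
    // Every tap filters through React's state management before the DOM updates.
    this.setState(prevState => ({ open: !prevState.open }));
  }

  render() {
    return (
      <nav className={this.state.open ? 'nav nav--open' : 'nav'}>
        <button onClick={this.toggle} aria-expanded={this.state.open}>
          Menu
        </button>
        {/* nav links here */}
      </nav>
    );
  }
}

export default MobileNav;

Server-rendering this component and then hydrating it on the client is what the first two scenarios measure; the third scenario skips the component on the client entirely.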

Each of these scenarios was measured across four distinct environments:

  1. A Nokia 2 Android phone on Chrome 83.
  2. An ASUS X550CC laptop from 2013 running Windows 10 on Chrome 83.
  3. An old first generation iPhone SE on Safari 13.
  4. A new second generation iPhone SE, also on Safari 13.

I believe this range of mobile hardware will be illustrative of performance across a broad spectrum of device capabilities, even if it’s slightly heavy on the Apple side.

What was measured

I wanted to measure four things for each implementation in each environment:

  1. Startup time. For React and Preact, this included the time it took to load the framework code as well as hydrating the component on the client. For the event listener scenario, this included only the event listener code itself.
  2. Hydration time. For the React and Preact scenarios, this is a subset of the startup time. Because of issues with remote debugging crashing in Safari on macOS, I couldn’t measure hydration time alone on iOS devices. Event listener implementations incurred zero hydration cost.
  3. Mobile nav open time. This gives us insight into how much overhead frameworks introduce in their abstraction of event handlers, and how that compares to the frameworkless approach.
  4. Mobile nav close time. As it turned out, this was quite a bit less than the cost of opening the menu. I ultimately decided not to include those numbers in this article.

It should be noted that measurements of these behaviors include scripting time only. Any layout, paint, and compositing costs would be in addition to and outside of these measurements. One should take care to remember that those activities compete for main thread time in tandem with scripts that trigger them.

The procedure

To test each of the three mobile nav implementations on each device, I followed this procedure:

  1. I used remote debugging in Chrome on macOS for the Nokia 2. For iPhones, I used Safari’s equivalent of remote debugging.
  2. On each device, I accessed the RSS feed app running on my local network and navigated to the same page where the mobile nav toggling code could be run. Because of this, network performance was not a factor in my measurements.
  3. Without CPU or network throttling applied, I began recording in the profiler, and reloaded the page.
  4. After page load, I opened the mobile nav and then closed it.
  5. I stopped the profiler, and recorded how much CPU time was involved in each of the four behaviors listed earlier.
  6. I cleared the performance timeline. In Chrome, I also clicked the garbage collection button to free up any memory that may have been tied up by my app’s code from a previous session recording.

I repeated this procedure ten times for each scenario for each device. Ten iterations seemed to get just enough data to see a few outliers while getting a reasonably accurate picture, but I’ll let you decide as we go over the results. If you don’t want a play-by-play of my findings, you can view the results at this spreadsheet and draw your own conclusions, as well as the mobile nav code for each implementation.

The results

I initially wanted to present this information in a graph, but because of the complexity of what I was measuring, I wasn’t certain how to present the results without cluttering the visualization. Therefore, I’ll present the minimum, maximum, median, and average CPU times in a series of tables, all of which effectively illustrate the range of outcomes I encountered in each test.

Google Chrome on Nokia 2

The Nokia 2 is a low-cost Android device with a Snapdragon 212 processor. It is not a powerhouse, but rather a cheap and easily obtainable device. Android usage worldwide is currently around 40%, and though Android device specs vary greatly from one device to the next, low-end Android devices are not rare. This is a problem we must recognize as being one of both wealth and proximity to fast network infrastructure.

Let’s see what the numbers look like for startup cost.

Startup time
(ms)      React Component   Preact Component   addEventListener Code
Min       137.21            31.23              4.69
Median    147.76            42.06              5.99
Avg       162.73            43.16              6.81
Max       280.81            62.03              12.06

I believe it says something that it takes, on average, over 160 ms to parse and compile React, and hydrate one component. To remind you, startup cost in this case includes the time it takes for the browser to evaluate the scripts needed for the mobile nav to work. For React and Preact, it also includes hydration time, which in both cases can contribute to the uncanny valley effect we sometimes experience during startup.

Preact fares much better, taking around 73% less time than React, which makes sense considering how tiny Preact is at 10 KiB sans compression. Still, it’s important to note that the frame budget in Chrome is about 10 ms to avoid jank at 60 fps. Janky startup is as bad as janky anything else, and is a factor when calculating First Input Delay. All things considered, though, Preact performs relatively well.

As for the addEventListener implementation, it turns out that parse and compile time for a tiny script with no overhead is unsurprisingly very low. Even at the sampled maximum time of 12ms, you’re barely in the outer ring of the Janksburg Metropolitan Area. Now let’s have a look at hydration cost alone.

Hydration time
(ms)      React Component   Preact Component
Min       67.04             19.17
Median    70.33             26.91
Avg       74.87             26.77
Max       117.86            44.62

For React, this is still in the vicinity of Yikes Peak. Sure, a median hydration time of 70 ms for one component isn’t a big deal, but think about how hydration cost scales when you have a bunch of components on the same page. It’s no surprise that the React websites I test on this device feel more like endurance trials than user experiences.

Preact’s hydration times are quite a bit less, which makes sense because Preact’s documentation for its hydrate method states that it “skips most diffing while still attaching event listeners and setting up your component tree.” Hydration time for the addEventListener scenario isn’t reported, because hydration isn’t a thing outside of VDOM frameworks. Next, let’s take a peek at the time it takes to open the mobile nav.

Mobile nav open time
(ms)      React Component   Preact Component   addEventListener Code
Min       30.89             11.94              3.94
Median    43.62             14.29              6.14
Avg       43.16             14.66              6.12
Max       53.19             20.46              8.60

I find these figures a bit surprising, because React commands almost seven times as much CPU time to execute an event listener callback as an event listener you could register yourself. This makes sense, as React’s state management logic is necessary overhead, but one has to wonder if it’s worth it for simplistic, linear interactions.

On the other hand, Preact manages to limit its overhead on event listeners to the point where it takes “only” twice as much CPU time to run an event listener callback.

CPU time involved in closing the mobile nav was quite a bit less at an average approximate time of 16.5 ms for React, with Preact and bare event listeners coming in at around 11 ms and 6 ms, respectively. I’d post the full table for the measurements on closing the mobile nav, but we have a lot left to sift through yet. Besides, you can check out those figures yourself in the spreadsheet I referred to earlier on.

A quick note on JavaScript samples

Before moving on to the iOS results, one potential sticking point I want to address is the impact of disabling JavaScript samples in Chrome DevTools when recording sessions on remote devices. After compiling my initial results, I wondered if the overhead of capturing entire call stacks was skewing my results, so I re-tested the React scenario with samples disabled. As it turned out, this setting had no significant impact on the results.

Additionally, because the call stacks were truncated, I was unable to measure component hydration time. Average startup cost with samples disabled vs. samples enabled was 160.74 ms and 162.73 ms, respectively. The respective median figures were 157.81 ms and 147.76 ms. I would consider this squarely “in the noise.”

Safari on 1st Generation iPhone SE

The original iPhone SE is a great phone. Despite its age, it still enjoys devoted ownership owing to its more comfortable physical size. It shipped with the Apple A9 processor which is still a solid contender. Let’s see how it did on startup time.

Startup time
(ms)      React Component   Preact Component   addEventListener Code
Min       32.06             7.63               0.81
Median    35.60             9.42               1.02
Avg       35.76             10.15              1.07
Max       39.18             16.94              1.56

This is a big improvement from the Nokia 2, and it’s illustrative of the gulf between low-end Android devices and even older Apple devices with significant mileage.

React performance still isn’t great, but Preact gets us within a typical frame budget for Chrome. Event listeners alone, of course, are blazingly fast, leaving plenty of room in the frame budget for other activity.

Unfortunately, I couldn’t measure hydration times on the iPhone, as the remote debugging session would crash every time I would traverse the call stack in Safari’s DevTools. Considering that hydration time was a subset of the overall startup cost, you can expect that it probably accounts for at least half of the startup time if results from the Nokia 2 trials are any indicator.

Mobile nav open time
(ms)      React Component   Preact Component   addEventListener Code
Min       16.91             5.45               0.48
Median    21.11             8.62               0.50
Avg       21.09             11.07              0.56
Max       24.20             19.79              1.00

React does alright here, but Preact seems to handle event listeners a bit more efficiently. Bare event listeners are lightning fast, even on this old iPhone.

Safari on 2nd Generation iPhone SE

In mid-2020, I picked up the new iPhone SE. It has the same physical size as an iPhone 8 and similar phones, but the processor is the same Apple A13 used in the iPhone 11. It is very fast for its relatively low $400 USD retail price. Given such a beefy processor, how does it deal?

Startup time
(ms)      React Component   Preact Component   addEventListener Code
Min       20.26             5.19               0.53
Median    22.20             6.48               0.69
Avg       22.02             6.36               0.68
Max       23.67             7.18               0.88

I guess at some point there are diminishing returns when it comes to the relatively small workload of loading a single framework and hydrating one component. Things are a little faster on a 2nd generation iPhone SE than its first generation variant in some cases, but not terribly so. I’d imagine that this phone would tackle larger and sustained workloads better than its predecessor.

Mobile nav open time
(ms)      React Component   Preact Component   addEventListener Code
Min       13.15             12.06              0.49
Median    16.41             12.57              0.53
Avg       16.11             12.63              0.56
Max       17.51             13.26              0.78

Slightly better React performance here, but not much else. Strangely, Preact seems to take longer on average to open the mobile nav on this device than its first generation counterpart, but I’ll chalk that up to outliers skewing a relatively small dataset. I certainly would not assume the first generation iPhone SE is a faster device based on this.

Chrome on a dated Windows 10 Laptop

Admittedly, these were the results I was most excited to see: how does an ASUS laptop from 2013 with Windows 10 and an Ivy Bridge i5 of the day handle this stuff?

Startup time
(ms)      React Component   Preact Component   addEventListener Code
Min       43.15             13.11              1.81
Median    45.95             14.54              2.03
Avg       45.92             14.47              2.39
Max       48.98             16.49              3.61

The numbers aren’t bad when you consider that the device is seven years old. The Ivy Bridge i5 was a good processor in its day, and when you couple that with the fact that it’s actively cooled (rather than passively cooled as mobile device processors are), it probably doesn’t run into thermal throttling scenarios as often as mobile devices.

Hydration time
(ms)      React Component   Preact Component
Min       17.75             7.64
Median    23.55             8.73
Avg       23.12             8.72
Max       26.25             9.55

Preact does well here, staying within Chrome’s frame budget and coming in almost three times faster than React. Things could look quite a bit different if you’re hydrating ten components on the page at startup time, possibly even in Preact.

Mobile nav open time
(ms)      React Component   Preact Component   addEventListener Code
Min       6.06              2.50               0.88
Median    10.43             3.09               0.97
Avg       11.24             3.21               1.02
Max       14.44             4.34               1.49

When it comes to this isolated interaction, we see performance that’s similar to high-end mobile devices. It’s encouraging to see such an old laptop still keep up reasonably well. That said, this laptop’s fan spins up often when browsing the web, so active cooling is probably this device’s saving grace. If this device’s i5 was passively cooled, I suspect its performance might drop.

Shallow call stacks for the win

It’s not a mystery as to why it takes React and Preact longer to start up than it does for a solution that eschews frameworks altogether. Less work equals less processing time.

While I think startup time is crucial, it’s probably inevitable that you’ll trade some amount of speed for a better developer experience. Though I’d strenuously argue that we trade too much user experience for developer experience far too often.

The dragons also lie in what we do after the framework loads. Client-side hydration is something that I think is far too often abused, and can sometimes be completely unnecessary. Every time you hydrate a component in React, this is what you’re throwing at the main thread:

A React stateful component hydration call stack captured in Chrome DevTools.

Recall that on the Nokia 2, the minimum time I measured for hydrating the mobile nav component was about 67 ms. Preact—for which you’ll see the hydration call stack below—takes about 20 ms.

A Preact stateful component hydration call stack captured in Chrome DevTools.

These two call stacks aren’t to the same scale, but Preact’s hydration logic is simplified, probably because “most diffing is skipped” as Preact’s documentation states. There’s quite a bit less going on here. When you get closer to the metal by using addEventListener instead of a framework, you can get even faster.

A call stack of event listeners attaching to DOM elements.

Not every situation calls for this approach, but you’d be surprised at what you can accomplish when your tools are addEventListener, querySelector, classList, setAttribute/getAttribute, and so on.
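To make that concrete, here is a minimal sketch of the frameworkless approach (not the author’s exact code, which is linked earlier): a server-rendered nav driven by a few plain DOM APIs. The class names are placeholders:

// Assumes the server already rendered <button class="nav-toggle"> and <nav class="nav">.
const toggle = document.querySelector('.nav-toggle');
const nav = document.querySelector('.nav');

toggle.addEventListener('click', () => {
  // classList.toggle returns true if the class was added.
  const isOpen = nav.classList.toggle('nav--open');

  // Keep assistive technology in sync with the visual state.
  toggle.setAttribute('aria-expanded', String(isOpen));
});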

These methods—and many more like them—are what frameworks themselves rely on. The trick is to evaluate what functionality you can safely deliver outside of what the framework provides, and rely on the framework when it makes sense.

A call stack of React firing a click event handler to open a mobile nav.

If this were a call stack for, say, making a request for API data on the client and managing the complex state of the UI in that situation, I’d find this cost more acceptable. Yet, it’s not. We’re just making a nav appear on the screen when the user taps a button. It’s like using a bulldozer when a shovel would be a better fit for the job.

Preact at least strikes the middle ground:

A call stack of Preact firing a click event handler to open a mobile nav.

Preact takes about a third of the time to do the same work React does, but on that budget device, it exceeds the frame budget often. This means opening that nav on some devices will animate sluggishly because the layout and paint work may not have enough time to finish without entering long task territory.

A call stack of a bare event listener opening the mobile nav.

In this case, an event listener is what I needed. It gets the job done seven times faster on that budget device than React.

Conclusion

This is not a React hit piece, but rather a plea for consideration of how we do our work. Some of these performance pitfalls can be avoided if we take care to evaluate what tools make sense for the job, even for apps with a great deal of complex interactivity. To be fair to React, these pitfalls likely exist in many VDOM frameworks, because the nature of them adds necessary overhead to manage all sorts of things for us.

Even if you’re working on something that doesn’t call for React or Preact, but you want to take advantage of componentization, consider keeping it all on the server to start with. This approach means you can decide if and when it’s appropriate to extend functionality to the client—and how you’ll do that.

In the case of my RSS feed app, I can manage this by putting lightweight event listener code in the entry point for that page of the app, and using an asset manifest to put the minimal amount of script necessary in order for each page to work.

Now let’s suppose that you have an app that truly needs what React provides. You have complex interactivity with lots of state. Here are some things you can do to try and get things going a bit faster.

  1. Check all of your stateful components—that is, any component which extends React.Component—and see if they can be refactored as stateless components. If a component doesn’t use lifecycle methods or state, you can refactor it to be stateless.
  2. Then, if possible, avoid sending JavaScript to the client for those stateless components, as well as hydrating them. If a component is stateless, only render it on the server. Prerender components when possible to minimize server response time, because server rendering has its own performance pitfalls.
  3. If you have a stateful component with simple interactivity, consider prerendering/server-rendering that component, and replace its interactivity with framework-independent event listeners. This avoids hydration entirely, and user interactions won’t have to filter through the framework’s state management logic.
  4. If you must hydrate stateful components on the client, consider lazily hydrating components that aren’t near the top of the page. An Intersection Observer that triggers a callback works very well for this (see the sketch after this list), and will give more main thread time to critical components on the page.
  5. For lazily-hydrated components, assess whether you can schedule their hydration during main thread idle time with requestIdleCallback.
  6. If possible, consider switching from React to Preact. Given how much faster it runs than React on the client, it’s worth having the discussion with your team to see if this is possible. The latest version of Preact is nearly 1:1 with React for most things, and preact/compat does a great job of easing this transition. I don’t think Preact is a panacea for performance, but it gets you closer to where you need to be.
  7. Consider adapting your experience to users with low device memory. navigator.deviceMemory (available in Chrome and derived browsers) enables you to change the user experience for users on devices with little memory. If someone has such a device, it’s probable that its processor isn’t so fast either.
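Items 4 and 5 describe lazy hydration. Here is a rough sketch of how that could look with Preact, assuming a server-rendered #comments region; the component name, selector, and fallback timeout are placeholders, and your JSX/bundler setup may differ:

import { hydrate } from 'preact';
import Comments from './Comments';

// The server already rendered this markup; we only hydrate it once it
// scrolls near the viewport, and only when the main thread is idle.
const container = document.querySelector('#comments');

const observer = new IntersectionObserver((entries, obs) => {
  if (entries.some(entry => entry.isIntersecting)) {
    obs.disconnect();

    const doHydrate = () => hydrate(<Comments />, container);

    // requestIdleCallback isn't available everywhere (e.g. Safari),
    // so fall back to a timeout.
    if ('requestIdleCallback' in window) {
      requestIdleCallback(doHydrate);
    } else {
      setTimeout(doHydrate, 200);
    }
  }
});

observer.observe(container);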

Whatever you decide to do with this information, the thrust of my argument is this: if you use React or any VDOM library, you should spend some time investigating its impact on an array of devices. Get a cheap Android device and see how your app feels to use. Contrast that experience with your high-end devices.

Most of all, don’t follow “best practices” if the result is that your app effectively excludes a part of your audience that can’t afford high end devices. Keep pushing for everything to be faster. If our daily work is any indication, this is an endeavor that will keep you busy for some time to come, but that’s OK. Making the web faster makes the web more accessible in more places. Making the web more accessible makes the web more inclusive. That’s the really good work we should all be trying our best to do.

I’d like to express my gratitude to Eric Bailey for his editorial feedback on this piece, as well as the CSS-Tricks staff for their willingness to publish it.



We need more inclusive web performance metrics

Scott Jehl argues that performance metrics such as First Contentful Paint and Largest Contentful Paint don’t really capture the full picture of everyone’s experience with websites:

These metrics are often touted as measures of usability or meaning, but they are not necessarily meaningful for everyone. In particular, users relying on assistive technology (such as a screenreader) may not perceive steps in the page loading process until after the DOM is complete, or even later depending on how JavaScript may block that process. Also, a page may not be usable to A.T. until it becomes fully interactive, since many applications often deliver accessible interactivity via external JavaScript

Scott then jots down some thoughts on how we might measure performance more inclusively. I think this is always so very useful to keep in mind: what we experience on our site, and what we measure too, might not be the full picture.


Some Performance Links

Just had a couple of good performance links burning a hole in my pocket, so blogging them like a good little blogger.

Web Performance Recipes With Puppeteer

Puppeteer is a Node library for spinning up a copy of Chrome “headlessly” (i.e. no UI) and controlling it. People use it for stuff like taking a screenshot of a website or running integration tests. You can even run it in a Lambda.

Another use case is running synthetic (i.e. not based on real users) performance tests, like some of these new Core Web Vitals.

Addy Osmani lists out a bunch of these “recipes” for measuring certain performance things in Puppeteer. These would be super useful as part of a build process alongside other tests. Did the unit tests pass? Did the integration tests pass? Did the accessibility tests pass? Did the performance metrics tests pass?
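As a flavor of what those recipes look like (this is only a sketch, not one of Addy’s actual recipes, and the URL is a placeholder), a Puppeteer script can load a page headlessly and read the browser’s performance entries:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // Read navigation timing from the page itself.
  const navigationTiming = await page.evaluate(() =>
    JSON.stringify(performance.getEntriesByType('navigation'))
  );
  console.log(JSON.parse(navigationTiming));

  await browser.close();
})();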


BrowserStack SpeedLab

BrowserStack released a thing to measure your site and give you a performance score.

You get the tests back super quick which is cool. I can see how tools like this are good for starting conversations with teams about improving performance.

But… that number seems a little weird. They don’t exactly document how it’s calculated, but it seems to be based on stuff like Time to First Byte (TTFB) and the page load event, which aren’t particularly useful performance metrics.

It’s not bad that this tool exists or anything, but I don’t think it’s for practitioners doing performance work.


5 Common Mistakes Teams Make When Tracking Performance

Karolina Szczur from Calibre documents some common team struggles, like being able to tell real issues apart from variability noise.

Many people from different backgrounds can view performance dashboards. Not knowing what constitutes a meaningful change that needs investigation can result in false positives, lack of trust in monitoring and cycles spent looking for reasons for performance regressions or upgrades that aren’t there.


Are your JavaScript long tasks frustrating users?

50ms. That’s how long until any particular JavaScript task starts affecting user experience. Might as well track and (ideally) fix them.

When the browser’s main thread hits max CPU for more than 50ms, a user starts to notice that their clicks are delayed and that scrolling the page has become janky and unresponsive. Batteries drain faster. People rage click or go elsewhere.
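The Long Tasks API makes these visible from JavaScript, so you can track them yourself. A minimal sketch, where you would swap console.log for a call to your analytics:

// Logs any main-thread task that ran longer than 50ms.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
});

observer.observe({ entryTypes: ['longtask'] });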


Analyzing Notion app performance

Here’s a fantastic case study where Ivan Akulov looks at the rather popular writing app Notion and how the team might improve its performance in a variety of ways: through code splitting, removing unused vendor code, module concatenation, and deferring JavaScript execution. Not so long ago, we made a list for getting started with web performance, but this article goes so much further into the app side of things: making sure that users are loading only the JavaScript that they need, and doing that as quickly as possible.
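Of the techniques Ivan covers, code splitting is often the quickest win: ship less JavaScript up front and load the rest on demand. A generic sketch of the idea (the module and selector are made up, not Notion’s code):

const exportButton = document.querySelector('.export-button');

exportButton.addEventListener('click', async () => {
  // This chunk is only downloaded and parsed when someone actually exports.
  const { exportPage } = await import('./export.js');
  exportPage();
});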

I love that this piece just doesn’t feel like dunking on the Notion team or bragging about how Ivan might do things better. There’s always room for improvement and constructive feedback is better than guilting someone into it. Yay for making things fast while being nice about it!


Web Performance Checklist

The other day, I realized that web performance is an enormous topic covering so very much — from minimizing assets to using certain file formats, it can be an awful lot to keep in mind while building a website. It’s certainly far too much for me to remember!

So I made a web performance checklist. It’s a Notion doc that I can fork and use to mark completed items whenever I start a new project. It also contains a bunch of links for references.

This doc is still a work in progress. Any recommendations or links? Feel free to suggest something in the comments below!


Maintaining Performance

Real talk from Dave:

I, Dave Rupert, a person who cares about web performance, a person who reads web performance blogs, a person who spends lots of hours trying to keep up on best practices, a person who co-hosts a weekly podcast about making websites and speak with web performance professionals… somehow goofed and added 33 SECONDS to their page load.

This stuff is hard even when you care a lot. The 33 seconds came from font preloading rather than the one-line wonder of font-display.

I also care about making fast websites, but mine aren’t winning any speed awards because I’ll take practical and maintainable over peak performance any day. (Sorry, world)


Performance Links

I’ve had a number of browser tabs open to articles all related to web performance and gosh darn it if blogging them is a way for me to get some closure. They are all good!

Manuel Matuzovic, Why 543 KB keep me up at night:

Yes, I know, it depends. 543 KB aren’t always bad, but on that specific page there’s only a single image (the logo ~20 KB) and a single paragraph. So why then is the page still relatively large, where are the remaining 523 KB coming from?

Spoiler: it was the JavaScript. Also, I had no idea Google has a recommended ideal DOM that:

  • has less than 1500 nodes total.
  • has a maximum depth of 32 nodes.
  • has no parent node with more than 60 child nodes.

Next up, Performant front-end architecture (no byline):

Bundle splitting will result in more requests being made to load your app. But as long as the requests are made in parallel that’s not a big problem, especially if your site served over HTTP/2.

This is all about assuming the app is largely a client-side JavaScript site. I think there is a huge pile of low-hanging performance fruit, but it’s almost like a different list when talking about client-side JavaScript sites. It makes code-splitting one of the top priorities.


Jeremy Keith, Telling the story of performance:

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

WPT outputs video of the site loading. Put it side-by-side with a competitor and show it to the client.


CP Clermont, The Impact of Web Performance:

In this post, I’ll discuss what I did at ALDO to measure the revenue impact of web performance without having to spend time making performance improvements.

Not surprising that users with faster experiences generate more revenue. What is surprising is that it’s a lot more. Over 3x more on mobile and nearly 6x more on desktop.


In-Browser Performance Linting With Feature Policies

Here’s a neat idea from Tim Kadlec. He uses the Modheader extension to toggle a custom Feature-Policy header in his browser, which lets him see when images are too big and need to be optimized in some way. This is a great way to catch issues like that in a local environment, because the browser will throw an error and won’t display the oversized images at all!

As Tim mentions, the trick is the Feature-Policy header with the oversized-images policy, which he toggles on like this:

Feature-Policy: oversized-images 'none';

Tim writes:

By default, if you provide the browser an image in a format it supports, it will display it. It even helpfully scales those images so they look great, even if you’ve provided a massive file. Because of this, it’s not immediately obvious when you’ve provided an image that is larger than the site needs.

The oversized-images policy tells the browser not to allow any images that are more than some predefined factor of their container size. The recommended default threshold is 2x, but you are able to override that if you would like.
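Toggling the header with an extension is the lowest-friction route, but you could also bake it into a local development server so the whole team gets the same guard rail. Here’s a sketch using Express; the port and static directory are assumptions:

const express = require('express');
const app = express();

// Send the policy on every response so the browser refuses to render
// images more than ~2x their container size (the default threshold).
app.use((req, res, next) => {
  res.setHeader('Feature-Policy', "oversized-images 'none'");
  next();
});

app.use(express.static('public'));

app.listen(3000, () => console.log('Dev server on http://localhost:3000'));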

I love this idea of using the browser to do linting work for us! I wonder what other ways we could use the browser to place guard rails around our work to prevent future mistakes…
