Tag: Slow

A font-display setting for slow connections

Me, I really dislike FOUT. I like that it’s an option, because not displaying text quickly on the web is no good. I know font-display: swap; is popular because it’s good for performance, but that FOUT stuff pains me. Matt Hobbs:

If there’s one thing I’d like readers to take away from this post it’s that font-display: swap is a very good option for users with a fast internet connection. But its infinite swap period could be frustrating for users on very slow and unstable connections. If you have users viewing your site under these conditions (I’m pretty certain you will at some point in time), then it may be worth considering font-display: fallback or even font-display: optional.

Seeeee, I told ya. I like how font-display: optional; totally stops FOUT. The font is either applied super fast, or isn’t used at all (but still downloaded async). Chances are, on the next page load, the font is loaded and cached and will be used.
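For reference, here’s roughly what that looks like in a @font-face rule. This is a minimal sketch; the font name and file path are made-up placeholders:

/* Minimal sketch — font name and file path are placeholders */
@font-face {
  font-family: "Body Font";
  src: url("/fonts/body-font.woff2") format("woff2");
  /* Use the web font only if it's available almost immediately;
     otherwise keep the fallback font and finish the download in the
     background so the web font is cached for the next page load. */
  font-display: optional;
}

body {
  font-family: "Body Font", Georgia, serif;
}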

Note this is about slow connections, not necessarily connections where the user would prefer as little data usage as possible. If that’s the case, check out some of the recent posts we linked up in Responsible, Conditional Loading.


And boy howdy, the Web Performance Calendar this year was just loaded with great articles.


Slow Movement

There was a time when I felt overwhelmed by how fast the web developed. It seemed like not a single day passed without a new plugin, framework, technique, or language feature being released. I believed that in order to survive as a freelancer and to compete with others I had to learn everything everyone else was so good at: webpack, React, Angular, SVGs, Houdini, CSS Grid Layout, ES6, you name it. Being active on Twitter and going to conferences didn’t help with that because I was constantly exposed to all the new things.

Surrender

At some point, I surrendered. I decided for myself that I can’t keep up. Professionally it changed nothing for me because, in reality, no one expected me to know everything and this impression I had was only happening in my bubble anyway. Slowing down was a brilliant decision because it wasn’t just a mental relief, it also helped me focus on the things I actually wanted to learn. I still read newsletters, blogs and Twitter, and I still take some time to try something new every now and then, but I do it without pressure. I try to keep up-to-date but I don’t feel the urge to know everything.

This is how I have been dealing with developments on the web over the past few years, but recently, especially this year, I learned something new. It wasn’t a framework or language — it was the insight that in our aspiration for innovation and progress, we’re neglecting to draw on the many features HTML, CSS, and JavaScript offer today. In other words: there’s so much we can learn if we look back instead of ahead.

Don’t go chasing waterfalls

I’m speaking of neglect because I believe that there’s a significant divide between the things we believe we know about front-end languages and what we actually should know.

HTML

It’s part of my job and a hobby to inspect websites and evaluate the quality of their front-end. I’ve looked under the hood on many websites, and I can only confirm what web accessibility experts preach every day: most HTML documents are in terrible shape. If you don’t believe me, just look at the data.

There’s a massive difference between knowing HTML syntax and knowing how to use it properly. When it comes to writing well-structured, semantic HTML documents, we all can use a little refresher. In 2020, I’ve spent a good deal of my time learning HTML and I hope that users of the websites I build can benefit from my insights.

Two of my favorite things I’ve learned about HTML in 2020:

You can change the filename of a downloadable file by defining a value in the download attribute.
<a href="files/yxcvc27.pdf" download="report.pdf">Download (2MB)</a>
You can use the value attribute to change the numbering in an ordered list.
<ol>
  <li value="3">C</li>
  <li value="2">B</li>
  <li value="1">A</li>
</ol>

CSS

Almost every time I look up a CSS property on MDN or CSS-Tricks, I discover something new. Try it yourself. Search for margin, list-style-type or color. I’m sure you’ll learn something.

The list of things I’ve learned about CSS in 2020 is pretty long; here are two of my favorites.

You can use the url() function as (part of) the value of the content property.
div::before {
  content: url('marker-icon.png');
}
You can implement native smooth scrolling in CSS.
/* Animate scrolling only if users don’t prefer reduced motion */
@media (prefers-reduced-motion: no-preference) {
  html {
    scroll-behavior: smooth;
  }

  /* Add some spacing between the target and the top of the viewport */
  :target {
    scroll-margin-top: 0.8em;
  }
}

JavaScript

I write JavaScript regularly, but it’s not one of my core strengths, so I learn new things about it all the time. Here are two of my favorites this year:

You can use the nomodule attribute to run JavaScript code only in browsers that don’t support JavaScript modules.
<script nomodule>
  console.log('This browser doesn’t support JS Modules.');
</script>

<script type="module">
  console.log('This browser supports JS Modules.');
</script>

Conclusion

HTML is the backbone of every website; knowing how to write semantic documents should be every web developer’s top priority. CSS is, in its own right, so complex that in order to learn new concepts we must understand which problems they solve compared to older techniques. JavaScript frameworks and libraries come and go, but what they all have in common is that they’re written in vanilla JavaScript.

In 2020, I relearned things I had already forgotten and discovered new things about established elements and properties. There’s so much hidden knowledge to find if you only look for it. I’ll expand on that in 2021 because there’s so much awesome stuff to discover.



Make Jamstack Slow? Challenge Accepted.

“Jamstack is slowwwww.” That’s not something you hear often, right? Especially when one of the main selling points of Jamstack is performance. But yeah, it’s true that even a Jamstack site can suffer hits to performance just like any other site.

Don’t think that by choosing Jamstack you no longer have to think about performance. Jamstack can be fast — really fast — but you have to make the right choices. Let’s see if we can spot some of the poor decisions that can lead to a “slow” Jamstack site.

To do that, we’re going to build a really slow Gatsby site. Seems strange right? Why would we intentionally do that!? It’s the sort of thing where, if we make it, then perhaps we can gain a better understanding of what affects Jamstack performance and how to avoid bottlenecks.

We will use continuous performance testing and Google Lighthouse to audit every change. This will highlight the importance of testing every code change. Our site will start with a top Lighthouse performance score of 100. From there, we will make changes until it scores a mere 17. It is easier to do than you might think!

Let’s get started!

Creating our Jamstack site

We are going to use Gatsby for our test site. Let’s start by installing the Gatsby CLI:

npm install -g gatsby-cli

We can spin up a new Gatsby site using this command:

gatsby new slow-jamstack

Let’s cd into the new slow-jamstack project directory and start the development server:

cd slow-jamstack
gatsby develop

To add Lighthouse to the mix, we need a Gatsby production build. We can use Vercel to host the site, giving Lighthouse a way to run its tests. That requires installing the Vercel command-line tool and logging in:

npm install -g vercel
vercel

This will create the site in Vercel and put it on a live server. Here’s the example I’ve already set up that we’ll use for testing.

We’ve gotta use Chrome so we can access Lighthouse directly from DevTools and run a performance audit. No surprise here, the default Gatsby site is fast:

Screenshot of a Lighthouse audit showing a score of 100 out of 100.

A score of 100 is the fastest you can get. Let’s see what we can do to slow it down.

Slow CSS

CSS frameworks are great. They can do a lot of heavy lifting for you. When deciding on a CSS framework, use one that is modular or employs CSS-in-JS so that the only CSS that gets loaded is the CSS you actually need.

But let’s make the bad decision to reach for an entire framework just to style a button component. In fact, let’s even grab the heaviest framework while we’re at it. These are the sizes of some popular frameworks:

Framework CSS Size (gzip)
Bootstrap 68kb (12kb)
Bulma 73kb (10kb)
Foundation 30kb (7kb)
Milligram 10kb (3kb)
Pure 17kb (4kb)
SemanticUI 146kb (20kb)
UIKit 33kb (6kb)

Alright, SemanticUI it is! The “right” way to load this framework would be to use a Sass or Less package, which would allow us to choose the parts of the framework we need. The wrong way would be to load all the CSS and JavaScript files in the <head> of the HTML. That’s what we’ll do with the full SemanticUI stylesheet. Plus, we’re going to link up jQuery because it’s a SemanticUI dependency.

We want these files to load in the head so let’s jump into the html.js file. This is not available in the src directory until we run a command to copy over the default from the cache:

cp .cache/default-html.js src/html.js

That gives us html.js in the src directory. Open it up and add the required stylesheet and scripts:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.css" />
<script src="https://code.jquery.com/jquery-3.1.1.js"></script>
<script src="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.js"></script>

Now let’s push the changes straight to our production URL:

vercel --prod

OK, let’s view the audit…

Screenshot of a Lighthouse report showing a score of 66 out of 100.
Zoikes! A 33% reduction!

We have dropped the site’s performance score down to 66. Remember that we are not even using this framework at the moment. All we have done is load the files in the head and that reduced the performance score by one-third. Our Time to Interactive (TTI) jumped from a quick 1.9 seconds to a noticeable 4.9 seconds. And look at the possible savings we could get from Lighthouse’s recommendations.

Slow marketing dependencies

Next, we are going to look at marketing tags and how these third-party scripts can affect performance. Let’s pretend we work with a marketing department and they want to start measuring traffic with Google Analytics. They also have a Facebook campaign and want to track it as well. 

They give us the details of the scripts that we need to add to get everything working. First, for Google Analytics:

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-4369823-4"></script>
<script
  dangerouslySetInnerHTML={{ __html: `
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-4369823-4');
  `}}
/>

Then for the Facebook campaign:

<script
  dangerouslySetInnerHTML={{ __html: `
    !function(f,b,e,v,n,t,s)
    {if(f.fbq)return;n=f.fbq=function(){n.callMethod?
    n.callMethod.apply(n,arguments):n.queue.push(arguments)};
    if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
    n.queue=[];t=b.createElement(e);t.async=!0;
    t.src=v;s=b.getElementsByTagName(e)[0];
    s.parentNode.insertBefore(t,s)}(window, document,'script',
    'https://connect.facebook.net/en_US/fbevents.js');
    fbq('init', '3180830148641968');
    fbq('track', 'PageView');
    `}}
/>

<noscript><img height="1" width="1" src="https://www.facebook.com/tr?id=3180830148641968&ev=PageView&noscript=1"/></noscript>

We’ll place these scripts inside html.js, again in the <head> section, before the closing </head> tag.

Just like before, let’s push to Vercel and re-run Lighthouse:

vercel --prod
Screenshot of a Lighthouse audit showing a score of 51 out of 100.

Wow, the site is already down to 51 and all we’ve done is tack on one framework and a couple of measly scripts. Together, they’ve reduced the score by a whopping 49 points, nearly half of where we started.

Slow images

We haven’t added any images to the site yet but we know we absolutely would in a real-life scenario. We are going to add 100 images to the page. Sure, 100 is a lot for a single page but, then again, we know that images are often the biggest culprits of bloated web pages so we might as well let them shine.

We’ll make things a little worse by hotlinking the images directly from https://placeimg.com instead of serving them from our own server.

Let’s crack open index.js and drop this code in, which will loop through 100 instances of images:

const IndexPage = () => {
  const items = []
  for (var i = 0; i < 100; i++) {
    const url = `http://placeimg.com/640/360/any?=${i}`
    items.push(<img key={i} alt={i} src={url} />)
  }

  return (
    <Layout>
      {/* ... */}
      {items}
      {/* ... */}
    </Layout>
  )
}

The 100 images are all different and will all load as the page loads, thereby blocking the rendering. OK, let’s push to Vercel and see what’s up.

vercel --prod
Screenshot of a Lighthouse audit showing a score of 17 out of 100.
That score deserves a sad trombone. 🎺

OK, we now have a very slow Jamstack site. The images are blocking the rendering of the page and the TTI is now a whopping 16.5 seconds. We have taken a very fast Jamstack site and dropped it to a Lighthouse score of 17 — a reduction of 83 points!

Now, you may be thinking that you would never make these poor decisions when building an app. But that is missing the point. Every choice we make has an impact on performance. It’s a balance, and performance does not come free. Even on Jamstack sites.

Making Jamstack fast again

You have seen that we cannot ignore client-side performance when using Jamstack. 

So why do people say that Jamstack is fast? Well, the main advantage of Jamstack — or using static site generators in general — is caching. Static files are cached on the edge, reducing Time to First Byte (TTFB).

This is always going to be faster than going to a single-origin web server before generating the page. This is a great feature of Jamstack and gives you a fighting chance to create a page that can hit 100 in Lighthouse. (But, hey, as a side note, remember that great scores aren’t always indicative of an actual user experience.)


See, I told you we could make Jamstack slow! There are also many other things that can slow it down, but hopefully this drives home the point.

While we’re talking about performance, check out some of my favorite performance articles here on CSS-Tricks.



“Link In Bio” is a slow knife

Anil Dash:

If Instagram users could post links willy-nilly, they might even be able to connect directly to their users, getting their email addresses or finding other ways to communicate with them. Links represent a threat to closed systems.

On CodePen, we have a TextExpander snippet we use for every single Instagram post we schedule through Buffer and it expands to this:

Looking for the code? It’s open-source on CodePen, follow the link 🔗 in our profile and find the author’s Pen there.

I can’t quite explain it, but I feel taken in by this sleight of hand. My brain goes, “Oh, they just can’t allow links on Instagram posts because it would just turn into a cesspool of spam and bad behavior.” But of course, that isn’t the real reason. Instagram is already a mess of spam, and it’s fairly easy to avoid. Links wouldn’t change that. If anything, they would be a helpful honeypot for catching bad actors. Links just make it easy to leave.

Minor note about linking out: Business accounts with over 10,000 followers can add a URL as a “swipe up” gesture on Instagram Stories. Whoop-de-doo.


Optimizing Images for Users with Slow Network Speeds

For every website, page load time is a critical factor that can make or break the business. Thanks to the better user experience that comes with a fast-loading webpage, those who focus on page load optimization enjoy better conversion rates, better SEO, better retention, and lower bounce rates.

And this fact has been highlighted in several studies. For example, according to one of the studies, 47% of consumers expect a web page to load in 2 seconds or less. No wonder that developers across the globe focus a lot on improving the webpage load time.

Logic dictates that keeping other factors the same, a lighter webpage should load faster than a heavier webpage, and that is the direction in which our webpages should head too. However, contrary to that logic, our websites have become heavier over the years. A look at data from HTTP Archive reveals that an average webpage in 2017 was almost three times heavier than what it used to be in 2011.

With more powerful user devices, faster networks, and the growing popularity of client-side frameworks and media-rich experiences, we have started pushing more and more data to the user’s device.

However, as developers, we miss a crucial point. We usually develop and test our websites in our offices over stable WiFi or wired connections. But when a real user visits our website, the network speed and stability may not be that great. Especially with more and more users coming online via mobile devices, the problem of fluctuating network conditions is even more significant.

Don’t believe it? ImageKit.io conducted a study to determine the network speed reported by the Network Info API of Chrome browser for users of a website (with visitors mostly from India). It is not very surprising that almost 40% of the visitors tracked had reported speed lower than 4G, i.e., less than 700 Kbps as per the Network Info API Spec.

While this percentage of users experiencing poor network conditions might be lower if we get visitors from developed countries like the USA or those in Europe, we can still safely assume that varying network conditions impact a sizeable chunk of our users. And we have all experienced it as well. When we enter an elevator or move to the basement parking lot of a building or travel to a remote location, the mobile data download speeds drop significantly. 

Therefore, we need to keep in mind that our users, especially the ones on mobile, will invariably try to visit our website on a slow network, and our goal as a developer should be to provide them with at least a decent browsing experience.

Why optimize images for slow networks?

The ultimate goal of optimizing a website for slower networks is to be able to serve its lighter variant. This way, all the essential stuff gets downloaded and displayed quickly on the user’s device. 

Amongst all the resources that load on a webpage, images make up most of the payload. And even if we do take care of optimizing the images in general, optimizing them further for slower networks can reduce the overall page weight by almost 30%.

Also, additional compression of images doesn’t break the critical functionality of any application. Yes, the image quality drops a bit to provide for a better user experience. But unlike stripping away JavaScript, which would require a lot of thought, compressing images is relatively straightforward.

How to optimize images for a slow network?

Now that we have established that optimizing our webpage based on the user’s network speed is essential and that images are the lowest-hanging fruit to get started, let’s look at how we can achieve network-based image optimization.

There are two parts of the solution.

Determine the user’s network speed

We need to determine the network speed that the user is experiencing and divide them into buckets. For example, users experiencing speed above a certain threshold should be classified in a single group and served a particular quality level of an image. This classification is simple in modern web browsers with the Network Information API. This API automatically classifies the users into four buckets – 4G, 3G, 2G, and slow 2G, with 4G being the fastest and slow 2G being the slowest. 

// returns '4g', '3g', '2g' or 'slow-2g'
var effectiveType = navigator.connection.effectiveType;

Compress the images to an appropriate quality level

The second part of the solution is to be able to alter the compression level of an image in real-time, depending on the user’s network speed determined in step 1. It should be as simple as passing an additional parameter in the image URL when the browser triggers a load for it.

While we rely on the browser to determine the user’s network speed, a tool like ImageKit.io makes the real-time compression bit simple. 

ImageKit.io is a real-time image optimization and transformation product that helps us deliver images in the right format, change compression levels, resize, crop, and transform images directly from the URL and deliver those images via a global image CDN. We can get the image with the desired compression level by just passing the image quality parameter in the URL. Quality is directly proportional to image size, i.e., the higher the quality number, the larger the resulting image.

// ImageKit URL with quality 90
https://ik.imagekit.io/demo/default-image.jpg?tr=q-90

// ImageKit URL with quality 50
https://ik.imagekit.io/demo/default-image.jpg?tr=q-50
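To tie the two steps together, here is a rough sketch of how the detected effectiveType could be mapped to an ImageKit quality parameter by hand. The bucket-to-quality mapping below is arbitrary, and this is not ImageKit’s plugin, just the general idea:

// Hypothetical mapping of network buckets to image quality levels.
const QUALITY_BY_NETWORK = { '4g': 90, '3g': 70, '2g': 50, 'slow-2g': 30 };

function imageUrlForNetwork(baseUrl) {
  // navigator.connection is only available in some browsers (e.g. Chrome).
  const connection = navigator.connection;
  const effectiveType = connection ? connection.effectiveType : '4g';
  const quality = QUALITY_BY_NETWORK[effectiveType] || 90;

  // Append the quality transformation parameter to the ImageKit URL.
  const url = new URL(baseUrl);
  url.searchParams.set('tr', `q-${quality}`);
  return url.toString();
}

// On a 2G connection this logs:
// https://ik.imagekit.io/demo/default-image.jpg?tr=q-50
console.log(imageUrlForNetwork('https://ik.imagekit.io/demo/default-image.jpg'));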

How else does ImageKit help with network-based image optimization?

While ImageKit has always supported real-time URL-based image quality modification, it has recently started supporting network-based image optimization features. These new features make it easy to implement complete network-based optimization with minimal effort.

Of course, first, we need to sign up for ImageKit and start delivering the images on our website through it. Once this is done, in the ImageKit dashboard, we have to enable the setting for network-based image optimization. We get a code snippet right there that we can add to an existing service worker on our website or to a new one.

// Adding the code snippet in a service worker
importScripts("https://runtime.imagekit.io/<your_imagekit_id>/v1/js/network-based-adaption.js?v=" + new Date().getTime());

Within the dashboard itself, we also need to specify the desired quality level for different network speed buckets. For example, we have used a quality of 90 for images for users classified as 4G users and a quality of 50 for our slow 2G users. Remember that lower quality results in smaller image sizes.

This code snippet is like a plugin meant for use in service workers. It intercepts the image requests originating from the user’s browser, detects the user’s network speed, and adds the necessary parameters to the image URL. These parameters can be understood by the ImageKit server to compress the image to the desired level and maintain efficient client-side caching. For network-based image optimization, all we need to do is include it on our website, and we’re done!
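As an illustration of that interception pattern (and explicitly not ImageKit’s actual plugin code), a hand-rolled service worker doing something similar might look like this, with a made-up cache name and quality mapping:

// Rough sketch only: intercept image requests, detect the network bucket,
// and append a quality parameter before fetching. Names are hypothetical.
const QUALITY_BY_NETWORK = { '4g': 90, '3g': 70, '2g': 50, 'slow-2g': 30 };

self.addEventListener('fetch', (event) => {
  const request = event.request;
  if (request.destination !== 'image') return; // leave non-image requests alone

  // navigator.connection is exposed to workers in Chromium-based browsers.
  const effectiveType = (self.navigator.connection || {}).effectiveType || '4g';
  const quality = QUALITY_BY_NETWORK[effectiveType] || 90;

  // Add the quality parameter that the image server understands.
  const url = new URL(request.url);
  url.searchParams.set('tr', `q-${quality}`);

  event.respondWith(
    caches.open('network-aware-images').then((cache) =>
      cache.match(url.toString()).then((cached) => {
        if (cached) return cached; // cache-first
        return fetch(url.toString()).then((response) => {
          cache.put(url.toString(), response.clone());
          return response;
        });
      })
    )
  );
});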

Additionally, in the ImageKit dashboard, we can specify the image URLs (or patterns in URLs) that should not be optimized based on the network type. For example, we would want to present the same crisp logo of our brand to our users regardless of their network speed.

Verifying the setup

Once set up correctly, we can quickly check if the network-based optimization is working using Chrome Developer Tools. We can emulate a slow network on our browser using the developer tools. If set up correctly, the service worker should add some parameters to the original image request indicating the current network speed. These parameters are understood by ImageKit’s servers to compress the image as per the quality settings specified in our ImageKit dashboard corresponding to that network speed.

How does caching work with the service worker in place?

ImageKit’s service worker plugin bypasses the browser cache in favor of a network-based image cache in the browser. Bypassing the browser cache means that the service worker can maintain different caches for different network types and choose the correct image from the cache or request a new one based on the user’s current network condition.

The service worker plugin automatically uses the cache-first technique to load the images and also implements a waterfall method to pick the right one from the cache. With this waterfall method, images at a higher quality get preference over images at a lower quality. What it means is that, if the user’s speed drops to 2G and they have a particular image cached from a time when they were getting good download speeds on a 4G network, the service worker will use that cached 4G image for delivery instead of downloading the image over the 2G network. But the reverse is not valid. If the user is experiencing 4G network speeds, the service worker won’t pick up the 2G image from the cache, because it is possible to fetch a better quality image and the resources allow for it.
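To make the waterfall idea concrete, here is one way such a cache lookup could work in principle. Again, this is a sketch with hypothetical names and quality values, not ImageKit’s implementation:

// Quality buckets ordered from best to worst (made-up values).
const QUALITY_LEVELS = [90, 70, 50, 30];

async function matchBestCachedImage(baseUrl, currentQuality) {
  const cache = await caches.open('network-aware-images');

  // Walk from the highest quality down to the quality we'd request right now.
  // A better cached copy is always acceptable; a worse one never is.
  for (const quality of QUALITY_LEVELS) {
    if (quality < currentQuality) break;

    const url = new URL(baseUrl);
    url.searchParams.set('tr', `q-${quality}`);

    const cached = await cache.match(url.toString());
    if (cached) return cached;
  }

  return null; // nothing suitable cached; fall back to the network
}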

And there is more!

Apart from a simple, ready-to-use service worker plugin and dashboard settings, ImageKit has a few more things to offer that make it an attractive tool to get started with network-based optimization.

Analytics

ImageKit provides us with analytics on our users’ observed network type. It gives us an insight into the number of times network-based optimization gets triggered and what the user distribution looks like across different network types. This distribution analysis can be helpful even for optimizing other resources on our website.

Cost of optimization

With network-based image optimizations, the size of the images goes down, but at the same time, the number of transformation requests can potentially go up. Unlike a lot of other real-time image optimization products out there, ImageKit’s pricing is based only on the output image bandwidth and nothing else. So with the favorable pricing model, implementing network-based image optimization not only provides a lot more value to our users but also helps us cut down on image delivery costs.

Conclusion

Improving page load performance is essential. However, there is one bit that we have all been missing – optimizing it for slow networks.

Images present an easy start when it comes to optimizing our entire website for different networks. With the support of network-based optimization features via service workers, it has become effortless to achieve it with ImageKit. 

It will be a great value add for our users and will help to improve the user experience even further, which will have a positive impact on the conversions on our website. 

Sign up for ImageKit and get started with it now!


The Slow and Steady Refactor

Over the past week or so, I’ve been reading Refactoring by Martin Fowler and it’s all about how to make sweeping changes to a large codebase in a way that doesn’t cause everything to break. I bring this up because there are a lot of really good notes in this book that have challenged my recent approach to auditing and refactoring a ton of CSS. A lot of the advice is small, kinda obvious stuff, but I realized that I’ve recently been lazy when it comes to how many of those small, obvious things I brush off on projects like this.

Martin writes:

…if I can’t immediately see and fix the problem, I’ll revert to my last good commit and redo what I just did with smaller steps. That works because I commit so frequently and because small steps are the key to moving quickly, particularly when working with difficult code.

So: commit frequently and only do one thing in that commit. Further, constantly test those changes as you code.

The other thing I’ve started to be more aware of — thanks to this book — is that commit messages are precious things because they help other folks understand the meaning of changed work. We’ve all seen commits with seemingly simple messages, like “refactored typography,” that turn out to be thousands of lines long, and we roll our eyes. That’s just asking for bugs to be introduced and visual regressions to happen. Smaller commits should prevent that sort of thing from ever happening. A good string of commit messages should sort of feel like you’re pairing with someone, as if you’re walking them through the changes step-by-step.

Although I’m getting better at this, I find this method of working extraordinarily difficult because it feels slower than making sweeping changes and hoping for the best. In his book, Martin encourages us to set that feeling aside. When we’re refactoring large portions of our codebase, he argues, we should always be slow and steady, patient and disciplined.
