Tag: Optimizing

Maximally optimizing image loading for the web in 2021

Malte Ubl’s list of:

8 image loading optimization techniques to minimize both the bandwidth used for loading images on the web and the CPU usage for image display.

  1. Fluid width images in CSS, not forgetting the height and width attributes in HTML so you get proper aspect-ratio on first render.
  2. Use content-visibility: auto;
  3. Send AVIF when you can.
  4. Use responsive images syntax.
  5. Set far-out expires headers on images and have a cache-busting strategy (like changing the file name).
  6. Use loading="lazy"
  7. Use decoding="async"
  8. Use inline CSS/SVG for a blurry placeholder.
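As a rough sketch, here is how several of these combine in plain HTML and CSS (the file names and dimensions are made up for illustration):

<style>
  /* technique 1: fluid width in CSS; the width/height attributes below
     still give the browser the aspect ratio before the image loads */
  img { max-width: 100%; height: auto; }
</style>
<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(max-width: 40em) 100vw, 40em"
  width="800"
  height="600"
  loading="lazy"
  decoding="async"
  alt="A description of the photo">

That covers techniques 1, 4, 6, and 7; AVIF (3) and far-out expires headers (5) happen on the server side.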

Apparently, there is but one tool that does it all: eleventy-high-performance-blog.

My thoughts:

  • If you are lazy loading, do you really need to do the content-visibility thing also? They seem very related.
  • Serving AVIF is usually good, but it seems less cut-and-dry than WebP was. You need to make sure your AVIF version is both better and smaller, which feels like a manual process right now.
  • The decoding thing seems weird. I’ll totally use it if it’s a free perf win, but if it’s always a good idea, shouldn’t the browser just always do it?
  • I’m not super convinced blurry placeholders are in the same category of necessary as the rest of this stuff. Feels like a trend.


Converting and Optimizing Images From the Command Line

Images take up to 50% of the total size of an average web page. And if images are not optimized, users end up downloading extra bytes. And if they’re downloading extra bytes, the site not only takes that much more time to load, but users are using more data, both of which can be resolved, at least in part, by optimizing the images before they are downloaded.

Researchers around the world are busy developing new image formats that possess high visual quality despite being smaller in size compared to other formats like PNG or JPG. Although these new formats are still in development and generally have limited browser support, one of them, WebP, is gaining a lot of attention. And while they aren’t really in the same class as raster images, SVGs are another format many of us have been using in recent years because of their inherently light weight.

There are tons of ways we can make smaller and optimized images. In this tutorial, we will write bash scripts that create and optimize images in different image formats, targeting the most common formats, including JPG, PNG, WebP, and SVG. The idea is to optimize images before we serve them so that users get the most visually awesome experience without all the byte bloat.

Our targeted directory of images

Our directory of optimized images

This GitHub repo has all the images we’re using and you’re welcome to grab them and follow along.

Set up

Before we start, let’s get all of our dependencies in order. Again, we’re writing Bash scripts, so we’ll be spending time in the command line.

Here are the commands for all of the dependencies we need to start optimizing images:

sudo apt-get update
sudo apt-get install imagemagick webp jpegoptim optipng
npm install -g svgexport svgo

It’s a good idea to know what we’re working with before we start using them:

  • ImageMagick: provides the convert command we use to convert between raster formats
  • webp: provides the cwebp and dwebp commands for encoding and decoding WebP images
  • jpegoptim: optimizes JPG/JPEG images
  • optipng: optimizes PNG images
  • svgexport: renders SVG files to PNG or JPG
  • svgo: optimizes SVG files

OK, we have our images in the original-images directory from the GitHub repo. You can follow along at commit 3584f9b.

Note: It is strongly recommended to backup your images before proceeding. We’re about to run programs that alter these images, and while we plan to leave the originals alone, one wrong command might change them in some irreversible way. So back anything up that you plan to use on a real project to prevent cursing yourself later.

Organize images

OK, we’re technically set up. But before we jump into optimizing all the things, we should organize our files a bit. Let’s organize them by splitting them up into different sub-directories based on their MIME type. In fact, we can create a new bash script to do that for us!

The following code creates a script called organize-images.sh:

#!/bin/bash

input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

for img in $( find $input_dir -type f -iname "*" ); do
  # get the type of the image
  img_type=$(basename `file --mime-type -b $img`)

  # create a directory for the image type
  mkdir -p $img_type

  # move the image into its type directory
  rsync -a $img $img_type
done

This might look confusing if you’re new to writing scripts, but what it’s doing is actually pretty simple. We give the script an input directory where it looks for images. The script then goes into that input directory, looks for image files and identifies their MIME type. Finally, it creates a sub-directory for each MIME type in the working directory and drops a copy of each image into its respective sub-directory.

Let’s run it!

bash organize-images.sh original-images

Sweet. The working directory now has a sub-directory for each image type. Now that our images are organized, we can move on to creating variants of each image. We’ll tackle one image type at a time.

Convert to PNG

We will convert three types of images into PNG in this tutorial: WebP, JPEG, and SVG. Let’s start by writing a script called webp2png.sh, which pretty much says what it does: convert WebP files to PNG files.

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  dwebp $img -o ${img%.*}.png
done

Here’s what’s happening:

  • input_dir="$1": Stores the command line input to the script
  • if [[ -z "$input_dir" ]]; then: Runs the subsequent conditional if the input directory is not defined
  • for img in $( find $input_dir -type f -iname "*.webp" );: Loops through each file in the directory that has a .webp extension.
  • dwebp $img -o ${img%.*}.png: Converts the WebP image into a PNG variant.

And away we go:

bash webp2png.sh webp

We now have our PNG images in the webp directory. Next up, let’s convert JPG/JPEG files to PNG with another script called jpg2png.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) ); do
  convert $img ${img%.*}.png
done

This uses the convert command provided by the ImageMagick package we installed. Like the last script, we provide an input directory that contains JPEG/JPG images. The script looks in that directory and creates a PNG variant for each matching image. If you look closely, we have added -o -iname "*.jpeg" to the find command. That -o is a logical OR, so the script finds all the images that have either a .jpg or a .jpeg extension.

Here’s how we run it:

bash jpg2png.sh jpeg

Now that we have our PNG variants from JPG, we can do the exact same thing for SVG files with a script called svg2png.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

# png image width
width="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.png $width:
done

This script has a new feature. Since SVG is a scalable format, we can specify the width directive to scale our SVGs up or down. We use the svgexport package we installed earlier to convert each SVG file into a PNG:

bash svg2png.sh svg+xml 512
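The trailing colon in the script’s $width: argument is svgexport’s size syntax: a width followed by a colon tells svgexport to scale the height automatically. As a hypothetical one-off (file names made up):

svgexport logo.svg logo.png 512: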

Commit 76ff80a shows the result in the repo.

We’ve done a lot of great work here by creating a bunch of PNG files based on other image formats. We still need to do the same thing for the rest of the image formats before we get to the real task of optimizing them.

Convert to JPG

Following in the footsteps of PNG image creation, we will convert PNG, WebP, and SVG into JPG. Let’s start by writing a script called png2jpg.sh which pretty much says what it does: convert PNG files to JPG files:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  convert $img -quality $quality% ${img%.*}.jpg
done

You might be noticing a pattern in these scripts by now. But this one introduces a new power: a -quality directive we can set when converting PNG images to JPG images. The rest is the same.

And here’s how we run it:

bash png2jpg.sh png 90

Woah. We now have JPG images in our png directory. Let’s do the same with a webp2jpg.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  # convert to png first
  dwebp $img -o ${img%.*}.png

  # then convert png to jpg
  convert ${img%.*}.png -quality $quality% ${img%.*}.jpg
done

Again, this is the same thing we wrote for converting WebP to PNG. However, there is a twist. We cannot convert the WebP format directly into JPG. Hence, we need to get a little creative here and convert WebP to PNG using dwebp and then convert that PNG to JPG using convert. That is why, in the for loop, we have two different steps.

Now, let’s run it:

bash webp2jpg.sh webp 90

Voilà! We have created JPG variants for our WebP images. Now let’s tackle SVG to JPG:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image width
width="$2"

# jpg image quality
quality="$3"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.jpg $width: $quality%
done

You might be thinking that you have seen this script before. You have! We used nearly the same script to create PNG images from SVG. The only addition to this script is that we can specify the quality directive of our JPG images.

bash svg2jpg.sh svg+xml 512 90

Everything we just did is contained in commit 884c6cf in the repo.

Convert to WebP

WebP is an image format designed for modern browsers. At the time of this writing, it enjoys roughly 90% global browser support, including partial support in Safari. WebP’s biggest advantage is its much smaller file size compared to other image formats, without sacrificing visual quality. That makes it a good format to serve to users.

But enough talk. Let’s write a png2webp.sh that — you guessed it — creates WebP images out of PNG files:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  cwebp $img -q $quality -o ${img%.*}.webp
done

This is just the reverse of the script we used to create PNG images from WebP files. Instead of using dwebp, we use cwebp.

bash png2webp.sh png 90

We have our WebP images. Now let’s convert our JPG images. The tricky thing is that cwebp doesn’t always accept JPG input directly (it depends on how the package was built). So, we will first convert JPG to PNG and then convert the intermediate PNG to WebP in our jpg2webp.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) ); do
  # convert to png first
  convert $img ${img%.*}.png

  # then convert png to webp
  cwebp ${img%.*}.png -q $quality -o ${img%.*}.webp
done

Now we can use it like this to get our WebP variations of JPG files:

bash jpg2webp.sh jpeg 90

Commit 6625f26 shows the result.

Combining everything into a single directory

Now that we are done converting stuff, we’re one step closer to optimizing our work. But first, we’re going to bring all of our images back into a single directory so that it is easy to optimize them with fewer commands.

Here’s code that creates a new bash script called combine-images.sh:

#!/bin/bash

input_dirs="$1"
output_dir="$2"

if [[ -z "$input_dirs" ]]; then
  echo "Please specify the input directories."
  exit 1
elif [[ -z "$output_dir" ]]; then
  echo "Please specify an output directory."
  exit 1
fi

# create a directory to store the generated images
mkdir -p $output_dir

# split the comma-separated string of input directories into an array
input_dirs=(${input_dirs//,/ })

# for each directory in the input directories
for dir in "${input_dirs[@]}"
do
  # copy images from this directory to the generated images directory
  rsync -a $dir/* $output_dir/
done

The first argument is a comma-separated list of input directories whose images will be transferred to a combined target directory. The second argument defines that combined directory.

bash combine-images.sh jpeg,svg+xml,webp,png generated-images

The final output can be seen in the repo.

Optimize SVG

Let us start by optimizing our SVG images. Add the following code to optimize-svg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgo $img -o ${img%.*}-optimized.svg
done

We’re using the SVGO package here. It’s got a lot of options we can use but, to keep things simple, we’re just sticking with the default behavior of optimizing SVG files:

bash optimize-svg.sh generated-images
This gives us a 4KB saving on each image. Let’s say we were serving 100 SVG icons — we just saved 400KB!

The result can be seen in the repo at commit 75045c3.

Optimize PNG

Let’s keep rolling and optimize our PNG files using this code to create an optimize-png.sh command:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  optipng $img -out ${img%.*}-optimized.png
done

Here, we are using the OptiPNG package to optimize our PNG images. The script looks for PNG images in the input directory and creates an optimized version of each one, appending -optimized to the file name. There is one interesting argument, -o, which we can use to specify the optimization level. The default value is 2, and values range from 0 to 7. To optimize our PNGs, we run:

bash optimize-png.sh generated-images
PNG optimization depends upon the information stored in the image. Some images can be greatly optimized while some show little to no optimization.

As we can see, OptiPNG does a great job optimizing the images (the results are in commit 4a97f29). We can play around with the -o argument to find a suitable trade-off between optimization time and file size; OptiPNG’s optimization is lossless, so image quality is unaffected.
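For instance, here’s a hypothetical one-off run at the most aggressive level, where level 7 is the slowest but most thorough:

optipng -o7 image.png -out image-optimized.png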

Optimize JPG

We have reached the final part! We’re going to wrap things up by optimizing JPG images. Add the following code to optimize-jpg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

# target image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f \( -iname "*.jpg" -o -iname "*.jpeg" \) ); do
  cp $img ${img%.*}-optimized.jpg
  jpegoptim -m $quality ${img%.*}-optimized.jpg
done

This script uses JPEGoptim. The problem with this package is that it doesn’t have any option to specify the output file. We can only optimize the image file in place. We can overcome this by first creating a copy of the image, naming it whatever we like, then optimizing the copy. The -m argument is used to specify image quality. It is good to experiment with it a bit to find the right balance between quality and file size.

bash optimize-jpg.sh generated-images 95

The results are shown in commit 35630da.

Wrapping up

See that? With a few scripts, we can perform heavy-duty image optimizations right from the command line and use them on any project, since the packages they rely on are installed globally. We can set up CI/CD pipelines to create different variants of each image and serve them using valid HTML, APIs, or even set up our own image conversion websites.

I hope you enjoyed reading and learning something from this article as much as I enjoyed writing it for you. Happy coding!



Optimizing Image Depth

Something I learned (or, I guess, re-learned) this year is how important it is to pay close attention to the bit depth of images. Way back in the day, we used to obsessively choose between 2-, 4-, or 8-bit color depth on our GIFs, because when lots of users were using dialup modems to surf the web, every kilobyte counted.

Now that a huge number of us access the web via broadband, guess what? Every kilobyte still counts. Because not everyone has access to broadband, particularly in the mobile space; and also, any time we can shave off of page rendering is worth pursuing. I’d assumed that optimization tools handled things as trivial as color depth optimization for us, but I discovered I was wrong there.

This is particularly true for PNGs. By default, lots of image editing tools save PNGs with 24-bit color depth (2^24 possible colors), just in case.

For a photograph, that makes some sense (though if it’s a photograph, you should probably save it as JPG or WebP) but for things like logos and icons, that’s approximately 2^24 more colors than you’re going to be using.

So in Acorn, my image editor of choice, I’ve been taking special care to crank down the bit depth on PNGs in the export dialog. In many cases, I’ve cut image weight 80% or more by indexing colors to a palette of 256 or fewer values, with no loss of visual fidelity.  (Again, these aren’t photographs I’m talking about.)

Here’s an example:

PNG export from Acorn

That PNG at full-color depth is about 379KB. Restricted to a palette of 32 colors, it’s 61KB. And that’s just at export time: once I run them through ImageOptim, the optimized sizes are 359KB and 48KB. That’s a weight savings of about 85%, just by lowering the color depth. And if I deployed the image and discovered it needs a few more colors, I could re-run the process to use 64 colors: the final size, in that case, is 73KB, still enormous savings.

Image run through ImageOptim, reducing size by another 22%

Reducing color depth by eye is clearly more onerous than throwing an optimization script at a directory of images, but in my experience, the results are much more efficient in terms of image weight and therefore user experience. And that’s really what all this is about, isn’t it?
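If you want to experiment with the same idea from the command line, here is a minimal sketch using ImageMagick (the file names are made up; pngquant is a popular alternative):

# write an indexed-color PNG restricted to a 32-color palette
convert logo.png -colors 32 PNG8:logo-32.png

The PNG8: prefix forces ImageMagick to write an 8-bit, palette-based PNG rather than deciding on its own.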



Optimizing CSS for faster page loads

A straightforward post with some perf data from Tomas Pustelnik. It’s a good reminder that CSS is a crucial part of thinking about web performance, and for a huge reason:

Any time [the browser] encounters any external resource (CSS, JS, images, etc.) it will assign it a download priority and initiate its download. Priorities are important because some resources are critical to render a page (eg. main stylesheet and JS files) while others may be less important (like images or stylesheets for other media types).

In the case of CSS, this priority is usually high because stylesheets are necessary to create CSSOM (CSS Object Model). To render a webpage browser has to construct both DOM and CSSOM.

That’s why CSS is often referred to as a “blocking” resource. That’s desirable to some degree: we wouldn’t want to see flash-of-unstyled-websites. But we get real perf gains when we make CSS smaller because it’s quicker to download, parse, and apply.
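As an illustrative snippet (not from the linked post): a stylesheet whose media attribute doesn’t match the current environment still downloads, but at a low priority and without blocking rendering:

<link rel="stylesheet" href="styles.css">
<link rel="stylesheet" href="print.css" media="print">

Here, styles.css blocks rendering on screen, while print.css doesn’t.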

Aside from the techniques in the post, I’m sure advocates of atomic/all-utility CSS would love it pointed out that the stylesheets from those approaches are generally way smaller, and thus more performant. CSS-in-JS approaches will sometimes bundle styles into scripts so, to be fair, you get a little perf gain at the top by not loading the CSS there, but a perf loss from increasing the JavaScript bundle size in the process. (I haven’t seen a study with a fair comparison though, so I don’t know if it’s a wash or what.)


Tools for Optimizing SVG

Command Line Tools

The biggest (and maybe only?) player in this space: SVGO. It’s even used under the hood of other tools.

Using with a Task Runner
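As a minimal sketch, SVGO drops into an npm script or any task runner that can shell out (the paths here are hypothetical):

npx svgo -f ./src/icons -o ./dist/icons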

Desktop Apps

There are generic apps for optimizing images on Windows and Linux, but I haven’t yet confirmed that any of them handle SVG.

Web Apps

The idea with these tools is that you have an SVG on your machine and you upload it to the web app to optimize that one SVG.

APIs

Optimizing Directly Out of Design Tools

Typically, when you export SVG from a design tool, it is in dire need of optimization. Hence, all the tools in this article. The design tool itself can help though, at least somewhat.


Optimizing Images for Users with Slow Network Speeds

For every website, page load time is a critical factor that can make or break the business. Thanks to the better user experience that comes with a fast-loading webpage, those who focus on page load optimization enjoy better conversion rates, better SEO, better retention, and lower bounce rates.

And this fact has been highlighted in several studies. For example, according to one study, 47% of consumers expect a web page to load in 2 seconds or less. No wonder developers across the globe focus a lot on improving webpage load time.

Logic dictates that keeping other factors the same, a lighter webpage should load faster than a heavier webpage, and that is the direction in which our webpages should head too. However, contrary to that logic, our websites have become heavier over the years. A look at data from HTTP Archive reveals that an average webpage in 2017 was almost three times heavier than what it used to be in 2011.

With more powerful user devices, faster networks, and the growing popularity of client-side frameworks and media-rich experiences, we have started pushing more and more data to the user’s device.

However, as developers, we miss a crucial point. We usually develop and test our websites in our offices over stable WiFi or wired connections. But when a real user visits our website, the network speed and stability may not be that great. Especially with more and more users coming online via mobile devices, the problem of fluctuating network conditions is even more significant.

Don’t believe it? ImageKit.io conducted a study to determine the network speed reported by the Network Info API of the Chrome browser for users of a website (with visitors mostly from India). It is not very surprising that almost 40% of the visitors tracked reported a speed lower than 4G, i.e., less than 700 Kbps as per the Network Info API spec.

While this percentage of users experiencing poor network conditions might be lower if we get visitors from developed countries like the USA or those in Europe, we can still safely assume that varying network conditions impact a sizeable chunk of our users. And we have all experienced it as well. When we enter an elevator or move to the basement parking lot of a building or travel to a remote location, the mobile data download speeds drop significantly. 

Therefore, we need to keep in mind that our users, especially the ones on mobile, will invariably try to visit our website on a slow network, and our goal as a developer should be to provide them with at least a decent browsing experience.

Why optimize images for slow networks?

The ultimate goal of optimizing a website for slower networks is to be able to serve its lighter variant. This way, all the essential stuff gets downloaded and displayed quickly on the user’s device. 

Amongst all the resources that load on a webpage, images make up most of the payload. And even if we do take care of optimizing the images in general, optimizing them further for slower networks can reduce the overall page weight by almost 30%.

Also, additional compression of images doesn’t break the critical functionality of any application. Yes, the image quality drops a bit to provide for a better user experience. But unlike stripping away JavaScript, which would require a lot of thought, compressing images is relatively straightforward.

How to optimize images for a slow network?

Now that we have established that optimizing our webpage based on the user’s network speed is essential and that images are the lowest-hanging fruit to get started, let’s look at how we can achieve network-based image optimization.

There are two parts to the solution.

Determine the user’s network speed

We need to determine the network speed that the user is experiencing and divide them into buckets. For example, users experiencing speed above a certain threshold should be classified in a single group and served a particular quality level of an image. This classification is simple in modern web browsers with the Network Information API. This API automatically classifies the users into four buckets – 4G, 3G, 2G, and slow 2G, with 4G being the fastest and slow 2G being the slowest. 

// returns '4g', '3g', '2g' or 'slow-2g'
var effectiveType = navigator.connection.effectiveType;

Compress the images to an appropriate quality level

The second part of the solution is to be able to alter the compression level of an image in real-time, depending on the user’s network speed determined in step 1. It should be as simple as passing an additional parameter in the image URL when the browser triggers a load for it.

While we rely on the browser to determine the user’s network speed, a tool like ImageKit.io makes the real-time compression bit simple. 

ImageKit.io is a real-time image optimization and transformation product that helps us deliver images in the right format, change compression levels, resize, crop, and transform images directly from the URL and deliver those images via a global image CDN. We can get the image with the desired compression level by just passing the image quality parameter in the URL. Quality is directly proportional to image size, i.e., the higher the quality number, the larger the resulting image.

// ImageKit URL with quality 90
https://ik.imagekit.io/demo/default-image.jpg?tr=q-90

// ImageKit URL with quality 50
https://ik.imagekit.io/demo/default-image.jpg?tr=q-50

How else does ImageKit help with network-based image optimization?

While ImageKit has always supported real-time URL-based image quality modification, it recently started supporting network-based image optimization features. With these new features, implementing complete network-based optimization takes minimal effort.

Of course, first, we need to sign up for ImageKit and start delivering the images on our website through it. Once this is done, in the ImageKit dashboard, we have to enable the setting for network-based image optimization. We get a code snippet right there that we can add to an existing service worker on our website or to a new one.

// Adding the code snippet in a service worker
importScripts("https://runtime.imagekit.io/<your_imagekit_id>/v1/js/network-based-adaption.js?v=" + new Date().getTime());

Within the dashboard itself, we also need to specify the desired quality level for different network speed buckets. For example, we have used a quality of 90 for users classified as 4G users and a quality of 50 for our slow 2G users. Remember that lower quality results in smaller image sizes.

This code snippet is like a plugin meant for use in service workers. It intercepts the image requests originating from the user’s browser, detects the user’s network speed, and adds the necessary parameters to the image URL. These parameters can be understood by the ImageKit server to compress the image to the desired level and maintain efficient client-side caching. For network-based image optimization, all we need to do is include it on our website, and we’re done!
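To make the idea concrete, here is a simplified, hypothetical sketch of what such a service worker interception can look like. This is not ImageKit’s actual plugin code, and the quality mapping is made up:

// sw.js: a simplified sketch of network-aware image interception
// (illustrative only; not ImageKit's plugin)
self.addEventListener("fetch", function (event) {
  var req = event.request;
  if (req.destination !== "image") return; // let non-image requests pass through

  // navigator.connection is exposed to worker scopes in Chromium browsers
  var conn = self.navigator.connection;
  var type = (conn && conn.effectiveType) || "4g";

  // hypothetical mapping from network bucket to a quality level
  var quality = { "4g": 90, "3g": 70, "2g": 60, "slow-2g": 50 }[type] || 90;

  // append an ImageKit-style quality transform to the image URL
  var url = new URL(req.url);
  url.searchParams.set("tr", "q-" + quality);
  event.respondWith(fetch(url.toString()));
});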

Additionally, in the ImageKit dashboard, we can specify the image URLs (or patterns in URLs) that should not be optimized based on the network type. For example, we would want to present the same crisp logo of our brand to our users regardless of their network speed.

Verifying the setup

Once set up correctly, we can quickly check if the network-based optimization is working using Chrome Developer Tools. We can emulate a slow network on our browser using the developer tools. If set up correctly, the service worker should add some parameters to the original image request indicating the current network speed. These parameters are understood by ImageKit’s servers to compress the image as per the quality settings specified in our ImageKit dashboard corresponding to that network speed.

How does caching work with the service worker in place?

ImageKit’s service worker plugin bypasses the browser cache in favor of a network-based image cache in the browser. Bypassing the browser cache means that the service worker can maintain different caches for different network types and choose the correct image from the cache, or request a new one, based on the user’s current network condition.

The service worker plugin automatically uses the cache-first technique to load the images and also implements a waterfall method to pick the right one from the cache. With this waterfall method, images at higher quality get preference over images at a lower quality. What it means is that, if the user’s speed drops to 2G and they have a particular image cached from a time when they were experiencing good download speeds on a 4G network, the service worker will use that cached 4G image for delivery instead of downloading the image over the 2G network. But the reverse is not valid: if the user is experiencing 4G network speeds, the service worker won’t pick up the 2G image from the cache, because it is possible to fetch a better quality image and the resources allow for it.

And there is more!

Apart from a simple, ready-to-use service worker plugin and dashboard settings, ImageKit has a few more things to offer that make it an attractive tool to get started with network-based optimization.

Analytics

ImageKit provides us with analytics on our users’ observed network type. It gives us insight into the number of times network-based optimization gets triggered and what the user distribution looks like across different network types. This distribution analysis can be helpful even for optimizing other resources on our website.

Cost of optimization

With network-based image optimizations, the size of the images goes down, but at the same time, the number of transformation requests can potentially go up. Unlike a lot of other real-time image optimization products out there, ImageKit’s pricing is based only on the output image bandwidth and nothing else. So with the favorable pricing model, implementing network-based image optimization not only provides a lot more value to our users but also helps us cut down on image delivery costs.

Conclusion

Improving page load performance is essential. However, there is one bit that we have all been missing – optimizing it for slow networks.

Images present an easy start when it comes to optimizing our entire website for different networks. With the support of network-based optimization features via service workers, it has become effortless to achieve with ImageKit.

It will be a great value add for our users and will help to improve the user experience even further, which will have a positive impact on the conversions on our website. 

Sign up for ImageKit and get started now!
