Through a little series of articles, we are going to explore some fancy and uncommon CSS-only sliders. If you are tired of seeing the same ol’ classic sliders, then you are in the right place!
For this first article, we will start with something I call the “circular rotating image slider”:
Cool, right? Let’s dissect the code!
The HTML markup
If you followed my series of fancy image decorations or CSS grid and custom shapes, then you know that my first rule is to work with the smallest HTML possible. I always try hard to find CSS solutions before cluttering my code with a lot of <div>s and other stuff.
The same rule applies here — our code is nothing but a list of images in a container.
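As a sketch, the markup is nothing more than a container with the images inside (the class name comes from the CSS below; the file names are placeholders):

<div class="gallery">
  <img src="img-1.jpg" alt="">
  <img src="img-2.jpg" alt="">
  <img src="img-3.jpg" alt="">
  <img src="img-4.jpg" alt="">
</div>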
That’s it! Now let’s move to the interesting part of the code. But first, we’re going to dive into this to understand the logic of how our slider works.
How does it work?
Here is a video where I remove overflow: hidden from the CSS so we can better understand how the images are moving:
It’s like our four images are placed on a large circle that rotates counter-clockwise.
All the images have the same size (denoted by S in the figure). Note the blue circle which is the circle that intersects with the center of all the images and has a radius (R). We will need this value later for our animation. R is equal to 0.707 * S. (I’m going to skip the geometry that gives us that equation.)
Let’s write some CSS!
We will be using CSS Grid to place all the images in the same area above each other:
.gallery {
  --s: 280px; /* control the size */

  display: grid;
  width: var(--s);
  aspect-ratio: 1;
  padding: calc(var(--s) / 20); /* we will see the utility of this later */
  border-radius: 50%;
}
.gallery > img {
  grid-area: 1 / 1;
  width: 100%;
  height: 100%;
  object-fit: cover;
  border-radius: inherit;
}
Nothing too complex so far. The tricky part is the animation.
We talked about rotating a big circle, but in reality, we will rotate each image individually creating the illusion of a big rotating circle. So, let’s define an animation, m, and apply it to the image elements:
.gallery > img {
  /* same as before */
  animation: m 8s infinite linear;
  transform-origin: 50% 120.7%;
}

@keyframes m {
  100% { transform: rotate(-360deg); }
}
The main trick relies on that highlighted line. By default, the CSS transform-origin property is equal to center (or 50% 50%) which makes the image rotate around its center, but we don’t need it to do that. We need the image to rotate around the center of the big circle that contains our images hence the new value for transform-origin.
Since R is equal to 0.707 * S, we can say that R is equal to 70.7% of the image size. That’s where the 120.7% value comes from: the center of the big circle sits 50% (the image’s own center) plus 70.7% (the radius R) from the top edge of each image.
Let’s run the animation and see what happens:
I know, I know. The result is far from what we want, but in reality we are very close. It may look like there’s just one image there, but don’t forget that we have stacked all the images on top of each other. All of them are rotating at the same time and only the top image is visible. What we need is to delay the animation of each image to avoid this overlap.
If we hide the overflow on the container we can already see a slider, but we will update the animation a little so that each image remains visible for a short period before it moves along.
We’re going to update our animation keyframes to do just that:
For each 90deg (360deg/4, where 4 is the number of images) we will add a small pause. Each image will remain visible for 5% of the overall duration before we slide to the next one (27%-22%, 52%-47%, etc.). I’m going to update the animation-timing-function using a cubic-bezier() function to make the animation a bit fancier:
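Here’s a sketch of those keyframes for four images, together with negative delays that offset each image by a quarter of the cycle (the cubic-bezier() parameters here are my own pick, not necessarily the demo’s):

.gallery > img {
  /* same transform-origin as before */
  animation: m 8s infinite cubic-bezier(.5, -0.2, .5, 1.2);
}
.gallery > img:nth-child(2) { animation-delay: -2s; } /* 1 * -8s / 4 */
.gallery > img:nth-child(3) { animation-delay: -4s; } /* 2 * -8s / 4 */
.gallery > img:nth-child(4) { animation-delay: -6s; } /* 3 * -8s / 4 */

@keyframes m {
  0%, 3%    { transform: rotate(0); }
  22%, 27%  { transform: rotate(-90deg); }
  47%, 52%  { transform: rotate(-180deg); }
  72%, 77%  { transform: rotate(-270deg); }
  98%, 100% { transform: rotate(-360deg); }
}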
Now our slider is perfect! Well, almost perfect because we are still missing the final touch: the colorful circular border that rotates around our images. We can use a pseudo-element on the .gallery wrapper to make it:
I have created a circle with a repeating conic gradient for the background while using a masking trick that only shows the padded area. Then I apply to it the same animation we defined for the images.
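Here’s a sketch of that pseudo-element (the gradient colors are placeholders, and the exact mask values may differ from the demo):

.gallery::after {
  content: "";
  grid-area: 1 / 1;             /* stack it with the images */
  margin: calc(var(--s) / -20); /* grow into the padded area */
  border-radius: 50%;
  background: repeating-conic-gradient(#789048 0 30deg, #DFBA69 0 60deg);
  /* keep only the outer ring, as thick as the padding */
  mask: radial-gradient(farthest-side, #0000 calc(98% - var(--s) / 20), #000 calc(100% - var(--s) / 20));
  animation: m 8s infinite linear; /* the same animation as the images */
}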
We are done! We have a cool circular slider:
Let’s add more images
Working with four images is good, but it would be better if we could scale it to any number of images. After all, this is the purpose of an image slider. We should be able to consider N images.
For this, we are going to make the code more generic by introducing Sass. First, we define a variable for the number of images ($n) and we will update every part where we hard-coded the number of images (4).
The step between each state is equal to 25% — which is 100%/4 — and we add a -90deg angle — which is -360deg/4. That means we can write our loop like this instead:
@keyframes m {
  0% { transform: rotate(0); }
  @for $i from 1 to $n {
    #{($i / $n) * 100}% { transform: rotate(#{($i / $n) * -360}deg); }
  }
  100% { transform: rotate(-360deg); }
}
Since each image takes 5% of the animation, we change this:
#{($i / $n) * 100}%
…with this:
#{($i / $n) * 100 - 2}%, #{($i / $n) * 100 + 3}%
It should be noted that 5% is an arbitrary value I chose for this example. We can also make it a variable to control how much time each image should stay visible. I am going to skip that for the sake of simplicity, but for homework, you can try to do it and share your implementation in the comments!
@keyframes m {
  0%, 3% { transform: rotate(0); }
  @for $i from 1 to $n {
    #{($i / $n) * 100 - 2}%, #{($i / $n) * 100 + 3}% {
      transform: rotate(#{($i / $n) * -360}deg);
    }
  }
  98%, 100% { transform: rotate(-360deg); }
}
The last bit is to update transform-origin. We will need some geometry tricks. Whatever the number of images, the configuration is always the same. We have our images (small circles) placed inside a big circle and we need to find the value of the radius, R.
You probably don’t want a boring geometry explanation so here’s how we find R:
R = S / (2 * sin(180deg / N))
If we express that as a percentage of the image size, that gives us R = 50% / sin(180deg / N), which means the vertical position of the transform-origin becomes 50% + 50% / sin(180deg / N). (For N = 4, that works out to the 120.7% we used earlier.)
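Here’s a minimal sketch of that calculation in Sass, assuming Dart Sass’s sass:math module:

@use "sass:math";

$n: 4; /* the number of images */

.gallery > img {
  /* 50% + R, where R = 50% / sin(180deg / N) */
  transform-origin: 50% (50% + math.div(50%, math.sin(math.div(180deg, $n))));
}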
We’re done! We have a slider that works with any number of images!
Let’s toss nine images in there:
Add as many images as you want and update the $n variable with the total number of images.
Wrapping up
With a few tricks using CSS transforms and standard geometry, we created a nice circular slider that doesn’t require a lot of code. What is cool about this slider is that we don’t need to bother duplicating the images to keep the infinite animation since we have a circle. After a full rotation, we will get back to the first image!
We’ve spent the last two articles in this three-part series playing with gradients to make really neat image decorations using nothing but the <img> element. In this third and final piece, we are going to explore more techniques using the CSS outline property. That might sound odd because we generally use outline to draw a simple line around an element — sorta like border but it can only draw all four sides at once and is not part of the Box Model.
We can do more with it, though, and that’s what I want to experiment with in this article.
Let’s start with our first example — an overlay that disappears on hover with a cool animation:
We could accomplish this by adding an extra element over the image, but that’s what we’re challenging ourselves not to do in this series. Instead, we can reach for the CSS outline property and leverage the fact that it can have a negative offset and is able to overlap its element.
img {
  --s: 250px; /* the size of the image */
  --b: 8px;   /* the border thickness */
  --g: 14px;  /* the gap */
  --c: #4ECDC4;

  width: var(--s);
  aspect-ratio: 1;
  outline: calc(var(--s) / 2) solid #0009;
  outline-offset: calc(var(--s) / -2);
  cursor: pointer;
  transition: 0.3s;
}
img:hover {
  outline: var(--b) solid var(--c);
  outline-offset: var(--g);
}
The trick is to create an outline that’s as thick as half the image size, then offset it by half the image size with a negative value. Add in some semi-transparency with the color and we have our overlay!
The rest is what happens on :hover. We update the outline and the transition between both outlines creates the cool hover effect. The same technique can also be used to create a fading effect where we don’t move the outline but make it transparent.
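For instance, a sketch of that fading variant could keep the same overlay in place and only transition its color:

img:hover {
  outline-color: #0000; /* same thickness and offset; the overlay just fades out */
}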
Instead of using half the image size in this one, I am using a very big outline thickness value (100vmax) while applying a CSS mask. With this, there’s no longer a need to know the image size — the trick works at all sizes!
You may face issues using 100vmax as a big value in Safari. If it’s the case, consider the previous trick where you replace the 100vmax with half the image size.
We can take things even further! For example, instead of simply clipping the extra outline, we can create shapes and apply a fancy reveal animation.
Cool right? The outline is what creates the yellow overlay. The clip-path clips the extra outline to get the star shape. Then, on hover, we make the color transparent.
Oh, you want hearts instead? We can certainly do that!
Imagine all the possible combinations we can create. All we have to do is to draw a shape with a CSS mask and/or clip-path and combine it with the outline trick. One solution, infinite possibilities!
And, yes, we can definitely animate this as well. Let’s not forget that clip-path is animatable and mask relies on gradients — something we covered in super great detail in the first two articles of this series.
I know, the animation is a bit glitchy. This is more of a demo to illustrate the idea rather than the “final product” to be used in a production site. We’d wanna optimize things for a more natural transition.
Here is a demo that uses mask instead. It’s the one I teased you with at the end of the last article:
Did you know that the outline property was capable of so much awesomeness? Add it to your toolbox for fancy image decorations!
Combine all the things!
Now that we have learned many tricks using gradients, masks, clipping, and outline, it’s time for the grand finale. Let’s cap off this series by combining all that we have learned the past few weeks to showcase not only the techniques, but demonstrate just how flexible and modular these approaches are.
If you were seeing these demos for the first time, you might assume that there’s a bunch of extra div wrappers and pseudo-elements being used to pull them off. But everything is happening directly on the <img> element. It’s the only selector we need to get these advanced shapes and effects!
Wrapping up
Well, geez, thanks for hanging out with me in this three-part series the past few weeks. We explored a slew of different techniques that turn simple images into something eye-catching and interactive. Will you use everything we covered? Certainly not! But my hope is that this has been a good exercise for you to dig into advanced uses of CSS features, like gradients, mask, clip-path, and outline.
And we did everything with just one <img> element! No extra div wrappers and pseudo-elements. Sure, it’s a constraint we put on ourselves, but it also pushed us to explore CSS and try to find innovative solutions to common use cases. So, before pumping extra markup into your HTML, think about whether CSS is already capable of handling the task.
Welcome to Part 2 of this three-part series! We are still decorating images without any extra elements and pseudo-elements. I hope you already took the time to digest Part 1 because we will continue working with a lot of gradients to create awesome visual effects. We are also going to introduce the CSS mask property for more complex decorations and hover effects.
As we saw in the previous article, the first step is to make space around the image with padding so we can draw a background gradient and see it there. Then we use a combination of radial-gradient() and linear-gradient() to cut those circles around the image.
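Here’s a rough sketch of that configuration (the color and sizes are my own, not the demo’s exact values):

img {
  --s: 20px; /* controls the circle size */
  padding: var(--s);
  background:
    radial-gradient(farthest-side, #8A9B0F 97%, #0000)
      0 0 / calc(2 * var(--s)) calc(2 * var(--s)) round, /* the circles */
    linear-gradient(#8A9B0F 0 0)
      50% / calc(100% - var(--s)) calc(100% - var(--s)) no-repeat; /* fills the inner area */
}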
Here is a step-by-step illustration that shows how the gradients are configured:
Note the use of the round value in the second step. It’s very important for the trick as it ensures the size of the gradient is adjusted to be perfectly aligned on all the sides, no matter what the image width or height is.
From the specification: The image is repeated as often as will fit within the background positioning area. If it doesn’t fit a whole number of times, it is rescaled so that it does.
The Rounded Frame
Let’s look at another image decoration that uses circles…
This example also uses a radial-gradient(), but this time I have created circles around the image instead of the cut-out effect. Notice that I am also using the round value again. The trickiest part here is the transparent gap between the frame and the image, which is where I reach for the CSS mask property:
Masking allows us to show the area of the image — thanks to the linear-gradient() in there — as well as 20px around each side of it — thanks to the conic-gradient(). The 20px is nothing but the variable --s that defines the size of the frame. Everything between those two areas stays hidden, which is what creates the gap.
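Sketched out, that mask looks something like this (the proportions are assumptions; the demo’s exact values differ):

img {
  --s: 20px; /* the frame size */
  padding: calc(3 * var(--s)); /* frame + gap */
  mask:
    conic-gradient(from 90deg at var(--s) var(--s), #0000 25%, #000 0)
      0 0 / calc(100% - var(--s)) calc(100% - var(--s)), /* the outer 20px band */
    linear-gradient(#000 0 0)
      50% / calc(100% - 6 * var(--s)) calc(100% - 6 * var(--s)) no-repeat; /* the image area */
}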
Here’s what I mean:
The linear gradient is the blue part of the background while the conic gradient is the red part of the background. That transparent part between both gradients is what we cut from our element to create the illusion of an inner transparent border.
The Inner Transparent Border
For this one, we are not going to create a frame but rather try something different. We are going to create a transparent inner border inside our image. Probably not that useful in a real-world scenario, but it’s good practice with CSS masks.
Similar to the previous example, we are going to rely on two gradients: a linear-gradient() for the inner part, and a conic-gradient() for the outer part. We’ll leave a space between them to create the transparent border effect.
You may have noticed that the conic gradient of this example has a different syntax from the previous example. Both are supposed to create the same shape, so why are they different? It’s because we can reach the same result using different syntaxes. This may look confusing at first, but it’s a good feature. You are not obliged to find the solution to achieve a particular shape. You only need to find one solution that works for you out of the many possibilities out there.
Here are four ways to create the outer square using gradients:
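I won’t reproduce all four here, but as a sketch built from the patterns in this article, these two very different declarations both keep only an outer band of width --d:

/* a transparent quadrant anchored at the top-left corner, tiled */
mask: conic-gradient(from 90deg at var(--d) var(--d), #0000 25%, #000 0)
        0 0 / calc(100% - var(--d)) calc(100% - var(--d));

/* a black three-quarter turn anchored at the bottom-right corner, tiled */
mask: conic-gradient(at calc(100% - var(--d)) calc(100% - var(--d)), #000 75%, #0000 0)
        var(--d) var(--d) / calc(100% - var(--d)) calc(100% - var(--d));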
There are even more ways to pull this off, but you get the point.
There is no Best™ approach. Personally, I try to find the one with the smallest and most optimized code. For me, any solution that requires fewer gradients, fewer calculations, and fewer repeated values is the most suitable. Sometimes I choose a more verbose syntax because it gives me more flexibility to change variables and modify things. It comes with experience and practice. The more you play with gradients, the more you know what syntax to use and when.
Let’s get back to our inner transparent border and dig into the hover effect. In case you didn’t notice, there is a cool hover effect that moves that transparent border using a font-size trick. The idea is to define the --d variable with a value of 1em. This variable controls the distance of the border from the edge. We can transform it like this:
--_d: calc(var(--d) + var(--s) * 1em)
…giving us the following updated CSS:
img {
  --b: 5px;  /* the border thickness */
  --d: 20px; /* the distance from the edge */
  --o: 15px; /* the offset on hover */
  --s: 1;    /* the direction of the hover effect (+1 or -1) */

  --_d: calc(var(--d) + var(--s) * 1em);
  --_g: calc(100% - 2 * (var(--_d) + var(--b)));
  mask:
    conic-gradient(from 90deg at var(--_d) var(--_d), #0000 25%, #000 0)
      0 0 / calc(100% - var(--_d)) calc(100% - var(--_d)),
    linear-gradient(#000 0 0) 50% / var(--_g) var(--_g) no-repeat;
  font-size: 0;
  transition: .35s;
}
img:hover {
  font-size: var(--o);
}
The font-size is initially equal to 0, so 1em is also equal to 0 and --_d is equal to --d. On hover, though, the font-size is equal to a value defined by an --o variable that sets the border’s offset. This, in turn, updates the --_d variable, moving the border by the offset. Then I add another variable, --s, to control the sign that decides whether the border moves to the inside or the outside.
The font-size trick is really useful if we want to animate properties that are otherwise unanimatable. Custom properties defined with @property can solve this, but support for it is still lacking at the time I’m writing this.
The Frame Reveal
We made the following reveal animation in the first part of this series:
We can take the same idea, but instead of a border with a solid color we will use a gradient like this:
If you compare both codes you will notice the following changes:
I used the same gradient configuration from the first example inside the mask property. I simply moved the gradients from the background property to the mask property.
I added a repeating-linear-gradient() to create the gradient border.
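Put together, the gradient version looks something like this sketch (the gradient colors are my own, and the real demo may organize the layers differently):

img {
  --b: 10px; /* the border thickness */
  --g: 5px;  /* the gap on hover */

  padding: calc(var(--g) + var(--b));
  --_g: #0000 25%, #000 0;
  /* the same two conic gradients, moved to the mask,
     plus a layer that keeps the image itself visible */
  mask:
    linear-gradient(#000 0 0) content-box,
    conic-gradient(from 180deg at top var(--b) right var(--b), var(--_g))
      var(--_i, 200%) 0 / 200% var(--_i, var(--b)) no-repeat,
    conic-gradient(at bottom var(--b) left var(--b), var(--_g))
      0 var(--_i, 200%) / var(--_i, var(--b)) 200% no-repeat;
  /* the gradient that colors the border */
  background: repeating-linear-gradient(45deg, #8A9B0F 0 10px, #4ECDC4 0 20px);
  transition: .3s, mask-position .3s .3s;
  cursor: pointer;
}
img:hover {
  --_i: 100%;
  transition: .3s, mask-size .3s .3s;
}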
That’s it! I re-used most of the same code we already saw — with super small tweaks — and got another cool image decoration with a hover effect.
/* Solid color border */
img {
  --c: #8A9B0F; /* the border color */
  --b: 10px;    /* the border thickness */
  --g: 5px;     /* the gap on hover */

  padding: calc(var(--g) + var(--b));
  --_g: #0000 25%, var(--c) 0;
  background:
    conic-gradient(from 180deg at top var(--b) right var(--b), var(--_g))
      var(--_i, 200%) 0 / 200% var(--_i, var(--b)) no-repeat,
    conic-gradient(at bottom var(--b) left var(--b), var(--_g))
      0 var(--_i, 200%) / var(--_i, var(--b)) 200% no-repeat;
  transition: .3s, background-position .3s .3s;
  cursor: pointer;
}
img:hover {
  --_i: 100%;
  transition: .3s, background-size .3s .3s;
}
Let’s try another frame animation. This one is a bit tricky as it has a three-step animation:
The first step of the animation is to make the bottom edge bigger. For this, we adjust the background-size of a linear-gradient():
You are probably wondering why I am also adding the top edge. We need it for the third step. I always try to optimize the code I write, so I am using one gradient to cover both the top and bottom sides, but the top one is hidden and revealed later with a mask.
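One plausible way to draw both bands with a single gradient is a three-stop linear-gradient whose width we grow on hover (a sketch, not necessarily the demo’s exact code):

img {
  --c: #8A9B0F;
  --b: 10px; /* the edge thickness */

  padding: var(--b);
  /* one gradient paints a top band and a bottom band; growing its
     width reveals them (the top one stays hidden behind the mask
     until the third step) */
  background: linear-gradient(
      var(--c) 0 var(--b),
      #0000 var(--b) calc(100% - var(--b)),
      var(--c) 0
    ) 50% / var(--_i, 0%) 100% no-repeat;
  transition: .4s;
}
img:hover { --_i: 100%; }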
For the second step, we add a second gradient to show the left and right edges. But this time, we do it using background-position:
We can stop here, as we already have a nice effect with two gradients, but we are here to push the limits, so let’s add a touch of mask to achieve the third step.
The trick is to make the top edge hidden until we show the bottom and the sides and then we update the mask-size (or mask-position) to show the top part. As I said previously, we can find a lot of gradient configurations to achieve the same effect.
Here is an illustration of the gradients I will be using:
I am using two conic gradients having a width equal to 200%. Both gradients cover the area leaving only the top part uncovered (that part will be invisible later). On hover, I slide both gradients to cover that part.
Here is a better illustration of one of the gradients to give you a better idea of what’s happening:
Now we put this inside the mask property and we are done! Here is the full code:
I have also introduced some variables to optimize the code, but you should be used to this by now.
What about a four-step animation? Yes, it’s possible!
No explanation for this because it’s your homework! Take all that you have learned in this article to dissect the code and try to articulate what it’s doing. The logic is similar to all the previous examples. The key is to isolate each gradient to understand each step of the animation. I kept the code un-optimized to make things a little easier to read. I do have an optimized version if you are interested, but you can also try to optimize the code yourself and compare it with my version for additional practice.
Wrapping up
That’s it for Part 2 of this three-part series on creative image decorations using only the <img> element. We now have a good handle on how gradients and masks can be combined to create awesome visual effects, and even animations — without reaching for extra elements or pseudo-elements. Yes, a single <img> tag is enough!
We have one more article in this series to go. Until then, here is a bonus demo with a cool hover effect where I use mask to assemble a broken image.
As the title says, we are going to decorate images! There’s a bunch of other articles out there that talk about this, but what we’re covering here is quite a bit different because it’s more of a challenge. The challenge? Decorate an image using only the <img> tag and nothing more.
That’s right, no extra markup, no divs, and no pseudo-elements. Just the one tag.
Sounds difficult, right? But by the end of this article — and the others that make up this little series — I’ll prove that CSS is powerful enough to give us great and stunning results despite the limitation of working with a single element.
Fancy Image Decorations series
Single Element Magic — you are here
Masks and Advanced Hover Effects (coming October 21)
Outlines and Complex Animations (coming October 28)
Let’s start with our first example
Before digging into the code let’s enumerate the possibilities for styling an <img> without any extra elements or pseudo-elements. We can use border, box-shadow, outline, and, of course, background. It may look strange to add a background to an image because we cannot see it as it will be behind the image — but the trick is to create space around the image using padding and/or border and then draw our background inside that space.
I think you know what comes next since I talked about background, right? Yes, gradients! All the decorations we are going to make rely on a lot of gradients. If you’ve followed me for a while, I think this probably comes as no surprise to you at all. 😁
We are defining padding and a transparent border using the variable --s to create a space around our image equal to three times that variable.
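Based on that description, the code looks something like this (the color is a placeholder):

img {
  --s: 10px; /* control the size */
  padding: var(--s);
  border: calc(2 * var(--s)) solid #0000; /* a transparent border */
  outline: 1px solid #B938AA;
  outline-offset: calc(var(--s) / -2);
  background: conic-gradient(from 90deg at 1px 1px, #0000 25%, #B938AA 0);
}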
Why are we using both padding and border instead of one or the other? We can get by using only one of them but I need this combination for my gradient because, by default, the initial value of background-clip is border-box and background-origin is equal to padding-box.
Here is a step-by-step illustration to understand the logic:
Initially, we don’t have any borders on the image, so our gradient will create two segments with 1px of thickness. (I am using 3px in this specific demo so it’s easier to see.) We add a colored border and the gradient still gives us the same result inside the padding area (due to background-origin) but it repeats behind the border. If we make the color of the border transparent, we can use the repetition and we get the frame we want.
The outline in the demo has a negative offset. That creates a square shape on top of the gradient. That’s it! We added a nice decoration to our image using one gradient and an outline. We could have used more gradients! But I always try to keep my code as simple as possible and I found that adding an outline is better that way.
Here is a gradient-only solution where I am using only padding to define the space. Still the same result but with a more complex syntax.
Let’s try another idea:
For this one, I took the previous example, removed the outline, and applied a clip-path to cut the gradient on each side. The clip-path value is a bit verbose and confusing, but here is an illustration to better see its points:
I think you get the main idea. We are going to combine backgrounds, outlines, clipping, and some masking to achieve different kinds of decorations. We are also going to consider some cool hover animations as an added bonus! What we’ve looked at so far is merely a small overview of what’s coming!
The Corner-Only Frame
This one takes four gradients. Each gradient covers one corner and, on hover, we expand them to create a full frame around the image. Let’s dissect the code for one of the gradients:
--b: 5px; /* border thickness */
background: conic-gradient(from 90deg at top var(--b) left var(--b), #0000 90deg, darkblue 0) 0 0;
background-size: 50px 50px;
background-repeat: no-repeat;
We are going to draw a gradient with a size equal to 50px 50px and place it at the top-left corner (0 0). For the gradient’s configuration, here’s a step-by-step illustration showing how I reached that result.
We tend to think that gradients are only good for transitioning between two colors. But in reality, we can do so much more with them! They are especially useful when it comes to creating different shapes. The trick is to make sure we have hard stops between colors — like in the example above — rather than smooth transitions:
#0000 25%, darkblue 0
This is basically saying: “fill the gradient with a transparent color until 25% of the area, then fill the remaining area with darkblue.”
You might be scratching your head over the 0 value. It’s a little hack to simplify the syntax. In reality, we should use this to make a hard stop between colors:
#0000 25%, darkblue 25%
That is more logical! The transparent color ends at 25% and darkblue starts exactly where the transparency ends, making a hard stop. If we replace the second one with 0, the browser will do the job for us, so it is a slightly more efficient way to go about it.
From the specification: if a color stop or transition hint has a position that is less than the specified position of any color stop or transition hint before it in the list, set its position to be equal to the largest specified position of any color stop or transition hint before it.
0 is always smaller than any other value, so the browser will always convert it to the largest value that comes before it in the declaration. In our case, that number is 25%.
Now, we apply the same logic to all the corners and we end with the following code:
img {
  --b: 5px; /* border thickness */
  --c: #0000 90deg, darkblue 0; /* define the color here */

  padding: 10px;
  background:
    conic-gradient(from  90deg at top    var(--b) left  var(--b), var(--c)) 0    0,
    conic-gradient(from 180deg at top    var(--b) right var(--b), var(--c)) 100% 0,
    conic-gradient(from   0deg at bottom var(--b) left  var(--b), var(--c)) 0    100%,
    conic-gradient(from -90deg at bottom var(--b) right var(--b), var(--c)) 100% 100%;
  background-size: 50px 50px; /* adjust border length here */
  background-repeat: no-repeat;
}
I have introduced CSS variables to avoid some redundancy as all the gradients use the same color configuration.
For the hover effect, all I’m doing is increasing the size of the gradients to create the full frame:
img:hover { background-size: 51% 51%; }
Yes, it’s 51% instead of 50% — that creates a small overlap and avoids possible gaps.
Let’s try another idea using the same technique:
This time we are using only two gradients, but with a more complex animation. First, we update the position of each gradient, then increase their sizes to create the full frame. I also introduced more variables for better control over the color, size, thickness, and even the gap between the image and the frame.
Why do the --_i and --_p variables have an underscore in their name? The underscores are part of a naming convention I use to mark “internal” variables that optimize the code. They are nothing special, but I want to distinguish between the variables we adjust to control the frame (like --b, --c, etc.) and the ones I use to make the code shorter.
The code may look confusing and not easy to grasp, but I wrote a three-part series where I detail this technique. I highly recommend reading at least the first article to understand how I reached the above code.
Here is an illustration to better understand the different values:
The Frame Reveal
Let’s try another type of animation where we reveal the full frame on hover:
Cool, right? And if you look closely, you will notice that the lines disappear in the opposite direction on mouse out, which makes the effect even more fancy! I used a similar effect in a previous article.
But this time, instead of covering the whole element, I cover only a small portion by defining a height to get something like this:
This is the top border of our frame. We repeat the same process on each side of the image and we have our hover effect:
As you can see, I am applying the same gradient four times and each one has a different position to cover only one side at a time.
Another one? Let’s go!
This one looks a bit tricky and it does indeed require some imagination to understand how two conic gradients are pulling off this kind of magic. Here is a demo to illustrate one of the gradients:
The pseudo-element simulates the gradient. It’s initially out of sight and, on hover, we first change its position to get the top edge of the frame. Then we increase the height to get the right edge. The gradient shape is similar to the ones we used in the last section: two segments to cover two sides.
But why did I make the gradient’s width 200%? You’d think 100% would be enough, right?
100% should be enough but I won’t be able to move the gradient like I want if I keep its width equal to 100%. That’s another little quirk related to how background-position works. I cover this in a previous article. I also posted an answer over at Stack Overflow dealing with this. I know it’s a lot of reading, but it’s really worth your time.
Now that we have explained the logic for one gradient, the second one is easy because it’s doing exactly the same thing, but covering the left and bottom edges instead. All we have to do is to swap a few values and we are done:
img {
  --c: #8A9B0F; /* the border color */
  --b: 10px;    /* the border thickness */
  --g: 5px;     /* the gap */

  padding: calc(var(--g) + var(--b));
  --_g: #0000 25%, var(--c) 0;
  background:
    conic-gradient(from 180deg at top var(--b) right var(--b), var(--_g))
      var(--_i, 200%) 0 / 200% var(--_i, var(--b)) no-repeat,
    conic-gradient(at bottom var(--b) left var(--b), var(--_g))
      0 var(--_i, 200%) / var(--_i, var(--b)) 200% no-repeat;
  transition: .3s, background-position .3s .3s;
  cursor: pointer;
}
img:hover {
  --_i: 100%;
  transition: .3s, background-size .3s .3s;
}
As you can see, both gradients are almost identical. I am simply swapping the values of the size and position.
The Frame Rotation
This time we are not going to draw a frame around our image, but rather adjust the look of an existing one.
You are probably asking how the heck I am able to transform a straight line into an angled line. There is no real magic here; it’s just the illusion we get after combining simple animations for four gradients.
Let’s see how the animation for the top gradient is made:
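Conceptually, the top edge is doing something like this (a simplified sketch with assumed values):

img {
  --c: #8A9B0F;
  --b: 10px; /* the frame thickness */

  /* a dashed band along the top edge whose position shifts on hover */
  background: repeating-linear-gradient(90deg, var(--c) 0 15px, #0000 0 30px)
    var(--_i, 0px) 0 / 100% var(--b) no-repeat;
  transition: .5s;
}
img:hover { --_i: 30px; } /* slide the dashes by one period */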
I am simply updating the position of a repeating gradient. Nothing fancy yet! Let’s do the same for the right side:
Are you starting to see the trick? Both gradients intersect at the corner to create the illusion where the straight line is changed to an angled one. Let’s remove the outline and hide the overflow to better see it:
Now, we add two more gradients to cover the remaining edges and we are done:
If we take this code and slightly adjust it, we can get another cool animation:
Can you figure out the logic in this example? That’s your homework! The code may look scary but it uses the same logic as the previous examples we looked at. Try to isolate each gradient and imagine how it animates.
Wrapping up
That’s a lot of gradients in one article!
It sure is, and I warned you! But if the challenge is to decorate an image without any extra elements and pseudo-elements, we are left with only a few possibilities, and gradients are the most powerful option.
Don’t worry if you are a bit lost in some of the explanations. I always recommend some of my old articles where I go into greater detail with some of the concepts we recycled for this challenge.
Hover effect using background properties: This article covers different background tricks to create cool animations with efficient code. This one is a must-read because there’s a lot on CSS variables and calc() which are big parts of what we covered here.
I am gonna leave you with one last demo to hold you over until the next article in this series. This time, I am using radial-gradient() to create another funny hover effect. I’ll let you dissect the code to grok how it works. Ask me questions in the comments if you get stuck!
Fancy Image Decorations series
Single Element Magic — you are here
Masks and Advanced Hover Effects (coming October 21)
Outlines and Complex Animations (coming October 28)
So you want an auto-playing looping video without sound? In popular vernacular this is the very meaning of the word GIF. The word has stuck around but the image format itself is ancient and obsolete. Twitter, for example, has a “GIF” button that actually inserts a <video> element with an MP4 file into your tweet — no .gif in sight. There are a beguiling number of ways to achieve the same outcome but one thing is clear: there’s really no good reason to use the bulky .gif file format anymore.
Use an HTML <video> element
It’s easy to recreate the behavior of a GIF using the HTML video element.
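A minimal version looks like this (the file name is a placeholder):

<video autoplay loop muted playsinline src="cats.mp4"></video>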
With this code the video will play automatically in a continuous loop with no audio. playsinline means that mobile browsers will play the video where it is on the page rather than opening in fullscreen.
While the HTML video element itself has been supported for many years, the same can’t be said for the wide variety of video formats.
Videos are made up of two parts: the container and the video codec. (If your video contains audio then it is made up of three parts, the third being the audio codec.) Containers can store video, audio, subtitles and meta information. The two most common containers for video on the web are MP4 and WebM. The container is the same as the file type — if a file ends with a .mp4 extension, that means it’s using an MP4 container. The file extension doesn’t tell you the codec though. Examples of video codecs commonly used on the web include VP8, VP9, H.264 and HEVC (H.265). For your video to play online, the browser needs to support both the video container and the codec.
Browser support for video is a labyrinthine mess, which is part of the reason YouTube embeds are ubiquitous, but that doesn’t work for our use case. Let’s look at the video formats that are worth considering.
Containers
MP4 was originally released in 2001. It is supported by all web browsers and has been for quite some time.
WebM was released in 2010. It works in all browsers except for iOS Safari.
Codecs

HEVC/H.265, the successor of H.264, is supported by Safari, Edge, and Chrome (as of version 105).
VP9 is the successor to the VP8 codec. VP9 is supported by all the browsers that support WebM.
The AV1 codec has been supported in Chrome since 2018 and Firefox since 2019. It has not yet shipped in Edge or Safari.
An MP4 file using the H.264 codec will work everywhere, but it doesn’t deliver the best quality or the smallest file size.
AV1 doesn’t have cross-browser support yet but, released in 2018, it’s the most modern codec around. It’s already being used, at least for some videos and platforms, by Netflix, YouTube and Vimeo. AV1 is a royalty-free video codec designed specifically for the internet. AV1 was created by the Alliance for Open Media (AOM), a group founded by Google, Mozilla, Cisco, Microsoft, Netflix, Amazon, and Intel. Apple is now also a member, so it’s safe to assume all browsers will support AV1 eventually. Edge is “still evaluating options to support AVIF and AV1.”
The recently redesigned website from development consultancy Evil Martians is a testament to the file-size reduction that AV1 is capable of.
If you want to use newer video formats with fallbacks for older browsers, you can use multiple <source> elements. The order of the source elements matters. Specify the ideal source at the top, and the fallback after.
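For example (file names are placeholders):

<video autoplay loop muted playsinline>
  <source src="cats.webm" type="video/webm">
  <source src="cats.mp4" type="video/mp4">
</video>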
Given the above code, cats.webm will be used unless the browser does not support that format, in which case the MP4 will be displayed instead.
What if you want to include multiple MP4 files, but with each using a different codec? When specifying the type you can include a codecs parameter. The syntax is horrifically complicated for anybody who isn’t some kind of hardcore codec nerd, but it looks something like this:
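(The file names and exact codec strings here are illustrative.)

<video autoplay loop muted playsinline>
  <source src="cats-av1.mp4" type='video/mp4; codecs="av01.0.05M.08"'>
  <source src="cats-h264.mp4" type='video/mp4; codecs="avc1.4D401E"'>
</video>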
Using the above code the browser will select AV1 if it can play that format and fall back to the universally-supported H.264 if not. For AV1, the codecs parameter always starts with av01. The next number is either 0 (for main profile), 1 (for high profile) or 2 (for professional profile). Next comes a two-digit level number. This is followed either by the letter M (for main tier) or H (for high tier). It’s difficult to understand what any of those things mean, so you could provide your AV1 video in a WebM container and avoid specifying the codec entirely.
Most video editing software does not allow you to export as AV1, or even as WebM. If you want to use one of those formats you’ll need to export your video as something else, like a .mov, and then convert it using the command-line tool FFmpeg:
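Something like this (file names are placeholders; FFmpeg picks sensible defaults for the WebM container):

ffmpeg -i cats.mov cats.webm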
You should use the highest-resolution source file you can. Obviously, once image quality is lost you can’t improve it through conversion to a superior format. Using a .gif as a source file isn’t ideal because the visual quality of .gif isn’t great, but you’ll still get the benefit of a large reduction in file size:
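A conversion along these lines does the job (the flags are a common recipe, not the article’s exact command):

# yuv420p keeps the MP4 playable in most browsers; H.264 also needs even dimensions
ffmpeg -i nyancat.gif -movflags faststart -pix_fmt yuv420p nyancat.mp4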
Here’s a nice example of video in web design on the masterfully designed Oxide website:
Site serves transparent video where it can – WebM on most browsers, transparent mov on Safari, which lets you do some subtle but nice hover effects
If you want to use the video as a background and place other elements on top of it, working with <video> is slightly more challenging than a CSS background-image, and requires code that goes something like this:
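One way to sketch it (the class name is a placeholder):

<div class="hero">
  <video autoplay loop muted playsinline src="background.mp4"></video>
  <h1>Content on top of the video</h1>
</div>

…and the CSS to pin the video behind the content:

.hero {
  position: relative;
}
.hero video {
  position: absolute;
  inset: 0;
  width: 100%;
  height: 100%;
  object-fit: cover;
  z-index: -1; /* keep the video behind the content */
}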
The <video> element is a perfectly okay option for replacing GIFs but it does have one unfortunate side-effect: it prevents a user’s screen from going to sleep, as explained in this post from an ex-product manager on the Microsoft Edge browser.
The benefits of using an image
Whether it’s an animated WebP or animated AVIF file, using images rather than video comes with some benefits.
I’m not sure how many people actually want to art-direct their GIFs, but using the <picture> element does open up some possibilities that couldn’t easily be achieved with <video>. You could specify different animations for light and dark mode, for example:
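Something along these lines (file names are placeholders):

<picture>
  <source srcset="dark-animation.avif" media="(prefers-color-scheme: dark)">
  <img src="light-animation.avif" alt="A description of the animation">
</picture>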
We might want a video on mobile to be a different aspect ratio than on desktop. We could just crop parts of the image with CSS, but that seems like a waste of bytes and somewhat haphazard. Using a media query we can display a different animated image file based on the screen size or orientation:
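For example (again with placeholder file names):

<picture>
  <source srcset="portrait.avif" media="(orientation: portrait)">
  <img src="landscape.avif" alt="A description of the animation">
</picture>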
All of this is possible with video — you can use matchMedia to do any media queries in JavaScript and programmatically change the src of a <video> element:
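A sketch of that approach (file names are placeholders):

const video = document.querySelector('video');
const mediaQuery = window.matchMedia('(orientation: portrait)');

function setSource(mq) {
  video.src = mq.matches ? 'portrait.mp4' : 'landscape.mp4';
}

setSource(mediaQuery); // set the initial source
mediaQuery.addEventListener('change', setSource); // update when orientation changes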
I believe that whenever there’s a way to do something with markup, it should be preferred over doing it in JavaScript.
You can use raster images inside of an SVG using the <image> element. This includes animated image formats. There’s not much you can do with an image inside an SVG that you couldn’t already do with CSS, but if you group an image with vector elements inside an SVG, then you do get the benefit that the different elements move and scale together.
If you want to support older browsers you could use the <picture> element with a fallback of either an animated WebP or, just for Safari, an img with a video src, or if you care about ancient browsers, maybe an APNG (animated PNG) or a GIF. Using multiple image formats this way might be impractical if you’re optimizing images manually; but it is relatively trivial if you’re using a service like Cloudinary.
There’s still no well-supported way to specify fallback images for CSS backgrounds. image-set is an equivalent of the <picture> element, but for background-image. Unfortunately, only Firefox currently supports the type attribute of image-set.
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 108*, Firefox 89, IE No, Edge 105*, Safari TP
Mobile / Tablet: Android Chrome 105*, Android Firefox 104, Android 105*, iOS Safari 16.1
Use animated WebP
The WebP image format was introduced by Google in 2010. WebP, including animated WebP, has broad browser support.
<img src="nyancat.webp" alt="A cat flying through space leaving a rainbow trail">
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 32, Firefox 65, IE No, Edge 18, Safari 16.0
Mobile / Tablet: Android Chrome 105, Android Firefox 104, Android 4.2-4.3, iOS Safari 14.0-14.4
Use animated AVIF
WebP is now twelve years old. The more modern AV1 Image File Format (AVIF), released in 2019, is the best image format for most use cases on the web. Converting a .gif file to AVIF can reduce bytes by over 90%.
<img src="nyancat.avif" alt="A cat flying through space leaving a rainbow trail">
As its name suggests, AVIF is based on the AV1 video codec. Like WebP, AVIF can be used for both still images and animation. There’s not much difference between an animated AVIF file and an AV1 video in an MP4 container.
You can put a shadow on AVIF animation, e.g.:
filter: drop-shadow(2px 4px 6px black);
AVIF is already supported by Safari, Firefox, Samsung Internet, and Chrome. Firefox only shipped support for still images, not animated AVIF. Safari supports animation as of version 16.1. Unfortunately, because Firefox does support AVIF, just not animated AVIF, it’s impossible to successfully use the <picture> element to display AVIF only to browsers that support animation. Given the following code, Firefox would display the AVIF, but as a static image, rather than showing the animated WebP version:
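Here’s the kind of markup I mean — an AVIF source with an animated WebP fallback:

<picture>
  <source srcset="nyancat.avif" type="image/avif">
  <img src="nyancat.webp" alt="A cat flying through space leaving a rainbow trail">
</picture>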
Tooling for AVIF is still improving. Video editing software does not enable you to export footage as animated AVIF or animated WebP. You’ll need to export it in some other format and then convert it. On the website ezgif.com you can upload a video file or a .gif and convert it to AVIF or WebP. You could also use FFmpeg. Using Cloudinary you can upload a video file or an old .gif and convert it to pretty much any format you want — including animated WebP and animated AVIF. As of this writing, Squoosh, an image conversion app, doesn’t support animated AVIF.
Adoption remains lacking in design software. When viewing a prototype, Figma will play any animated GIFs included in the design. For AVIF, by contrast, you can’t even import or export a still image.
Use a video with an <img> element
In 2018, Safari 11.1 gave developers the ability to use a video file as the source of the HTML <img> element. This works in Safari:
<img src="cat.mp4" alt="A Siamese cat walking in a circle">
All the same codecs that Safari supports for <video> are supported by <img>. This means you can use MP4, H.264, and HEVC.
In Safari, video files will also work anyplace in CSS where you could use an image, like background-image or border-image:
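For example (Safari only):

.hero {
  background-image: url(cat.mp4);
}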
One strange consequence of this feature in Safari is that the poster image of a <video> element can also be a video. The poster will autoplay even if you have blocked videos from auto-playing. Safari claimed this feature came with performance benefits, not just over using .gif files but also over using the <video> element. According to Apple:
By placing your videos in <img> elements, the content loads faster, uses less battery power, and gets better performance.
Colin Bendell, co-author of O’Reilly’s High Performance Images, wrote about the shortcomings of the <video> tag for our use case:
Unlike <img> tags, browsers do not preload <video> content. Generally preloaders only preload JavaScript, CSS, and image resources because they are critical for the page layout. Since <video> content can be any length – from micro-form to long-form – <video> tags are skipped until the main thread is ready to parse its content. This delays the loading of <video> content by many hundreds of milliseconds.
[…]
Worse yet, many browsers assume that <video> tags contain long-form content. Instead of downloading the whole video file at once, which would waste your cell data plan in cases where you do not end up watching the whole video, the browser will first perform a 1-byte request to test if the server supports HTTP Range Requests. Then it will follow with multiple range requests in various chunk sizes to ensure that the video is adequately (but not over-) buffered. The consequence is multiple TCP round trips before the browser can even start to decode the content and significant delays before the user sees anything. On high-latency cellular connections, these round trips can set video loads back by hundreds or thousands of milliseconds.
Chrome has marked this as “WontFix” — meaning they don’t intend to ever support this feature, for various reasons. There is, however, an open issue on GitHub to add it to the HTML spec, which would force Google’s hand.
Respecting user preferences
Video has the benefit of automatically respecting a user’s preferences. Firefox and Safari allow users to block videos from automatically playing, even if they don’t have any audio. Here are the settings in Firefox, for example:
The user can still decide to watch a certain video by right-clicking and pressing play in the menu, or enable autoplay for all videos on a specific website.
For users who haven’t disabled autoplay, it’s nice to have the option to pause an animation if you happen to find it annoying or distracting (a user can still right-click to bring up the pause option in a menu when video controls aren’t shown). Success Criterion 2.2.2 Pause, Stop, Hide of the WCAG accessibility guidelines states:
For any moving, blinking or scrolling information that (1) starts automatically, (2) lasts more than five seconds, and (3) is presented in parallel with other content, there is a mechanism for the user to pause, stop, or hide it unless the movement, blinking, or scrolling is part of an activity where it is essential.
With the <video> element, you’ll achieve that criterion without any additional development.
There’s also a “reduce motion” user setting that developers can respect by reducing or removing CSS and JavaScript web animations.
You can also use it to display a still image instead of an animation. This takes extra code to implement — and you need to host a still image in addition to your animated image.
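A sketch of that markup (file names are placeholders):

<picture>
  <source srcset="still-image.avif" media="(prefers-reduced-motion: reduce)">
  <img src="animation.avif" alt="A description of the animation">
</picture>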
There’s another downside. When using the <picture> element in this way, if the user has checked “reduce motion” there’s no way for them to see the animation. Just because a user prefers less animation doesn’t mean they never want any — they might still want to be able to opt in and watch one every now and then. Unlike the <video> element, displaying a still image takes away that choice.
Checking for progressive enhancement
If you want to check that your <picture> code is properly working and fallback images are being displayed, you can use the Rendering tab in Chrome DevTools to turn off support for AVIF and WebP image formats. Seeing as all browsers now support WebP, this is a pretty handy feature.
While it’s usually the best option to create animations with CSS, JavaScript, DOM elements, canvas and SVG, as new image and video formats offer smaller files than what was previously possible, they become a useful option for UI animation (rather than just nyancat loops). For one-off animations, an AVIF file is probably going to be more performant than importing an entire animation library.
Here’s a fun example of using video for UI from all the way back in 2017 for the League of Legends website.
Lottie
After Effects is a popular animation tool from Adobe. Using an extension called Bodymovin, you can export animation data from After Effects as a JSON file.
Then there’s Lottie, an open-source animation library from Airbnb that can take that JSON file and render it as an animation on different platforms. The library is available for native iOS, Android, and React Native applications, as well as for the web. You can see examples from Google Home, Target, and Walgreens, among others.
Once you’ve included the dependency you need to write a small amount of JavaScript code to get the animation to run:
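A minimal setup looks something like this (using the same options the hover example below builds on):

const animation = bodymovin.loadAnimation({
  container: document.getElementById('lottie'), // the element to render into
  path: 'myAnimation.json', // the JSON exported from After Effects
  renderer: 'svg',
  loop: true,
  autoplay: true,
});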
You can optionally change those settings to only play after an event:
const lottieContainer = document.getElementById('lottie');
const animation = bodymovin.loadAnimation({
  container: lottieContainer,
  path: 'myAnimation.json',
  renderer: 'svg',
  loop: true,
  autoplay: false,
});

// Play the animation on hover
lottieContainer.addEventListener('mouseover', () => {
  animation.play();
});

// Stop the animation after playing once
animation.addEventListener('loopComplete', function() {
  animation.stop();
});
Here’s a cute example of a cat typing on a keyboard that I took from Lottiefiles.com (a useful site for previewing your own Lottie JSON animations without needing to install After Effects, as well as for finding animations from other creatives):
You can also programmatically play an animation backwards and change the playback rate.
If you do choose to use Lottie, there’s a Figma plugin for Lottie but all it does is convert JSON files to .gif so that they can be previewed in prototyping mode.
And what about Lottie’s performance? There’s the size of the library — 254.6KB (63.8KB gzipped) — and the size of the JSON file to consider. There’s also the number of DOM elements that get created for the SVG parts. If you run into this issue, Lottie has the option to render to an HTML <canvas>, but you’ll need to use a different version of the JavaScript library.
Lottie isn’t a full replacement for GIFs. After Effects itself is often used with video clips, Lottie can render to an HTML <canvas>, and a canvas can play video clips, but you wouldn’t use a Lottie file for that purpose. Lottie is for advanced 2D animations, not so much for video. There are other tools for creating complex web animations with a GUI, like SVGator and Rive, but I haven’t tried them myself. 🤷♂️
I wish there was a TL;DR for this article. For now, at least, there’s no clear winner…
Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.
I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.
Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.
So you mean progressive JPEGs?
You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.
This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.
You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.
Let’s look at two alternatives for this.
The libraries we’ll be using
Before we start, I’d like to call out the versions of the libraries I’ll be using for this post:
Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.
Let’s see how.
Generating our preview
For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.
Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then creates a blurry preview of the image:
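A sketch of that function (the sizes and quality settings are my own picks):

const Jimp = require('jimp');

async function getImageAndPreview(path) {
  const image = await Jimp.read(path);
  image.resize(500, Jimp.AUTO); // resize the real image

  // create a tiny, low-quality, blurry copy and grab its Base64 encoding
  const preview = image.clone().resize(32, Jimp.AUTO).quality(25).blur(2);
  const previewBase64 = await preview.getBase64Async(Jimp.MIME_JPEG);

  return { image, previewBase64 };
}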
If you’d like to take a closer look, here’s the same preview in a CodeSandbox.
Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.
Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:
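Here’s a sketch of what that component might look like (PreviewCanvas is defined later):

import { useState, useRef, useEffect } from "react";

const ImageWithPreview = ({ preview, url, ...rest }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef(null);

  useEffect(() => {
    // attach onload before setting src so the handler is guaranteed to fire
    imgRef.current.onload = () => setLoaded(true);
    imgRef.current.src = url;
  }, [url]);

  return (
    <>
      {!loaded &&
        (typeof preview === "string" ? (
          <img src={preview} {...rest} />
        ) : (
          <PreviewCanvas preview={preview} {...rest} />
        ))}
      <img ref={imgRef} style={{ display: loaded ? "" : "none" }} {...rest} />
    </>
  );
};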
Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.
Note that we set the image component’s src after the onLoad handler to ensure it fires. We show the preview, and when the real image loads, we swap it in.
Improving things with BlurHash
The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.
BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:
First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.
Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.
To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.
Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.
To run this code in an AWS Lambda, you’ll need to install the library like this:
"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"
And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.
This code will look at the size of the image and create a blurhash preview:
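Here’s a sketch of that code (error handling stripped, and the helper name is my own):

import { encode } from "blurhash";
import sharp from "sharp";

async function getBlurhashPreview(path) {
  const image = sharp(path);
  const { width, height } = await image.metadata();

  const { data: pixels } = await image
    .raw()
    .ensureAlpha() // guarantee 4 bytes (RGBA) per pixel
    .toBuffer({ resolveWithObject: true });

  const blurhash = encode(new Uint8ClampedArray(pixels), width, height, 4, 4);

  // save the dimensions along with the hash; decoding needs them
  return { blurhash, w: width, h: height };
}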
Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for RGB and Alpha.
Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.
Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.
We’re calling blurhash‘s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY which, from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.
That’s incredibly small! The tradeoff is that using this preview is a bit more involved.
Basically, we need to call blurhash’s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before, and why we were rendering it if the type of our preview was not a string: our blurhash previews use an entire object containing not only the preview string but also the image dimensions.
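Here’s a sketch of that component; the preview object’s { blurhash, w, h } shape matches what we stored when encoding:

import { useRef, useEffect } from "react";
import { decode } from "blurhash";

const PreviewCanvas = ({ preview }) => {
  const canvasRef = useRef(null);

  useEffect(() => {
    // decode the Base83 string back into raw RGBA pixel data
    const pixels = decode(preview.blurhash, preview.w, preview.h);

    // paint those pixels onto our canvas
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};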
Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.
Let’s see what the image previews look like:
In a sense, it’s less detailed than our previous previews. But I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.
Test and use what works best for you.
Wrapping up
There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.
A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.
In recent years, the Jamstack methodology for building websites has become increasingly popular. Performant, scalable, and secure, it’s easy to see why it’s becoming an attractive way for developers to build websites.
GatsbyJS is a static site generator platform. It’s powered by React, a front-end JavaScript library for building user interfaces, and uses GraphQL, an open-source data query and manipulation language, to pull structured data from other sources, typically a headless CMS like Contentful.
While GatsbyJS and similar platforms have revolutionized much about the web development process, one stubborn challenge remains: image optimization. Even using a modern front-end development framework like GatsbyJS, it tends to be a time-intensive and frustrating exercise.
For most modern websites, it doesn’t help much if you run on a performant technology but your images aren’t optimized. Today, images are the largest contributor to page weight, and growing, and have been singled out by Google as presenting the most significant opportunity for improving performance.
With that in mind, I want to discuss how using an image CDN as part of your technology stack can bring improvements both in terms of website performance and the entire development process.
A Quick Introduction to Gatsby
GatsbyJS is so much more than the conventional static site generators of old. Yes, you still have the ability to integrate with a software version control platform, like Git, as well as to build, deploy, and preview Gatsby projects. However, its services consist of a unified cloud platform that includes high-speed, scalable, and secure hosting as well as expert technical support and powerful third-party integrations.
What’s more, all of it comes wrapped in a user-friendly development platform that shares many similarities with the most popular CMSs of the day. For example, you can leverage pre-designed site templates or pre-configured functions (effectively website elements and modules) to speed up the production process.
It also offers many benefits for developers by allowing you to work with leading frameworks and languages, like JavaScript, React, WebPack, and GraphQL as well as baked-in capabilities to deal with performance, development iterations, etc.
For example, Gatsby does a lot to optimize your performance without any intervention. It comes with built-in code-splitting, prefetching resources, and lazy-loading. Static sites are generally known for being inherently performant, but Gatsby kicks it up a notch.
Does Gatsby Provide Built-in Image Optimization?
Gatsby does, in fact, offer built-in image optimization capabilities.
In fact, it recently upgraded in this regard, replacing the now deprecated gatsby-image package with the brand-new Gatsby image plugin. This plugin consists of two components for static and dynamic images, respectively. Typically, you would use the dynamic component if you’re handling images from a CMS, like Contentful.
Installing this plugin allows you to programmatically pass commands to the underlying framework in the form of properties, shown below:
layout (default: constrained / CONSTRAINED): Determines the size of the image and its resizing behavior.
width/height (default: source image size): Change the size of the image.
aspectRatio (default: source image aspect ratio): Force a specific ratio between the image’s width and height.
placeholder (default: "dominantColor" / DOMINANT_COLOR): Choose the style of temporary image shown while the full image loads.
formats (default: ["auto", "webp"] / [AUTO, WEBP]): File formats of the images generated.
transformOptions (default: [fit: "cover", cropFocus: "attention"]): Options to pass to sharp to control cropping and other image manipulations.
sizes (default: generated automatically): The <img> sizes attribute, passed to the img tag. This describes the display size of the image and does not affect the generated images. You are only likely to change this if you are using full-width images that do not span the full width of the screen.
quality (default: 50): The default image quality generated. This is overridden by any format-specific option.
outputPixelDensities (default: [1, 2] for fixed images; [0.25, 0.5, 1, 2] for constrained): A list of image pixel densities to generate. It will never generate images larger than the source and will always include a 1✕ image. Each density is multiplied by the image width to give the generated sizes. For example, a 400px-wide constrained image would generate 100, 200, 400, and 800px-wide images by default. Ignored for full-width layout images, which use breakpoints instead.
breakpoints (default: [750, 1000, 1366, 1920]): Output widths to generate for full-width images. The default generates widths for common device resolutions. It will never generate an image larger than the source image. The browser will automatically choose the most appropriate one.
blurredOptions (default: none): Options for the low-resolution placeholder image. Ignored unless placeholder is blurred.
tracedSVGOptions (default: none): Options for traced placeholder SVGs. See potrace options. Ignored unless placeholder is traced SVG.
jpgOptions (default: none): Options to pass to sharp when generating JPG images.
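For example, here’s a hedged sketch of a few of these options passed to the plugin’s StaticImage component (the file path and values are placeholders):

import { StaticImage } from "gatsby-plugin-image";

export default function Hero() {
  return (
    <StaticImage
      src="../images/hero.jpg"
      alt="Hero image"
      layout="constrained"
      width={800}
      placeholder="blurred"
      formats={["auto", "webp"]}
      quality={50}
    />
  );
}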
As you can see, that’s quite the toolkit to help developers process images in a variety of ways. The various options can be used to transform, style, and optimize images for performance as well as make images behave dynamically in a number of ways.
In terms of performance optimization, there are a few options that are particularly interesting:
Lazy-loading: Defers loading of off-screen images until they are scrolled into view.
Width/height: Resize image dimensions according to how they will be used.
Placeholder: When lazy-loading or while an image is loading in the background, use a placeholder. This can help to avoid performance penalties for core web vitals, like Cumulative Layout Shift (CLS).
Format: Different formats have inherently more efficient encoding. GatsbyJS supports WebP and AVIF, two of the most performant next-gen image formats.
Quality: Apply a specified level of quality compression to the image between 0 and 100.
Pixel density: A lower pixel density will save image data and can be optimized according to the screen size and PPI (pixels per inch).
Breakpoints: Breakpoints are important for ensuring that you serve a version of an image that’s sized appropriately for a certain threshold of screen sizes, and especially that smaller screens, like tablets or mobile phones, get smaller images. This is called responsive syntax.
So, all in all, Gatsby provides developers with a mature and sophisticated framework to process and optimize image content. The only important feature that’s missing is some type of built-in support for client hints.
However, there is one big catch: All of this has to be implemented manually. While GatsbyJS does use default settings for some image properties, it doesn’t offer built-in intelligence to automatically and dynamically process and serve optimized images tailored to the accessing device.
If you want to create an ideal image optimization solution, your developers will first have to implement device detection capabilities. They will then need to develop the logic to dynamically select optimization operations based on the specific device accessing your web app.
Finally, this code will continually need to be changed and updated. New devices come out all the time with differing properties. What’s more, standards regarding performance as well as image optimization are continually evolving. Even significant changes, additions, or updates to your own image assets may trigger the need to rework your implementation. Not to mention the time it takes to simply stay abreast of the latest information and trends and to make sure development is carried out accordingly.
Another problem is that you will need to continually test and refine your implementation. Without the help of an intelligent optimization engine, you will need to “feel out” how your settings will affect the visual quality of your images and continually fine-tune your approach to get the right results.
This will add a considerable amount of overhead to your development workload in the immediate and long term.
Gatsby also admits that these techniques are quite CPU intensive. In that case, you might want to preoptimize images. However, this also needs to be manually implemented in-code on top of being even less dynamic and flexible.
But, what if there was a better way to optimize your image assets while still enjoying all the benefits of using a platform like Gatsby? The solution I’m about to propose will help solve a number of key issues that arise from using Gatsby (and any development framework, for that matter) for the majority of your image optimization:
Reduce the impact optimizing images have on the development and design process in the immediate and long term.
Remove an additional burden and responsibility from your developers’ shoulders, freeing up time and resources to work on the primary aspects of your web app.
Improve your web app’s ability to dynamically and intelligently optimize image assets for each unique visitor.
All of this, while still integrating seamlessly with GatsbyJS as well as your CMS (in most cases).
Introducing a Better Way to Optimize Image Assets: ImageEngine
ImageEngine works just like any other CDN (content delivery network), such as Fastly, Akamai, or Cloudflare. However, it specializes in optimizing and serving image content specifically.
Like its counterparts, you provide ImageEngine with the location where your image files are stored, it pulls them to its own image optimization servers, and then generates and serves optimized variants of images to your site visitors.
In doing this, ImageEngine is designed to decrease image payload, deliver optimized images tailored to each unique device, and serve images from edge nodes across its global CDN.
Basically, image CDNs gather information on the accessing device by analyzing the ACCEPT header. A typical ACCEPT header looks like this (for Chrome):
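For a recent version of Chrome, it’s along these lines (the exact list varies by version):

accept: image/avif,image/webp,image/apng,image/svg+xml,image/*,*/*;q=0.8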
As you can see, this only provides the CDN with the accepted image formats and the recommended quality compression.
More advanced CDNs, ImageEngine, included, can also leverage client hints for more in-depth data points, such as the DPR (device pixel ratio) and Viewport-Width. This allows a larger degree of intelligent decision-making to more effectively optimize image assets while preserving visual quality.
However, ImageEngine takes things another step further by being the only mainstream image CDN that has built-in WURFL device detection. This gives ImageEngine the ability to read more information on the device, such as the operating system, resolution, and PPI (pixels per inch).
Using AI and machine-learning algorithms, this extra data means ImageEngine has virtually unparalleled decision-making power. Without any manual intervention, ImageEngine can perform all of the following image optimization operations automatically:
Resize your images according to the device screen size without the need for responsive syntax.
Intelligently compress the quality of the image to reduce the payload while preserving visual quality, using metrics like the Structural Similarity Index Method (SSIM).
Convert images to the most optimal, next-gen encoding formats. On top of WebP and AVIF, ImageEngine also supports JPEG 2000 (Safari), JPEG XR (Internet Explorer and Edge), and MP4 (for animated GIFs).
These settings also play well with GatsbyJS’ built-in capabilities. So, you can natively implement breakpoints, lazy-loading, and image placeholders that don’t require any expertise or intelligent decision-making using Gatsby. Then, you can let ImageEngine handle the more advanced and intelligence-driven operations, like quality compression, image formatting, and resizing.
The best part is that ImageEngine does all of this automatically, making it a completely hands-off image optimization solution. ImageEngine will automatically adjust its approach with time as the digital image landscape and standards change, freeing you from this concern.
In fact, ImageEngine recommends using default settings for getting the best results in most situations.
What’s more, this logic is built into the ImageEngine edge servers. Firstly, with over 20 global PoPs, it means that the images are processed and served as close to the end-user as possible. It also means that the majority of processing happens server-side. With the exception of installing the ImageEngine Gatsby plugin, there is virtually no processing overhead at build or runtime.
This type of dynamic, intelligent decision-making will only become more important in the near and medium-term. Thanks to the number and variety of devices growing by the year, it’s becoming harder and harder to implement image optimization in a way that’s sensitive to every device.
That’s why ImageEngine can give you the edge in a mobile-first future that’s continually evolving. Simply put, ImageEngine will help futureproof your Gatsby web app.
How to Integrate ImageEngine with Gatsby: A Quick Guide
Integrating ImageEngine with GatsbyJS is trivial if you have experience installing any other third-party plugins. However, the steps will differ somewhat based on which backend CMS you use with GatsbyJS and where you store your image assets.
For example, you could use it alongside WordPress, Drupal, Contentful, and a range of other popular CMSs.
Usually, your stack would look something like this:
A CMS, like Contentful, to host your “space” where you’ll manage your assets and create structured data. Your images will be uploaded and stored in your space.
A versioning platform, like Github, to host your code and manage your versions and branches.
GatsbyJS to host your workspace, where you’ll build, deploy, and host the front end of your website.
So, the first thing you need to do is set up a site, or project, using GatsbyJS and link it to your CMS.
Next, you’ll install the ImageEngine plugin for GatsbyJS:
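Assuming npm, that’s something like this (double-check the package name against the plugin’s documentation):

npm install @imageengine/gatsby-plugin-imageengine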
You’ll also need to create a delivery address for your images via ImageEngine. You can get one by signing up for the 30-day trial here. The only thing you need to do is supply ImageEngine with the host origin URL. For Contentful, it’s images.ctfassets.net and for Sanity.io, it’s cdn.sanity.io.
ImageEngine will then provide you with a delivery address, usually in the format of {random_string}.cdn.imgeng.in.
You’ll use this delivery address to configure the ImageEngine plugin in your gatsby-config.js file. As part of this, you’ll indicate the source (Contentful, e.g.) as well as provide the ImageEngine delivery address. You can find examples of how that’s done in the documentation here.
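As a rough sketch, with option names that are assumptions based on that documentation:

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: "@imageengine/gatsby-plugin-imageengine",
      options: {
        sources: [
          {
            source: "contentful",
            imageEngineUrl: "https://{random_string}.cdn.imgeng.in/",
          },
        ],
      },
    },
  ],
};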
Note that the ImageEngine plugin features built-in support for Contentful and Sanity.io as asset sources. You can also configure the plugin to pull locally stored images or from another custom source.
Once that’s done, development can begin!
Basically, Gatsby will create GraphQL nodes for the elements created in your CMS (e.g., ContentfulAsset, allSanityImageAsset, etc.). ImageEngine will then create a child node of childImageEngineAsset for each applicable element node.
You’ll then use GraphQL queries in the code for your Gatsby pages to specify the properties of the image variants you want to serve. For example, you can display an image that’s 500 ✕ 300px in the WebP format using the following query:
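A hedged sketch of such a query in a Gatsby page component; treat the exact arguments that childImageEngineAsset accepts as assumptions and verify them against the plugin docs:

import { graphql } from "gatsby";

export const query = graphql`
  query {
    contentfulAsset {
      childImageEngineAsset {
        gatsbyImageData(width: 500, height: 300, formats: [WEBP])
      }
    }
  }
`;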
Once again, you should refer to the documentation for a more thorough treatment. You can find guides for integrating ImageEngine with Contentful, Sanity.io, and any other Gatsby project.
For a competent Gatsby user, integrating ImageEngine will only take a few minutes. And, ongoing maintenance will be minimal. If you know how to use GraphQL, then the familiar syntax to send directives and create specific image variants will be nearly effortless and should take about the same time as manually optimizing images using standard Gatsby React.
Conclusion
For most web projects, ImageEngine can reduce image payloads by up to 80%. That number can go up if you have especially high-res images.
However, you can really get the most out of your image optimization by combining the best parts of a static front-end development framework like Gatsby and an image CDN like ImageEngine. Specifically, you can use both to target Google’s core web vitals:
ImageEngine’s dynamic, intelligent, run-time optimization will optimize payloads to improve LCP, SI, FCP, and other data size-related metrics.
Using Gatsby, you can optimize for CLS and FID using best practices and by natively implementing lazy loading and image placeholders.
ImageEngine provides an Image Speed Test tool where you can quickly evaluate your current performance and see the impact of ImageEngine on key metrics. Even for a simple GatsbyJS project, the results in the Demo tool can be impressive. If you extrapolate those percentages for a larger, image-heavy site, combining Gatsby with ImageEngine could have a dramatic impact on the performance and user experience of your web app. What’s more, your developers will thank you for sparing them from the challenging and time-consuming chore of manual image optimization.
I feel like my quest to make sure this site had pretty sweet (and automatically-generated) social media images (e.g. Open Graph) came to a close once I found Social Image Generator.
The trajectory there was that I ended up talking about it far too much on ShopTalk, to the point it became a common topic in our Discord (join via Patreon), Andy Bell pointed me at Daniel Post’s Social Image Generator and I immediately bought and installed it. I heard from Daniel over Twitter, and we ended up having long conversations about the plugin and my desires for it. Ultimately, Daniel helped me code up some custom designs and write logic to create different social media image designs depending on the information it had (for example, if we provide quote text, it uses a special design for that).
As you likely know, Automattic has been an awesome and long time sponsor for this site, and we often promote Jetpack as a part of that (as I’m a heavy user of it, it’s easy to talk about). One of Jetpack’s many features is helping out with social media. (I did a video on how we do it.) So, it occurred to me… maybe this would be a sweet feature for Jetpack. I mentioned it to the Automattic team and they were into the idea of talking to Daniel. I introduced them back in May, and now it’s September and… Jetpack Acquires WordPress Plugin Social Image Generator
“When I initially saw Social Image Generator, the functionality looked like an ideal fit with our existing social media tools,” said James Grierson, General Manager of Jetpack. “I look forward to the future functionality and user experience improvements that will come out of this acquisition. The goal of our social product is to help content creators expand their audience through increased distribution and engagement. Social Image Generator will be a key component of helping us deliver this to our customers.”
Daniel will also be joining Jetpack to continue developing Social Image Generator and integrating it with Jetpack’s social media features.
Rob Pugh
Heck yeah, congrats Daniel. My dream for this thing is that, eventually, we could start building social media images via regular WordPress PHP templates. The trick is that you need something to screenshot them, like Puppeteer or Playwright. An average WordPress install doesn’t have that available, but because Jetpack is fundamentally a service that leverages the great WordPress cloud to do above-and-beyond things, this is in the realm of possibility.
Automattic is always on the prowl for companies that are doing something interesting in the WordPress ecosystem. The Social Image Generator plugin expertly captured a new niche with an interface that feels like a natural part of WordPress and impressed our chief plugin critic, Justin Tadlock, in a recent review.
“Automattic approached me and let me know they were fans of my plugin,” Post said. “And then we started talking to see what it would be like to work together. We were actually introduced by Chris Coyier from CSS-Tricks, who uses both our products.”
Sarah Gooding
Just had to double-toot my own horn there, you understand.
I recently blogged about how images are hard and it ended up being a big ol’ checklist of things that you could/should think about and implement when placing images on websites.
I think it’s encouraging to see frameworks — these beloved tools that we leverage to help us build websites — offering additional tools within them to help tackle this checklist and take on the hard (but perfectly suited for computers) tasks of displaying images.
I’m not sure I’d give any of them flying colors as far as ease of use. There is stuff to install, configure, and it’s likely you’ll only reach for it if you already know you should be doing it, and your pre-existing knowledge of image performance can help you through the process. It’s not the failing of these frameworks; this stuff is complicated and the audience is developers who are, fair is fair, a little into the idea of control.
I do gotta hand it to my BFF WordPress on this one. You literally do nothing and just get responsive images out of the box. If you need to tap into the filters to control things, you can do that like you can anything else in WordPress: through hooks. If you go for Jetpack (and I highly encourage you to), you flip on the (incredibly, free) Site Accelerator feature, which takes all those images, optimizes them, CDN-hosts them, lazy loads them, and serves them in formats, like WebP, when possible (I would assume more next-gen formats will happen eventually). Jetpack is a sponsor, so full disclosure there, but I use it very much on purpose because the experience makes image handling something I literally don’t have to think about.
Another interesting aspect of frameworks-helping-with-images is that some of it was born out of Google getting involved. Google calls it “Aurora”:
For almost two years, we have worked with some of the most popular frameworks such as Next.js, Nuxt and Angular, working to improve web performance.
The project does all sorts of stuff, including handing out money to help fund open-source tools and directly helping specific initiatives. Like images:
An Image component in Next.js that encapsulates best practices for image loading, followed by a collaboration with Nuxt on the same. Use of this component has resulted in significant improvements to paint times and layout shift (example: 57% reduction in Largest Contentful Paint and 100% reduction in Cumulative Layout Shift on nextjs.org/give).
Cool, right? I think so? What weirds me out about this just a smidge is that it feels meaningful when Google’s squad rolls up to contribute to a framework. They didn’t pick underdog frameworks here, surely on purpose, because they want their work to impact the most people. So, frameworks that are already successful benefit from A-squad contributions. A rich-get-richer situation. I’m not sure it’s a huge problem, but it’s just something I think about.
In my previous article, I created a fragmentation effect using CSS mask and custom properties. It was a neat effect but it has one drawback: it uses a lot of CSS code (generated using Sass). This time I am going to redo the same effect but rely on the new Paint API. This drastically reduces the amount of CSS and completely removes the need for Sass.
Here is what we are making. Like in the previous article, only Chrome and Edge support this for now.
See that? No more than five CSS declarations and yet we get a pretty cool hover animation.
What is the Paint API?
The Paint API is part of the Houdini project. Yes, “Houdini,” the strange term that everyone is talking about. A lot of articles already cover the theoretical aspects of it, so I won’t bother you with more. If I had to sum it up in a few words, I would simply say: it’s the future of CSS. The Paint API (and the other APIs that fall under the Houdini umbrella) allows us to extend CSS with our own functionality. We no longer need to wait for the release of new features because we can do it ourselves!
The CSS Paint API is being developed to improve the extensibility of CSS. Specifically this allows developers to write a paint function which allows us to draw directly into an elements [sic] background, border, or content.
I think the idea is pretty clear. We can draw what we want. Let’s start with a very basic demo of background coloration:
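Here’s a minimal sketch of that demo; the file and worklet names are placeholders:

/* paint.js (the worklet file) */
registerPaint('draw', class {
  paint(ctx, size) {
    /* draw a red rectangle covering the whole area */
    ctx.fillStyle = 'red';
    ctx.fillRect(0, 0, size.width, size.height);
  }
});

/* in the page's JavaScript: CSS.paintWorklet.addModule('paint.js'); */
/* in the CSS: .box { background: paint(draw); } */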
We add the paint worklet using CSS.paintWorklet.addModule('your_js_file').
We register a new paint method called draw.
Inside that, we create a paint() function where we do all the work. And guess what? Everything is like working with <canvas>. That ctx is the 2D context, and I simply used some well-known functions to draw a red rectangle covering the whole area.
This may look unintuitive at first glance, but notice that the main structure is always the same: the three steps above are the “copy/paste” part that you repeat for each project. The real work is the code we write inside the paint() function.
Let’s add a variable:
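Something like this, where --color is a variable name I’m using for illustration:

registerPaint('draw', class {
  /* declare which CSS variables the worklet can read */
  static get inputProperties() {
    return ['--color'];
  }
  paint(ctx, size, properties) {
    /* read the variable and use it as the fill color */
    ctx.fillStyle = properties.get('--color');
    ctx.fillRect(0, 0, size.width, size.height);
  }
});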
As you can see, the logic is pretty simple. We define the getter inputProperties with our variables as an array. We add properties as a third parameter to paint() and later we get our variable using properties.get().
That’s it! Now we have everything we need to build our complex fragmentation effect.
Building the mask
You may wonder why we’re using the Paint API to create a fragmentation effect. We said it’s a tool for drawing images, so how will it allow us to fragment an image?
In the previous article, I created the effect using different mask layers, where each one is a square defined with a gradient (remember that a gradient is an image), so we got a kind of matrix, and the trick was to adjust the alpha channel of each one individually.
This time, instead of using many gradients we will define only one custom image for our mask and that custom image will be handled by our paint API.
An example please!
In the above, I have created an image having an opaque color covering the left part and a semi-transparent one covering the right part. Applying this image as a mask gives us the logical result of a half-transparent image.
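Here’s a sketch of how that mask image could be drawn with a worklet ('split' is a name I made up):

registerPaint('split', class {
  paint(ctx, size) {
    /* fully opaque color over the left half */
    ctx.fillStyle = 'rgba(0, 0, 0, 1)';
    ctx.fillRect(0, 0, size.width / 2, size.height);
    /* semi-transparent color over the right half */
    ctx.fillStyle = 'rgba(0, 0, 0, 0.5)';
    ctx.fillRect(size.width / 2, 0, size.width / 2, size.height);
  }
});

/* applied in the CSS with: -webkit-mask: paint(split); */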
Now all we need to do is to split our image into more parts. Let’s define two variables and update our code:
The relevant part of the code is the following:
const n = properties.get('--f-n');
const m = properties.get('--f-m');
const w = size.width / n;
const h = size.height / m;

for (var i = 0; i < n; i++) {
  for (var j = 0; j < m; j++) {
    ctx.fillStyle = 'rgba(0,0,0,' + Math.random() + ')';
    ctx.fillRect(i * w, j * h, w, h);
  }
}
n and m define the dimensions of our matrix of rectangles, and w and h define the size of each rectangle. Then we have a basic nested for loop to fill each rectangle with a random semi-transparent color.
With a little JavaScript, we get a custom mask that we can easily control by adjusting the CSS variables:
Now, we need to control the alpha channel in order to create the fading effect of each rectangle and build the fragmentation effect.
Let’s introduce a third variable that we use for the alpha channel that we also change on hover.
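A sketch of the CSS side. The --f-o name matches the variable the worklet reads; registering it with @property is what makes it transitionable (the selector and duration are placeholders):

@property --f-o {
  syntax: "<number>";
  inherits: true;
  initial-value: 1;
}

img {
  --f-o: 1;
  transition: --f-o 1s;
}
img:hover {
  --f-o: 0;
}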
We defined a CSS custom property as a <number> that we transition from 1 to 0, and that same property is used to define the alpha channel of our rectangles. Nothing fancy will happen on hover because all the rectangles will fade the same way.
We need a trick to prevent all the rectangles from fading at the same time and instead create a delay between them. Here is an illustration to explain the idea I am going to use:
The above is showing the alpha animation for two rectangles. First, we define a variable L that should be greater than or equal to 1. Then, for each rectangle of our matrix (i.e., for each alpha channel), we perform a transition between X and Y where X - Y = L, so we have the same overall duration for all the alpha channels. X should be greater than or equal to 1 and Y less than or equal to 0.
Wait, shouldn’t the alpha value stay in the [1 0] range?
Yes, it should! And all the tricks that we’re working on rely on that. Above, the alpha is animating from 8 to -2, meaning we have an opaque color in the [8 1] range, a transparent one in the [0 -2] range and an animation within [1 0]. In other words, any value bigger than 1 will have the same effect as 1, and any value smaller than 0 will have the same effect as 0.
Animation within [1 0] will not happen at the same time for both our rectangles. Rectangle 2 will reach [1 0] before Rectangle 1 will. We apply this to all the alpha channels to get our delayed animations.
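In code, inside our nested loop, that idea translates to the following (the same expression shows up again in the triangles demo below):

/* X = Math.random()*(l-1) + 1 when o = 1; Y = X - l when o = 0, so X - Y = l */
var alpha = (Math.random() * (l - 1) + 1) - (1 - o) * l;
ctx.fillStyle = 'rgba(0,0,0,' + alpha + ')';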
L is the variable illustrated previously, and O is the value of our CSS variable that transitions from 1 to 0.
When O=1, we have (Math.random()*(l-1) + 1). Considering the fact that the random() function gives us a value within the [0 1] range, the final value will be in the [L 1] range.
When O=0, we have (Math.random()*(l-1) + 1 - l) and a value within the [0 1-L] range.
L is our variable to control the delay.
Let’s see this in action:
We are getting closer. We have a cool fragmentation effect but not the one we saw in the beginning of the article. This one isn’t as smooth.
The issue is related to the random() function. We said that each alpha channel needs to animate between X and Y, so logically those values need to remain the same. But the paint() function is called a bunch of times during the transition, and each time, the random() function gives us different X and Y values for each alpha channel; hence the “random” effect we are getting.
To fix this, we need a way to store the generated values so they remain the same for each call of the paint() function. Let’s consider a pseudo-random function, a function that always generates the same sequence of values. In other words, we want to control the seed.
Unfortunately, we cannot do this with the JavaScript’s built-in random() function, so like any good developer, let’s pick one up from Stack Overflow:
const mask = 0xffffffff;
const seed = 30; /* update this to change the generated sequence */
let m_w = (123456789 + seed) & mask;
let m_z = (987654321 - seed) & mask;

let random = function () {
  m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask;
  m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;
  var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
  result /= 4294967296;
  return result;
};
And the result becomes:
We have our fragmentation effect without complex code:
a basic nested loop to create NxM rectangles
a clever formula for the channel alpha to create the transition delay
a ready-made random() function taken from the Net
That’s it! All you have to do is to apply the mask property to any element and adjust the CSS variables.
Fighting the gaps!
If you play with the above demos, you will notice, in some particular cases, strange gaps between the rectangles.
To avoid this, we can extend the area of each rectangle with a small offset.
We update this:
ctx.fillRect(i*w, j*h, w, h);
…with this:
ctx.fillRect(i*w-.5, j*h-.5, w+.5, h+.5);
It creates a small overlap between the rectangles that compensates for the gaps between them. There is no particular logic with the value 0.5 I used. You can go bigger or smaller based on your use case.
Want more shapes?
Can the above be extended to consider more than rectangular shapes? Sure it can! Let’s not forget that we can use Canvas to draw any kind of shape, unlike pure CSS shapes where we sometimes need some hacky code. Let’s try to build that triangular fragmentation effect.
After searching the web, I found something called Delaunay triangulation. I won’t go into the deep theory behind it, but it’s an algorithm for a set of points to draw connected triangles with specific properties. There are lots of ready-to-use implementations of it, but we’ll go with Delaunator because it’s supposed to be the fastest of the bunch.
We first define a set of points (we will use random() here), then run Delaunator to generate the triangles for us. In this case, we only need one variable that defines the number of points.
const n = properties.get('--f-n');
const o = properties.get('--f-o');
const w = size.width;
const h = size.height;
const l = 7;

/* we always include the corners (points are [x, y] pairs) */
var dots = [[0, 0], [w, 0], [0, h], [w, h]];
/* we generate N random points within the area of the element */
for (var i = 0; i < n; i++) {
  dots.push([random() * w, random() * h]);
}

/* we call Delaunator to generate the triangles */
var delaunay = Delaunator.from(dots);
var triangles = delaunay.triangles;

/* we loop over the triangles' points and draw each triangle's path */
for (var i = 0; i < triangles.length; i += 3) {
  ctx.beginPath();
  ctx.moveTo(dots[triangles[i]][0], dots[triangles[i]][1]);
  ctx.lineTo(dots[triangles[i + 1]][0], dots[triangles[i + 1]][1]);
  ctx.lineTo(dots[triangles[i + 2]][0], dots[triangles[i + 2]][1]);
  ctx.closePath();

  /* the alpha value */
  var alpha = (random() * (l - 1) + 1) - (1 - o) * l;
  /* we fill the area of the triangle with a semi-transparent color */
  ctx.fillStyle = 'rgba(0,0,0,' + alpha + ')';
  /* we also stroke the path to fight the gaps */
  ctx.strokeStyle = 'rgba(0,0,0,' + alpha + ')';
  ctx.stroke();
  ctx.fill();
}
I have nothing more to add to the comments in the above code. I simply used some basic JavaScript and Canvas stuff and yet we have a pretty cool effect.
We can make even more shapes! All we have to do is to find an algorithm for it.
I cannot move on without doing the hexagon one!
I took the code from this article written by Izan Pérez Cosano. Our variable is now R, which defines the dimension of one hexagon.
What’s next?
Now that we have built our fragmentation effect, let’s focus on the CSS. Notice that the effect is as simple as changing the opacity value (or the value of whichever property you are working with) of an element on its hover state.
This means we can easily integrate this kind of effect to create more complex animations. Here are a bunch of ideas!
Responsive image slider
Another version of the same slider:
Noise effect
Loading screen
Card hover effect
That’s a wrap
And all of this is just the tip of the iceberg of what can be achieved using the Paint API. I’ll end with two important points:
The Paint API is 90% <canvas>, so the more you know about <canvas>, the more fancy things you can do. Canvas is widely used, which means there’s a bunch of documentation and writing about it to get you up to speed. Hey, here’s one right here on CSS-Tricks!
The Paint API removes all the complexity from the CSS side of things. There’s no dealing with complex and hacky code to draw cool stuff. This makes CSS code so much easier to maintain, not to mention less prone to error.