
Pre-Caching Images with React Suspense

Suspense is an exciting, upcoming feature of React that will enable developers to easily allow their components to delay rendering until they’re “ready,” leading to a much smoother user experience. “Ready,” in this context, can mean a number of things. For example, your data loading utility can tie into Suspense, allowing for consistent loading states to be displayed when any data are in flight, without needing to manually track loading state per query. Then, when your data are available, and your component is “ready,” it’ll render. This is the subject that’s most commonly discussed with Suspense, and I’ve written about it previously; however, data loading is only one use case among many where Suspense can improve user experience. Another one I want to talk about today is image preloading.

Have you ever made, or used, a web app where, after landing on a screen, your place on it staggers and jumps as images download and render? We call that content reflow, and it can be both jarring and unpleasant. Suspense can help with this. You know how I said that Suspense is all about holding a component back from rendering until it’s ready? Fortunately, “ready” in this context is pretty open-ended — and for our purposes can include “images we need have finished preloading.” Let’s see how!

Quick crash course on Suspense

Before we dive into specifics, let’s take a quick look at how Suspense works. It has two main parts. The first is the concept of a component suspending. This means React attempts to render our component, but it’s not “ready.” When this happens, the nearest “fallback” in the component tree will render. We’ll look at making fallbacks shortly (it’s fairly straightforward), but the way in which a component tells React it’s not ready is by throwing a promise. React will catch that promise, realize the component isn’t ready, and render the fallback. When the promise resolves, React will again attempt to render. Rinse, wash and repeat. Yes, I’m over-simplifying things a tad, but this is the gist of how Suspense works and we’ll expand on some of these concepts as we go.
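To make that mechanism concrete, here’s a tiny framework-free model of the throw-a-promise idea. This is purely an illustrative sketch (the `attemptRender` helper and `MyComponent` are made up for this example), not React’s actual implementation:

```javascript
// Illustrative sketch of "throw a promise" — not React's real internals.
function attemptRender(component) {
  try {
    return component(); // "ready" path: render output comes back
  } catch (thrown) {
    if (thrown instanceof Promise) {
      // React catches the promise, shows the nearest fallback,
      // then re-attempts the render once the promise resolves.
      return "fallback";
    }
    throw thrown; // real errors still propagate
  }
}

let ready = false;
const pending = Promise.resolve().then(() => { ready = true; });

function MyComponent() {
  if (!ready) throw pending; // "not ready yet"
  return "content";
}

attemptRender(MyComponent); // first attempt returns "fallback"
```

Once `pending` resolves, a second call to `attemptRender(MyComponent)` returns `"content"`, which mirrors React retrying the render after the promise settles.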

The second part of Suspense is the introduction of “transition” state updates. This means we set state, but tell React that the state change may cause a component to suspend, and if this happens, to not render a fallback. Instead, we want to continue viewing the current screen, until the state update is ready, at which point it’ll render. And, of course, React provides us with a “pending” boolean indicator that lets the developer know this is in progress so we can provide inline loading feedback.

Let’s preload some images!

First off, I want to note that there’s a full demo of what we’re making at the end of this article. Feel free to open the demo now if you just want to jump into the code. It’ll show how to preload images with Suspense, combined with transition state updates. The rest of this post will build that code up step-by-step, explaining the hows and whys along the way.

OK, let’s go!

We want our component to suspend until all of its images have preloaded. To make things as simple as possible, let’s make a <SuspenseImage> component that receives a src attribute, preloads the image, handles the exception throwing, and then renders an <img> when everything’s ready. Such a component would allow us to seamlessly drop our <SuspenseImage> component wherever we want an image displayed, and Suspense would handle the grunt work of holding onto it until everything is ready.

We can start by making a preliminary sketch of the code:

const SuspenseImg = ({ src, ...rest }) => {
  // todo: preload and throw somehow
  return <img alt="" src={src} {...rest} />;
};

So we have two things to sort out: (1) how to preload an image, and (2) tying in exception throwing. The first part is pretty straightforward. We’re all used to using images in HTML via <img src="some-image.png"> but we can also create images imperatively using the Image() object in JavaScript; moreover, images we create like this have an onload callback that fires when the image has … loaded. It looks like this:

const img = new Image();
img.onload = () => {
  // image is loaded
};
img.src = "some-image.png"; // setting src kicks off the download; onload fires when it finishes

But how do we tie that into exception throwing? If you’re like me, your first inclination might be something like this:

const SuspenseImg = ({ src, ...rest }) => {
  throw new Promise((resolve) => {
    const img = new Image();
    img.onload = () => {
      resolve();
    };
    img.src = src;
  });
  return <img alt="" src={src} {...rest} />;
};

The problem, of course, is that this will always throw a promise. Every single time React attempts to render a <SuspenseImg> instance, a new promise will be created, and promptly thrown. Instead, we only want to throw a promise until the image has loaded. There’s an old saying that every problem in computer science can be solved by adding a layer of indirection (except for the problem of too many layers of indirection), so let’s do just that and build an image cache. When we read a src, the cache will check whether it has already loaded that image; if not, it’ll begin the preload and throw the promise. And if the image has already preloaded, it’ll just return true and let React get on with rendering our image.

Here’s what our <SuspenseImage> component looks like:

export const SuspenseImg = ({ src, ...rest }) => {
  imgCache.read(src);
  return <img src={src} {...rest} />;
};

And here’s what a minimal version of our cache looks like:

const imgCache = {
  __cache: {},
  read(src) {
    if (!this.__cache[src]) {
      this.__cache[src] = new Promise((resolve) => {
        const img = new Image();
        img.onload = () => {
          resolve();
        };
        img.src = src;
      }).then(() => {
        this.__cache[src] = true;
      });
    }
    if (this.__cache[src] instanceof Promise) {
      throw this.__cache[src];
    }
    return this.__cache[src];
  }
};

It’s not perfect, but it’s good enough for now. Let’s go ahead and put it to use.

The implementation

Remember, there’s a link to the fully working demo below, so if I move too fast at any particular step, don’t despair. We’ll explain things as we go.

Let’s start by defining our fallback. We define a fallback by placing a Suspense tag in our component tree, and pass our fallback via the fallback prop. Any component which suspends will search upward for the nearest Suspense tag, and render its fallback (but if no Suspense tag is found, an error will be thrown). A real app would likely have many Suspense tags throughout, defining specific fallbacks for its various modules, but for this demo, we only need a single one wrapping our root app.

function App() {
  return (
    <Suspense fallback={<Loading />}>
      <ShowImages />
    </Suspense>
  );
}

The <Loading> component is a basic spinner, but in a real app, you’d likely want to render some sort of empty shell of the actual component you’re trying to render, to provide a more seamless experience. 

With that in place, our <ShowImages> component eventually renders our images with this:

<FlowItems>
  {images.map(img => (
    <div key={img}>
      <SuspenseImg alt="" src={img} />
    </div>
  ))}
</FlowItems>

On initial load, our loading spinner will show, until our initial images are ready, at which point they all show at once, without any staggered reflow jankiness.

Transition state update

Once the images are in place, when we load the next batch of them, we’d like to have them show up after they’ve loaded, of course, but keep the existing images on the screen while they load. We do this with the useTransition hook. This returns a startTransition function, and an isPending boolean, which indicates that our state update is in progress, but has suspended (or even if it hasn’t suspended, may still be true if the state update is simply taking too long). Lastly, when calling useTransition, you need to pass a timeoutMs value, which is the maximum amount of time the isPending flag can be true, before React just gives up and renders the fallback (note, the timeoutMs argument will likely be removed in the near future, with the transition state updates simply waiting as long as necessary when updating existing content).

Here’s what mine looks like:

const [startTransition, isPending] = useTransition({ timeoutMs: 10000 });

We’ll allow for 10 seconds to pass before our fallback shows, which is likely too long in real life, but is suitable for the purposes of this demo, especially when you might be purposefully slowing your network speed down in DevTools to experiment.

Here’s how we use it. When you click the button to load more images, the code looks like this:

startTransition(() => {
  setPage(p => p + 1);
});

That state update will trigger a new data load using my GraphQL client micro-graphql-react, which, being Suspense-compatible, will throw a promise for us while the query is in flight. Once the data come back, our component will attempt to render, and suspend again while our images are preloading. While all of this is happening, our isPending value will be true, which will allow us to display a loading spinner on top of our existing content.
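The general shape of a Suspense-compatible data source looks something like this. This is a simplified sketch of the pattern, not micro-graphql-react’s actual internals (the `wrapQuery` helper is made up for illustration):

```javascript
// Simplified sketch of a Suspense-compatible query wrapper.
function wrapQuery(promise) {
  let status = "pending";
  let result;
  const suspender = promise.then(
    (data) => { status = "success"; result = data; },
    (error) => { status = "error"; result = error; }
  );
  return {
    read() {
      if (status === "pending") throw suspender; // suspend while in flight
      if (status === "error") throw result;      // let error boundaries handle failures
      return result;                             // data is ready: render away
    },
  };
}
```

A component calls `read()` during render; while the query is in flight, the thrown promise suspends it, and once the data come back, `read()` simply returns them.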

Avoiding network waterfalls 

You might be wondering how React blocks rendering while image preloading is taking place. With the code above, when we do this:

{images.map(img => (

…along with our <SuspenseImage> rendered therein, will React attempt to render the first image, suspend, then re-attempt the list, get past the first image, which is now in our cache, only to suspend on the second image, then the third, fourth, and so on? If you’ve read about Suspense before, you might be wondering if we need to manually preload all the images in our list before all this rendering occurs.

It turns out there’s no need to worry, and no need for awkward preloading because React is fairly smart about how it renders things in a Suspense world. As React is making its way through our component tree, it doesn’t just stop when it hits a suspension. Instead, it continues rendering all other paths through our component tree. So, yeah, when it attempts to render image zero, a suspension will occur, but React will continue attempting to render images 1 through N, and only then suspend.

You can see this in action by looking at the Network tab in the full demo, when you click the “Next images” button. You should see the entire bucket of images immediately show up in the network list, resolve one by one, and when all finished, the results should show up on screen. To really amplify this effect, you might want to slow your network speed down to “Fast 3G.”

For fun, we can force Suspense to waterfall over our images by manually reading each image from our cache before React attempts to render our component, diving through every path in the component tree.

images.forEach((img) => imgCache.read(img));

I created a demo that illustrates this. If you similarly look at the Network tab when a new set of images comes in, you’ll see them added sequentially in the network list (but don’t run this with your network speed slowed down).

Suspend late

There’s a corollary to keep in mind when using Suspense: suspend as late in the rendering and as low in the component tree as possible. If you have some sort of <ImageList> which renders a bunch of suspending images, make sure each and every image suspends in its own component so React can reach it separately, and so none will block the others, resulting in a waterfall. 

The data loading version of this rule is that data should be loaded as late as possible by the components that actually need it. That means we should avoid doing something like this in a single component:

const { data1 } = useSuspenseQuery(QUERY1, vars1);
const { data2 } = useSuspenseQuery(QUERY2, vars2);

The reason we want to avoid that is because query one will suspend, followed by query two, causing a waterfall. If this is simply unavoidable, we’ll need to manually preload both queries before the suspensions.
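The preload-then-read pattern can be sketched like this. The `preloadQuery` helper here is hypothetical (adapt it to whatever data layer you use); the point is simply that both requests start before either `read()` can suspend:

```javascript
// Sketch: kick off both fetches before reading either one.
function preloadQuery(fetcher) {
  let status = "pending";
  let result;
  const suspender = fetcher().then((data) => {
    status = "done";
    result = data;
  });
  return {
    read() {
      if (status === "pending") throw suspender;
      return result;
    },
  };
}

// Both fetches begin immediately, in parallel...
const query1 = preloadQuery(() => Promise.resolve("data1"));
const query2 = preloadQuery(() => Promise.resolve("data2"));
// ...so suspending on query1.read() doesn't delay query2 at all.
```

Contrast that with the back-to-back `useSuspenseQuery` calls above, where the second query can’t even start until the first one resolves.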

The demo

Here’s the demo I promised. It’s the same one I linked up above.

If you run it with your dev tools open, make sure you uncheck the box that says “Disable Cache” in the DevTools Network tab, or you’ll defeat the entire demo. 

The code is almost identical to what I showed earlier. One improvement in the demo is that our cache read method has this line:

setTimeout(() => resolve({}), 7000);

It’s nice to have all our images preloaded nicely, but in real life we probably don’t want to hold up rendering indefinitely just because one or two straggling images are coming in slowly. So after some amount of time, we just give the green light, even though the image isn’t ready yet. The user will see an image or two flicker in, but it’s better than enduring the frustration of frozen software. I’ll also note that seven seconds is probably excessive, but for this demo, I’m assuming users might be slowing network speeds in DevTools to see Suspense features more clearly, and wanted to support that.

The demo also has a precache images checkbox. It’s checked by default, but you can uncheck it to replace the <SuspenseImage> component with a regular ol’ <img> tag, if you want to compare the Suspense version to “normal React” (just don’t check it while results are coming in, or the whole UI may suspend, and render the fallback).

Lastly, as always with CodeSandbox, some state may occasionally get out of sync, so hit the refresh button if things start to look weird or broken.

Odds and ends

There was one massive bug I accidentally made when putting this demo together. I didn’t want multiple runs of the demo to lose their effect as the browser caches images it’s already downloaded. So I manually modify all of the URLs with a cache buster:

const [cacheBuster, setCacheBuster] = useState(INITIAL_TIME);

const { data } = useSuspenseQuery(GET_IMAGES_QUERY, { page });
const images = data.allBooks.Books.map(
  (b) => b.smallImage + `?cachebust=${cacheBuster}`
);

INITIAL_TIME is defined at the module level (i.e. globally) with this line:

const INITIAL_TIME = +new Date();

And if you’re wondering why I didn’t do this instead:

const [cacheBuster, setCacheBuster] = useState(+new Date());

…it’s because this does horrible, horrible things. On first render, the images attempt to render. The cache causes a suspension, and React cancels the render, and shows our fallback. When all of the promises have resolved, React will attempt this initial render anew, and our initial useState call will re-run, which means that this:

const [cacheBuster, setCacheBuster] = useState(+new Date());

…will re-run, with a new initial value, causing an entirely new set of image URLs, which will suspend all over again, ad infinitum. The component never finishes rendering, and the CodeSandbox demo grinds to a halt (making this frustrating to debug).

This might seem like a weird one-off problem caused by a unique requirement for this particular demo, but there’s a larger lesson: rendering should be pure, without side effects. React should be able to re-attempt rendering your component any number of times, and (given the same initial props) the same exact state should come out the other end.

The post Pre-Caching Images with React Suspense appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.



Nailing the Perfect Contrast Between Light Text and a Background Image

Have you ever come across a site where light text is sitting on a light background image? If you have, you’ll know how difficult that is to read. A popular way to avoid that is to use a transparent overlay. But this leads to an important question: Just how transparent should that overlay be? It’s not like we’re always dealing with the same font sizes, weights, and colors, and, of course, different images will result in different contrasts.

Trying to stamp out poor text contrast on background images is a lot like playing Whac-a-Mole. Instead of guessing, we can solve this problem with HTML <canvas> and a little bit of math.

Like this:

We could say “Problem solved!” and simply end this article here. But where’s the fun in that? What I want to show you is how this tool works so you have a new way to handle this all-too-common problem.

Here’s the plan

First, let’s get specific about our goals. We’ve said we want readable text on top of a background image, but what does “readable” even mean? For our purposes, we’ll use the WCAG definition of AA-level readability, which says text and background colors need enough contrast between them such that one color is 4.5 times lighter than the other.

Let’s pick a text color, a background image, and an overlay color as a starting point. Given those inputs, we want to find the overlay opacity level that makes the text readable without hiding the image so much that it, too, is difficult to see. To complicate things a bit, we’ll use an image with both dark and light space and make sure the overlay takes that into account.

Our final result will be a value we can apply to the CSS opacity property of the overlay that gives us the right amount of transparency that makes the text 4.5 times lighter than the background.

Optimal overlay opacity: 0.521

To find the optimal overlay opacity we’ll go through four steps:

  1. We’ll put the image in an HTML <canvas>, which will let us read the colors of each pixel in the image.
  2. We’ll find the pixel in the image that has the least contrast with the text.
  3. Next, we’ll prepare a color-mixing formula we can use to test different opacity levels on top of that pixel’s color.
  4. Finally, we’ll adjust the opacity of our overlay until the text contrast hits the readability goal. And these won’t just be random guesses — we’ll use binary search techniques to make this process quick.

Let’s get started!

Step 1: Read image colors from the canvas

Canvas lets us “read” the colors contained in an image. To do that, we need to “draw” the image onto a <canvas> element and then use the canvas context (ctx) getImageData() method to produce a list of the image’s colors.

function getImagePixelColorsUsingCanvas(image, canvas) {
  // The canvas's context (often abbreviated as ctx) is an object
  // that contains a bunch of functions to control your canvas
  const ctx = canvas.getContext('2d');

  // The width can be anything, so I picked 500 because it's large
  // enough to catch details but small enough to keep the
  // calculations quick.
  canvas.width = 500;

  // Make sure the canvas matches proportions of our image
  canvas.height = (image.height / image.width) * canvas.width;

  // Grab the image and canvas measurements so we can use them in the next step
  const sourceImageCoordinates = [0, 0, image.width, image.height];
  const destinationCanvasCoordinates = [0, 0, canvas.width, canvas.height];

  // Canvas's drawImage() works by mapping our image's measurements onto
  // the canvas where we want to draw it
  ctx.drawImage(
    image,
    ...sourceImageCoordinates,
    ...destinationCanvasCoordinates
  );

  // Remember that getImageData only works for same-origin or
  // cross-origin-enabled images.
  // https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image
  const imagePixelColors = ctx.getImageData(...destinationCanvasCoordinates);
  return imagePixelColors;
}

The getImageData() method gives us a list of numbers representing the colors in each pixel. Each pixel is represented by four numbers: red, green, blue, and opacity (also called “alpha”). Knowing this, we can loop through the list of pixels and find whatever info we need. This will be useful in the next step.
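For example, pulling a single pixel’s color out of that flat array looks like this. The `getPixelColor` helper and the tiny fake data array are made up for illustration; in practice the data would come from `ctx.getImageData()`:

```javascript
// Each pixel occupies four consecutive entries in the data array.
function getPixelColor(imagePixelColors, pixelIndex) {
  const i = pixelIndex * 4; // 4 values per pixel: r, g, b, alpha
  return {
    r: imagePixelColors.data[i],
    g: imagePixelColors.data[i + 1],
    b: imagePixelColors.data[i + 2],
    a: imagePixelColors.data[i + 3],
  };
}

// A tiny stand-in for real canvas data: one red pixel, one blue pixel.
const fakeImageData = { data: [255, 0, 0, 255, 0, 0, 255, 255] };
getPixelColor(fakeImageData, 1); // → { r: 0, g: 0, b: 255, a: 255 }
```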

Image of a blue and purple rose on a light pink background. A section of the rose is magnified to reveal the RGBA values of a specific pixel.

Step 2: Find the pixel with the least contrast

Before we do this, we need to know how to calculate contrast. We’ll write a function called getContrast() that takes in two colors and spits out a number representing the level of contrast between the two. The higher the number, the better the contrast for legibility.

When I started researching colors for this project, I was expecting to find a simple formula. It turned out there were multiple steps.

To calculate the contrast between two colors, we need to know their luminance levels, which are essentially their brightness. (Stacie Arellano does a deep dive on luminance that’s worth checking out.)

Thanks to the W3C, we know the formula for calculating contrast using luminance:

const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);

Getting the luminance of a color means we have to convert the color from the regular 8-bit RGB value used on the web (where each color is 0-255) to what’s called linear RGB. The reason we need to do this is that brightness doesn’t increase evenly as colors change. We need to convert our colors into a format where the brightness does vary evenly with color changes. That allows us to properly calculate luminance. Again, the W3C is a help here:

const luminance = (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));

But wait, there’s more! In order to convert 8-bit RGB (0 to 255) to linear RGB, we need to go through what’s called standard RGB (also called sRGB), which is on a scale from 0 to 1.

So the process goes: 

8-bit RGB → standard RGB  → linear RGB → luminance

And once we have the luminance of both colors we want to compare, we can plug in the luminance values to get the contrast between their respective colors.

// getContrast is the only function we need to interact with directly.
// The rest of the functions are intermediate helper steps.
function getContrast(color1, color2) {
  const color1_luminance = getLuminance(color1);
  const color2_luminance = getLuminance(color2);
  const lighterColorLuminance = Math.max(color1_luminance, color2_luminance);
  const darkerColorLuminance = Math.min(color1_luminance, color2_luminance);
  const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);
  return contrast;
}

function getLuminance({r, g, b}) {
  return (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));
}

function getLinearRGB(primaryColor_8bit) {
  // First convert from 8-bit RGB (0-255) to standard RGB (0-1)
  const primaryColor_sRGB = convert_8bit_RGB_to_standard_RGB(primaryColor_8bit);

  // Then convert from sRGB to linear RGB so we can use it to calculate luminance
  const primaryColor_RGB_linear = convert_standard_RGB_to_linear_RGB(primaryColor_sRGB);
  return primaryColor_RGB_linear;
}

function convert_8bit_RGB_to_standard_RGB(primaryColor_8bit) {
  return primaryColor_8bit / 255;
}

function convert_standard_RGB_to_linear_RGB(primaryColor_sRGB) {
  const primaryColor_linear = primaryColor_sRGB < 0.03928 ?
    primaryColor_sRGB / 12.92 :
    Math.pow((primaryColor_sRGB + 0.055) / 1.055, 2.4);
  return primaryColor_linear;
}

Now that we can calculate contrast, we’ll need to look at our image from the previous step and loop through each pixel, comparing the contrast between that pixel’s color and the foreground text color. As we loop through the image’s pixels, we’ll keep track of the worst (lowest) contrast so far, and when we reach the end of the loop, we’ll know the worst-contrast color in the image.

function getWorstContrastColorInImage(textColor, imagePixelColors) {
  let worstContrastColorInImage;
  let worstContrast = Infinity; // This guarantees we won't start too low
  for (let i = 0; i < imagePixelColors.data.length; i += 4) {
    let pixelColor = {
      r: imagePixelColors.data[i],
      g: imagePixelColors.data[i + 1],
      b: imagePixelColors.data[i + 2],
    };
    let contrast = getContrast(textColor, pixelColor);
    if (contrast < worstContrast) {
      worstContrast = contrast;
      worstContrastColorInImage = pixelColor;
    }
  }
  return worstContrastColorInImage;
}

Step 3: Prepare a color-mixing formula to test overlay opacity levels

Now that we know the worst-contrast color in our image, the next step is to establish how transparent the overlay should be and see how that changes the contrast with the text.

When I first implemented this, I used a separate canvas to mix colors and read the results. However, thanks to Ana Tudor’s article about transparency, I now know there’s a convenient formula to calculate the resulting color from mixing a base color with a transparent overlay.

For each color channel (red, green, and blue), we’d apply this formula to get the mixed color:

mixedColor = baseColor + (overlayColor - baseColor) * overlayOpacity

So, in code, that would look like this:

function mixColors(baseColor, overlayColor, overlayOpacity) {
  const mixedColor = {
    r: baseColor.r + (overlayColor.r - baseColor.r) * overlayOpacity,
    g: baseColor.g + (overlayColor.g - baseColor.g) * overlayOpacity,
    b: baseColor.b + (overlayColor.b - baseColor.b) * overlayOpacity,
  };
  return mixedColor;
}

Now that we’re able to mix colors, we can test the contrast when the overlay opacity value is applied.

function getTextContrastWithImagePlusOverlay({textColor, overlayColor, imagePixelColor, overlayOpacity}) {
  const colorOfImagePixelPlusOverlay = mixColors(imagePixelColor, overlayColor, overlayOpacity);
  const contrast = getContrast(textColor, colorOfImagePixelPlusOverlay);
  return contrast;
}

With that, we have all the tools we need to find the optimal overlay opacity!

Step 4: Find the overlay opacity that hits our contrast goal

We can test an overlay’s opacity and see how that affects the contrast between the text and image. We’re going to try a bunch of different opacity levels until we find the contrast that hits our mark where the text is 4.5 times lighter than the background. That may sound crazy, but don’t worry; we’re not going to guess randomly. We’ll use a binary search, which is a process that lets us quickly narrow down the possible set of answers until we get a precise result.

Here’s how a binary search works:

  • Guess in the middle.
  • If the guess is too high, we eliminate the top half of the answers. Too low? We eliminate the bottom half instead.
  • Guess in the middle of that new range.
  • Repeat this process until we get a value.

I just so happen to have a tool to show how this works:

In this case, we’re trying to guess an opacity value that’s between 0 and 1. So, we’ll guess in the middle, test whether the resulting contrast is too high or too low, eliminate half the options, and guess again. If we limit the binary search to eight guesses, we’ll get a precise answer in a snap.
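Eight guesses buys more precision than it might sound like, since each guess halves the remaining range:

```javascript
// Each guess halves the remaining search range, so after n guesses the
// uncertainty is 1 / 2^n of the starting range.
const startingRange = 1; // opacity runs from 0 to 1
const guesses = 8;
const precision = startingRange / 2 ** guesses;
console.log(precision); // 0.00390625, well under half a percent of opacity
```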

Before we start searching, we’ll need a way to check if an overlay is even necessary in the first place. There’s no point optimizing an overlay we don’t even need!

function isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast) {
  const contrastWithoutOverlay = getContrast(textColor, worstContrastColorInImage);
  return contrastWithoutOverlay < desiredContrast;
}

Now we can use our binary search to look for the optimal overlay opacity:

function findOptimalOverlayOpacity(textColor, overlayColor, worstContrastColorInImage, desiredContrast) {
  // If the contrast is already fine, we don't need the overlay,
  // so we can skip the rest.
  if (!isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast)) {
    return 0;
  }

  const opacityGuessRange = {
    lowerBound: 0,
    midpoint: 0.5,
    upperBound: 1,
  };
  let numberOfGuesses = 0;
  const maxGuesses = 8;

  // If there's no solution, the opacity guesses will approach 1,
  // so we can hold onto this as an upper limit to check for the no-solution case.
  const opacityLimit = 0.99;

  // This loop repeatedly narrows down our guesses until we get a result
  while (numberOfGuesses < maxGuesses) {
    numberOfGuesses++;

    const currentGuess = opacityGuessRange.midpoint;
    const contrastOfGuess = getTextContrastWithImagePlusOverlay({
      textColor,
      overlayColor,
      imagePixelColor: worstContrastColorInImage,
      overlayOpacity: currentGuess,
    });

    const isGuessTooLow = contrastOfGuess < desiredContrast;
    const isGuessTooHigh = contrastOfGuess > desiredContrast;
    if (isGuessTooLow) {
      opacityGuessRange.lowerBound = currentGuess;
    } else if (isGuessTooHigh) {
      opacityGuessRange.upperBound = currentGuess;
    }

    const newMidpoint = ((opacityGuessRange.upperBound - opacityGuessRange.lowerBound) / 2) + opacityGuessRange.lowerBound;
    opacityGuessRange.midpoint = newMidpoint;
  }

  const optimalOpacity = opacityGuessRange.midpoint;
  const hasNoSolution = optimalOpacity > opacityLimit;

  if (hasNoSolution) {
    console.log('No solution'); // Handle the no-solution case however you'd like
    return opacityLimit;
  }
  return optimalOpacity;
}

With our experiment complete, we now know exactly how transparent our overlay needs to be to keep our text readable without hiding the background image too much.

We did it!

Improvements and limitations

The methods we’ve covered only work if the text color and the overlay color have enough contrast to begin with. For example, if you were to choose a text color that’s the same as your overlay, there won’t be an optimal solution unless the image doesn’t need an overlay at all.

In addition, even if the contrast is mathematically acceptable, that doesn’t always guarantee it’ll look great. This is especially true for dark text with a light overlay and a busy background image. Various parts of the image may distract from the text, making it difficult to read even when the contrast is numerically fine. That’s why the popular recommendation is to use light text on a dark background.

We also haven’t taken where the pixels are located into account or how many there are of each color. One drawback of that is that a pixel in the corner could possibly exert too much influence on the result. The benefit, however, is that we don’t have to worry about how the image’s colors are distributed or where the text is because, as long as we’ve handled where the least amount of contrast is, we’re safe everywhere else.

I learned a few things along the way

There are some things I walked away with after this experiment, and I’d like to share them with you:

  • Getting specific about a goal really helps! We started with a vague goal of wanting readable text on an image, and we ended up with a specific contrast level we could strive for.
  • It’s so important to be clear about the terms. For example, standard RGB wasn’t what I expected. I learned that what I thought of as “regular” RGB (0 to 255) is formally called 8-bit RGB. Also, I thought the “L” in the equations I researched meant “lightness,” but it actually means “luminance,” which is not to be confused with “luminosity.” Clearing up terms helps how we code as well as how we discuss the end result.
  • Complex doesn’t mean unsolvable. Problems that sound hard can be broken into smaller, more manageable pieces.
  • When you walk the path, you spot the shortcuts. For the common case of white text on a black transparent overlay, you’ll never need an opacity over 0.54 to achieve WCAG AA-level readability.
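That last shortcut is easy to double-check with the formulas from earlier. This sketch assumes white text, a black overlay, and a worst-case pure-white pixel underneath, and searches for the smallest opacity that hits the 4.5:1 target:

```javascript
// Double-checking the 0.54 ceiling for white text on a black overlay.
function getLinearRGB(sRGB) {
  return sRGB < 0.03928 ? sRGB / 12.92 : Math.pow((sRGB + 0.055) / 1.055, 2.4);
}

// White text has luminance 1, so AA (4.5:1) requires the background's
// luminance L to satisfy (1 + 0.05) / (L + 0.05) >= 4.5:
const maxLuminance = 1.05 / 4.5 - 0.05; // ≈ 0.1833

// A black overlay at opacity a over a pure-white pixel leaves a gray with
// sRGB value (1 - a); binary-search for the opacity where the luminance
// drops to the maximum allowed.
let lowerBound = 0, upperBound = 1;
for (let i = 0; i < 20; i++) {
  const mid = (lowerBound + upperBound) / 2;
  if (getLinearRGB(1 - mid) > maxLuminance) {
    lowerBound = mid; // still too bright: needs more overlay
  } else {
    upperBound = mid;
  }
}
console.log(lowerBound.toFixed(3)); // ≈ 0.535, so 0.54 is a safe ceiling
```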

In summary…

You now have a way to make your text readable on a background image without sacrificing too much of the image. If you’ve gotten this far, I hope I’ve been able to give you a general idea of how it all works.

I originally started this project because I saw (and made) too many website banners where the text was tough to read against a background image or the background image was overly obscured by the overlay. I wanted to do something about it, and I wanted to give others a way to do the same. I wrote this article in hopes that you’d come away with a better understanding of readability on the web. I hope you’ve learned some neat canvas tricks too.

If you’ve done something interesting with readability or canvas, I’d love to hear about it in the comments!

The post Nailing the Perfect Contrast Between Light Text and a Background Image appeared first on CSS-Tricks.




WebP Image Support Coming to iOS 14

Apple announced a ton of new updates at yesterday’s WWDC20 keynote address, from new hardware to updated applications. There’s lots to gawk at and enough device-envy to go around.

But there’s one little line in the Safari 14 Beta release notes that caught my eye:

Added WebP image support.


This excites me because WebP is a super progressive format that offers the lossless and lossy encoding we get from other image formats we already use, like JPEG and PNG, but at a fraction of the file size. We use WebP right here at CSS-Tricks, thanks to Jetpack and its Site Accelerator feature that serves WebP versions of the images we upload to browsers that support them. Jeremy Wagner has a great write-up on WebP and how to work with it, specifically for WordPress.

So, yes, this means WebP will be largely supported across the board (:IE-sad-trombone:) once Safari 14 ships.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome   Firefox   IE    Edge   Safari
32       65        No    18     No

Mobile / Tablet

Android Chrome   Android Firefox   Android    iOS Safari
81               68                4.2-4.3    No

Even with that great support, defining fallbacks with the <picture> element is still a good idea.

<picture>
  <source srcset="img/cat.webp" type="image/webp">
  <source srcset="img/cat.jpg" type="image/jpeg">
  <img src="img/cat.jpg" alt="Alt Text!">
</picture>

Oh, and not to be completely overshadowed, Safari 14 also squeezes in some CSS goodies, like the :is() and :where() pseudo class functions, which we linked up a couple of weeks ago. Jen Simmons picked out other key features we should be excited about.

Direct Link to Article

The post WebP Image Support Coming to iOS 14 appeared first on CSS-Tricks.



How to Repeat Text as a Background Image in CSS Using element()

There’s a design trend I’ve seen popping up all over the place. Maybe you’ve seen it too. It’s this sort of thing where text is repeated over and over. A good example is the price comparison website, GoCompare, who used it in a major multi-channel advertising campaign.

Nike has used it as well, like in this advertisement:

Diggin’ that orange! (Source)

I couldn’t help but wonder how I would implement this sort of design for the web. I mean, we could obviously just repeat the text in markup. We could also export the design as an image using something like Photoshop, but putting text in images is bad for both SEO and accessibility. Then there’s the fact that, even if we did use actual text, it’s not like we’d want a screen reader to speak it all out.


OK, stop already!

These considerations make it seem unrealistic to do something like this on the web. Then I found myself pining for the long-existing, yet badly supported, element() feature in CSS. It enables the use of any HTML element as a background image, whether it be a single button element, or an entire <div> full of content.

According to the spec:

The element() function only reproduces the appearance of the referenced element, not the actual content and its structure. Authors should only use this for decorative purposes.

For our purposes, we’d be referencing a text element to get that repeating effect.

Let’s define an ID we can apply to the text element we want to repeat. Let’s call it #thingy. Note that when we use #thingy, we’ve got to prefix the element() value with -moz-. While element() has been supported in Firefox since 2010, it sadly hasn’t landed in any other browser since.

.element {
  background-image: -moz-element(#thingy);
}

Here’s a somewhat loose recreation of the Nike advertisement we saw earlier. Again, Firefox is required to see the demo as intended.

See how that works conceptually? I placed an element (#versatility) on the page, hid it by giving it zero height, set it as the background-image on the body, then used the background-repeat property to duplicate it vertically down the page.

The element() background is live. That means the background-image appearance on the thing using it will change if the referenced HTML element changes. It’s the same sort of deal when working with custom properties: change the variable and it updates everywhere it’s used.
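Putting that description together, a minimal sketch might look like the following. This is illustrative rather than the demo’s actual code; only the #versatility ID comes from the demo described above, and Firefox is required.

```html
<!-- The text we want to repeat, collapsed so it takes up no space itself -->
<h1 id="versatility">Versatility</h1>

<style>
  #versatility {
    height: 0;
    overflow: hidden;
  }

  body {
    /* Use the hidden element as a live background image (Firefox only) */
    background-image: -moz-element(#versatility);
    background-repeat: repeat-y;
  }
</style>
```

Because the background is live, editing the text inside #versatility immediately updates every repeated copy down the page.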

There are, of course, other use cases for this feature. Check out how Preethi used it to make in-page scrolling navigation for an article. You could also use an HTML canvas element as a background if you want to get fancy. One way I’ve used it is to show screenshots of pages in a table of contents. Vincent De Oliveira has documented some wildly creative examples. Here’s an image-reflection effect, if you’re into retro web design:

Pretty neat, right? Again, I wish I could say this is a production-ready approach to get that neat design effect, but things are what they are at the moment. Actually, that’s a good reminder to make your voice heard for features you’d like to see implemented in browsers. There are open tickets in WebKit and Chromium where you can do that. Hopefully we’ll eventually get this feature in Safari-world and Chrome-world browsers.

The post How to Repeat Text as a Background Image in CSS Using element() appeared first on CSS-Tricks.



Client-Side Image Editing on Mobile

Michael Scharnagl:

Ever wanted to easily convert an image to a grayscale image on your phone? I do sometimes, and that’s why I build a demo using the Web Share Target API to achieve exactly that.

For this I used the Service Worker way to handle the data. Once the data is received on the client, I use drawImage from canvas to draw the image in canvas, use the grayscale filter to convert it to a grayscale image and output the final image.
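The grayscale step described there uses the canvas filter, but the same conversion can be sketched at the pixel level. This is illustrative code, not from the demo; it operates on the RGBA data you would get back from ctx.getImageData(), using the luminance weights the CSS grayscale() filter is defined with.

```javascript
// Convert RGBA pixel data to grayscale, mimicking the CSS grayscale(100%) filter.
// The weights come from the Filter Effects spec's luminance coefficients.
function grayscalePixels(data) {
  const out = new Uint8ClampedArray(data);
  for (let i = 0; i < out.length; i += 4) {
    const y = Math.round(
      0.2126 * out[i] +     // red
      0.7152 * out[i + 1] + // green
      0.0722 * out[i + 2]   // blue
    );
    out[i] = out[i + 1] = out[i + 2] = y; // alpha channel (i + 3) is left as-is
  }
  return out;
}

// A single pure-red pixel becomes a dark gray one.
console.log(grayscalePixels(new Uint8ClampedArray([255, 0, 0, 255])));
// → Uint8ClampedArray [ 54, 54, 54, 255 ]
```

In the actual demo, setting ctx.filter = 'grayscale(100%)' before ctx.drawImage() lets the browser do this same math for you.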

So you “install” the little microsite like a PWA, then you natively “share” an image to it and it comes back edited. Clever. Chrome on Android only at the moment.

Reminds me of this “Browser Functions” idea in reverse. That was a server doing things a browser can do; this is a browser doing things a server normally does.

Direct Link to Article

The post Client-Side Image Editing on Mobile appeared first on CSS-Tricks.



Creating a Modal Image Gallery With Bootstrap Components

Have you ever clicked on an image on a webpage that opens up a larger version of the image with navigation to view other photos?

Some folks call it a pop-up. Others call it a lightbox. Bootstrap calls it a modal. I mention Bootstrap because I want to use it to make the same sort of thing. So, let’s call it a modal from here on out.

Why Bootstrap? you might ask. Well, a few reasons:

  • I’m already using Bootstrap on the site where I want this effect, so there’s no additional overhead in terms of loading resources.
  • I want something where I have complete and easy control over aesthetics. Bootstrap is a clean slate compared to most modal plugins I’ve come across.
  • The functionality I need is fairly simple. There isn’t much to be gained by coding everything from scratch. I consider the time I save using the Bootstrap framework to be more beneficial than any potential drawbacks.

Here’s where we’ll end up:

Let’s go through that, bit by bit.

Step 1: Create the image gallery grid

Let’s start with the markup for a grid layout of images. We can use Bootstrap’s grid system for that.

<div class="row" id="gallery">
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-1">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-2">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-3">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-4">
  </div>
</div>

Now we need data attributes to make those images interactive. Bootstrap looks at data attributes to figure out which elements should be interactive and what they should do. In this case, we’ll be creating interactions that open the modal component and allow scrolling through the images using the carousel component.

About those data attributes:

  1. We’ll add data-toggle="modal"  and data-target="#exampleModal" to the parent element (#gallery). This makes it so clicking anything in the gallery opens the modal. We should also add the data-target value (#exampleModal) as the ID of the modal itself, but we’ll do that once we get to the modal markup.
  2. Let’s add data-target="#carouselExample"  and a data-slide-to attribute to each image. We could add those to the image wrappers instead, but we’ll go with the images in this post. Later on, we’ll want to use the data-target value (#carouselExample) as the ID for the carousel, so note that for when we get there. The values for data-slide-to are based on the order of the images.

Here’s what we get when we put that together:

<div class="row" id="gallery" data-toggle="modal" data-target="#exampleModal">
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-1.jpg" data-target="#carouselExample" data-slide-to="0">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-2.jpg" data-target="#carouselExample" data-slide-to="1">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-3.jpg" data-target="#carouselExample" data-slide-to="2">
  </div>
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-4.jpg" data-target="#carouselExample" data-slide-to="3">
  </div>
</div>

Interested in knowing more about data attributes? Check out the CSS-Tricks guide to them.

Step 2: Make the modal work

This is a carousel inside a modal, both of which are standard Bootstrap components. We’re just nesting one inside the other here. Pretty much a straight copy-and-paste job from the Bootstrap documentation.

Here are some important parts to watch for, though:

  1. The modal ID should match the data-target of the gallery element.
  2. The carousel ID should match the data-target of the images in the gallery.
  3. The carousel slides should match the gallery images and must be in the same order.

Here’s the markup for the modal with our attributes in place:

<!-- Modal markup: https://getbootstrap.com/docs/4.4/components/modal/ -->
<div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-hidden="true">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-header">
        <button type="button" class="close" data-dismiss="modal" aria-label="Close">
          <span aria-hidden="true">×</span>
        </button>
      </div>
      <div class="modal-body">
        <!-- Carousel markup goes here -->
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
      </div>
    </div>
  </div>
</div>

We can drop the carousel markup right in there, Voltron style!

<!-- Modal markup: https://getbootstrap.com/docs/4.4/components/modal/ -->
<div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-hidden="true">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-header">
        <button type="button" class="close" data-dismiss="modal" aria-label="Close">
          <span aria-hidden="true">×</span>
        </button>
      </div>
      <div class="modal-body">
        <!-- Carousel markup: https://getbootstrap.com/docs/4.4/components/carousel/ -->
        <div id="carouselExample" class="carousel slide" data-ride="carousel">
          <div class="carousel-inner">
            <div class="carousel-item active">
              <img class="d-block w-100" src="/image-1.jpg">
            </div>
            <div class="carousel-item">
              <img class="d-block w-100" src="/image-2.jpg">
            </div>
            <div class="carousel-item">
              <img class="d-block w-100" src="/image-3.jpg">
            </div>
            <div class="carousel-item">
              <img class="d-block w-100" src="/image-4.jpg">
            </div>
          </div>
          <a class="carousel-control-prev" href="#carouselExample" role="button" data-slide="prev">
            <span class="carousel-control-prev-icon" aria-hidden="true"></span>
            <span class="sr-only">Previous</span>
          </a>
          <a class="carousel-control-next" href="#carouselExample" role="button" data-slide="next">
            <span class="carousel-control-next-icon" aria-hidden="true"></span>
            <span class="sr-only">Next</span>
          </a>
        </div>
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
      </div>
    </div>
  </div>
</div>

Looks like a lot of code, right? Again, it’s basically straight from the Bootstrap docs, only with our attributes and images.

Step 3: Deal with image sizes

This isn’t necessary, but if the images in the carousel have different dimensions, we can crop them with CSS to keep things consistent. Note that we’re using Sass here.

// Use Bootstrap breakpoints for consistency.
$bootstrap-sm: 576px;
$bootstrap-md: 768px;
$bootstrap-lg: 992px;
$bootstrap-xl: 1200px;

// Crop thumbnail images.
#gallery {
  img {
    height: 75vw;
    object-fit: cover;

    @media (min-width: $bootstrap-sm) {
      height: 35vw;
    }

    @media (min-width: $bootstrap-lg) {
      height: 18vw;
    }
  }
}

// Crop images in the carousel.
.carousel-item {
  img {
    height: 60vw;
    object-fit: cover;

    @media (min-width: $bootstrap-sm) {
      height: 350px;
    }
  }
}

Step 4: Optimize the images

You may have noticed that the markup uses the same image files in the gallery as we do in the modal. That doesn’t need to be the case. In fact, it’s a better idea to use smaller, more performant versions of the images for the gallery. We’re going to be blowing up the images to their full size version anyway in the modal, so there’s no need to have the best quality up front.

The good thing about Bootstrap’s approach here is that we can use different images in the gallery than we do in the modal; they aren’t required to point to the same files.

So, for that, I’d suggest updating the gallery markup with lower-quality images:

<div class="row" id="gallery" data-toggle="modal" data-target="#exampleModal">
  <div class="col-12 col-sm-6 col-lg-3">
    <img class="w-100" src="/image-1-small.jpg" data-target="#carouselExample" data-slide-to="0">
  </div>
  <!-- and so on... -->
</div>

That’s it!

The site where I’m using this has already themed Bootstrap. That means everything is already styled to spec. That said, even if you haven’t themed Bootstrap you can still easily add custom styles! With this approach (Bootstrap vs. plugins), customization is painless because you have complete control over the markup and Bootstrap styling is relatively sparse.

Here’s the final demo:

The post Creating a Modal Image Gallery With Bootstrap Components appeared first on CSS-Tricks.



Do This to Improve Image Loading on Your Website

Jen Simmons explains how to improve image loading simply by using width and height attributes. The issue is that there’s a lot of jank when an image first loads, because an img naturally has a height of 0 before the image asset has been downloaded by the browser. The browser then has to repaint the page, which pushes all the content around. I’ve definitely seen this problem a lot on big news websites.

Anyway, Jen is recommending that we should add height and width attributes to images like so:

<img src="dog.png" height="400" width="1000" alt="A cool dog" />

This is because Firefox will now take those values into consideration and remove all the jank before the image has loaded. That means content will always stay in the same position, even if the image hasn’t loaded yet. In the past, I’ve worked on a bunch of projects where I’ve placed images lower down the page simply because I want to prevent this sort of jank. I reckon this fixes that problem quite nicely.
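If the image is also styled responsively, the usual pairing (a common pattern, not something from Jen’s article) is to keep height: auto in CSS so the browser can derive the reserved space from the attributes’ aspect ratio:

```css
img {
  max-width: 100%;
  /* Let the browser compute the rendered height from the
     width/height attributes' aspect ratio while the image loads */
  height: auto;
}
```

That way the attributes reserve correctly proportioned space at any layout width, instead of forcing a fixed pixel size.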

Direct Link to Article

The post Do This to Improve Image Loading on Your Website appeared first on CSS-Tricks.



Native Image Lazy Loading in Chrome Is Way Too Eager

Interesting research from Aaron Peters on <img loading="lazy" ... >:

On my 13 inch macbook, with Dock positioned on the left, the viewport height in Chrome is 786 pixels so images with loading="lazy" that are more than 4x the viewport down the page are eagerly fetched by Chrome on page load.

In my opinion, that is waaaaay too eager. Why not use a lower threshold value like 1000 pixels? Or even better: base the threshold value on the actual viewport height.

My guess is they chose not to over-engineer the feature by default and will improve it over time. By choosing a fairly high threshold, they ran a lower risk of it annoying users with layout shifts on pages with images that don’t use width/height attributes.

I think this unmerged Pull Request is the closest thing we have to a spec and it uses language like “scrolled into the viewport” which suggests no threshold at all.

Direct Link to Article

The post Native Image Lazy Loading in Chrome Is Way Too Eager appeared first on CSS-Tricks.



Simple Image Placeholders with SVG

A little open-source utility from Tyler Sticka that returns a data URL of an SVG to use as an image placeholder as needed.

I like the idea of self-running utilities like that, rather than depending on some third-party service, like placekitten or whatever. Not that I’d advocate for feature bloat here, but maybe it could be more fun like these generative placeholders, marching ants, or my favorite, adorable avatars.

Direct Link to Article

The post Simple Image Placeholders with SVG appeared first on CSS-Tricks.



ImageKit.io: Image Optimization That Plugs Into Your Infrastructure

Images are the most efficient means of showcasing a product or service on a website, and they make up most of a site’s visual content.

But the more images a webpage has, the more bandwidth it consumes, hurting page load speed, a key factor with a significant impact on not just our search ranking but also our conversion rates.

Image optimization helps serve the right images and improve the page load time. A faster website has a positive impact on our user experience.

With image optimization becoming a standard practice for websites and apps, finding a service that offers the most competent features with a coherent pricing model, one that integrates seamlessly with the existing infrastructure and business needs, is paramount to any website and its efficiency.

So what are the most common criteria for selecting an image optimization tool?

The importance of image optimization isn’t up for debate; our choice of tool or service for it may be. And it is a factor we should consider carefully. So where are the challenges?

  • Inability to integrate with existing infrastructure – Image optimization is very fundamental on modern websites. To implement it, we should not have to make any changes to our existing setup. But unfortunately, a lot of these tools or services can only be used with specific CDNs or are incapable of being integrated with our existing image servers or storage.
  • Dependency on their image storage – Some tools require us to move our images to their system, making us more dependent on them — an added hassle. And nobody wants to spend their time and effort on a virtually unnecessary step like migrating assets from one platform to another (and may be migrating to some other service in the future, if this one doesn’t work out). Not when it’s not needed. Not if we have hundreds of thousands of images and can never be sure if all the images have been migrated to the new tool or not.

Apart from the above challenges, the tool we choose could be expensive or have a complex pricing structure, offer only basic image-related features, or fail to deliver consistent performance across the globe.

Enter ImageKit.io.

It is the only tool that we will need for image optimization and transformation with minimal effort and almost no changes to our existing infrastructure.

It is a complete image CDN with optimization and transformation capabilities for images on websites and apps. What does that mean?

Feed ImageKit.io an original, un-optimized image and fetch it using an ImageKit.io URL, and it will deliver an optimized, correctly resized image to our users’ devices!

For example:

ImageKit.io resizes the image automatically by simply specifying the size in the image URL.


Here, the width and height of the final image are specified with “w” and “h” parameters in the URL. The output image has dimensions 300×100px, scaled down from an original size of 1000×1000px.
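As a concrete illustration, a URL of this shape produces the resize described above. The file name here is hypothetical, and the tr query parameter follows ImageKit.io’s documented URL pattern:

```
https://ik.imagekit.io/your_imagekit_id/sample.jpg?tr=w-300,h-100
```

Changing w-300,h-100 to other values resizes the same source image on the fly, with no re-upload.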

As we know, the right image formats can have a significant impact on our website’s bandwidth consumption.

For instance, the following PNG image,


is almost 2.5x the size of its JPG variant.

ImageKit.io automatically uses the correct output image format based on image content, image quality, browser support, and user device capabilities. Also, the above image will be converted and delivered as a WebP for all browsers supporting the WebP image format.

Just turn it on, and we’re good to go!

And not just simple resizing and cropping, ImageKit.io allows for more advanced transformations like smart crop, image and text overlays, image trimming, blurring, contrast and sharpness correction, and many more. It also comes with advanced delivery features like Brotli compression for SVG images and image security. ImageKit.io is the solution to all our image optimization needs. And more.

It also comes bundled with a stellar infrastructure — AWS CloudFront CDN, multi-region core processing servers, and a media library with automatic global backups.


It isn’t enough to find the most appropriate tool for our website. It should also provide an optimum experience fitting in just right with our existing setup.

ImageKit.io integration with the existing infrastructure

With S3 bucket or other web servers

It is usually a tedious task to integrate any image optimization tool into our infrastructure. For example, for images, we would already have a CMS in place to upload the images. And a server or storage to store all of those images. We would not want to make any significant changes to the existing setup that could potentially disrupt all our team’s processes.

But with ImageKit.io, the whole process from integration to image delivery takes just a few minutes. ImageKit.io allows us to plug our S3 bucket or webserver and start delivering optimized and transformed images instantly. No movement or re-upload of images is required. It takes just a few minutes to get the best optimization results.

We can attach our server or storage using “ADD ORIGIN” in the ImageKit.io dashboard

Such simple integrations with AWS S3 or web servers are not available even with some leading players in the market.

Images fetched through ImageKit.io are automatically optimized and converted to the appropriate format. And we can resize and transform any image by just adding URL parameters.

For example:

Image with width 300px and height is adjusted automatically to preserve aspect ratio:




We are using the https://ik.imagekit.io/your_imagekit_id format for the URLs, but custom domain names are available with ImageKit.io.

Learn more about it here.

Media library

ImageKit.io also provides a media library, a feature missing in many prominent image optimization tools. The media library is highly available file storage for all ImageKit.io users, with a simple user interface to upload, search, and manage files, images, and folders.

ImageKit.io Media Library

With platforms like Magento and WordPress

Most SMBs use WordPress to build their websites. ImageKit.io proves handy and almost effortless to set up for these websites.

Installing a plugin on WordPress and making some minor changes in the settings from the WP plugin is all we need to do to integrate ImageKit.io with our WordPress website for all essential optimizations. Learn more about it here.

Similarly, we can integrate ImageKit.io with our Magento store. The essential format and quality optimizations do not require any code change and can be set up by making some changes in Settings in the Magento Admin Panel. Learn more about it here.

For advanced use cases like granular image quality control, smart cropping, watermarking, text overlays, and DPR support, ImageKit.io offers a much better solution, with more robust features, more control over the optimizations and more complex transformations, than the ones provided natively by WordPress or Magento. And we can implement them by making some changes in our website’s template files.

With other CDNs

Most businesses already use CDNs for their websites, whether because they’re under existing contracts, because they run processes other than (or in addition to) image optimization on those CDNs, or simply because a particular CDN performs better in a specific region.

Even then, it is possible to use ImageKit.io with our choice of CDN. They support integrations with:

  • Akamai
  • Azure
  • Alibaba Cloud CDN
  • CloudFlare (if one is on an Enterprise plan in CloudFlare)
  • CloudFront
  • Fastly
  • Google CDN
  • Zenedge

To ensure correct optimizations and caching on our CDN, their team works closely with their customer’s team to set up the configuration.

Custom domain names

As image optimization has become a norm, services like Let’s Encrypt, which make obtaining and deploying an SSL certificate free, have also become standard practice. In such a scenario, enabling a custom domain name should not be charged heavily.

ImageKit.io helps us out here too.

It offers one custom domain name like images.example.com, with SSL and HTTP/2 enabled, free of charge with each paid account. Additional domain names are available at a nominal one-time fee of $50. There are no hidden costs after that.

One ImageKit.io account, multiple websites

For image optimization across multiple websites, or for agencies handling hundreds of sites, ImageKit.io proves to be an ideal solution. We’re free to attach as many “origins” (storage buckets or servers) to ImageKit.io as we like. Each website keeps its own storage or image server, which can be mapped to a separate URL endpoint within the same ImageKit.io account.

Configure the origin for a new site in the ImageKit.io dashboard and map it against a URL endpoint

The added advantage of using ImageKit.io for multiple websites and apps is their pricing model. When one consolidates the usage across their many business accounts, and if it crosses the 1TB mark, they can reach out to ImageKit.io’s team for a special quote. Learn more about it here.

Multiple server regions

ImageKit.io is a global image CDN. It has a distributed infrastructure with processing regions around the world: North Virginia (US East), North California (US West), Frankfurt (EU), Mumbai (India), Sydney (Australia), and Singapore, all aiming to reduce latency between their servers and our image origins.

Multiple server regions with processing and delivery nodes across the globe

Additionally, ImageKit.io stores the images (transformed as well as original) in their cache after fetching them from our origins. This intermediate caching is done to avoid processing the images again. Doing so not only reduces the load on our origin servers and storage but also reduces data transfer costs from them.

Custom cache time

There are times when a shorter cache time is required. For cases where one or more images are set to change at intervals, while the others remain the same, ImageKit.io offers the option to set up custom cache times in place of the default 180 days preset in ImageKit.io, giving us more control over how our resources are cached.

Custom cache time

This feature is usually not provided by third-party image CDNs or is available to enterprise customers only, but it is available with ImageKit.io on request.

Deliver non-image files through ImageKit.io CDN

ImageKit.io comes integrated with a CDN to store and serve resources. And while the primary use case is to deliver optimized images, it can do the same for the other static assets on our website, like JS, CSS, and fonts.

There is no point dividing our static resources between two services when one, ImageKit.io, can do the whole job on its own.

It’s worth mentioning that while ImageKit.io delivers non-image assets, it does not process or optimize them.

With ImageKit.io’s pricing model in mind, using their CDN may prove beneficial as it bills only based on one parameter — the total bandwidth consumed.

Cost efficiency

Most image optimization tools charge us for image storage or bill us based on requests or the number of master images or transformations.

ImageKit.io bills its customers on a single parameter, their output bandwidth. With ImageKit.io, we can optimize and transform to our heart’s content, without worrying about the costs they may incur. With our focus no longer on the billing costs, we can now focus on important tasks, like optimizing our images and website.

Usually, when an ImageKit.io account crosses 1TB total bandwidth consumption, that account becomes eligible for special pricing.

One can contact their team for a quote after crossing that threshold.

Final thoughts

Image optimization holds many benefits, but are we doing it right? And are we using the best service in the market for it?

ImageKit.io might prove to be the solution for all image optimization and management needs. But we don’t have to take their word for it. We can see for ourselves by joining the many developers and companies using ImageKit.io to deliver best-in-class image optimization to their users, by signing up for free here.

The post ImageKit.io: Image Optimization That Plugs Into Your Infrastructure appeared first on CSS-Tricks.

