Tag: Image

Inline Image Previews with Sharp, BlurHash, and Lambda Functions

Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.

I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.

Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.

So you mean progressive JPEGs?

You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.

This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.

You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.

Let’s look at two alternatives for this.

The libraries we’ll be using

Before we start, I’d like to call out the libraries I’ll be using for this post: Jimp for the quick-and-dirty previews, and Sharp with BlurHash for the more compact ones.

Making our own previews

Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
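For reference, that looks something like this, with the (hypothetical) Base64 payload truncated:

<img alt="Tiny blurry preview" src="data:image/jpeg;base64,/9j/4AAQSkZJRg..." />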

But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.

Let’s see how.

Generating our preview

For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.

Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then create a blurry preview of the image:

function resizeImage(src, maxWidth, quality) {
  return new Promise<ResizeImageResult>(res => {
    Jimp.read(src, async function (err, image) {
      if (image.bitmap.width > maxWidth) {
        image.resize(maxWidth, Jimp.AUTO);
      }
      image.quality(quality);

      const previewImage = image.clone();
      previewImage.quality(25).blur(8);
      const preview = await previewImage.getBase64Async(previewImage.getMIME());

      res({ STATUS: "success", image, preview });
    });
  });
}

For this post, I’ll be using this image provided by Flickr Commons:

Photo of the Big Boy statue holding a burger.

And here’s what the preview looks like:

Blurry version of the Big Boy statue.

If you’d like to take a closer look, here’s the same preview in a CodeSandbox.

Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.

Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:

const Landmark = ({ url, preview = "" }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);

  useEffect(() => {
    // make sure the image src is added after the onload handler
    if (imgRef.current) {
      imgRef.current.src = url;
    }
  }, [url, imgRef, preview]);

  return (
    <>
      <Preview loaded={loaded} preview={preview} />
      <img
        ref={imgRef}
        // the setTimeout artificially delays the swap so the preview is visible (demo only)
        onLoad={() => setTimeout(() => setLoaded(true), 3000)}
        style={{ display: loaded ? "block" : "none" }}
      />
    </>
  );
};

const Preview: FunctionComponent<LandmarkPreviewProps> = ({ preview, loaded }) => {
  if (loaded) {
    return null;
  } else if (typeof preview === "string") {
    return <img key="landmark-preview" alt="Landmark preview" src={preview} style={{ display: "block" }} />;
  } else {
    return <PreviewCanvas preview={preview} loaded={loaded} />;
  }
};

Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.

Note that we set the image component’s src after the onLoad handler to ensure it fires. We show the preview, and when the real image loads, we swap it in.

Improving things with BlurHash

The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.

BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:

First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.

Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.
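(Checking the math: 83 × 83 = 6,889, which is indeed a little more than 19 × 19 × 19 = 6,859.)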

The README also states how Signal and Mastodon use BlurHash.

Let’s see it in action.

Generating blurhash previews

For this, we’ll need to use the Sharp library.


Note

To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.

Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.

To run this code in an AWS Lambda, you’ll need to install the library like this:

"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"

And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.


This code will look at the size of the image and create a blurhash preview:

import { encode, isBlurhashValid } from "blurhash";
const sharp = require("sharp");

export async function getBlurhashPreview(src) {
  const image = sharp(src);
  const dimensions = await image.metadata();

  return new Promise(res => {
    const { width, height } = dimensions;

    image
      .raw()
      .ensureAlpha()
      .toBuffer((err, buffer) => {
        const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);
        if (isBlurhashValid(blurhash)) {
          return res({ blurhash, w: width, h: height });
        } else {
          return res(null);
        }
      });
  });
}

Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for RGB and Alpha.

Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.

Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.

The real work happens here:

const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);

We’re calling blurhash’s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY, which, from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.

Let’s see what this produces for that same image:

{   "blurhash" : "UAA]{ox^0eRiO_bJjdn~9#M_=|oLIUnzxtNG",   "w" : 276,   "h" : 400 }

That’s incredibly small! The tradeoff is that using this preview is a bit more involved.

Basically, we need to call blurhash’s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before, and why we were rendering it if the type of our preview was not a string: our blurhash previews use an entire object, containing not only the preview string but also the image dimensions.

Let’s look at our PreviewCanvas component:

const PreviewCanvas: FunctionComponent<CanvasPreviewProps> = ({ preview }) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useLayoutEffect(() => {
    const pixels = decode(preview.blurhash, preview.w, preview.h);
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};

Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.

Let’s see what the image previews look like:

In a sense, it’s less detailed than our previous previews. But I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.

Test and use what works best for you.

Wrapping up

There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.

A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.

Happy coding!



Superior Image Optimization: An Ideal Solution Using Gatsby & ImageEngine

(This is a sponsored post.)

In recent years, the Jamstack methodology for building websites has become increasingly popular. Performant, scalable, and secure, it’s easy to see why it’s becoming an attractive way for developers to build websites.

GatsbyJS is a static site generator platform. It’s powered by React, a front-end JavaScript library for building user interfaces, and uses GraphQL, an open-source data query and manipulation language, to pull structured data from other sources, typically a headless CMS like Contentful.

While GatsbyJS and similar platforms have revolutionized much about the web development process, one stubborn challenge remains: image optimization. Even using a modern front-end development framework like GatsbyJS, it tends to be a time-intensive and frustrating exercise.

For most modern websites, it doesn’t help much if you run on a performant technology but your images aren’t optimized. Today, images are the largest (and growing) contributor to page weight, and Google has singled them out as the most significant opportunity for improving performance.

With that in mind, I want to discuss how using an image CDN as part of your technology stack can bring improvements both in terms of website performance and the entire development process.

A Quick Introduction to Gatsby

GatsbyJS is so much more than the conventional static site generators of old. Yes, you still have the ability to integrate with a software version control platform, like Git, as well as to build, deploy, and preview Gatsby projects. However, its services consist of a unified cloud platform that includes high-speed, scalable, and secure hosting as well as expert technical support and powerful third-party integrations.

What’s more, all of it comes wrapped in a user-friendly development platform that shares many similarities with the most popular CMSs of the day. For example, you can leverage pre-designed site templates or pre-configured functions (effectively website elements and modules) to speed up the production process.

It also offers many benefits for developers by allowing you to work with leading frameworks and languages, like JavaScript, React, webpack, and GraphQL, as well as baked-in capabilities to deal with performance, development iterations, etc.

For example, Gatsby does a lot to optimize your performance without any intervention. It comes with built-in code-splitting, prefetching resources, and lazy-loading. Static sites are generally known for being inherently performant, but Gatsby kicks it up a notch.

Does Gatsby Provide Built-in Image Optimization?

Gatsby does, in fact, offer built-in image optimization capabilities.

It recently upgraded in this regard, replacing the now-deprecated gatsby-image package with the brand-new Gatsby image plugin. This plugin consists of two components, for static and dynamic images respectively. Typically, you would use the dynamic component if you’re handling images from a CMS, like Contentful.

Installing this plugin allows you to programmatically pass commands to the underlying framework in the form of properties, shown below:

  • layout (default: constrained / CONSTRAINED): Determines the size of the image and its resizing behavior.
  • width/height (default: source image size): Change the size of the image.
  • aspectRatio (default: source image aspect ratio): Force a specific ratio between the image’s width and height.
  • placeholder (default: "dominantColor" / DOMINANT_COLOR): Choose the style of temporary image shown while the full image loads.
  • formats (default: ["auto", "webp"] / [AUTO, WEBP]): File formats of the images generated.
  • transformOptions (default: [fit: "cover", cropFocus: "attention"]): Options to pass to Sharp to control cropping and other image manipulations.
  • sizes (generated automatically): The <img> sizes attribute, passed to the img tag. This describes the display size of the image and does not affect the generated images. You are only likely to change this if you are using full-width images that do not span the full width of the screen.
  • quality (default: 50): The default image quality generated. This is overridden by any format-specific option.
  • outputPixelDensities (default: [1, 2] for fixed images; [0.25, 0.5, 1, 2] for constrained): A list of image pixel densities to generate. It will never generate images larger than the source and will always include a 1✕ image. Each density is multiplied by the image width to give the generated sizes. For example, a 400px-wide constrained image would generate 100, 200, 400, and 800px-wide images by default. Ignored for full-width layout images, which use breakpoints instead.
  • breakpoints (default: [750, 1000, 1366, 1920]): Output widths to generate for full-width images. The default is to generate widths for common device resolutions. It will never generate an image larger than the source image. The browser will automatically choose the most appropriate one.
  • blurredOptions (default: none): Options for the low-resolution placeholder image. Ignored unless placeholder is blurred.
  • tracedSVGOptions (default: none): Options for traced placeholder SVGs. See potrace options. Ignored unless placeholder is traced SVG.
  • jpgOptions (default: none): Options to pass to Sharp when generating JPG images.
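To make that concrete, here’s a brief sketch of passing a few of these options as props to the plugin’s StaticImage component (the image path and alt text are placeholders):

import * as React from "react";
import { StaticImage } from "gatsby-plugin-image";

const Hero = () => (
  <StaticImage
    src="../images/hero.jpg"
    alt="Hero image"
    layout="constrained"
    width={800}
    placeholder="blurred"
    formats={["auto", "webp"]}
    quality={50}
  />
);

export default Hero;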

As you can see, that’s quite the toolkit to help developers process images in a variety of ways. The various options can be used to transform, style, and optimize images for performance as well as make images behave dynamically in a number of ways.

In terms of performance optimization, there are a few options that are particularly interesting:

  • Lazy-loading: Defers loading of off-screen images until they are scrolled into view.
  • Width/height: Resize image dimensions according to how they will be used.
  • Placeholder: When lazy-loading or while an image is loading in the background, use a placeholder. This can help to avoid performance penalties for core web vitals, like Cumulative Layout Shift (CLS).
  • Format: Different formats have inherently more efficient encoding. GatsbyJS supports WebP and AVIF, two of the most performant next-gen image formats.
  • Quality: Apply a specified level of quality compression to the image between 0 and 100.
  • Pixel density: A lower pixel density will save image data and can be optimized according to the screen size and PPI (pixels per inch).
  • Breakpoints: Breakpoints are important for ensuring that you serve a version of an image that’s sized appropriately for a certain threshold of screen sizes, especially so that you serve smaller images for smaller screens, like tablets or mobile phones. This is called responsive syntax (see the sketch after this list).
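As a reference point, here’s a minimal sketch of what responsive syntax looks like in plain HTML (the file names are placeholders):

<img
  src="photo-800.jpg"
  srcset="photo-480.jpg 480w, photo-800.jpg 800w, photo-1200.jpg 1200w"
  sizes="(max-width: 600px) 480px, 800px"
  alt="A photo that scales with the viewport" />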

So, all in all, Gatsby provides developers with a mature and sophisticated framework to process and optimize image content. The only important feature that’s missing is some type of built-in support for client hints.

However, there is one big catch: All of this has to be implemented manually. While GatsbyJS does use default settings for some image properties, it doesn’t offer built-in intelligence to automatically and dynamically process and serve optimized images tailored to the accessing device.

If you want to create an ideal image optimization solution, your developers will first have to implement device detection capabilities. They will then need to develop the logic to dynamically select optimization operations based on the specific device accessing your web app.

Finally, this code will continually need to be changed and updated. New devices come out all the time with differing properties. What’s more, standards regarding performance as well as image optimization are continually evolving. Even significant changes, additions, or updates to your own image assets may trigger the need to rework your implementation. Not to mention the time it takes to simply stay abreast of the latest information and trends and to make sure development is carried out accordingly.

Another problem is that you will need to continually test and refine your implementation. Without the help of an intelligent optimization engine, you will need to “feel out” how your settings will affect the visual quality of your images and continually fine-tune your approach to get the right results.

This will add a considerable amount of overhead to your development workload in the immediate and long term.

Gatsby also admits that these techniques are quite CPU intensive. In that case, you might want to preoptimize images. However, this also needs to be manually implemented in-code on top of being even less dynamic and flexible.

But, what if there was a better way to optimize your image assets while still enjoying all the benefits of using a platform like Gatsby? The solution I’m about to propose will help solve a number of key issues that arise from using Gatsby (and any development framework, for that matter) for the majority of your image optimization:

  • Reduce the impact optimizing images have on the development and design process in the immediate and long term.
  • Remove an additional burden and responsibility from your developers’ shoulders, freeing up time and resources to work on the primary aspects of your web app.
  • Improve your web app’s ability to dynamically and intelligently optimize image assets for each unique visitor.
  • All of this, while still integrating seamlessly with GatsbyJS as well as your CMS (in most cases).

Introducing a Better Way to Optimize Image Assets: ImageEngine

In short, ImageEngine is an intelligent, device-aware image CDN.

ImageEngine works just like any other CDN (content delivery network), such as Fastly, Akamai, or Cloudflare. However, it specializes in optimizing and serving image content specifically. 

Like its counterparts, you provide ImageEngine with the location where your image files are stored, it pulls them to its own image optimization servers, and then generates and serves optimized variants of images to your site visitors.

In doing this, ImageEngine is designed to decrease image payload, deliver optimized images tailored to each unique device, and serve images from edge nodes across its global CDN.

Basically, image CDNs gather information on the accessing device by analyzing the ACCEPT header. A typical ACCEPT header looks like this (for Chrome):

image/avif,image/webp,image/apng,image/*,*/*;q=0.8

As you can see, this only provides the CDN with the accepted image formats and their relative preference (the q value); it says nothing about the device itself.

More advanced CDNs, ImageEngine, included, can also leverage client hints for more in-depth data points, such as the DPR (device pixel ratio) and Viewport-Width. This allows a larger degree of intelligent decision-making to more effectively optimize image assets while preserving visual quality.

However, ImageEngine takes things another step further by being the only mainstream image CDN that has built-in WURFL device detection. This gives ImageEngine the ability to read more information on the device, such as the operating system, resolution, and PPI (pixels per inch).

Using AI and machine-learning algorithms, this extra data means ImageEngine has virtually unparalleled decision-making power. Without any manual intervention, ImageEngine can perform all of the following image optimization operations automatically:

  • Resize your images according to the device screen size without the need for responsive syntax.
  • Intelligently compress the quality of the image to reduce the payload while preserving visual quality, using metrics like the Structural Similarity Index Method (SSIM).
  • Convert images to the most optimal, next-gen encoding formats. On top of WebP and AVIF, ImageEngine also supports JPEG 2000 (Safari), JPEG XR (Internet Explorer and Edge), and MP4 (for animated GIFs).

These settings also play well with GatsbyJS’ built-in capabilities. So, you can natively implement breakpoints, lazy-loading, and image placeholders that don’t require any expertise or intelligent decision-making using Gatsby. Then, you can let ImageEngine handle the more advanced and intelligence-driven operations, like quality compression, image formatting, and resizing.

The best part is that ImageEngine does all of this automatically, making it a completely hands-off image optimization solution. ImageEngine will automatically adjust its approach with time as the digital image landscape and standards change, freeing you from this concern.

In fact, ImageEngine recommends using default settings for getting the best results in most situations.

What’s more, this logic is built into the ImageEngine edge servers. Firstly, with over 20 global PoPs, it means that the images are processed and served as close to the end-user as possible. It also means that the majority of processing happens server-side. With the exception of installing the ImageEngine Gatsby plugin, there is virtually no processing overhead at build or runtime.

This type of dynamic, intelligent decision-making will only become more important in the near and medium-term. Thanks to the number and variety of devices growing by the year, it’s becoming harder and harder to implement image optimization in a way that’s sensitive to every device.

That’s why ImageEngine can give you the edge in a mobile-first future that’s continually evolving. Simply put, ImageEngine will help futureproof your Gatsby web app.

How to Integrate ImageEngine with Gatsby: A Quick Guide

Integrating ImageEngine with GatsbyJS is trivial if you have experience installing any other third-party plugins. However, the steps will differ somewhat based on which backend CMS you use with GatsbyJS and where you store your image assets.

For example, you could use it alongside WordPress, Drupal, Contentful, and a range of other popular CMSs.

Usually, your stack would look something like this:

  • A CMS, like Contentful, to host your “space” where you’ll manage your assets and create structured data. Your images will be uploaded and stored in your space.
  • A versioning platform, like GitHub, to host your code and manage your versions and branches.
  • GatsbyJS to host your workspace, where you’ll build, deploy, and host the front end of your website.

So, the first thing you need to do is set up a site, or project, using GatsbyJS and link it to your CMS.

Next, you’ll install the ImageEngine plugin for GatsbyJS:

npm install @imageengine/gatsby-plugin-imageengine

You’ll also need to create a delivery address for your images via ImageEngine. You can get one by signing up for the 30-day trial here. The only thing you need to do is supply ImageEngine with the host origin URL. For Contentful, it’s images.ctfassets.net and for Sanity.io, it’s cdn.sanity.io.

ImageEngine will then provide you with a delivery address, usually in the format of {random_string}.cdn.imgeng.in.

You’ll use this delivery address to configure the ImageEngine plugin in your gatsby-config.js file. As part of this, you’ll indicate the source (Contentful, e.g.) as well as provide the ImageEngine delivery address. You can find examples of how that’s done in the documentation here.
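While the exact option shape is best taken from that documentation, the configuration might look something like this sketch (the option keys and delivery address here are assumptions for illustration):

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: "@imageengine/gatsby-plugin-imageengine",
      options: {
        // NOTE: these option keys are illustrative assumptions;
        // check the plugin docs for the exact shape.
        sources: [
          {
            source: "contentful",                          // where the original assets live
            imageEngineUrl: "https://xxxx.cdn.imgeng.in/", // your ImageEngine delivery address
          },
        ],
      },
    },
  ],
};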

Note that the ImageEngine plugin features built-in support for Contentful and Sanity.io as asset sources. You can also configure the plugin to pull locally stored images or from another custom source.

Once that’s done, development can begin!

Basically, Gatsby will create GraphQL nodes for the elements created in your CMS (e.g., ContentfulAsset, allSanityImageAsset, etc.). ImageEngine will then create a child node of childImageEngineAsset for each applicable element node.

You’ll then use GraphQL queries in the code for your Gatsby pages to specify the properties of the image variants you want to serve. For example, you can display an image that’s 500 ✕ 300px in the JPG format using the following query:

gatsbyImageData(width: 500, height: 300, format: jpg)
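In context, that directive might sit inside a larger page query, along the lines of this sketch (the childImageEngineAsset node comes from the ImageEngine plugin; the surrounding field names follow the Contentful source plugin’s conventions, and the title filter is hypothetical):

query {
  contentfulAsset(title: { eq: "hero" }) {
    childImageEngineAsset {
      gatsbyImageData(width: 500, height: 300, format: jpg)
    }
  }
}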

Once again, you should refer to the documentation for a more thorough treatment. You can find guides for integrating ImageEngine with Contentful, Sanity.io, and any other Gatsby project.

For a competent Gatsby user, integrating ImageEngine will only take a few minutes. And, ongoing maintenance will be minimal. If you know how to use GraphQL, then the familiar syntax to send directives and create specific image variants will be nearly effortless and should take about the same time as manually optimizing images using standard Gatsby React.

Conclusion

For most web projects, ImageEngine can reduce image payloads by up to 80%. That number can go up if you have especially high-res images.

However, you can really get the most out of your image optimization by combining the best parts of a static front-end development framework like Gatsby and an image CDN like ImageEngine. Specifically, you can use both to target Google’s core web vitals:

  • ImageEngine’s dynamic, intelligent, run-time optimization will optimize payloads to improve LCP, SI, FCP, and other data size-related metrics.
  • Using Gatsby, you can optimize for CLS and FID using best practices and by natively implementing lazy loading and image placeholders.

ImageEngine provides an Image Speed Test tool where you can quickly evaluate your current performance and see the impact of ImageEngine on key metrics. Even for a simple GatsbyJS project, the results in the Demo tool can be impressive. If you extrapolate those percentages for a larger, image-heavy site, combining Gatsby with ImageEngine could have a dramatic impact on the performance and user experience of your web app. What’s more, your developers will thank you for sparing them from the challenging and time-consuming chore of manual image optimization.



Social Image Generator + Jetpack

I feel like my quest to make sure this site had pretty sweet (and automatically-generated) social media images (e.g. Open Graph) came to a close once I found Social Image Generator.

The trajectory there was that I ended up talking about it far too much on ShopTalk, to the point that it became a common topic in our Discord (join via Patreon). Andy Bell pointed me at Daniel Post’s Social Image Generator, and I immediately bought and installed it. I heard from Daniel over Twitter, and we ended up having long conversations about the plugin and my desires for it. Ultimately, Daniel helped me code up some custom designs and write logic to create different social media image designs depending on the information it had (for example, if we provide quote text, it uses a special design for that).

As you likely know, Automattic has been an awesome and long-time sponsor of this site, and we often promote Jetpack as a part of that (as I’m a heavy user of it, it’s easy to talk about). One of Jetpack’s many features is helping out with social media. (I did a video on how we do it.) So, it occurred to me… maybe this would be a sweet feature for Jetpack. I mentioned it to the Automattic team and they were into the idea of talking to Daniel. I introduced them back in May, and now it’s September and… Jetpack Acquires WordPress Plugin Social Image Generator

“When I initially saw Social Image Generator, the functionality looked like an ideal fit with our existing social media tools,” said James Grierson, General Manager of Jetpack. “I look forward to the future functionality and user experience improvements that will come out of this acquisition. The goal of our social product is to help content creators expand their audience through increased distribution and engagement. Social Image Generator will be a key component of helping us deliver this to our customers.”

Daniel will also be joining Jetpack to continue developing Social Image Generator and integrating it with Jetpack’s social media features.

Rob Pugh

Heck yeah, congrats Daniel. My dream for this thing is that, eventually, we could start building social media images via regular WordPress PHP templates. The trick is that you need something to screenshot them, like Puppeteer or Playwright. An average WordPress install doesn’t have that available, but because Jetpack is fundamentally a service that leverages the great WordPress cloud to do above-and-beyond things, this is in the realm of possibility.

WP Tavern also covered the news:

Automattic is always on the prowl for companies that are doing something interesting in the WordPress ecosystem. The Social Image Generator plugin expertly captured a new niche with an interface that feels like a natural part of WordPress and impressed our chief plugin critic, Justin Tadlock, in a recent review.

“Automattic approached me and let me know they were fans of my plugin,” Post said. “And then we started talking to see what it would be like to work together. We were actually introduced by Chris Coyier from CSS-Tricks, who uses both our products.”

Sarah Gooding

Just had to double-toot my own horn there, you understand.



Frameworks Helping Image Usage

I recently blogged about how images are hard and it ended up being a big ol’ checklist of things that you could/should think about and implement when placing images on websites.

I think it’s encouraging to see frameworks — these beloved tools that we leverage to help us build websites — offering additional tools within them to help tackle this checklist and take on the hard (but perfectly suited for computers) tasks of displaying images.

Some examples: Next.js has its Image component, and Nuxt has an equivalent, both of which come up again below.

I’m not sure I’d give any of them flying colors as far as ease of use. There is stuff to install, configure, and it’s likely you’ll only reach for it if you already know you should be doing it, and your pre-existing knowledge of image performance can help you through the process. It’s not the failing of these frameworks; this stuff is complicated and the audience is developers who are, fair is fair, a little into the idea of control.

I do gotta hand it to my BFF WordPress on this one. You literally do nothing and just get responsive images out of the box. If you need to tap into the filters to control things, you can do that like you can anything else in WordPress: through hooks. If you go for Jetpack (and I highly encourage you to), you flip on the (incredibly, free) Site Accelerator feature, which takes all those images, optimizes them, CDN-hosts them, lazy loads them, and serves them in formats, like WebP, when possible (I would assume more next-gen formats will happen eventually). Jetpack is a sponsor, so full disclosure there, but I use it very much on purpose because the experience makes image handling something I literally don’t have to think about.

Another interesting aspect of frameworks-helping-with-images is that some of it was born out of Google getting involved. Google calls it “Aurora”:

For almost two years, we have worked with some of the most popular frameworks such as Next.js, Nuxt and Angular, working to improve web performance.

The project does all sorts of stuff, including handing out money to help fund open-source tools and providing direct help to specific initiatives. Like images:

An Image component in Next.js that encapsulates best practices for image loading, followed by a collaboration with Nuxt on the same. Use of this component has resulted in significant improvements to paint times and layout shift (example: 57% reduction in Largest Contentful Paint and 100% reduction in Cumulative Layout Shift on nextjs.org/give).

Cool, right? I think so? What weirds me out about this just a smidge is that it feels meaningful when Google’s squad rolls up to contribute to a framework. They didn’t pick underdog frameworks here, surely on purpose, because they want their work to impact the most people. So, frameworks that are already successful benefit from A-squad contributions. A rich-get-richer situation. I’m not sure it’s a huge problem, but it’s just something I think about.



Exploring the CSS Paint API: Image Fragmentation Effect

In my previous article, I created a fragmentation effect using CSS mask and custom properties. It was a neat effect but it has one drawback: it uses a lot of CSS code (generated using Sass). This time I am going to redo the same effect but rely on the new Paint API. This drastically reduces the amount of CSS and completely removes the need for Sass.

Here is what we are making. Like in the previous article, only Chrome and Edge support this for now.

See that? No more than five CSS declarations and yet we get a pretty cool hover animation.

What is the Paint API?

The Paint API is part of the Houdini project. Yes, “Houdini,” the strange term that everyone is talking about. A lot of articles already cover the theoretical aspect of it, so I won’t bother you with more. If I had to sum it up in a few words, I would simply say: it’s the future of CSS. The Paint API (and the other APIs that fall under the Houdini umbrella) allow us to extend CSS with our own functionality. We no longer need to wait for the release of new features because we can do it ourselves!

From the specification:

An API for allowing web developers to define a custom CSS <image> with javascript [sic], which will respond to style and size changes.

And from the explainer:

The CSS Paint API is being developed to improve the extensibility of CSS. Specifically this allows developers to write a paint function which allows us to draw directly into an elements [sic] background, border, or content.

I think the idea is pretty clear. We can draw what we want. Let’s start with a very basic demo of background coloration:

  1. We add the paint worklet using CSS.paintWorklet.addModule('your_js_file').
  2. We register a new paint method called draw.
  3. Inside that, we create a paint() function where we do all the work. And guess what? Everything is like working with <canvas>. That ctx is the 2D context, and I simply used some well-known functions to draw a red rectangle covering the whole area.
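As a rough sketch of those three steps, assuming a worklet file named draw.js and an element with a .box class:

// main page: step 1, add the paint worklet
CSS.paintWorklet.addModule('draw.js');

// draw.js: step 2, register a paint method called "draw"
registerPaint('draw', class {
  // step 3: the paint() function does all the work
  paint(ctx, size) {
    // fill the element's entire paint area with red
    ctx.fillStyle = 'red';
    ctx.fillRect(0, 0, size.width, size.height);
  }
});

// in the stylesheet: .box { background-image: paint(draw); }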

This may look unintuitive at first glance, but notice that the main structure is always the same: the three steps above are the “copy/paste” part that you repeat for each project. The real work is the code we write inside the paint() function.

Let’s add a variable:

As you can see, the logic is pretty simple. We define the getter inputProperties with our variables as an array. We add properties as a third parameter to paint() and later we get our variable using properties.get().
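Putting that together, here is a sketch with a hypothetical --my-color variable:

// draw.js
registerPaint('draw', class {
  // declare which CSS custom properties the worklet reads
  static get inputProperties() {
    return ['--my-color'];
  }

  paint(ctx, size, properties) {
    // properties.get() returns a typed CSS value; stringify it for canvas
    ctx.fillStyle = properties.get('--my-color').toString().trim();
    ctx.fillRect(0, 0, size.width, size.height);
  }
});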

That’s it! Now we have everything we need to build our complex fragmentation effect.

Building the mask

You may wonder why we’re using the Paint API to create a fragmentation effect. We said it’s a tool to draw images, so how will it allow us to fragment an image?

In the previous article, I did the effect using different mask layers, where each one is a square defined with a gradient (remember that a gradient is an image), so we got a kind of matrix, and the trick was to adjust the alpha channel of each one individually.

This time, instead of using many gradients, we will define only one custom image for our mask, and that custom image will be handled by our Paint API.

An example please!

In the above, I have created an image having an opaque color covering the left part and a semi-transparent one covering the right part. Applying this image as a mask gives us the logical result of a half-transparent image.

Now all we need to do is to split our image into more parts. Let’s define two variables and update our code:

The relevant part of the code is the following:

const n = properties.get('--f-n');
const m = properties.get('--f-m');

const w = size.width / n;
const h = size.height / m;

for (var i = 0; i < n; i++) {
  for (var j = 0; j < m; j++) {
    ctx.fillStyle = 'rgba(0,0,0,' + Math.random() + ')';
    ctx.fillRect(i * w, j * h, w, h);
  }
}

n and m define the dimensions of our matrix of rectangles, and w and h are the size of each rectangle. Then we have a basic for loop to fill each rectangle with a random semi-transparent color.

With a little JavaScript, we get a custom mask that we can easily control by adjusting the CSS variables:

Now, we need to control the alpha channel in order to create the fading effect of each rectangle and build the fragmentation effect.

Let’s introduce a third variable that we use for the alpha channel and that we also change on hover.

We defined a CSS custom property as a <number> that we transition from 1 to 0, and that same property is used to define the alpha channel of our rectangles. Nothing fancy will happen on hover because all the rectangles will fade the same way.
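For the record, transitioning a custom property like this only works if the browser knows the property’s type. One way to register it as a <number>, using the Properties and Values API (another piece of Houdini; the property name matches the one used in the demos):

CSS.registerProperty({
  name: '--f-o',       // the variable we transition on hover
  syntax: '<number>',  // typed as a number so it can be interpolated
  inherits: false,
  initialValue: 1
});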

We need a trick to prevent fading of all the rectangles at the same time, instead creating a delay between them. Here is an illustration to explain the idea I am going to use:

The above is showing the alpha animation for two rectangles. First, we define a variable L that should be greater than or equal to 1. Then, for each rectangle of our matrix (i.e., for each alpha channel), we perform a transition between X and Y where X - Y = L, so we have the same overall duration for all the alpha channels. X should be greater than or equal to 1 and Y smaller than or equal to 0.

Wait, the alpha value shouldn’t be in the range [1 0], right?

Yes, it should! And all the tricks that we’re working on rely on that. Above, the alpha is animating from 8 to -2, meaning we have an opaque color in the [8 1] range, a transparent one in the [0 -2] range and an animation within [1 0]. In other words, any value bigger than 1 will have the same effect as 1, and any value smaller than 0 will have the same effect as 0.

Animation within [1 0] will not happen at the same time for both our rectangles. Rectangle 2 will reach [1 0] before Rectangle 1 will. We apply this to all the alpha channels to get our delayed animations.

In our code we will update this:

rgba(0,0,0,'+(o)+') 

…to this:

rgba(0,0,0,'+((Math.random()*(l-1) + 1) - (1-o)*l)+') 

L is the variable illustrated previously, and O is the value of our CSS variable that transitions from 1 to 0.

When O=1, we have (Math.random()*(l-1) + 1). Considering the fact that the random() function gives us a value within the [0 1] range, the final value will be in the [L 1] range.

When O=0, we have (Math.random()*(l-1) + 1 - l) and a value within the [0 1-L] range.

L is our variable to control the delay.

Let’s see this in action:

We are getting closer. We have a cool fragmentation effect but not the one we saw in the beginning of the article. This one isn’t as smooth.

The issue is related to the random() function. We said that each alpha channel needs to animate between X and Y, so logically those values need to remain the same. But the paint() function is called a bunch of times during the transition, so each time, the random() function gives us different X and Y values for each alpha channel; hence the “random” effect we are getting.

To fix this, we need to find a way to store the generated values so they are always the same for each call of the paint() function. Let’s consider a pseudo-random function, a function that always generates the same sequence of values. In other words, we want to control the seed.

Unfortunately, we cannot do this with the JavaScript’s built-in random() function, so like any good developer, let’s pick one up from Stack Overflow:

const mask = 0xffffffff;
const seed = 30; /* update this to change the generated sequence */
let m_w = (123456789 + seed) & mask;
let m_z = (987654321 - seed) & mask;

let random = function() {
  m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask;
  m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;
  var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
  result /= 4294967296;
  return result;
}

And the result becomes:

We have our fragmentation effect without complex code:

  • a basic nested loop to create NxM rectangles
  • a clever formula for the alpha channel to create the transition delay
  • a ready random() function taken from the Net

That’s it! All you have to do is to apply the mask property to any element and adjust the CSS variables.

Fighting the gaps!

If you play with the above demos you will notice, in some particular cases, strange gaps between the rectangles.

To avoid this, we can extend the area of each rectangle with a small offset.

We update this:

ctx.fillRect(i*w, j*h, w, h); 

…with this:

ctx.fillRect(i*w-.5, j*h-.5, w+.5, h+.5); 

It creates a small overlap between the rectangles that compensates for the gaps between them. There is no particular logic with the value 0.5 I used. You can go bigger or smaller based on your use case.

Want more shapes?

Can the above be extended to consider more than rectangular shapes? Sure it can! Let’s not forget that we can use Canvas to draw any kind of shape, unlike pure CSS shapes where we sometimes need some hacky code. Let’s try to build that triangular fragmentation effect.

After searching the web, I found something called Delaunay triangulation. I won’t go into the deep theory behind it, but it’s an algorithm for a set of points to draw connected triangles with specific properties. There are lots of ready-to-use implementations of it, but we’ll go with Delaunator because it’s supposed to be the fastest of the bunch.

We first define a set of points (we will use random() here) then run Delaunator to generate the triangles for us. In this case, we only need one variable that defines the number of points.

const n = properties.get('--f-n');
const o = properties.get('--f-o');
const w = size.width;
const h = size.height;
const l = 7;

/* we always include the four corners (as [x, y] pairs) */
var dots = [[0, 0], [w, 0], [0, h], [w, h]];
/* we generate N random points within the area of the element */
for (var i = 0; i < n; i++) {
  dots.push([random() * w, random() * h]);
}
/* we call Delaunator to generate the triangles */
var delaunay = Delaunator.from(dots);
var triangles = delaunay.triangles;

/* we loop over the triangles' points */
for (var i = 0; i < triangles.length; i += 3) {
  /* we draw the path of the triangle */
  ctx.beginPath();
  ctx.moveTo(dots[triangles[i]][0], dots[triangles[i]][1]);
  ctx.lineTo(dots[triangles[i + 1]][0], dots[triangles[i + 1]][1]);
  ctx.lineTo(dots[triangles[i + 2]][0], dots[triangles[i + 2]][1]);
  ctx.closePath();

  /* the alpha value, using the same delay formula as before */
  var alpha = (random() * (l - 1) + 1) - (1 - o) * l;
  /* we fill the area of the triangle with the semi-transparent color */
  ctx.fillStyle = 'rgba(0,0,0,' + alpha + ')';
  /* we also stroke with the same color to fight the gaps */
  ctx.strokeStyle = 'rgba(0,0,0,' + alpha + ')';
  ctx.stroke();
  ctx.fill();
}

I have nothing more to add to the comments in the above code. I simply used some basic JavaScript and Canvas stuff and yet we have a pretty cool effect.

We can make even more shapes! All we have to do is to find an algorithm for it.

I cannot move on without doing the hexagon one!

I took the code from this article written by Izan Pérez Cosano. Our variable is now R, which defines the dimension of one hexagon.
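I won’t reproduce Izan’s code here, but the core of it boils down to tracing hexagons on the canvas. A generic sketch of such a path helper (the function name is mine):

// traces a hexagon path centered on (cx, cy) with circumradius r
function hexagonPath(ctx, cx, cy, r) {
  ctx.beginPath();
  for (let i = 0; i < 6; i++) {
    const angle = (Math.PI / 3) * i;
    const x = cx + r * Math.cos(angle);
    const y = cy + r * Math.sin(angle);
    if (i === 0) {
      ctx.moveTo(x, y);
    } else {
      ctx.lineTo(x, y);
    }
  }
  ctx.closePath();
}

Each hexagon then gets filled with the same semi-transparent rgba() color trick we used for the rectangles and triangles.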

What’s next?

Now that we have built our fragmentation effect, let’s focus on the CSS. Notice that the effect is as simple as changing the opacity value (or the value of whichever property you are working with) of an element on its hover state.

Opacity animation

img {
  opacity: 1;
  transition: opacity 1s;
}

img:hover {
  opacity: 0;
}

Fragmentation effect

img {
  -webkit-mask: paint(fragmentation);
  --f-o: 1;
  transition: --f-o 1s;
}

img:hover {
  --f-o: 0;
}

This means we can easily integrate this kind of effect to create more complex animations. Here are a bunch of ideas!

Responsive image slider

Another version of the same slider:

Noise effect

Loading screen

Card hover effect

That’s a wrap

And all of this is just the tip of the iceberg of what can be achieved using the Paint API. I’ll end with two important points:

  • The Paint API is 90% <canvas>, so the more you know about <canvas>, the more fancy things you can do. Canvas is widely used, which means there’s a bunch of documentation and writing about it to get you up to speed. Hey, here’s one right here on CSS-Tricks!
  • The Paint API removes all the complexity from the CSS side of things. There’s no dealing with complex and hacky code to draw cool stuff. This makes CSS code so much easier to maintain, not to mention less prone to error.


Your Image Is Probably Not Decorative

Eric doesn’t mince words, especially in the title, but also in the conclusion:

In modern web design and development, displaying an image is a highly intentional act. Alternate descriptions allow us to explain the content of the image, and in doing so, communicate why it is worth including.

Just because an image displays something fanciful doesn’t mean it isn’t worth describing. Announcing its presence ensures that anyone, regardless of ability or circumstance, can fully understand your digital experience.

I like the bit where, even when a CSS background-image is used, you can still use a “spacer GIF” to add alt text. And speaking of alt descriptions, did you know even Open Graph images can have them?




Swipey Image Grids

I hope people think of SVG as a vector format that is good for drawing things. There is plenty more to know, but here’s one more: SVG is good for composition. You draw things at very specific coordinates in SVG and, while they can scale, they tend to stay put. And while SVG is a vector format, you can place raster images onto it. That’s my favorite part of Cassie’s “Swipey image grids” post. The swipey part is cool, but the composition is even cooler.

<svg viewBox="0 0 100 100">
  <rect x="30" y="0" width="70" height="50" fill="blue"/>
  <rect x="60" y="60" width="40" height="40" fill="green"/>
  <rect x="0" y="30" width="50" height="70" fill="pink"/>

  <image x="30" y="0" width="70" height="50" href="https://place-puppy.com/300x300"/>
  <image x="60" y="60" width="40" height="40" href="https://place-puppy.com/700x300"/>
  <image x="0" y="30" width="50" height="70" href="https://place-puppy.com/800x500"/>
</svg>

You’ll need to check this out in Chrome, Edge or Firefox:

Don’t miss Cassie’s interactive examples explaining preserveAspectRatio. That’s a thing I normally think of on the <svg> itself, but is used to great effect on the <image> elements themselves here. It’s like a more powerful object-fit and object-position.




Let’s Create an Image Pop-Out Effect With SVG Clip Path

A few weeks ago, I stumbled upon this cool pop-out effect by Mikael Ainalem. It showcases the clip-path: path() in CSS, which just got proper support in most modern browsers. I wanted to dig into it myself to get a better feel for how it works. But in the process, I found some issues with clip-path: path() and wound up finding an alternative approach that I wanted to walk through with you in this article.

If you haven’t used clip-path or you are unfamiliar with it, it basically allows us to specify a display region for an element based on a clipping path and hide portions of the element that fall outside the clip path.

A rectangle with a pastel pattern, plus an unfilled star shape with a black border, equals a star shape with the pastel background pattern.
You can kind of think of it as though the star is a cookie cutter, the element is the cookie dough, and the result is a star-shaped cookie.

Possible values for clip-path include circle, ellipse and polygon, which limit the use case to just those specific shapes. This is where the new path value comes in: it allows us to use a more flexible SVG path to create various clipping paths that go beyond basic shapes.
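For example, a simple (and arbitrary) clipping path in pixel coordinates:

.element {
  /* clips the element to a 100px square drawn with SVG path syntax */
  clip-path: path("M0 0 H100 V100 H0 Z");
}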

Let’s take what we know about clip-path and start working on the hover effect. The basic idea of the effect is to make the foreground image of a person appear to pop out from the colorful background and scale up in size when the element is hovered. An important detail is how the foreground image animation (scale up and move up) appears to be independent from the background image animation (scale up only).

This effect looks cool, but there are some issues with the path value. For starters, while we mentioned that support is generally good, it’s not great and hovers around 82% coverage at the time of writing. So, keep in mind that mobile support is currently limited to Chrome and Safari.

Besides support, the bigger and more bizarre issue with path is that it currently only works with pixel values, meaning that it is not responsive. For example, let’s say we zoom into the page. Right off the bat, the path shape starts to cut things off.

This severely limits the number of use cases for clip-path: path(), as it can only be used on fixed-sized elements. Responsive web design has been a widely-accepted standard for many years now, so it’s weird to see a new CSS property that doesn’t follow the principle and exclusively uses pixel units.

What we’re going to do is re-create this effect using standard, widely-supported CSS techniques so that it not only works, but is truly responsive as well.

The tricky part

We want anything that overflows the clip-path to be visible only on the top part of the image. We cannot use a standard CSS overflow property since it affects both the top and bottom.

Photo of a young woman against a pastel floral pattern cropped to the shape of a circle.
Using overflow-y: hidden, the bottom part looks good, but the image is cut-off at the top where the overflow should be visible.

So, what are our options besides overflow and clip-path? Well, let’s just use <clipPath> in the SVG itself. <clipPath> is an SVG element, which is different from the newly-released and non-responsive clip-path: path.

SVG <clipPath> element

SVG <clipPath> and <path> elements adapt to the coordinate system of the SVG element, so they are responsive out of the box. As the SVG element is being scaled, its coordinate system is also being scaled, and it maintains its proportions based on the various properties that cover a wide range of possible use cases. As an added benefit, using clip-path in CSS on SVG has 95% browser support, which is a 13% increase compared to clip-path: path.

Let’s start by setting up our SVG element. I’ve used Inkscape to create the basic SVG markup and clipping paths, just to make it easy for myself. Once I did that, I updated the markup by adding my own class attributes.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -10 100 120" class="image">
  <defs>
    <clipPath id="maskImage" clipPathUnits="userSpaceOnUse">
      <path d="..." />
    </clipPath>
    <clipPath id="maskBackground" clipPathUnits="userSpaceOnUse">
      <path d="..." />
    </clipPath>
  </defs>
  <g clip-path="url(#maskImage)" transform="translate(0 -7)">
    <!-- Background image -->
    <image clip-path="url(#maskBackground)" width="120" height="120" x="70" y="38" href="..." transform="translate(-90 -31)" />
    <!-- Foreground image -->
    <image width="120" height="144" x="-15" y="0" fill="none" class="image__foreground" href="..." />
  </g>
</svg>
A bright green circle with a bright red shape coming out from the top of it, as if another shape is behind the green circle.
SVG <clipPath> elements created in Inkscape. The green element represents a clipping path that will be applied to the background image. The red is a clipping path that will be applied to both the background and foreground image.

This markup can be easily reused for other background and foreground images. We just need to replace the URL in the href attribute inside image elements.

Now we can work on the hover animation in CSS. We can get by with transforms and transitions, making sure the foreground is nicely centered, then scaling and moving things when the hover takes place.

.image {
  transform: scale(0.9, 0.9);
  transition: transform 0.2s ease-in;
}

.image__foreground {
  transform-origin: 50% 50%;
  transform: translateY(4px) scale(1, 1);
  transition: transform 0.2s ease-in;
}

.image:hover {
  transform: scale(1, 1);
}

.image:hover .image__foreground {
  transform: translateY(-7px) scale(1.05, 1.05);
}

Here is the result of the above HTML and CSS code. Try resizing the screen and changing the dimensions of the SVG element to see how the effect scales with the screen size.

This looks great! However, we’re not done. We still need to address some issues that we get now that we’ve changed the markup from an HTML image element to an SVG element.

SEO and accessibility

Inline SVG elements won’t get indexed by search crawlers. If the SVG elements are an important part of the content, your page SEO might take a hit because those images probably won’t get picked up.

We’ll need additional markup that uses a regular <img> element that’s hidden with CSS. Images declared this way are automatically picked up by crawlers, and we can provide links to them in an image sitemap to make sure crawlers find them. We’re using loading="lazy", which allows the browser to decide if loading the image should be deferred.

We’ll wrap both elements in a <figure> element so that our markup reflects the relationship between those two images and groups them together:

<figure>
  <!-- SVG element -->
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -10 100 120" class="image">
    <!-- ... -->
  </svg>
  <!-- Fallback image -->
  <img src="..." alt="..." loading="lazy" class="fallback-image" />
</figure>

We also need to address some accessibility concerns for this effect. More specifically, we need to make improvements for users who prefer browsing the web without animations and users who browse the web using screen readers.

Making SVG elements accessible takes a lot of additional markup. Additionally, if we want to remove transitions, we would have to override quite a few CSS properties, which can cause issues if our selector specificities aren’t consistent. Luckily, our newly added regular image has great accessibility features baked right in and can easily serve as a replacement for users who browse the web without animations.

<figure>
  <!-- Animated SVG element -->
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -10 100 120" class="image" aria-hidden="true">
    <!-- ... -->
  </svg>

  <!-- Fallback SEO & a11y image -->
  <img src="..." alt="..." loading="lazy" class="fallback-image" />
</figure>

We need to hide the SVG element from assistive devices by adding aria-hidden="true", and we need to update our CSS to include the prefers-reduced-motion media query. We inclusively hide the fallback image from users without the reduced motion preference while keeping it available for assistive devices like screen readers.

@media (prefers-reduced-motion: no-preference) {
  .fallback-image {
    clip: rect(0 0 0 0);
    clip-path: inset(50%);
    height: 1px;
    overflow: hidden;
    position: absolute;
    white-space: nowrap;
    width: 1px;
  }
}

@media (prefers-reduced-motion) {
  .image {
    display: none;
  }
}

Here is the result after the improvements:

Please note that these improvements won’t change how the effect looks and behaves for users who don’t have the prefers-reduced-motion preference set or who aren’t using screen readers.

That’s a wrap

Developers were excited about the path() option for the clip-path CSS property and the new styling possibilities it opened up, but many were displeased to find out that it only supports fixed pixel values. Not only does that mean the feature is not responsive, but it severely limits the number of use cases where we’d want to use it.

We converted an interesting image pop-out hover effect that uses clip-path: path() into an SVG element that utilizes the responsiveness of the <clipPath> SVG element to achieve the same thing. But in doing so, we introduced some SEO and accessibility issues that we managed to work around with a bit of extra markup and a fallback image.

Thank you for taking the time to read this article! Let me know if this approach gave you an idea on how to implement your own effects and if you have any suggestions on how to approach this effect in a different way.


The post Let’s Create an Image Pop-Out Effect With SVG Clip Path appeared first on CSS-Tricks.


Image Fragmentation Effect With CSS Masks and Custom Properties

Geoff shared this idea of a checkerboard where the tiles disappear one-by-one to reveal an image. In it, an element has a background image, then a CSS Grid layout holds the “tiles” that go from a filled background color to transparent, revealing the image. A light touch of SCSS staggers the animation.

I have a similar idea, but with a different approach. Instead of revealing the image, let’s start with it fully revealed, then let it disappear one tile at a time, as if it’s floating away in tiny fragments.

Here’s a working demo of the result. No JavaScript handling, no SVG trickery. Only a single <img> and some SCSS magic.

Cool, right? Sure, but here’s the rub. You’re going to have to view this in Chrome, Edge or Opera because those are the only browsers with support for @property at the moment, and that’s a key component of this idea. We won’t let that stop us because this is a great opportunity to get our hands dirty with cool CSS features, like masks and animating linear gradients with the help of @property.

Masking things

Masking is sometimes hard to conceptualize and often gets confused with clipping. The bottom line: masks are images. When an image is applied as a mask to an element, any transparent parts of the image allow us to see right through the element. Any opaque parts will make the element fully visible.

Masks work the same way as opacity, but on different portions of the same element. That’s different from clipping, which is a path where everything outside the path is simply hidden. The advantage of masking is that we can have as many mask layers as we want on the same element, similar to how we can chain multiple images on background-image.

And since masks are images, we get to use CSS gradients to make them. Let’s take an easy example to better understand the trick.

img {
  mask:
    linear-gradient(rgba(0,0,0,0.8) 0 0) left,  /* 1 */
    linear-gradient(rgba(0,0,0,0.5) 0 0) right; /* 2 */
  mask-size: 50% 100%;
  mask-repeat: no-repeat;
}

Here, we’re defining two mask layers on an image. They are both a solid color, but the alpha transparency values are different. The above syntax may look strange, but it’s a simplified way of writing linear-gradient(rgba(0,0,0,0.8), rgba(0,0,0,0.8)).

It’s worth noting that the color we use is irrelevant since the default mask-mode is alpha. The alpha value is the only relevant thing. Our gradient can be linear-gradient(rgba(X,Y,Z,0.8) 0 0) where X, Y and Z are random values.

Each mask layer is equal to 50% 100% (or half the width and full height of the image). One mask covers the left and the other covers the right. At the end, we have two non-overlapping masks covering the whole area of the image and, as we discussed earlier, each one has a different alpha transparency value.

We’re looking at two mask layers created with two linear gradients. The first gradient, left, has an alpha value of 0.8. The second gradient, right, has an alpha value of 0.5. The first gradient is more opaque, meaning more of the image shows through. The second gradient is more transparent, meaning more of the background shows through.

Animating linear gradients

What we want to do is apply an animation to the linear gradient alpha values of our mask to create a transparency animation. Later on, we’ll make these into asynchronous animations that will create the fragmentation effect.

Animating gradients is something we’ve been unable to do in CSS. That is, until we got limited support for @property. Jhey Tompkins did a deep dive into the awesome animating powers of @property, demonstrating how it can be used to transition gradients. Again, you’ll want to view this in Chrome or another Blink-powered browser:

In short, @property lets us create custom CSS properties where we’re able to define the syntax by specifying a type. Let’s create two properties, --c-0 and --c-1, that take a number with an initial value of 1.

@property --c-0 {
  syntax: "<number>";
  initial-value: 1;
  inherits: false;
}
@property --c-1 {
  syntax: "<number>";
  initial-value: 1;
  inherits: false;
}

Those properties are going to represent the alpha values in our CSS mask. And since they both default to fully opaque (i.e., 1), the entire image shows through the mask. Here’s how we can rewrite the mask using the custom properties:

/* Omitting the @property blocks above for brevity */

img {
  mask:
    linear-gradient(rgba(0,0,0,var(--c-0)) 0 0) left,  /* 1 */
    linear-gradient(rgba(0,0,0,var(--c-1)) 0 0) right; /* 2 */
  mask-size: 50% 100%;
  mask-repeat: no-repeat;
  transition: --c-0 0.5s, --c-1 0.3s 0.4s;
}

img:hover {
  --c-0: 0;
  --c-1: 0;
}

All we’re doing here is applying a different transition duration and delay for each custom property. Go ahead and hover the image. The first gradient of the mask will fade out to an alpha value of 0 to make the image totally see-through, followed by the second gradient.

More masking!

So far, we’ve only been working with two linear gradients on our mask and two custom properties. To create a tiling or fragmentation effect, we’ll need lots more tiles, and that means lots more gradients and a lot of custom properties!

SCSS makes this a fairly trivial task, so that’s what we’re turning to for writing styles from here on out. As we saw in the first example, we have a kind of matrix of tiles. We can think of those as rows and columns, so let’s define two SCSS variables, $x and $y, to represent them.
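A minimal setup could look like this sketch. The exact values are assumptions (pick whatever grid you like); $s is the maximum duration, in seconds, that the transition() function further down relies on:

$x: 8; // number of tile columns
$y: 6; // number of tile rows
$s: 2; // maximum transition duration, in seconds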

Custom properties

We’re going to need @property definitions for each one. No one wants to write all those out by hand, though, so let’s let SCSS do the heavy lifting for us by running our properties through a loop:

@for $i from 0 through ($x - 1) {
  @for $j from 0 through ($y - 1) {
    @property --c-#{$i}-#{$j} {
      syntax: "<number>";
      initial-value: 1;
      inherits: false;
    }
  }
}

Then we make all of them go to 0 on hover:

img:hover {
  @for $i from 0 through ($x - 1) {
    @for $j from 0 through ($y - 1) {
      --c-#{$i}-#{$j}: 0;
    }
  }
}

Gradients

We’re going to write a @mixin that generates them for us:

@mixin image() {
  $all_t: (); // Transition
  $all_m: (); // Mask
  @for $i from 0 through ($x - 1) {
    @for $j from 0 through ($y - 1) {
      $all_t: append($all_t, --c-#{$i}-#{$j} transition($i, $j), comma);
      $all_m: append($all_m, linear-gradient(rgba(0,0,0,var(--c-#{$i}-#{$j})) 0 0) calc(#{$i}*100%/(#{$x} - 1)) calc(#{$j}*100%/(#{$y} - 1)), comma);
    }
  }
  transition: $all_t;
  mask: $all_m;
}

All our mask layers are equally sized, so we only need one declaration for this, relying on the $x and $y variables and calc():

mask-size: calc(100%/#{$x}) calc(100%/#{$y});

You may have noticed this line as well:

$all_t: append($all_t, --c-#{$i}-#{$j} transition($i, $j), comma);

Within the same mixin, we’re also generating the transition property that contains all the previously defined custom properties.

Finally, we generate a different duration/delay for each property, thanks to the random() function in SCSS.

@function transition($i, $j) {
  @return $s*random()+s $s*random()+s;
}

Now all we have to do is adjust the $x and $y variables to control the granularity of our fragmentation.
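Putting it all together, applying the effect to the image could look something like this sketch, assuming the variables, the @property loop, and the image() mixin defined above:

img {
  @include image(); // generates the mask and transition lists
  mask-size: calc(100%/#{$x}) calc(100%/#{$y});
  mask-repeat: no-repeat;
}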

Playing with the animations

We can also change the random configuration to produce different kinds of animations.

In the demo above, I defined the transition() function like this, with alternative formulas left commented out:

// Uncomment one to use it
@function transition($i, $j) {
  // @return (($s*($i+$j))/($x+$y))+s (($s*($i+$j))/($x+$y))+s; /* diagonal */
  // @return (($s*$i)/$x)+s (($s*$j)/$y)+s; /* left to right */
  // @return (($s*$j)/$y)+s (($s*$i)/$x)+s; /* top to bottom */
  // @return ($s*random())+s (($s*$j)/$y)+s; /* top to bottom random */
  @return ($s*random())+s (($s*$i)/$y)+s; /* left to right random */
  // @return ($s*random())+s (($s*($i+$j))/($x+$y))+s; /* diagonal random */
  // @return ($s*random())+s ($s*random())+s; /* full random */
}

By adjusting the formula, we can get different kinds of animation. Simply uncomment the one you want to use. This list is non-exhaustive; we can create any combination by considering more formulas. (I’ll let you imagine what’s possible if we add advanced math functions, like sin(), sqrt(), etc.)
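As one hypothetical example of that, we could lean on Dart Sass’s built-in math module and make the delay grow with each tile’s distance from the center, so the image dissolves radially:

@use "sass:math";

// Hypothetical: random duration, delay based on distance from the center
@function transition($i, $j) {
  $ci: $i - math.div($x, 2);
  $cj: $j - math.div($y, 2);
  $d: math.sqrt($ci*$ci + $cj*$cj);            // tile's distance from the center
  $max: math.div(math.sqrt($x*$x + $y*$y), 2); // largest possible distance
  @return ($s*random())+s (math.div($s*$d, $max))+s;
}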

Playing with the gradients

We can still play around with our code by adjusting the gradient so that, instead of animating the alpha value, we animate the color stops. Our gradient will look like this:

linear-gradient(white var(--c-#{$i}-#{$j}), transparent 0)

Then we animate the variable from 100% to 0%. And, hey, we don’t have to stick with linear gradients. Why not radial?
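One detail worth noting here (a sketch, extrapolating the pattern above): since we’re now animating a percentage rather than a number, the @property definitions need a <percentage> syntax:

@property --c-0-0 {
  syntax: "<percentage>";
  initial-value: 100%;
  inherits: false;
}

img:hover {
  --c-0-0: 0%; /* the white (visible) part of the gradient collapses */
}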

Like the transition, we can define any kind of gradient we want — the combinations are infinite!

Playing with the overlap

Let’s introduce another variable to control the overlap between our gradient masks. This variable will set the mask-size like this:

calc(#{$o}*100%/#{$x}) calc(#{$o}*100%/#{$y})

There is no overlap if it’s equal to 1. If it’s bigger, then we do get an overlap. This allows us to make even more kinds of animations:
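As a sketch, defining $o and slotting it into the mask sizing from the mixin looks like this (a value of 1.5 is just an example):

$o: 1.5; // overlap factor: 1 = no overlap, larger = overlapping tiles

img {
  mask-size: calc(#{$o}*100%/#{$x}) calc(#{$o}*100%/#{$y});
  mask-repeat: no-repeat;
}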

That’s it!

All we have to do is find the perfect combination of variables and formulas to create astonishing and crazy image fragmentation effects.


The post Image Fragmentation Effect With CSS Masks and Custom Properties appeared first on CSS-Tricks.


Maximally optimizing image loading for the web in 2021

Malte Ubl’s list:

8 image loading optimization techniques to minimize both the bandwidth used for loading images on the web and the CPU usage for image display.

  1. Fluid width images in CSS, not forgetting the height and width attributes in HTML so you get proper aspect-ratio on first render.
  2. Use content-visibility: auto;
  3. Send AVIF when you can.
  4. Use responsive images syntax.
  5. Set far-out expires headers on images and have a cache-busting strategy (like changing the file name).
  6. Use loading="lazy"
  7. Use decoding="async"
  8. Use inline CSS/SVG for a blurry placeholder.
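As a rough sketch, many of these techniques can live together in one snippet. The file names, sizes, and breakpoint below are placeholders:

<!-- Items 1, 3, 4, 6 and 7: intrinsic dimensions, AVIF,
     responsive syntax, lazy loading, async decoding -->
<picture>
  <source type="image/avif"
          srcset="photo-480.avif 480w, photo-960.avif 960w"
          sizes="(max-width: 600px) 480px, 960px">
  <img src="photo-960.jpg"
       srcset="photo-480.jpg 480w, photo-960.jpg 960w"
       sizes="(max-width: 600px) 480px, 960px"
       width="960" height="640"
       loading="lazy" decoding="async"
       alt="Describe the image here">
</picture>

/* Item 1 again: fluid width in CSS, while the width/height
   attributes above reserve the aspect ratio on first render */
img {
  max-width: 100%;
  height: auto;
}

/* Item 2: skip rendering work for off-screen sections */
.below-the-fold {
  content-visibility: auto;
}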

Apparently, there is but one tool that does it all: eleventy-high-performance-blog.

My thoughts:

  • If you are lazy loading, do you really need to do the content-visibility thing also? They seem very related.
  • Serving AVIF is usually good, but it seems less cut-and-dry than WebP was. You need to make sure your AVIF version is both better and smaller, which feels like a manual process right now.
  • The decoding thing seems weird. I’ll totally use it if it’s a free perf win, but if it’s always a good idea, shouldn’t the browser just always do it?
  • I’m not super convinced blurry placeholders are in the same category of necessary as the rest of this stuff. Feels like a trend.



The post Maximally optimizing image loading for the web in 2021 appeared first on CSS-Tricks.
