Customizing Color Fonts on the Web

Myles C. Maxfield on the WebKit Blog published a nifty how-to for color fonts. It comes on the heels of what Ollie wrote up here on CSS-Tricks the other day, and while they cover a lot of common ground, there are some nice nuggets in the WebKit post that make them both worth reading.

Case in point: there’s a little progressive enhancement in there using @supports for older browsers lacking support for the font-palette property. Then the post gets into a strategy that shows the property’s light and dark values at play to make the font more legible in certain contexts. There’s also a clever idea about how creating multiple @font-palette-values blocks with the same name can be used for fallbacks.
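To make that concrete, a progressive-enhancement setup could look something like this (a minimal sketch rather than code from either post; the font name, palette name, and colors are placeholders):

/* Define a named palette for a color font */
@font-palette-values --Grays {
  font-family: "Nabla";
  override-colors: 0 #1c1c1c, 1 #565656;
}

h1 {
  font-family: "Nabla", sans-serif;
}

/* Older browsers without font-palette support simply keep the font's default palette */
@supports (font-palette: dark) {
  h1 {
    font-palette: --Grays;
  }
}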


A Perfect Table of Contents With HTML + CSS

Earlier this year, I self-published an ebook called Understanding JavaScript Promises (free for download). Even though I didn’t have any intention of turning it into a print book, enough people reached out inquiring about a print version that I decided to self-publish that as well. I thought it would be an easy exercise using HTML and CSS to generate a PDF and then send it off to the printer. What I didn’t realize was that I didn’t have an answer to an important part of a print book: the table of contents.

The makeup of a table of contents

At its core, a table of contents is fairly simple. Each line represents a part of a book or webpage and indicates where you can find that content. Typically, the lines contain three parts:

  1. The title of the chapter or section
  2. Leaders (i.e. those dots, dashes, or lines) that visually connect the title to the page number
  3. The page number

A table of contents is easy to generate inside of word processing tools like Microsoft Word or Google Docs, but because my content was in Markdown and then transformed into HTML, that wasn’t a good option for me. I wanted something automated that would work with HTML to generate the table of contents in a format suitable for print. I also wanted each line to be a link so it could be used in webpages and PDFs to navigate around the document, and I wanted dot leaders between the title and page number.

And so I began researching.

I came across two excellent blog posts on creating a table of contents with HTML and CSS. The first was “Build a Table of Contents from your HTML” by Julie Blanc. Julie worked on PagedJS, a polyfill for missing paged media features in web browsers that properly formats documents for print. I started with Julie’s example, but found that it didn’t quite work for me. Next, I found Christoph Grabo’s “Responsive TOC leader lines with CSS” post, which introduced the concept of using CSS Grid (as opposed to Julie’s float-based approach) to make alignment easier. Once again, though, his approach wasn’t quite right for my purposes.

After reading these two posts, though, I felt I had a good enough understanding of the layout issues to embark on my own. I used pieces from both blog posts as well as adding some new HTML and CSS concepts into the approach to come up with a result I’m happy with.

Choosing the correct markup

When deciding on the correct markup for a table of contents, I thought primarily about the correct semantics. Fundamentally, a table of contents is about a title (chapter or subsection) being tied to a page number, almost like a key-value pair. That led me to two options:

  • One option is to use a table (<table>) with one column for the title and one column for the page.
  • Then there’s the often unused and forgotten definition list (<dl>) element. It also acts as a key-value map. So, once again, the relationship between the title and the page number would be obvious.

Either of these seemed like good options until I realized that they really only work for single-level tables of contents, namely, only if I wanted to have a table of contents with just chapter names. If I wanted to show subsections in the table of contents, though, I didn’t have any good options. Table elements aren’t great for hierarchical data, and while definition lists can technically be nested, the semantics didn’t seem correct. So, I went back to the drawing board.

I decided to build off of Julie’s approach and use a list; however, I opted for an ordered list (<ol>) instead of an unordered list (<ul>). I think an ordered list is more appropriate in this case. A table of contents represents a list of chapters and subheadings in the order in which they appear in the content. The order matters and shouldn’t get lost in the markup.

Unfortunately, using an ordered list means losing the semantic relationship between the title and the page number, so my next step was to re-establish that relationship within each list item. The easiest way to solve this is to simply insert the word “page” before the page number. That way, the relationship of the number relative to the text is clear, even without any other visual distinction.

Here’s a simple HTML skeleton that formed the basis of my markup:

<ol class="toc-list">
  <li>
    <a href="#link_to_heading">
      <span class="title">Chapter or subsection title</span>
      <span class="page">Page 1</span>
    </a>

    <ol>
      <!-- subsection items -->
    </ol>
  </li>
</ol>

Applying styles to the table of contents

Once I had established the markup I planned to use, the next step was to apply some styles.

First, I removed the autogenerated numbers. You can choose to keep the autogenerated numbers in your own project if you’d like, but it’s common for books to have unnumbered forewords and afterwords included in the list of chapters, which makes the autogenerated numbers incorrect.

For my purpose, I would fill in the chapter numbers manually then adjust the layout so the top-level list doesn’t have any padding (thus aligning it with paragraphs) and each embedded list is indented by two spaces. I chose to use a 2ch padding value because I still wasn’t quite sure which font I would use. The ch length unit allows the padding to be relative to the width of a character — no matter what font is used — rather than an absolute pixel size that could wind up looking inconsistent.

Here’s the CSS I ended up with:

.toc-list,
.toc-list ol {
  list-style-type: none;
}

.toc-list {
  padding: 0;
}

.toc-list ol {
  padding-inline-start: 2ch;
}

Sara Soueidan pointed out to me that WebKit browsers remove list semantics when list-style-type is none, so I needed to add role="list" into the HTML to preserve it:

<ol class="toc-list" role="list">
  <li>
    <a href="#link_to_heading">
      <span class="title">Chapter or subsection title</span>
      <span class="page">Page 1</span>
    </a>

    <ol role="list">
      <!-- subsection items -->
    </ol>
  </li>
</ol>

Styling the title and page number

With the list styled to my liking, it was time to move on to styling an individual list item. For each item in the table of contents, the title and page number must be on the same line, with the title to the left and the page number aligned to the right.

You might be thinking, “No problem, that’s what flexbox is for!” You aren’t wrong! Flexbox can indeed achieve the correct title-page alignment. But there are some tricky alignment issues when the leaders are added, so I instead opted to go with Christoph’s approach of using a grid, which, as a bonus, also helps with multiline titles. Here is the CSS for an individual item:

.toc-list li > a {
  text-decoration: none;
  display: grid;
  grid-template-columns: auto max-content;
  align-items: end;
}

.toc-list li > a > .page {
  text-align: right;
}

The grid has two columns, the first of which is auto-sized to fill up the entire width of the container, minus the second column, which is sized to max-content. The page number is aligned to the right, as is traditional in a table of contents.

The only other change I made at this point was to hide the “Page” text. This is helpful for screen readers but unnecessary visually, so I used a traditional visually-hidden class to hide it from view:

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(100%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  width: 1px;
  white-space: nowrap;
}

And, of course, the HTML needs to be updated to use that class:

<ol class="toc-list" role="list">
  <li>
    <a href="#link_to_heading">
      <span class="title">Chapter or subsection title</span>
      <span class="page"><span class="visually-hidden">Page</span> 1</span>
    </a>

    <ol role="list">
      <!-- subsection items -->
    </ol>
  </li>
</ol>

With this foundation in place, I moved on to address the leaders between the title and the page.

Creating dot leaders

Leaders are so common in print media that you might be wondering, why doesn’t CSS already support that? The answer is: it does. Well, kind of.

There is actually a leader() function defined in the CSS Generated Content for Paged Media specification. However, as with much of the paged media specifications, this function isn’t implemented in any browsers, therefore excluding it as an option (at least at the time I’m writing this). It’s not even listed on caniuse.com, presumably because no one has implemented it and there are no plans or signals that they will.
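For the curious, here’s roughly what that spec syntax looks like on paper (a sketch based on the Generated Content for Paged Media draft; it won’t do anything in browsers today):

/* Hypothetical, per the GCPM draft: dot leaders plus the linked target's page number */
.toc-list a::after {
  content: leader(dotted) target-counter(attr(href url), page);
}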

Fortunately, both Julie and Christoph already addressed this problem in their respective posts. To insert the dot leaders, they both used a ::after pseudo-element with its content property set to a very long string of dots, like this:

.toc-list li > a > .title {
  position: relative;
  overflow: hidden;
}

.toc-list li > a .title::after {
  position: absolute;
  padding-left: .25ch;
  content: " . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . ";
  text-align: right;
}

The ::after pseudo-element is set to an absolute position to take it out of the flow of the page and avoid wrapping to other lines. The text is aligned to the right because we want the last dots of each line flush to the number at the end of the line. (More on the complexities of this later.) The .title element is set to have a relative position so the ::after pseudo-element doesn’t break out of its box. Meanwhile, the overflow is hidden so all those extra dots are invisible. The result is a pretty table of contents with dot leaders.

However, there’s something else that needs consideration.

Sara also pointed out to me that all of those dots count as text to screen readers. So what do you hear? “Introduction dot dot dot dot…” until all of the dots are announced. That’s an awful experience for screen reader users.

The solution is to insert an additional element with aria-hidden set to true and then use that element to insert the dots. So the HTML becomes:

<ol class="toc-list" role="list">
  <li>
    <a href="#link_to_heading">
      <span class="title">Chapter or subsection title<span class="leaders" aria-hidden="true"></span></span>
      <span class="page"><span class="visually-hidden">Page</span> 1</span>
    </a>

    <ol role="list">
      <!-- subsection items -->
    </ol>
  </li>
</ol>

And the CSS becomes:

.toc-list li > a > .title {
  position: relative;
  overflow: hidden;
}

.toc-list li > a .leaders::after {
  position: absolute;
  padding-left: .25ch;
  content: " . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . "
      ". . . . . . . . . . . . . . . . . . . . . . . ";
  text-align: right;
}

Now screen readers will ignore the dots and spare users the frustration of listening to multiple dots being announced.

Finishing touches

At this point, the table of contents component looks pretty good, but it could use some minor detail work. To start, most books visually offset chapter titles from subsection titles, so I made the top-level items bold and introduced a margin to separate subsections from the chapters that followed:

.toc-list > li > a {
  font-weight: bold;
  margin-block-start: 1em;
}

Next, I wanted to clean up the alignment of the page numbers. Everything looked okay when I was using a fixed-width font, but for variable-width fonts, the leader dots could end up forming a zigzag pattern as they adjust to the width of a page number. For instance, any page number with a 1 would be narrower than others, resulting in leader dots that are misaligned with the dots on previous or following lines.

Misaligned numbers and dots in a table of contents.

To fix this problem, I set font-variant-numeric to tabular-nums so all numbers are treated with the same width. By also setting the minimum width to 2ch, I ensured that all numbers with one or two digits are perfectly aligned. (You may want to set this to 3ch if your project has more than 100 pages.) Here is the final CSS for the page number:

.toc-list li > a > .page {
  min-width: 2ch;
  font-variant-numeric: tabular-nums;
  text-align: right;
}
Aligned leader dots in a table of contents.

And with that, the table of contents is complete!

Conclusion

Creating a table of contents with nothing but HTML and CSS was more of a challenge than I expected, but I’m very happy with the result. Not only is this approach flexible enough to accommodate chapters and subsections, but it handles sub-subsections nicely without updating the CSS. The overall approach works on web pages where you want to link to the various locations of content, as well as PDFs where you want the table of contents to link to different pages. And of course, it also looks great in print if you’re ever inclined to use it in a brochure or book.

I’d like to thank Julie Blanc and Christoph Grabo for their excellent blog posts on creating a table of contents, as both of those were invaluable when I was getting started. I’d also like to thank Sara Soueidan for her accessibility feedback as I worked on this project.



Mastering SVG’s stroke-miterlimit Attribute

So, SVG has this stroke-miterlimit presentation attribute. You’ve probably seen it when exporting an SVG from a graphics editor, or perhaps you’ve found that you can remove it without noticing any change to the visual appearance.

After a good amount of research, one of the first things I discovered is that the attribute works alongside stroke-linejoin, and I’ll show you how as well as a bunch of other things I learned about this interesting (and possibly overlooked) SVG attribute.

TLDR;

stroke-miterlimit depends on stroke-linejoin: if we use round or bevel for joins, then there’s no need to declare stroke-miterlimit. But if we use miter instead, we can often still delete it because the default value may be enough. Beware that many graphics editors will add this attribute even when it’s not necessary.

What is stroke-linejoin?

I know, we’re actually here to talk about stroke-miterlimit, but I want to start with stroke-linejoin because of how tightly they work together. This is the definition for stroke-linejoin pulled straight from the SVG Working Group (SVGWG):

stroke-linejoin specifies the shape to be used at the corners of paths or basic shapes when they are stroked.

This means we can define how the corner looks when two lines meet at a point. And this attribute accepts five possible values, though two of them have no browser implementation and are identified by the spec as at risk of being dropped. So, I’ll briefly present the three supported values the attribute accepts.

miter is the default value and it just so happens to be the most important one of the three we’re looking at. If we don’t explicitly declare stroke-linejoin in the SVG code, then miter is used to shape the corner of a path. We know a join is set to miter when both edges meet at a sharp angle.

But we can also choose round which softens the edges with — you guessed it — rounded corners.

The bevel value, meanwhile, produces a flat edge that sort of looks like a cropped corner.
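If you want to see all three side by side, a tiny hand-rolled snippet like this does the trick (the path data is arbitrary; only the stroke-linejoin value changes):

<svg viewBox="0 0 68 24" width="340">
  <!-- The same sharp corner drawn three times with different joins -->
  <path d="M4 20 L12 6 L20 20" fill="none" stroke="black" stroke-width="4" stroke-linejoin="miter" />
  <path d="M26 20 L34 6 L42 20" fill="none" stroke="black" stroke-width="4" stroke-linejoin="round" />
  <path d="M48 20 L56 6 L64 20" fill="none" stroke="black" stroke-width="4" stroke-linejoin="bevel" />
</svg>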

What is stroke-miterlimit?

OK, now that we know what stroke-linejoin is, let’s get back to the topic at hand and pick apart the definition of stroke-miterlimit from the book Using SVG with CSS3 and HTML5:

[…] on really tight corners, you have to extend the stroke for quite a distance, before the two edges meet. For that reason, there is a secondary property: stroke-miterlimit. It defines how far you can extend the point when creating a miter corner.

In other words, stroke-miterlimit sets how far the stroke of the edges can extend before they meet at a point, and it only applies when stroke-linejoin is miter.

Miter join with miter limit in grey.

So, the stroke-miterlimit value can be any number greater than or equal to 1, where 4 is the default value. The higher the value, the further the corner shape is allowed to go.

How they work together

You probably have a good conceptual understanding now of how stroke-linejoin and stroke-miterlimit work together. But depending on the stroke-miterlimit value, you might get some seemingly quirky results.

Case in point: if stroke-linejoin is set to miter, it can actually wind up looking like the bevel value instead when the miter limit is too low. Here’s the spec again to help us understand why:

If the miter length divided by the stroke width exceeds the stroke-miterlimit then [the miter value] is converted to a bevel.

So, mathematically, we could say this:

[miter length] / [stroke width] <= [stroke-miterlimit] = miter
[miter length] / [stroke width] > [stroke-miterlimit] = bevel

That makes sense, right? If the miter would have to extend too far relative to the width of the stroke, it gets flattened into a bevel. Otherwise, the miter is allowed to grow and form a point.
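To put a rough number on it (my own back-of-the-envelope math, not something spelled out in the spec text): for a join angle θ, the miter ratio works out to 1 / sin(θ / 2), so the default limit of 4 implies a cutoff angle of about 29 degrees.

miter length / stroke width = 1 / sin(θ / 2)
1 / sin(θ / 2) > 4  →  θ < 2 · asin(1 / 4) ≈ 29°

In other words, with the default stroke-miterlimit, any corner sharper than roughly 29° falls back to a bevel unless you raise the limit.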

Sometimes seeing is believing, so here’s Ana Tudor with a wonderful demo showing how the stroke-miterlimit value affects an SVG’s stroke-linejoin:

Setting miter limits in design apps

Did you know that miter joins and limits are available in many of the design apps we use in our everyday work? Here’s where to find them in Illustrator, Figma, and Inkscape.

Setting miter limits in Adobe Illustrator

Illustrator has a way to modify the miter value when configuring a path’s stroke. You can find it in the “Stroke” settings on a path. Notice how — true to the spec — we are only able to set a value for the “Limit” when the path’s “Corner” is set to “Miter Join”.

Applying stroke-miterlimit in Adobe Illustrator.

One nuance is that Illustrator has a default miter limit of 10 rather than the default 4. I’ve noticed this every time I export the SVG file or copy and paste the resulting SVG code. That could be confusing when you open up the code because even if you do not change the miter limit value, Illustrator adds stroke-miterlimit="10" where you might expect 4 or perhaps no stroke-miterlimit at all.

And that’s true even if we choose a different stroke-linejoin value other than “Miter Join”. Here is the code I got when exporting an SVG with stroke-linejoin="round".

<svg viewBox="0 0 16 10">
  <path stroke-width="2" stroke-linejoin="round" stroke-miterlimit="10" d="M0 1h15.8S4.8 5.5 2 9.5" fill="none" stroke="#000"/>
</svg>

The stroke-miterlimit shouldn’t be there as it only works with stroke-linejoin="miter". Here are a couple of workarounds for that:

  • Set the “Limit” value to 4, as it is the default in SVG and is the only value that doesn’t appear in the code.
  • Use the “Export As” or “Export for Screen” options instead of “Save As” or copy-pasting the vectors directly.

If you’d like to see that fixed, join me and upvote the request to make it happen.

Setting miter limits in Figma

Miter joins and limits are slightly different in Figma. When we click the node of an angle on a shape, under the three dots of the Stroke section, we can find a place to set the join of a corner. The option “Miter angle” appears by default, but only when the join is set to miter:

Applying stroke-miterlimit in Figma.

This part works similarly to Illustrator, except that Figma allows us to set the miter angle in degree units instead of decimal values. There are some other specific nuances to point out:

  • The angle is 7.17° by default and there is no way to set a lower value. When exporting the SVG, that value becomes stroke-miterlimit="16" in the markup, which is different from both the SVG spec default and the Illustrator default.
  • The max value is 180°, and when drawing with this option, the join is automatically switched to bevel.
  • When exporting with bevel join, the stroke-miterlimit is there in the code, but it keeps the value that was set when the miter angle was last active (Illustrator does the same thing).
  • When exporting the SVG with a round join, the path is expanded and we no longer have a stroke, but a path with a fill color.

I was unable to find a way to avoid the extra code that ends up in the exported SVG when stroke-miterlimit is unneeded.

Setting miter limits in Inkscape

Inkscape works exactly the way I’d expect a design app to manage miter joins and limits. When selecting a miter join, the default value is 4, exactly what it is in the spec. Better yet, stroke-miterlimit is excluded from the exported SVG code when it is the default value!

Applying stroke-miterlimit in Inkscape.

Still, if we export any path with bevel or round after the limit was modified, the stroke-miterlimit will be back in the code, unless we keep the 4 units of the default in the Limit box. Same trick as Illustrator.

These examples will work nicely if we choose the Save As > Optimized SVG option. Inkscape is free and open source and, at the end of the day, has the neatest code as far as stroke-miterlimit goes, plus plenty of options for optimizing the exported code.

But if you are more familiar with Illustrator (like I am), there is a workaround to keep in mind. Figma, because of the degree units and the expansion of the strokes, feels like the most distant from the spec and expected behavior.

Wrapping up

And that’s what I learned about SVG’s stroke-miterlimit attribute. It’s another one of those easy-to-overlook things we might find ourselves blindly cutting out, particularly when optimizing an SVG file. So, now when you find yourself setting stroke-miterlimit you’ll know what it does, how it works alongside stroke-linejoin, and why the heck you might get a beveled join when setting a miter limit value.



First Look At The CSS object-view-box Property

Ahmad Shadeed — doing what he always does so well — provides an early look at the object-view-box property, something he describes as a native way to crop an image in the browser with CSS.

The use case? Well, Ahmad wastes no time showing how to use the property to accomplish what used to require either (1) a wrapping element with hidden overflow around an image that’s sized and positioned inside that element or (2) the background-image route.

But with object-view-box we can essentially draw the image boundaries as we can with an SVG’s viewbox. So, take a plain ol’ <img> and call on object-view-box to trim the edges using an inset function. I’ll simply drop Ahmad’s pen in here:
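The gist of the technique looks something like this (a minimal sketch; the inset() percentages are just illustrative and not the values from Ahmad’s demo):

img {
  width: 300px;
  aspect-ratio: 1;
  /* Trim 25% off the top, 20% off the right, 15% off the bottom, 0% off the left */
  object-view-box: inset(25% 20% 15% 0%);
}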

Only supported in Chrome Canary for now, I’m afraid. But it’s (currently) planned to release in Chrome 104. Elsewhere:


Dialog Components: Go Native HTML or Roll Your Own?

As the author of a library called AgnosticUI, I’m always on the lookout for new components. And recently, I decided to dig in and start work on a new dialog (aka modal) component. That’s something many devs like to have in their toolset and my goal was to make the best one possible, with an extra special focus on making it inclusive and accessible.

My first thought was that I would avoid any dependencies and bite the bullet to build my own dialog component. As you may know, there’s a new <dialog> element making the rounds and I figured using it as a starting point would be the right thing, especially in the inclusiveness and accessibilities departments.

But, after doing some research, I instead elected to leverage a11y-dialog by Kitty Giraudel. I even wrote adapters so it integrates smoothly with Vue 3, Svelte, and Angular. Kitty has long offered a React adapter as well.

Why did I go that route? Let me take you through my thought process.

First question: Should I even use the native <dialog> element?

The native <dialog> element is being actively improved and will likely be the way forward. But, it still has some issues at the moment that Kitty pointed out quite well:

  1. Clicking the backdrop overlay does not close the dialog by default
  2. The alertdialog ARIA role used for alerts simply does not work with the native <dialog> element. We’re supposed to use that role when a dialog requires a user’s response and shouldn’t be closed by clicking the backdrop, or by pressing ESC.
  3. The <dialog> element comes with a ::backdrop pseudo-element but it is only available when a dialog is programmatically opened with dialog.showModal().

And as Kitty also points out, there are general issues with the element’s default styles, like the fact they are left to the browser and will require JavaScript. So, it’s sort of not 100% HTML anyway.
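For reference, here’s a bare-bones native <dialog> setup that runs into those exact points (a minimal sketch of my own, not code from Kitty’s or Adam’s posts):

<dialog id="demo">
  <p>Native dialog content</p>
  <button id="close-demo">Close</button>
</dialog>
<button id="open-demo">Open dialog</button>

<style>
  /* ::backdrop only renders when the dialog is opened with showModal() */
  dialog::backdrop {
    background: rgb(0 0 0 / 0.5);
  }
</style>

<script>
  const demoDialog = document.querySelector('#demo');
  // showModal() makes the rest of the page inert, lets ESC close the dialog,
  // and draws ::backdrop, but clicking that backdrop does not close the
  // dialog unless you wire that up yourself.
  document.querySelector('#open-demo').addEventListener('click', () => demoDialog.showModal());
  document.querySelector('#close-demo').addEventListener('click', () => demoDialog.close());
</script>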

Here’s a pen demonstrating these points:

Now, some of these issues may not affect you or whatever project you’re working on specifically, and you may even be able to work around things. If you would still like to use the native dialog, you should see Adam Argyle’s wonderful post on building a dialog component with it.

OK, let’s discuss what the actual requirements are for an accessible dialog component…

What I’m looking for

I know there are lots of ideas about what a dialog component should or should not do. But what I was personally going after for AgnosticUI hinged on what I believe makes for an accessible dialog experience:

  1. The dialog should close when clicking outside the dialog (on the backdrop) or when pressing the ESC key.
  2. It should trap focus to prevent tabbing out of the component with a keyboard.
  3. It should allow forward tabbing with TAB and backward tabbing with SHIFT+TAB.
  4. It should return focus back to the previously focused element when closed.
  5. It should correctly apply aria-* attributes and toggles.
  6. It should provide Portals (only if we’re using it within a JavaScript framework).
  7. It should support the alertdialog ARIA role for alert situations.
  8. It should prevent the underlying body from scrolling, if needed.
  9. It would be great if our implementation could avoid the common pitfalls that come with the native <dialog> element.
  10. It would ideally provide a way to apply custom styling while also taking the prefers-reduced-motion user preference query as a further accessibility measure.

I’m not the only one with a wish list. You might want to see Scott O’Hara’s article on the topic as well as Kitty’s full write-up on creating an accessible dialog from scratch for more in-depth coverage.

It should be clear right about now why I nixed the native <dialog> element from my component library. I believe in the work going into it, of course, but my current needs simply outweigh the costs of it. That’s why I went with Kitty’s a11y-dialog as my starting point.

Auditing <dialog> accessibility

Before trusting any particular dialog implementation, it’s worth making sure it fits the bill as far as your requirements go. With my requirements so heavily leaning on accessibility, that meant auditing a11y-dialog.

Accessibility audits are a profession of their own. And even if it’s not my everyday primary focus, I know there are some things that are worth doing, like running manual checks against my requirements, leaning on automated a11y tooling, and testing with screen readers (and, ideally, with real users).

This is quite a lot of work, as you might imagine (or know from experience). It’s tempting to take a path of less resistance and try automating things but, in a study conducted by Deque Systems, automated tooling can only catch about 57% of accessibility issues. There’s no substitute for good ol’ fashioned hard work.

The auditing environment

The dialog component can be tested in lots of places, including Storybook, CodePen, CodeSandbox, or whatever. For this particular test, though, I prefer instead to make a skeleton page and test locally. This way I’m preventing myself from having to validate the validators, so to speak. Having to use, say, a Storybook-specific add-on for a11y verification is fine if you’re already using Storybook on your own components, but it adds another layer of complexity when testing the accessibility of an external component.

A skeleton page can verify the dialog with manual checks, existing a11y tooling, and screen readers. If you’re following along, you’ll want to run this page via a local server. There are many ways to do that; one is to use a tool called serve, and npm even provides a nice one-liner npx serve <DIRECTORY> command to fire things up.

Let’s do an example audit together!

I’m obviously bullish on a11y-dialog here, so let’s put it to the test and verify it using some of the recommended approaches we’ve covered.

Again, all I’m doing here is starting with an HTML page. You can use the same one I am (complete with styles and scripts baked right in).

View full code
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>A11y Dialog Test</title>
    <style>
      .dialog-container {
        display: flex;
        position: fixed;
        top: 0;
        left: 0;
        bottom: 0;
        right: 0;
        z-index: 2;
      }

      .dialog-container[aria-hidden='true'] {
        display: none;
      }

      .dialog-overlay {
        position: fixed;
        top: 0;
        left: 0;
        bottom: 0;
        right: 0;
        background-color: rgb(43 46 56 / 0.9);
        animation: fade-in 200ms both;
      }

      .dialog-content {
        background-color: rgb(255, 255, 255);
        margin: auto;
        z-index: 2;
        position: relative;
        animation: fade-in 400ms 200ms both, slide-up 400ms 200ms both;
        padding: 1em;
        max-width: 90%;
        width: 600px;
        border-radius: 2px;
      }

      @media screen and (min-width: 700px) {
        .dialog-content {
          padding: 2em;
        }
      }

      @keyframes fade-in {
        from {
          opacity: 0;
        }
      }

      @keyframes slide-up {
        from {
          transform: translateY(10%);
        }
      }

      /* Note, for brevity we haven't implemented prefers-reduced-motion */

      .dialog h1 {
        margin: 0;
        font-size: 1.25em;
      }

      .dialog-close {
        position: absolute;
        top: 0.5em;
        right: 0.5em;
        border: 0;
        padding: 0;
        background-color: transparent;
        font-weight: bold;
        font-size: 1.25em;
        width: 1.2em;
        height: 1.2em;
        text-align: center;
        cursor: pointer;
        transition: 0.15s;
      }

      @media screen and (min-width: 700px) {
        .dialog-close {
          top: 1em;
          right: 1em;
        }
      }

      * {
        box-sizing: border-box;
      }

      body {
        font: 125% / 1.5 -apple-system, BlinkMacSystemFont, Segoe UI, Helvetica, Arial, sans-serif;
        padding: 2em 0;
      }

      h1 {
        font-size: 1.6em;
        line-height: 1.1;
        font-family: 'ESPI Slab', sans-serif;
        margin-bottom: 0;
      }

      main {
        max-width: 700px;
        margin: 0 auto;
        padding: 0 1em;
      }
    </style>
    <script defer src="https://cdn.jsdelivr.net/npm/a11y-dialog@7/dist/a11y-dialog.min.js"></script>
  </head>

  <body>
    <main>
      <div class="dialog-container" id="my-dialog" aria-hidden="true" aria-labelledby="my-dialog-title" role="dialog">
        <div class="dialog-overlay" data-a11y-dialog-hide></div>
        <div class="dialog-content" role="document">
          <button data-a11y-dialog-hide class="dialog-close" aria-label="Close this dialog window">
            ×
          </button>
          <a href="https://www.yahoo.com/" target="_blank">Rando Yahoo Link</a>

          <h1 id="my-dialog-title">My Title</h1>
          <p id="my-dialog-description">
            Some description of what's inside this dialog…
          </p>
        </div>
      </div>
      <button type="button" data-a11y-dialog-show="my-dialog">
        Open the dialog
      </button>
    </main>
    <script>
      // We need to ensure our deferred A11yDialog has
      // had a chance to do its thing ;-)
      window.addEventListener('DOMContentLoaded', (event) => {
        const dialogEl = document.getElementById('my-dialog')
        const dialog = new A11yDialog(dialogEl)
      });
    </script>
  </body>
</html>

I know, we’re ignoring a bunch of best practices (what, styles in the <head>?!) and combining all of the HTML, CSS, and JavaScript in one file. I won’t go into the details of the code as the focus here is testing for accessibility, but know that this test requires an internet connection as we are importing a11y-dialog from a CDN.

First, the manual checks

I served this one-pager locally and here are my manual check results:

Every check below passed unless noted otherwise.

  • It should close when clicking outside the dialog (on the backdrop) or when pressing the ESC key.
  • It ought to trap focus to prevent tabbing out of the component with a keyboard.
  • It should allow forward tabbing with TAB and backward tabbing with SHIFT+TAB.
  • It should return focus back to the previously focused element when closed.
  • It should correctly apply aria-* attributes and toggles. I verified this one “by eye” after inspecting the elements in the DevTools Elements panel.
  • It should provide Portals. Not applicable: this is only useful when implementing the element with React, Svelte, Vue, etc. We’ve statically placed it on the page with aria-hidden for this test.
  • It should support the alertdialog ARIA role for alert situations. To verify this one, you’ll need to do two things: first, remove data-a11y-dialog-hide from the overlay in the HTML so that it is <div class="dialog-overlay"></div>; then, replace the dialog role with alertdialog so that it becomes:

<div class="dialog-container" id="my-dialog" aria-hidden="true" aria-labelledby="my-dialog-title" aria-describedby="my-dialog-description" role="alertdialog">

Now, clicking on the overlay outside of the dialog box does not close the dialog, as expected.

  • It should prevent the underlying body from scrolling, if needed. I didn’t manually test this, but it is clearly available per the documentation.
  • It should avoid the common pitfalls that come with the native <dialog> element. This component does not rely on the native <dialog>, which means we’re good here.

Next, let’s use some a11y tooling

I used Lighthouse to test the component both on a desktop computer and a mobile device, in two different scenarios where the dialog is open by default, and closed by default.

a11y-dialog Lighthouse testing, score 100.

I’ve found that sometimes the tooling doesn’t account for DOM elements that are dynamically shown or hidden, so this test ensures I’m getting full coverage of both scenarios.

I also tested with IBM Equal Access Accessibility Checker. Generally, this tool will give you a red violation error if there’s anything egregiously wrong. It will also ask you to manually review certain items. As seen here, there are a couple of items for manual review, but no red violations.

a11y-dialog — tested with IBM Equal Access Accessibility Checker

Moving on to screen readers

Between my manual and tooling checks, I’m already feeling relatively confident that a11y-dialog is an accessible option for my dialog of choice. However, we ought to do our due diligence and consult a screen reader.

VoiceOver is the most convenient screen reader for me since I work on a Mac at the moment, but JAWS and NVDA are big names to look at as well. Like checking for UI consistency across browsers, it’s probably a good idea to test on more than one screen reader if you can.

VoiceOver caption over the a11y-modal example.

Here’s how I conducted the screen reader part of the audit with VoiceOver. Basically, I mapped out what actions needed testing and confirmed each one, like a script:

  • The dialog component’s trigger button is announced: “Entering A11y Dialog Test, web content.”
  • Pressing CTRL+ALT+Space opens the dialog: “Dialog. Some description of what’s inside this dialog. You are currently on a dialog, inside of web content.”
  • Pressing TAB moves focus to the component’s Close button: “Close this dialog button. You are currently on a button, inside of web content.”
  • Tabbing to the link element announces it: “Link, Rando Yahoo Link.”
  • Pressing the SPACE key while focused on the Close button closes the dialog component and returns focus to the last item that had it.

Testing with people

If you’re thinking we’re about to move on to testing with real people, I was unfortunately unable to find someone. If I had done this, though, I would have used a similar set of steps for them to run through while I observe, take notes, and ask a few questions about the general experience.

As you can see, a satisfactory audit involves a good deal of time and thought.

Fine, but I want to use a framework’s dialog component

That’s cool! Many frameworks have their own dialog component solution, so there’s lots to choose from. I don’t have some amazing spreadsheet audit of all the frameworks and libraries in the wild, and will spare you the work of evaluating them all.

Instead, here are some resources that might be good starting points and considerations for using a dialog component in some of the most widely used frameworks.

Disclaimer: I have not tested these personally. This is all stuff I found while researching.

Angular dialog options

In 2020, Deque published an article that audits Angular component libraries and the TL;DR was that Material (and its Angular/CDK library) and ngx-bootstrap both appear to provide decent dialog accessibility.

React dialog options

Reakit offers a dialog component that they claim is compliant with WAI-ARIA dialog guidelines, and chakra-ui appears to pay attention to its accessibility. Of course, Material is also available for React, so that’s worth a look as well. I’ve also heard good things about reach/dialog and Adobe’s @react-aria/dialog.

Vue dialog options

I’m a fan of Vuetensils, which is Austin Gil’s naked (aka headless) components library, which just so happens to have a dialog component. There’s also Vuetify, which is a popular Material implementation with a dialog of its own. I’ve also crossed paths with PrimeVue, but was surprised that its dialog component failed to return focus to the original element.

Svelte dialog options

You might want to look at svelte-headlessui. Material has a port in svelterial that is also worth a look. It seems that many current SvelteKit users prefer to build their own component sets as SvelteKit’s packaging idiom makes it super simple to do. If this is you, I would definitely recommend considering svelte-a11y-dialog as a convenient means to build custom dialogs, drawers, bottom sheets, etc.

I’ll also point out that my AgnosticUI library wraps the React, Vue, Svelte and Angular a11y-dialog adapter implementations we’ve been talking about earlier.

Bootstrap, of course

Bootstrap is still something many folks reach for, and unsurprisingly, it offers a dialog component. It requires you to follow some steps in order to make the modal accessible.

If you have other inclusive and accessible library-based dialog components that merit consideration, I’d love to know about them in the comments!

But I’m creating a custom design system

If you’re creating a design system or considering some other roll-your-own dialog approach, you can see just how many things need to be tested and taken into consideration… all for one component! It’s certainly doable to roll your own, of course, but I’d say it’s also extremely prone to error. You might ask yourself whether the effort is worthwhile when there are already battle-tested options to choose from.

I’ll simply leave you with something Scott O’Hara — co-editor of ARIA in HTML and HTML AAM specifications in addition to just being super helpful with all things accessibility — points out:

You could put in the effort to add in those extensions, or you could use a robust plugin like a11y-dialog and ensure that your dialogs will have a pretty consistent experience across all browsers.

Back to my objective…

I need that dialog to support React, Vue, Svelte, and Angular implementations.

I mentioned earlier that a11y-dialog already has ports for Vue and React. But the Vue port hasn’t yet been updated for Vue 3. Well, I was quite happy to spend the time I would have spent creating what likely would have been a buggy hand-rolled dialog component toward helping update the Vue port. I also added a Svelte port and one for Angular too. These are both very new and I would consider them experimental beta software at time of writing. Feedback welcome, of course!

It can support other components, too!

I think it’s worth pointing out that a dialog uses the same underlying concept for hiding and showing that can be used for a drawer (aka off-canvas) component. For example, if we borrow the CSS we used in our dialog accessibility audit and add a few additional classes, then a11y-dialog can be transformed into a working and effective drawer component:

.drawer-start { right: initial; }
.drawer-end { left: initial; }
.drawer-top { bottom: initial; }
.drawer-bottom { top: initial; }

.drawer-content {
  margin: initial;
  max-width: initial;
  width: 25rem;
  border-radius: initial;
}

.drawer-top .drawer-content,
.drawer-bottom .drawer-content {
  width: 100%;
}

These classes are used in an additive manner, essentially extending the base dialog component. This is exactly what I have started to do as I add my own drawer component to AgnosticUI. Saving time and reusing code FTW!
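To get a feel for how the additive classes apply, here’s roughly what the markup from the earlier audit page might look like once it’s repurposed as a right-hand drawer (my own sketch, so treat the exact class placement as an assumption rather than AgnosticUI’s actual API):

<div class="dialog-container drawer-end" id="my-drawer" aria-hidden="true" aria-labelledby="my-drawer-title" role="dialog">
  <div class="dialog-overlay" data-a11y-dialog-hide></div>
  <div class="dialog-content drawer-content" role="document">
    <button data-a11y-dialog-hide class="dialog-close" aria-label="Close this drawer">×</button>
    <h1 id="my-drawer-title">Drawer title</h1>
    <!-- drawer content -->
  </div>
</div>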

Wrapping up

Hopefully I’ve given you a good idea of the thinking process that goes into the making and maintenance of a component library. Could I have hand-rolled my own dialog component for the library? Absolutely! But I doubt it would have yielded better results than what a resource like Kitty’s a11y-dialog does, and the effort is daunting. There’s something cool about coming up with your own solution — and there may be good situations where you want to do that — but probably not at the cost of sacrificing something like accessibility.

Anyway, that’s how I arrived at my decision. I learned a lot about the native HTML <dialog> and its accessibility along the way, and I hope my journey gave you some of those nuggets too.



Verevolf

Zeldman:

You may not know his name, but he played a huge part in creating the web you take for granted today. And he’s back—kind of.

That would be Glenn Davis and the Verevolf site Zeldman’s talking about. The site is a growing archive of Davis’s personal (and unvarnished) recollections pioneering the early web, like the one where he recounts the origin story of his uber-famous Cool Site of the Day.

Credit: Web Design Museum

Or how an email he wrote out of frustration snowballed into the Web Standards Project.

Personal retellings of history can be fraught with emotion and Davis’s posts are no exception. What’s super cool is how his experiences augment other retellings, including Chapter 7 of our own Web History series where Jay Hoffman documents the evolution of web standards.


Inline Image Previews with Sharp, BlurHash, and Lambda Functions

Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.

I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.

Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.

So you mean progressive JPEGs?

You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.

This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.

You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.

Let’s look at two alternatives for this.

The libraries we’ll be using

Before we start, I’d like to call out the versions of the libraries I’ll be using for this post:

Making our own previews

Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
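If you haven’t bumped into that before, an inline Base64 image is just a data: URI in the src attribute (the string below is truncated and purely illustrative):

<img alt="Example" src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAA..." />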

But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.

Let’s see how.

Generating our preview

For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.

Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then create a blurry preview of it:

// Assumes Jimp has been imported, e.g. const Jimp = require("jimp");
function resizeImage(src, maxWidth, quality) {
  return new Promise<ResizeImageResult>(res => {
    Jimp.read(src, async function (err, image) {
      if (image.bitmap.width > maxWidth) {
        image.resize(maxWidth, Jimp.AUTO);
      }
      image.quality(quality);

      // Clone the resized image, degrade and blur it, then grab a Base64 preview
      const previewImage = image.clone();
      previewImage.quality(25).blur(8);
      const preview = await previewImage.getBase64Async(previewImage.getMIME());

      res({ STATUS: "success", image, preview });
    });
  });
}

For this post, I’ll be using this image provided by Flickr Commons:

Photo of the Big Boy statue holding a burger.

And here’s what the preview looks like:

Blurry version of the Big Boy statue.

If you’d like to take a closer look, here’s the same preview in a CodeSandbox.

Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.

Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:

const Landmark = ({ url, preview = "" }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);

  useEffect(() => {
    // make sure the image src is added after the onload handler
    if (imgRef.current) {
      imgRef.current.src = url;
    }
  }, [url, imgRef, preview]);

  return (
    <>
      <Preview loaded={loaded} preview={preview} />
      <img
        ref={imgRef}
        // the setTimeout artificially delays the swap so the preview stays visible for demo purposes
        onLoad={() => setTimeout(() => setLoaded(true), 3000)}
        style={{ display: loaded ? "block" : "none" }}
      />
    </>
  );
};

const Preview: FunctionComponent<LandmarkPreviewProps> = ({ preview, loaded }) => {
  if (loaded) {
    return null;
  } else if (typeof preview === "string") {
    return <img key="landmark-preview" alt="Landmark preview" src={preview} style={{ display: "block" }} />;
  } else {
    return <PreviewCanvas preview={preview} loaded={loaded} />;
  }
};

Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.

Note that we set the image component’s src after the onLoad handler to ensure it fires. We show the preview, and when the real image loads, we swap it in.

Improving things with BlurHash

The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.

BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:

First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.

Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.

The README also states how Signal and Mastodon use BlurHash.

Let’s see it in action.

Generating blurhash previews

For this, we’ll need to use the Sharp library.


Note

To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.

Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.

To run this code in an AWS Lambda, you’ll need to install the library like this:

"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"

And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.


This code will look at the size of the image and create a blurhash preview:

import { encode, isBlurhashValid } from "blurhash";
const sharp = require("sharp");

export async function getBlurhashPreview(src) {
  const image = sharp(src);
  const dimensions = await image.metadata();

  return new Promise(res => {
    const { width, height } = dimensions;

    image
      .raw()
      .ensureAlpha()
      .toBuffer((err, buffer) => {
        const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);
        if (isBlurhashValid(blurhash)) {
          return res({ blurhash, w: width, h: height });
        } else {
          return res(null);
        }
      });
  });
}

Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for RGB and Alpha.

Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.

Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.

The real work happens here:

const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);

We’re calling blurhash‘s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY, which from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.

Let’s see what this produces for that same image:

{
  "blurhash" : "UAA]{ox^0eRiO_bJjdn~9#M_=|oLIUnzxtNG",
  "w" : 276,
  "h" : 400
}

That’s incredibly small! The tradeoff is that using this preview is a bit more involved.

Basically, we need to call blurhash‘s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before and why we were rendering it if the type of our preview was not a string: our blurhash previews use an entire object — containing not only the preview string but also the image dimensions.

Let’s look at our PreviewCanvas component:

const PreviewCanvas: FunctionComponent<CanvasPreviewProps> = ({ preview }) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useLayoutEffect(() => {
    const pixels = decode(preview.blurhash, preview.w, preview.h);
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};

Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.

Let’s see what the image previews look like:

In a sense, it’s less detailed than our previous previews. But I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.

Test and use what works best for you.

Wrapping up

There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.

A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.

Happy coding!



An Interactive Starry Backdrop for Content

I was fortunate last year to get approached by Shawn Wang (swyx) about doing some work for Temporal. The idea was to cast my creative eye over what was on the site and come up with some ideas that would give the site a little “something” extra. This was quite a neat challenge as I consider myself more of a developer than a designer. But I love learning and leveling up the design side of my game.

One of the ideas I came up with was this interactive starry backdrop. You can see it working in this shared demo:

The neat thing about this design is that it’s built as a drop-in React component. And it’s super configurable in the sense that once you’ve put together the foundations for it, you can make it completely your own. Don’t want stars? Put something else in place. Don’t want randomly positioned particles? Place them in a constructed way. You have total control to bend it to your will.

So, let’s look at how we can create this drop-in component for your site! Today’s weapons of choice? React, GreenSock and HTML <canvas>. The React part is totally optional, of course, but, having this interactive backdrop as a drop-in component makes it something you can employ on other projects.

Let’s start by scaffolding a basic app

import React from 'https://cdn.skypack.dev/react'
import ReactDOM from 'https://cdn.skypack.dev/react-dom'
import gsap from 'https://cdn.skypack.dev/gsap'

const ROOT_NODE = document.querySelector('#app')

const Starscape = () => <h1>Cool Thingzzz!</h1>

const App = () => <Starscape/>

ReactDOM.render(<App/>, ROOT_NODE)

First thing we need to do is render a <canvas> element and grab a reference to it that we can use within React’s useEffect. For those not using React, store a reference to the <canvas> in a variable instead.

const Starscape = () => {
  const canvasRef = React.useRef(null)
  return <canvas ref={canvasRef} />
}

Our <canvas> is going to need some styles, too. For starters, we can make it so the canvas takes up the full viewport size and sits behind the content:

canvas {
  position: fixed;
  inset: 0;
  background: #262626;
  z-index: -1;
  height: 100vh;
  width: 100vw;
}

Cool! But not much to see yet.

We need stars in our sky

We’re going to “cheat” a little here. We aren’t going to draw the “classic” pointy star shape. We’re going to use circles of differing opacities and sizes.

Drawing a circle on a <canvas> is a case of grabbing a context from the <canvas> and using the arc function. Let’s render a circle, err star, in the middle. We can do this within a React useEffect:

const Starscape = () => {
  const canvasRef = React.useRef(null)
  const contextRef = React.useRef(null)
  React.useEffect(() => {
    canvasRef.current.width = window.innerWidth
    canvasRef.current.height = window.innerHeight
    contextRef.current = canvasRef.current.getContext('2d')
    contextRef.current.fillStyle = 'yellow'
    contextRef.current.beginPath()
    contextRef.current.arc(
      window.innerWidth / 2, // X
      window.innerHeight / 2, // Y
      100, // Radius
      0, // Start Angle (Radians)
      Math.PI * 2 // End Angle (Radians)
    )
    contextRef.current.fill()
  }, [])
  return <canvas ref={canvasRef} />
}

So what we have is a big yellow circle:

This is a good start! The rest of our code will take place within this useEffect function. That’s why the React part is kinda optional. You can extract this code out and use it in whichever form you like.

We need to think about how we’re going to generate a bunch of “stars” and render them. Let’s create a LOAD function. This function is going to handle generating our stars as well as the general <canvas> setup. We can also move the <canvas> sizing logic into this function:

const LOAD = () => {
  const VMIN = Math.min(window.innerHeight, window.innerWidth)
  const STAR_COUNT = Math.floor(VMIN * densityRatio)
  canvasRef.current.width = window.innerWidth
  canvasRef.current.height = window.innerHeight
  starsRef.current = new Array(STAR_COUNT).fill().map(() => ({
    x: gsap.utils.random(0, window.innerWidth, 1),
    y: gsap.utils.random(0, window.innerHeight, 1),
    size: gsap.utils.random(1, sizeLimit, 1),
    scale: 1,
    alpha: gsap.utils.random(0.1, defaultAlpha, 0.1),
  }))
}

Our stars are now an array of objects. And each star has properties that define its characteristics, including:

  • x: The star’s position on the x-axis
  • y: The star’s position on the y-axis
  • size: The star’s size, in pixels
  • scale: The star’s scale, which will come into play when we interact with the component
  • alpha: The star’s alpha value, or opacity, which will also come into play during interactions

We can use GreenSock’s random() method to generate some of these values. You may also be wondering where sizeLimit, defaultAlpha, and densityRatio came from. These are now props we can pass to the Starscape component. We’ve provided some default values for them:

const Starscape = ({ densityRatio = 0.5, sizeLimit = 5, defaultAlpha = 0.5 }) => {

A randomly generated star Object might look like this:

{
  "x": 1252,
  "y": 29,
  "size": 4,
  "scale": 1,
  "alpha": 0.5
}

But, we need to see these stars and we do that by rendering them. Let’s create a RENDER function. This function will loop over our stars and render each of them onto the <canvas> using the arc function:

const RENDER = () => {
  contextRef.current.clearRect(
    0,
    0,
    canvasRef.current.width,
    canvasRef.current.height
  )
  starsRef.current.forEach(star => {
    contextRef.current.fillStyle = `hsla(0, 100%, 100%, ${star.alpha})`
    contextRef.current.beginPath()
    contextRef.current.arc(star.x, star.y, star.size / 2, 0, Math.PI * 2)
    contextRef.current.fill()
  })
}

Now, we don’t need that clearRect function for our current implementation as we are only rendering once onto a blank <canvas>. But clearing the <canvas> before rendering anything isn’t a bad habit to get into, and it’s one we’ll need as we make our canvas interactive.

Consider this demo that shows the effect of not clearing between frames.

Our Starscape component is starting to take shape.

See the code
const Starscape = ({ densityRatio = 0.5, sizeLimit = 5, defaultAlpha = 0.5 }) => {
  const canvasRef = React.useRef(null)
  const contextRef = React.useRef(null)
  const starsRef = React.useRef(null)
  React.useEffect(() => {
    contextRef.current = canvasRef.current.getContext('2d')
    const LOAD = () => {
      const VMIN = Math.min(window.innerHeight, window.innerWidth)
      const STAR_COUNT = Math.floor(VMIN * densityRatio)
      canvasRef.current.width = window.innerWidth
      canvasRef.current.height = window.innerHeight
      starsRef.current = new Array(STAR_COUNT).fill().map(() => ({
        x: gsap.utils.random(0, window.innerWidth, 1),
        y: gsap.utils.random(0, window.innerHeight, 1),
        size: gsap.utils.random(1, sizeLimit, 1),
        scale: 1,
        alpha: gsap.utils.random(0.1, defaultAlpha, 0.1),
      }))
    }
    const RENDER = () => {
      contextRef.current.clearRect(
        0,
        0,
        canvasRef.current.width,
        canvasRef.current.height
      )
      starsRef.current.forEach(star => {
        contextRef.current.fillStyle = `hsla(0, 100%, 100%, ${star.alpha})`
        contextRef.current.beginPath()
        contextRef.current.arc(star.x, star.y, star.size / 2, 0, Math.PI * 2)
        contextRef.current.fill()
      })
    }
    LOAD()
    RENDER()
  }, [])
  return <canvas ref={canvasRef} />
}

Have a play around with the props in this demo to see how they affect the way stars are rendered.

Before we go further, you may have noticed a quirk in the demo where resizing the viewport distorts the <canvas>. As a quick win, we can rerun our LOAD and RENDER functions on resize. In most cases, we’ll want to debounce this, too. We can add the following code into our useEffect call. Note how we also remove the event listener in the teardown.

// Naming things is hard...
const RUN = () => {
  LOAD()
  RENDER()
}

RUN()

// Set up event handling
window.addEventListener('resize', RUN)
return () => {
  window.removeEventListener('resize', RUN)
}
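As for the debouncing mentioned above, a minimal sketch might look like this, assuming a plain setTimeout-based approach rather than a utility library:

// Wait for resizing to settle before regenerating and re-rendering
let resizeTimer
const DEBOUNCED_RUN = () => {
  clearTimeout(resizeTimer)
  resizeTimer = setTimeout(RUN, 250)
}

window.addEventListener('resize', DEBOUNCED_RUN)
return () => {
  window.removeEventListener('resize', DEBOUNCED_RUN)
}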

Cool. Now when we resize the viewport, we get a newly generated starry backdrop.

Interacting with the starry backdrop

Now for the fun part! Let’s make this thing interactive.

The idea is that as we move our pointer around the screen, we detect the proximity of the stars to the mouse cursor. Depending on that proximity, the stars both brighten and scale up.

We’re going to need to add another event listener to pull this off. Let’s call this UPDATE. This will work out the distance between the pointer and each star, then tween each star’s scale and alpha values. To make sure those tweened values are correct, we can use GreenSock’s mapRange() utility. In fact, inside our LOAD function, we can create references to some mapping functions as well as a size unit, then share these between the functions if we need to.
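If you haven’t used it before, gsap.utils.mapRange(inMin, inMax, outMin, outMax) returns a reusable mapping function when you leave off the value argument. Here’s a tiny sketch with made-up numbers:

// Map a distance of 0–100px onto a scale of 2–1
const scaleMapper = gsap.utils.mapRange(0, 100, 2, 1)

scaleMapper(0)   // 2: pointer is right on top of the star
scaleMapper(50)  // 1.5: halfway into the proximity range
scaleMapper(100) // 1: at the edge of the range, back to the default scale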

Here’s our new LOAD function. Note the new props for scaleLimit and proximityRatio. They are used to limit the range of how big or small a star can get, plus the proximity at which to base that on.

const Starscape = ({
  densityRatio = 0.5,
  sizeLimit = 5,
  defaultAlpha = 0.5,
  scaleLimit = 2,
  proximityRatio = 0.1
}) => {
  const canvasRef = React.useRef(null)
  const contextRef = React.useRef(null)
  const starsRef = React.useRef(null)
  const vminRef = React.useRef(null)
  const scaleMapperRef = React.useRef(null)
  const alphaMapperRef = React.useRef(null)

  React.useEffect(() => {
    contextRef.current = canvasRef.current.getContext('2d')
    const LOAD = () => {
      vminRef.current = Math.min(window.innerHeight, window.innerWidth)
      const STAR_COUNT = Math.floor(vminRef.current * densityRatio)
      scaleMapperRef.current = gsap.utils.mapRange(
        0,
        vminRef.current * proximityRatio,
        scaleLimit,
        1
      );
      alphaMapperRef.current = gsap.utils.mapRange(
        0,
        vminRef.current * proximityRatio,
        1,
        defaultAlpha
      );
      canvasRef.current.width = window.innerWidth
      canvasRef.current.height = window.innerHeight
      starsRef.current = new Array(STAR_COUNT).fill().map(() => ({
        x: gsap.utils.random(0, window.innerWidth, 1),
        y: gsap.utils.random(0, window.innerHeight, 1),
        size: gsap.utils.random(1, sizeLimit, 1),
        scale: 1,
        alpha: gsap.utils.random(0.1, defaultAlpha, 0.1),
      }))
    }
  }

And here’s our UPDATE function. It calculates the distance and generates an appropriate scale and alpha for a star:

const UPDATE = ({ x, y }) => {
  starsRef.current.forEach(STAR => {
    const DISTANCE = Math.sqrt(Math.pow(STAR.x - x, 2) + Math.pow(STAR.y - y, 2));
    gsap.to(STAR, {
      scale: scaleMapperRef.current(
        Math.min(DISTANCE, vminRef.current * proximityRatio)
      ),
      alpha: alphaMapperRef.current(
        Math.min(DISTANCE, vminRef.current * proximityRatio)
      )
    });
  })
};

But wait… it doesn’t do anything?

Well, it does. But we haven’t set our component up to show updates. We need to render new frames as we interact. We could reach for requestAnimationFrame, but because we’re using GreenSock, we can make use of gsap.ticker. This is often referred to as “the heartbeat of the GSAP engine” and it’s a good substitute for requestAnimationFrame.

To use it, we add the RENDER function to the ticker and make sure we remove it in the teardown. One of the neat things about using the ticker is that we can dictate the number of frames per second (fps). I like to go with a “cinematic” 24fps:

// Remove RUN
LOAD()
gsap.ticker.add(RENDER)
gsap.ticker.fps(24)

window.addEventListener('resize', LOAD)
document.addEventListener('pointermove', UPDATE)
return () => {
  window.removeEventListener('resize', LOAD)
  document.removeEventListener('pointermove', UPDATE)
  gsap.ticker.remove(RENDER)
}

Note how we’re now also running LOAD on resize. We also need to make sure our scale is being picked up in that RENDER function when using arc:

const RENDER = () => {
  contextRef.current.clearRect(
    0,
    0,
    canvasRef.current.width,
    canvasRef.current.height
  )
  starsRef.current.forEach(star => {
    contextRef.current.fillStyle = `hsla(0, 100%, 100%, ${star.alpha})`
    contextRef.current.beginPath()
    contextRef.current.arc(
      star.x,
      star.y,
      (star.size / 2) * star.scale,
      0,
      Math.PI * 2
    )
    contextRef.current.fill()
  })
}

It works! 🙌

It’s a very subtle effect. But that’s intentional because, while it’s super neat, we don’t want this sort of thing to distract from the actual content. I’d recommend playing with the props for the component to see different effects. It makes sense to set all the stars to a low alpha by default, too.

The following demo allows you to play with the different props. I’ve gone for some pretty standout defaults here for the sake of demonstration! But remember, this article is more about showing you the techniques so you can go off and make your own cool backdrops — while being mindful of how it interacts with content.

Refinements

There is one issue with our interactive starry backdrop. If the mouse cursor leaves the <canvas>, the stars stay bright and upscaled but we want them to return to their original state. To fix this, we can add an extra handler for pointerleave. When the pointer leaves, this tweens all of the stars down to scale 1 and the original alpha value set by defaultAlpha.

const EXIT = () => {
  gsap.to(starsRef.current, {
    scale: 1,
    alpha: defaultAlpha,
  })
}

// Set up event handling
window.addEventListener('resize', LOAD)
document.addEventListener('pointermove', UPDATE)
document.addEventListener('pointerleave', EXIT)
return () => {
  window.removeEventListener('resize', LOAD)
  document.removeEventListener('pointermove', UPDATE)
  document.removeEventListener('pointerleave', EXIT)
  gsap.ticker.remove(RENDER)
}

Neat! Now our stars scale back down and return to their previous alpha when the mouse cursor leaves the scene.

Bonus: Adding an Easter egg

Before we wrap up, let’s add a little Easter egg surprise to our interactive starry backdrop. Ever heard of the Konami Code? It’s a famous cheat code and a cool way to add an Easter egg to our component.

We can practically do anything with the backdrop once the code runs. Like, we could make all the stars pulse in a random way for example. Or they could come to life with additional colors? It’s an opportunity to get creative with things!

We’re going to listen for keyboard events and detect whether the code gets entered. Let’s start by creating a variable for the code:

const KONAMI_CODE =   'arrowup,arrowup,arrowdown,arrowdown,arrowleft,arrowright,arrowleft,arrowright,keyb,keya';

Then we create a second effect within our starry backdrop component. This is a good way to maintain a separation of concerns in that one effect handles all the rendering, and the other handles the Easter egg. Specifically, we’re listening for keyup events and checking whether our input matches the code.

const codeRef = React.useRef([])
React.useEffect(() => {
  const handleCode = e => {
    codeRef.current = [...codeRef.current, e.code]
      .slice(
        codeRef.current.length > 9 ? codeRef.current.length - 9 : 0
      )
    if (codeRef.current.join(',').toLowerCase() === KONAMI_CODE) {
      // Party in here!!!
    }
  }
  window.addEventListener('keyup', handleCode)
  return () => {
    window.removeEventListener('keyup', handleCode)
  }
}, [])

We store the user input in an Array that we keep inside a ref. Once we hit the party code, we can clear the Array and do whatever we want. For example, we may create a gsap.timeline that does something to our stars for a given amount of time. If this is the case, we don’t want to allow the Konami code to retrigger while the timeline is active. Instead, we can store the timeline in a ref and make another check before running the party code.

const partyRef = React.useRef(null)
const isPartying = () =>
  partyRef.current &&
  partyRef.current.progress() !== 0 &&
  partyRef.current.progress() !== 1;
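Here’s a rough sketch of how that guard might slot into the handleCode handler from earlier (not the demo’s exact code):

if (codeRef.current.join(',').toLowerCase() === KONAMI_CODE) {
  // Reset the input buffer either way
  codeRef.current = []
  if (!isPartying()) {
    // Safe to kick off a new party timeline here, e.g. build partyRef.current
  }
}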

For this example, I’ve created a little timeline that colors each star and moves it to a new position. This requires updating our LOAD and RENDER functions.

First, we need each star to now have its own hue, saturation and lightness:

// Generating stars! ⭐️
starsRef.current = new Array(STAR_COUNT).fill().map(() => ({
  hue: 0,
  saturation: 0,
  lightness: 100,
  x: gsap.utils.random(0, window.innerWidth, 1),
  y: gsap.utils.random(0, window.innerHeight, 1),
  size: gsap.utils.random(1, sizeLimit, 1),
  scale: 1,
  alpha: defaultAlpha
}));

Second, we need to take those new values into consideration when rendering takes place:

starsRef.current.forEach((star) => {
  contextRef.current.fillStyle = `hsla(
    ${star.hue},
    ${star.saturation}%,
    ${star.lightness}%,
    ${star.alpha}
  )`;
  contextRef.current.beginPath();
  contextRef.current.arc(
    star.x,
    star.y,
    (star.size / 2) * star.scale,
    0,
    Math.PI * 2
  );
  contextRef.current.fill();
});

And here’s the fun bit of code that moves all the stars around:

partyRef.current = gsap.timeline().to(starsRef.current, {
  scale: 1,
  alpha: defaultAlpha
});

const STAGGER = 0.01;

for (let s = 0; s < starsRef.current.length; s++) {
  partyRef.current.to(
    starsRef.current[s],
    {
      onStart: () => {
        gsap.set(starsRef.current[s], {
          hue: gsap.utils.random(0, 360),
          saturation: 80,
          lightness: 60,
          alpha: 1,
        })
      },
      onComplete: () => {
        gsap.set(starsRef.current[s], {
          saturation: 0,
          lightness: 100,
          alpha: defaultAlpha,
        })
      },
      x: gsap.utils.random(0, window.innerWidth),
      y: gsap.utils.random(0, window.innerHeight),
      duration: 0.3
    },
    s * STAGGER
  );
}

From there, we generate a new timeline and tween the values of each star. These new values get picked up by RENDER. We’re adding a stagger by positioning each tween in the timeline using GSAP’s position parameter.

That’s it!

That’s one way to make an interactive starry backdrop for your site. We combined GSAP and an HTML <canvas>, and even sprinkled in some React that makes it more configurable and reusable. We even dropped an Easter egg in there!

Where can you take this component from here? How might you use it on a site? The combination of GreenSock and <canvas> is a lot of fun and I’m looking forward to seeing what you make! Here are a couple more ideas to get your creative juices flowing…


An Interactive Starry Backdrop for Content originally published on CSS-Tricks. You should get the newsletter.

CSS-Tricks


Improving Icons for UI Elements with Typographic Alignment and Scale

Utilizing icons in user interface elements is helpful. In addition to labeling an element, icons can help reinforce its intention to users. But I have to say, I notice a bit of icon misalignment while browsing the web. Even if the icon’s alignment is correct, icons often do not respond well when typographic styles for the element change.

I took note of a couple of real-world examples and I’d like to share my thoughts on how I improved them. It’s my hope these techniques can help others build user interface elements that better accommodate typographic changes while upholding the original goals of the design.

Example 1 — Site messaging

I found this messaging example on a popular media website. The icon’s position doesn’t look so bad. But when changing some of the element’s style properties like font-size and line-height, it begins to unravel.

Identified issues

  • the icon is absolutely positioned from the left edge using a relative unit (rem)
  • because the icon is taken out of the flow, the parent is given a larger padding-left value to help with overall spacing – ideally, our padding-x is uniform, and everything looks good whether or not an icon is present
  • the icon (it’s an SVG) is also sized in rems – this doesn’t allow for respective resizing if its parent’s font-size changes

Recommendations

Screenshot of the site messaging element. It is overlaid with a red-dashed line indicating the icon's top edge and a blue-dashed line indicating the text's topmost point. The red-dashed line is slightly higher than the blue-dashed line.
Indicating the issues with aligning the icon and typography.

We want our icon’s top edge to be at the blue dashed line, but we often find our icon’s top edge at the red dashed line.

Have you ever inserted an icon next to some text and it just won’t align to the top of the text? You may move the icon into place with something like position: relative; top: 0.2em. This works well enough, but if typographic styles change in the future, your icon could look misaligned.

We can position our icon more reliably. Let’s use the element’s baseline distance (the distance from one line’s baseline to the next line’s baseline) to help solve this.

Screenshot of the site messaging element. It is overlaid with arrows indicating the baseline distance from the baseline of one line to the next line's baseline.
Calculating the baseline distance.

Baseline distance is font-size * line-height.

We’ll store that in a CSS custom property:

--baselineDistance: calc(var(--fontSize) * var(--lineHeight));

We can then move our icon down using the result of (baseline distance – font size) / 2.

--iconOffset: calc((var(--baselineDistance) - var(--fontSize)) / 2);

With a font-size of 1rem (16px) and line-height of 1.5, our icon will be moved 4 pixels.

  • baseline distance = 16px * 1.5 = 24px
  • icon offset = (24px – 16px) / 2 = 4px
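Pulling that together, one possible way to apply the offset is something like the following sketch. The class names and the absolutely-positioned icon here are assumptions for illustration, not the exact CSS from the demo:

.message {
  --fontSize: 1rem;
  --lineHeight: 1.5;
  --baselineDistance: calc(var(--fontSize) * var(--lineHeight));
  --iconOffset: calc((var(--baselineDistance) - var(--fontSize)) / 2);
  position: relative;
  font-size: var(--fontSize);
  line-height: var(--lineHeight);
}

.message svg {
  position: absolute;
  top: var(--iconOffset); /* 4px with the values above */
  width: 1em;  /* sized in em so the icon scales with the text */
  height: 1em;
}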

Demo: before and after

Example 2 – unordered lists

The second example I found is an unordered list. It uses a web font (Font Awesome) for its icon via a ::before pseudo-element. There have been plenty of great articles on styling both ordered and unordered lists, so I won’t go into details about the relatively new ::marker pseudo-element and such. Web fonts can generally work pretty well with icon alignment depending on the icon used.

Identified issues

  • no absolute positioning is used – when using pseudo-elements, we don’t often use flexbox like in our first example, and absolute positioning shines here
  • the list item uses a combination of padding and negative text-indent to help with layout – I am never able to get this to work well when accounting for multi-line text and icon scalability

Recommendations

Because we’ll also use a pseudo-element in our solution, we’ll leverage absolute positioning. This example’s icon size was a bit larger than its adjacent copy (about 2x). Because of this, we will alter how we calculate the icon’s top position. The center of our icon should align vertically with the center of the first line.

Start with the baseline distance calculation:

--baselineDistance: calc(var(--fontSize) * var(--lineHeight));

Move the icon down using the result of (baseline distance – icon size) / 2.

--iconOffset: calc((var(--baselineDistance) - var(--iconSize)) / 2);

So with a font-size of 1rem (16px), a line-height of 1.6, and an icon sized 2x the copy (32px), our icon will get a top value of -3.2 pixels.

  • baseline distance = 16px * 1.6 = 25.6px
  • icon offset = (25.6px – 32px) / 2 = -3.2px

With a larger font-size of 2rem (32px), a line-height of 1.2, and a 64px icon, our icon will get a top value of -12.8 pixels.

  • baseline distance = 32px * 1.2 = 38.4px
  • icon offset = (38.4px – 64px) / 2 = -12.8px

Demo: before and after

Conclusion

For user interface icons, we have a lot of options and techniques. We have SVGs, web fonts, static images, ::marker, and list-style-type. One could even use background-colors and clip-paths to achieve some interesting icon results. Performing some simple calculations can help align and scale icons in a more graceful manner, resulting in implementations that are a bit more bulletproof.

See also: Previous discussion on aligning icon to text.


Improving Icons for UI Elements with Typographic Alignment and Scale originally published on CSS-Tricks. You should get the newsletter.

CSS-Tricks


Creating Style Variations in WordPress Block Themes

Global styles, a feature of block themes, is one of my favorite parts of creating block themes. The concept of global style variations in WordPress was introduced in Gutenberg 12.5, allowing theme authors to create alternate variations of a block theme with different combinations of colors, fonts, typography, spacing, and so on. Different theme.json files stored under the /styles folder “lets users quickly and easily switch between different looks in the same theme.”

The global styles panel UI is in active development iteration. More details on the development of this feature can be found and tracked here at this GitHub ticket (#35619).

In this article, I will walk through creating a proof-of-concept global style variation using alternate /styles/theme.json files and create child themes with different color modes by swapping color palettes only.

Table of contents

Prerequisites

This article is intended for those who have basic understanding of WordPress block themes and some familiarity of using Full Site Editor (FSE) interface. If you’re new to block themes and the FSE, you can get started here on CSS-Tricks with this deep introduction to WordPress block themes and site editor documentation. This Full Site Editing website is one of the most up-to-date tutorial guides to learn all FSE features including block themes and styles variations discussed in this article.

Global style variations

For some background, let’s briefly overview global style variations. Twenty Twenty-Two (TT2) theme lead and Automattic design director Kjell Reigstad introduced global style variations with this tweet and GitHub ticket #292 as child themes. In the ticket, Kjell notes that they were initially intended as alternate color patterns and font combinations, but they can be used for building simple child themes.

This example from Kjell demonstrates how different style combinations could be selected from options available in the sidebar.

Since then, the Automattic theme team has been experimenting with the concept to create variable child themes (variable color and fonts only), including the following:

  • geologist with blue, cream, slate, yellow variations
  • quadrat with black, green, red, white, and yellow versions

Global style switcher

The Gutenberg 12.5 release introduced a global styles switcher that allows users to quickly and easily switch between different looks in the same theme via different theme.json files stored under a /styles folder.

The concept of switching global style variations via theme.json has been discussed on GitHub for a while now. Gutenberg lead engineer Matias Ventura gave it renewed importance by adding it to the WordPress 6.0 roadmap recently.

Embrace style alternates driven by json variations. This was teased in various videos around the new default theme and should be fully unveiled and presented in 6.0. One of the parallel goals is to create a few distinct variations of TT2 made just with styles. (35619)

Matias Ventura, “Preliminary Roadmap to 6.0”

The latest iteration of the theme style variation switcher is available with Gutenberg 13.0 and is included in WordPress 6.0. In this Exploring WordPress 6.0 video, Automattic product liaison Anne McCarthy provides an overview of its major features, including the style variations and Webfonts API (starting at 5:18) discussed in this article.

Theme style variation versus child theme

In my previous article, I briefly covered building block child themes. Global style variations have blurred the line between alternate-theme.json and child themes. For example, the only difference between a recently released alante-dark child theme and its parent theme is an alternate.json file in the child theme that overrides the global theme styles like this:

Screenshot of the Visual Studio Code UI displaying the contents of alante-dark.
The alante-dark theme.

Likewise, the two recent Alara child themes in the WordPress directoryFramboise and Richmond — differ only in their single theme.json file.

Section 1: Creating theme style variations

At the root of your child theme folder, create a /styles folder, which holds style variations as JSON files. For this demo example, I created three variations of Twenty Twenty-Two’s theme.json color palettes — blue.json, maroon.json, and pink.json — by swapping the foreground and background colors:

Screenshot of the Visual Studio Code UI displaying the child theme file structure of "blue.json", "maroon.json", and "pink.json" in the styles directory.
The child theme file structure of “blue.json”, “maroon.json”, and “pink.json” in the styles directory.

Here is the final result after clicking the styles icon from the admin dashboard (located at Appearance → Editor):

Animated GIF showing the theme variations in WordPress.
Walking through the WordPress admin interface to select the “blue”, “maroon”, and “pink” styles.

Click the Other Styles button (recently revised to Browse styles), which displays “blue”, “maroon”, and “pink” color style icons in addition to the theme’s original styles.

To choose and change a style, select your preferred variation and click the Save button (top right); the change is then displayed on the front end in your browser.

Adding labels to alternate style variations (instead of just showing the file name), along with a hover animation effect, is available in Gutenberg 13.0.

Step 1: Setup and installation

First, install and set up a WordPress site with some dummy content. For this demo, I made a fresh WordPress install, activated Twenty Twenty-Two theme, and added Gutenberg test data.

The theme style variations and WebFonts API discussed in this article require installation and activation of the Gutenberg 13.0 plugin or WordPress 6.0.

Step 2: Create a TT2 child theme

In this demo child theme example, let’s slightly vary the body color from the header and footer color, with all site content centered:

The lower part of the site design are not visible because it is not scrolled into view. Site navigation is present in the header. A large banner image with a bird is visible. A date and title for the latest blog entry is also visible.
Screenshot of the default appearance of the demo theme in a browser window.

Step 3: Create JSON files

Create a /styles folder in your child theme’s root folder with blue.json, maroon.json, and pink.json files:

__ style.css
__ theme.json
__ functions.php
__ index.php
__ templates
   __ ...
__ parts
   __ ...
__ styles
   __ blue.json
   __ maroon.json
   __ pink.json

Step 4: Create alternate theme JSON files

Next up, create your alternate theme.json files with the desired color palettes under the /styles folder. For this demo example, I created three color palettes (blue, maroon, and pink). Here is the code for maroon.json:

{
  "version": 2,
  "title": "Maroon",
  "settings": {
    "color": {
      "palette": [
        { "slug": "foreground", "color": "#7C290F", "name": "Foreground" },
        { "slug": "background", "color": "#ffffff", "name": "Background" },
        { "slug": "foreground-dark", "color": "#000000", "name": "Foreground Dark" },
        { "slug": "background-body", "color": "#ffd8be", "name": "Background Body" },
        { "slug": "primary", "color": "#000000", "name": "Primary" },
        { "slug": "secondary", "color": "#ffe2c7", "name": "Secondary" },
        { "slug": "tertiary", "color": "#55ACEE", "name": "Tertiary" }
      ]
    },
    "typography": {}
  },
  "styles": {
    "color": {
      "background": "var(--wp--preset--color--background-body)",
      "text": "var(--wp--preset--color--foreground-dark)"
    },
    "elements": {
      "link": {
        "color": { "text": "var(--wp--preset--color--primary)" }
      }
    }
  }
}

The other two files, blue.json and pink.json, swap the values of the foreground, background-body, foreground-dark, and primary color properties with their respective blue and pink hex color values.
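For reference, the settings portion of blue.json might look something like this; the hex values below are placeholders for illustration rather than the exact colors from the demo:

{
  "version": 2,
  "title": "Blue",
  "settings": {
    "color": {
      "palette": [
        { "slug": "foreground", "color": "#0f2d7c", "name": "Foreground" },
        { "slug": "background", "color": "#ffffff", "name": "Background" },
        { "slug": "foreground-dark", "color": "#000000", "name": "Foreground Dark" },
        { "slug": "background-body", "color": "#bed8ff", "name": "Background Body" },
        { "slug": "primary", "color": "#000000", "name": "Primary" },
        { "slug": "secondary", "color": "#c7e2ff", "name": "Secondary" },
        { "slug": "tertiary", "color": "#55ACEE", "name": "Tertiary" }
      ]
    }
  }
}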

Section 2: An example of a use case

As I noted in my previous article, I have been working on block themes and using them for my own personal project site. Inspired by the theme style variations and Webfonts API features in Gutenberg plugin, I started tweaking my work-in-progress block theme with an alternate dark color mode and by configuring the Webfonts API.

In this section, I will walk you through how I created TT2 Gopher Blocks, a demo sibling of my work-in-progress block theme created for this article. The theme includes maroon, dark, and light color modes created using theme style variations and Webfonts API that became available with the Gutenberg 12.8 release.

Showing the homepage we are creating with style variations in WordPress.
Screenshot displaying a sample site using the TT2 Gopher theme with maroon default color.

Some highlights of the TT2 Gopher theme include centered, single-column content display, distinct header and footer, more user-friendly archive and search pages.

A copy of TT2 Gopher Blocks is available at the GitHub repository, which you can fork and customize.

Creating dark mode on WordPress

First, some background on dark mode. Dark mode is a personal preference, and offering it (or other mode toggle switches like the one on this site) is not a small job for most regular developers. Creating dark mode is well-covered here at CSS-Tricks, including this complete guide to dark mode and dark mode typography.

In a WordPress site, we can add a dark mode toggle using the WP Dark Mode plugin. Erin Myers of WP Engine and WPBeginner describe how to use the WP Dark Mode plugin, while Brenda Barron lists other dark mode plugin options in this WPExplorer post.

Creating a dark mode in WordPress block themes without a plugin involves several steps. Over a year ago, Ari Stathopoulos created dark mode support for the TT1 Blocks theme on GitHub. Looking at the example, it involves some JavaScript knowledge to create assets (e.g., toggler, customize, editor-mode-support), dark color CSS variables, and an expanded functions.php file.

In this short video, Automattic’s Anne McCarthy demonstrates how simple it is to create a dark mode for the TT2 block theme with a global style variation by adding kjellr’s gist of JSON snippets to the TT2 /styles folder.

Creating the demo TT2 Gopher blocks theme

The TT2 Gopher is a very simple and modified version of the default Twenty Twenty-Two theme. It includes three theme style variations — maroon, dark, and white.

Describing each customization step is beyond the scope of this article, but you can learn more from my deep introduction to WordPress block themes as well as the Block Editor Handbook over at WordPress.org.

A brief overview of the TT2 Gopher theme color and font combinations include:

  • Light mode
    • The header is white and the footer has a smoky body background color.
    • Open Sans is the primary font.
  • Dark mode
    • The header and footer are black with lighter dark colors for the body backgrounds.
    • Source Serif Pro is the primary font.
  • Maroon mode
    • The header and footer are both a dark maroon color, with a lighter yellowish body background.
    • Work Sans is the primary font.

Let me briefly walk you through how I created theme style variations.

Adding and configuring webfonts

The Gutenberg 12.8 plugin introduced a new Webfonts API that allows the authors to load local (bundled) fonts “in a performance-friendly, privacy-friendly, and future-proof manner.” This feature can be implemented in a block theme the PHP way or the theme.json way.

Currently this feature works only with fonts bundled with block themes and does not support Google-hosted fonts because of privacy concerns. More details on the current status of Webfonts API development are covered in this make WordPress core article and this WP Tavern article.

Step 1: Download and add fonts in block theme

The TT2 theme adds Source Serif Pro font files to the theme’s assets/fonts folder. Two additional fonts — Work Sans and Public Sans — are also provided in the GitHub repository.

Step 2: Registering webfonts

In the TT2 theme, local Source Serif Pro webfonts are registered with PHP in its functions.php file:

function twentytwentytwo_get_font_face_styles() {
  return "
  @font-face{
    font-family: 'Source Serif Pro';
    font-weight: 200 900;
    font-style: normal;
    font-stretch: normal;
    font-display: swap;
    src: url('" . get_theme_file_uri( 'assets/fonts/SourceSerif4Variable-Roman.ttf.woff2' ) . "') format('woff2');
  }
  @font-face{
    font-family: 'Source Serif Pro';
    font-weight: 200 900;
    font-style: italic;
    font-stretch: normal;
    font-display: swap;
    src: url('" . get_theme_file_uri( 'assets/fonts/SourceSerif4Variable-Italic.ttf.woff2' ) . "') format('woff2');
  }
  ";
}

Gutenberg 12.8 introduced the ability to register local web fonts with theme.json file. The following theme.json snippets from the demo TT2 Gopher theme show how local Work Sans web fonts are registered in the maroon theme style variation:

"typography": {   "fontFamilies": [     {       "fontFamily": "'Work Sans', -apple-system, BlinkMacSystemFont, 'Helvetica Neue', 'Helvetica', sans-serif",       "slug": "work-sans",       "name": "Work Sans",       "fontFace": [         { "fontFamily": "Work Sans", "fontDisplay": "block", "fontWeight": "400", "fontStyle": "normal", "fontStretch": "normal", "src": [ "file:./assets/fonts/work-sans/WorkSans-VariableFont_wght.ttf" ] },         { "fontFamily": "Work Sans", "fontDisplay": "block", "fontWeight": "700", "fontStyle": "normal", "fontStretch": "normal", "src": [ "file:./assets/fonts/work-sans/WorkSans-VariableFont_wght.ttf" ] },         { "fontFamily": "Work Sans", "fontDisplay": "block", "fontWeight": "400", "fontStyle": "italic", "fontStretch": "normal", "src": [ "file:./assets/fonts/work-sans/WorkSans-Italic-VariableFont_wght.ttf" ] },         { "fontFamily": "Work Sans", "fontDisplay": "block", "fontWeight": "700", "fontStyle": "italic", "fontStretch": "normal", "src": [ "file:./assets/fonts/work-sans/WorkSans-Italic-VariableFont_wght.ttf" ] }       ]     }   ] }

Additional information on how to register and use local webfonts in block themes is described in this tutorial and this WP Tavern article.

Creating theme style variations

Following the steps described in the previous section, I created two alternate versions of the theme.json file — white.json and black.json — with different color and fonts combinations inside the child theme’s /styles folder.

This feature requires version 2 of theme.json. Since Gutenberg 12.5, a title can also be added to theme.json to display a style label in the site editor; otherwise, the file name (without the extension) is displayed by default.

Here is an example of white.json:

{
  "version": 2,
  "title": "White",
  "settings": {
    "color": {
      "palette": [
        { "slug": "foreground", "color": "#000000", "name": "Foreground" },
        { "slug": "background", "color": "#f2f2f2", "name": "Background" },
        { "slug": "background-header", "color": "#ffffff", "name": "Background header" },
        { "slug": "primary", "color": "#0d0d0d", "name": "Primary" },
        { "slug": "secondary", "color": "#F0EAE6", "name": "Secondary" },
        { "slug": "tertiary", "color": "#eb3425", "name": "Tertiary" },
        { "slug": "quaternary", "color": "#7c7e83", "name": "Quaternary" }
      ]
    },
    "typography": {
      "fontFamilies": [
        {
          "fontFamily": "\"Public Sans\", sans-serif",
          "name": "Public Sans",
          "slug": "public-sans",
          "fontFace": [
            { "fontFamily": "Public Sans", "fontDisplay": "block", "fontStyle": "normal", "fontStretch": "normal", "src": [ "file:./assets/fonts/publicSans/PublicSans-VariableFont_wght.ttf.woff2" ] },
            { "fontFamily": "Public Sans", "fontDisplay": "block", "fontStyle": "italic", "fontStretch": "normal", "src": [ "file:./assets/fonts/publicSans/PublicSans-Italic-VariableFont_wght.ttf.woff2" ] }
          ]
        }
      ]
    }
  },
  "styles": {
    "blocks": {
      "core/image": {
        "filter": { "duotone": "var(--wp--preset--duotone--default-filter)" }
      },
      "core/post-title": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "700", "fontSize": "var(--wp--custom--typography--font-size--gigantic)" }
      },
      "core/query-title": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "300", "fontSize": "var(--wp--custom--typography--font-size--gigantic)" }
      },
      "core/post-featured-image": {
        "filter": { "duotone": "var(--wp--preset--duotone--default-filter)" }
      },
      "core/site-logo": {
        "filter": { "duotone": "var(--wp--preset--duotone--default-filter)" }
      },
      "core/site-title": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontSize": "var(--wp--preset--font-size--normal)", "fontWeight": "normal" }
      }
    },
    "color": { "background": "var(--wp--preset--color--background)", "text": "var(--wp--preset--color--foreground)" },
    "elements": {
      "h1": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "600", "fontSize": "var(--wp--custom--typography--font-size--colossal)" }
      },
      "h2": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "600", "fontSize": "var(--wp--custom--typography--font-size--gigantic)" }
      },
      "h3": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "300", "fontSize": "var(--wp--custom--typography--font-size--huge)" }
      },
      "h4": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "300", "fontSize": "var(--wp--preset--font-size--x-large)" }
      },
      "h5": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "700", "textTransform": "uppercase", "fontSize": "var(--wp--preset--font-size--medium)" }
      },
      "h6": {
        "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontWeight": "400", "textTransform": "uppercase", "fontSize": "var(--wp--preset--font-size--medium)" }
      },
      "link": {
        "color": { "text": "var(--wp--custom--color--foreground)" }
      }
    },
    "typography": { "fontFamily": "var(--wp--preset--font-family--public-sans)", "fontSize": "var(--wp--preset--font-size--normal)" }
  }
}

This code swaps color palettes from theme.json and also registers and defines the local Public Sans font files.

The black.json is also very similar and uses Source Serif Pro fonts registered in the functions.php file.

Screenshot of the light color theme on the left. And screenshot of the dark color theme on the right. The heading navigation and first blog entry are visible.
Side-by-side comparison of the light (left) and dark (right) color themes for TT2 Gopher.

Example of block themes with theme styles variations

  • Twenty Twenty-Two – the first default theme to include style variations. Its updated version 1.2, bundled with WordPress 6.0, includes three style variations — “Blue”, “Pink”, and “Swiss” — allowing users to quickly swap between different visual styles.
  • Frost – an experimental block theme with a dark theme style variation.
  • Alara – has over 100 active installs and includes 7 style variations.
  • Wabi – powers Rich Tabor’s website and contains 3 style variations, with 300+ active installations.
  • Brisky – has more than 600 installs and one dark theme style variation.
  • Pendant – a theme by the Automattic theme team, under development at GitHub, that contains 3 theme style variations.

In this WP Tavern article, Justin speculates that this new feature may be utilized by theme authors by tying variations to a site visitor’s settings, while some users may prefer to tweak their site to give it a seasonal or event-based look. It’s probably a little early to say, but only time will tell how this powerful feature will be utilized by both theme authors and users.

Wrapping up

Creating style variations of a block theme with different typography and color combinations has been greatly simplified, without using plugins. It’s one of my favorite features of the block editor, and one I plan to apply in my personal projects.

In my opinion, theme style variations are definitely a game changer for block themes, and with this handy feature there might not be a need for child themes or even many cookie-cutter block themes. A few well-designed base block themes, similar to the Automattic theme team’s block-canvas or blockbase (work-in-progress base block themes at GitHub), could be customized with theme style variations.




Creating Style Variations in WordPress Block Themes originally published on CSS-Tricks. You should get the newsletter.

CSS-Tricks
