Tag: Them

6 Common SVG Fails (and How to Fix Them)

Someone recently asked me how I approach debugging inline SVGs. Because an inline SVG is part of the DOM, we can inspect it in any browser’s DevTools. And because of that, we have the ability to scope things out and uncover any potential issues or opportunities to optimize the SVG.

But sometimes, we can’t even see our SVGs at all. In those cases, there are six specific things that I look for when I’m debugging.

1. The viewBox values

The viewBox is a common point of confusion when working with SVG. It’s technically fine to use inline SVG without it, but we would lose one of its most significant benefits: scaling with the container. At the same time, it can work against us when improperly configured, resulting in unwanted clipping.

The elements are there when they’re clipped — they’re just in a part of the coordinate system that we don’t see. If we were to open the file in some graphics editing program, it might look like this:

Cat line art with part of the drawing outside the artwork area in Illustrator.
Screenshot of SVG opened in Illustrator.

The easiest way to fix this? Add overflow="visible" to the SVG, whether it’s in our stylesheet, inline on the style attribute or directly as an SVG presentation attribute. But if we also apply a background-color to the SVG or if we have other elements around it, things might look a little bit off. In this case, the best option will be to edit the viewBox to show that part of the coordinate system that was hidden:

Demo applying overflow="visible" and editing the viewBox.
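For reference, here’s a minimal sketch of those three options (the class name is just for illustration):

/* In a stylesheet */
.cat-svg {
  overflow: visible;
}

<!-- Inline on the style attribute -->
<svg class="cat-svg" viewBox="0 0 700 700" style="overflow: visible;">
  <!-- etc. -->
</svg>

<!-- Or as a presentation attribute -->
<svg class="cat-svg" viewBox="0 0 700 700" overflow="visible">
  <!-- etc. -->
</svg>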

There are a few additional things about the viewBox that are worth covering while we’re on the topic:

How does the viewBox work?

SVG is an infinite canvas, but we can control what we see and how we see it through the viewport and the viewBox.

The viewport is a window frame on the infinite canvas. Its dimensions are defined by width and height attributes, or in CSS with the corresponding width and height properties. We can specify any length unit we want, but if we provide unitless numbers, they default to pixels.

The viewBox is defined by four values. The first two are the starting point at the upper-left corner (the x and y values; negative numbers are allowed), and editing these reframes the image. The last two are the width and height of the coordinate system inside the viewport, which is where we can edit the scale of the grid (something we’ll get into in the section on Zooming).

Here’s simplified markup showing the SVG viewBox and the width and height attributes both set on the <svg>:

<svg viewBox="0 0 700 700" width="700" height="700">   <!-- etc. --> </svg>

Reframing

So, this:

<svg viewBox="0 0 700 700">

…maps to this:

<svg viewBox="start-x-axis start-y-axis width height">

The viewport we see starts where 0 on the x-axis and 0 on the y-axis meet.

By changing this:

<svg viewBox="0 0 700 700">

…to this:

<svg viewBox="300 200 700 700">

…the width and height remain the same (700 units each), but the start of the coordinate system is now at the 300 point on the x-axis and 200 on the y-axis.

In the following video I’m adding a red <circle> to the SVG with its center at the 300 point on the x-axis and 200 on the y-axis. Notice how changing the viewBox coordinates to the same values also changes the circle’s placement to the upper-left corner of the frame while the rendered size of the SVG remains the same (700×700). All I did was “reframe” things with the viewBox.
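The markup for that experiment might look something like this (the circle’s radius is arbitrary; the key values are the 300 and 200 mentioned above):

<!-- Red circle centered at x=300, y=200 -->
<svg viewBox="0 0 700 700" width="700" height="700">
  <circle cx="300" cy="200" r="50" fill="red" />
  <!-- etc. -->
</svg>

<!-- The same circle after reframing: it now sits at the upper-left corner of the frame -->
<svg viewBox="300 200 700 700" width="700" height="700">
  <circle cx="300" cy="200" r="50" fill="red" />
  <!-- etc. -->
</svg>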

Zooming

We can change the last two values inside the viewBox to zoom in or out of the image. The larger the values, the more SVG units are added to fit in the viewport, resulting in a smaller image. If we want to keep a 1:1 ratio, our viewBox width and height must match our viewport width and height values.

Let’s see what happens in Illustrator when we change these parameters. The artboard is the viewport which is represented by a white 700px square. Everything else outside that area is our infinite SVG canvas and gets clipped by default.

Figure 1 below shows a blue dot at 900 along the x-axis and 900 along the y-axis. If I change the last two viewBox values from 700 to 900 like this:

<svg viewBox="300 200 900 900" width="700" height="700">

…then the blue dot is almost fully back in view, as seen in Figure 2 below. Our image is scaled down because we increased the viewBox values, but the SVG’s actual width and height dimensions remained the same, and the blue dot made its way back closer to the unclipped area.

Figure 1

Figure 2

A pink square shows how the grid scales to fit the viewport: the unit gets smaller, and more grid lines fit into the same viewport area. You can play with the same values in the following Pen to see that in action:

2. Missing width and height

Another common thing I look at when debugging inline SVG is whether the markup contains the width or height attributes. This is no big deal in many cases, unless the SVG is inside a container with absolute positioning or a flexible container (Safari computes the SVG width as 0px instead of auto). Excluding width or height in these cases prevents us from seeing the full image, as we can see by opening this CodePen demo and comparing it in Chrome, Safari, and Firefox (tap images for larger view).

Chrome

Safari

Firefox

The solution? Add a width or height, whether as a presentation attribute, inline in the style attribute, or in CSS. Avoid using height by itself, particularly when it is set to 100% or auto. Another workaround is to set the right and left values.
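As a rough sketch, any of these would do the trick (the 700px value is just an example):

<!-- As a presentation attribute -->
<svg viewBox="0 0 700 700" width="700">
  <!-- etc. -->
</svg>

<!-- Inline on the style attribute -->
<svg viewBox="0 0 700 700" style="width: 700px;">
  <!-- etc. -->
</svg>

/* Or in CSS */
svg {
  width: 700px;
}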

You can play around with the following Pen and combine the different options.

3. Inadvertent fill and stroke colors

It may also be that we are applying color to the <svg> tag, whether it’s an inline style or coming from CSS. That’s fine, but there could be other color values throughout the markup or styles that conflict with the color set on the <svg>, causing parts to be invisible.

That’s why I tend to look for the fill and stroke attributes in the SVG’s markup and wipe them out. The following video shows an SVG I styled in CSS with a red fill. There are a couple of instances where parts of the SVG are filled in white directly in the markup that I removed to reveal the missing pieces.
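A hypothetical version of that conflict could look like the snippet below. The CSS sets a red fill on the whole SVG, but the inline fill on the second path overrides the inherited value, so that shape stays white (and invisible on a white background) until the attribute is removed:

/* In the stylesheet */
svg {
  fill: red;
}

<svg viewBox="0 0 700 700">
  <path d="..." />             <!-- inherits the red fill from the CSS -->
  <path d="..." fill="#fff" /> <!-- stays white; remove this attribute to reveal the shape -->
</svg>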

4. Missing IDs

This one might seem super obvious, but you’d be surprised how often I see it come up. Let’s say we made an SVG file in Illustrator and were very diligent about naming our layers so that we get nice matching IDs in the markup when exporting the file. And let’s say we plan to style that SVG in CSS by hooking into those IDs.

That’s a nice way to do things. But there are plenty of times where I’ve seen the same SVG file exported a second time to the same location and the IDs are different, usually when copy/pasting the vectors directly. Maybe a new layer was added, or one of the existing ones was renamed or something. Whatever the case, the CSS rules no longer match the IDs in the SVG markup, causing the SVG to render differently than you’d expect.

Underscores with numbers after the element IDs
Pasting Illustrator’s exported SVG file into SVGOMG.

In large SVG files, it can be difficult to locate those IDs. This is a good time to open DevTools, inspect the part of the graphic that isn’t working, and see whether those IDs still match.
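As a made-up example, a rule like the first one below silently stops matching if a re-export renames the layer with a suffix (#whiskers is a hypothetical layer name):

/* The CSS written against the first export */
#whiskers {
  stroke: #333;
}

<!-- The second export renamed the layer, so the rule above no longer applies -->
<g id="whiskers_2">
  <!-- etc. -->
</g>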

So, I’d say it’s worth opening an exported SVG file in a code editor and comparing it to the original before swapping things out. Apps like Illustrator, Figma, and Sketch are smart, but that doesn’t mean we aren’t responsible for vetting them.

5. Checklist for clipping and masking

If an SVG is unexpectedly clipped and the viewBox checks out alright, I usually look at the CSS for clip-path or mask properties that might interfere with the image. It’s tempting to keep looking at the inline markup, but it’s good to remember that an SVG’s styling might be happening elsewhere.

CSS clipping and masking allow us to “hide” parts of an image or element. In SVG, <clipPath> is a vector operation that cuts parts of an image with no halfway results. The <mask> tag is a pixel operation that allows transparency, semi-transparency effects, and blurred edges.

This is a small checklist for debugging cases where clipping and masking are involved:

  • Make sure the clipping path (or mask) and the graphic overlap one another. The overlapping parts are what gets displayed.
  • If you have a complex path that is not intersecting your graphic, try applying transforms until they match.
  • You can still inspect the inner code with the DevTools even though the <clipPath> or <mask> are not rendered, so use it!
  • Copy the markup inside <clipPath> and <mask> and paste it before the closing </svg> tag. Then add a fill to those shapes and check the SVG’s coordinates and dimensions. If you still do not see the image, try adding overflow="visible" to the <svg> tag.
  • Check that a unique ID is used for the <clipPath> or <mask>, and that the same ID is applied to the shapes or group of shapes that are clipped or masked. A mismatched ID will break the appearance (see the sketch after this list).
  • Check for typos in the markup between the <clipPath> or <mask> tags.
  • fill, stroke, opacity, or some other styles applied to the elements inside <clipPath> are useless — the only useful part is the fill-region geometry of those elements. That’s why if you use a <polyline> it will behave as a <polygon> and if you use a <line> you won’t see any clipping effect.
  • If you don’t see your image after applying a <mask>, make sure that the fill of the masking content is not entirely black. The luminance of the masking element determines the opacity of the final graphic. You’ll be able to see through the brighter parts, and the darker parts will hide your image’s content.

You can play with masked and clipped elements in this Pen.
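To make the ID-matching point from the checklist concrete, here’s a minimal sketch where the clip path and the clipped group reference the same ID (the names and shapes are made up):

<svg viewBox="0 0 700 700">
  <clipPath id="cat-clip">
    <circle cx="350" cy="350" r="200" />
  </clipPath>

  <!-- clip-path must point at exactly the same ID -->
  <g clip-path="url(#cat-clip)">
    <!-- the artwork being clipped -->
  </g>
</svg>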

6. Namespaces

Did you know that SVG is an XML-based markup language? Well, it is! The namespace for SVG is set on the xmlns attribute:

<svg xmlns="http://www.w3.org/2000/svg">   <!-- etc. --> </svg>

There’s a lot to know about namespacing in XML and MDN has a great primer on it. Suffice to say, the namespace provides context to the browser, informing it that the markup is specific to SVG. The idea is that namespaces help prevent conflicts when more than one type of XML is in the same file, like SVG and XHTML. This is a much less common issue in modern browsers, but it could help explain SVG rendering issues in older browsers, or in rendering engines like Gecko that are strict when defining doctypes and namespaces.

The SVG 2 specification does not require namespacing when using HTML syntax. But it’s crucial if support for legacy browsers is a priority — plus, it doesn’t hurt anything to add it. That way, when the <html> element’s xmlns attribute is defined, it will not conflict in those rare cases.

<html lang="en" xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <svg xmlns="http://www.w3.org/2000/svg" width="700px" height="700px">
      <!-- etc. -->
    </svg>
  </body>
</html>

This is also true when using inline SVG in CSS, like setting it as a background image. In the following example, a checkmark icon appears in the field after successful validation. This is what the CSS looks like:

textarea:valid {
  background: white url('data:image/svg+xml,
    <svg xmlns="http://www.w3.org/2000/svg" width="26" height="26">
    <circle cx="13" cy="13" r="13" fill="%23abedd8"/>
    <path fill="none" stroke="white" stroke-width="2" d="M5 15.2l5 5 10-12"/>
    </svg>') no-repeat 98% 5px;
}

When we remove the namespace inside the SVG in the background property, the image disappears:

Another common namespace prefix is xlink:href. We use it a lot when referencing other parts of the SVG, like patterns, filters, animations, or gradients. The recommendation is to start replacing it with href, as xlink:href has been deprecated since SVG 2, but there might be compatibility issues with older browsers. In that case, we can use both. Just remember to include the namespace xmlns:xlink="http://www.w3.org/1999/xlink" if you are still using xlink:href.
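For example, a <use> element could carry both attributes while older browsers catch up. Here’s a minimal sketch, assuming the xlink namespace is declared on the <svg>:

<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <symbol id="icon" viewBox="0 0 26 26">
    <circle cx="13" cy="13" r="13" />
  </symbol>
  <!-- Modern href plus the legacy xlink:href as a fallback -->
  <use href="#icon" xlink:href="#icon" />
</svg>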

Level up your SVG skills!

I hope these tips help save you a ton of time if you find yourself troubleshooting improperly rendered inline SVGs. These are just the things I look for. Maybe you have different red flags you watch for — if so, tell me in the comments!

The bottom line is that it pays to have at least a basic understanding of the various ways SVG can be used. CodePen Challenges often incorporate SVG and offer good practice. Here are a few more resources to level up:

There are a few people I suggest following for SVG-related goodness:



Merge Conflicts: What They Are and How to Deal with Them

This article is part of our “Advanced Git” series. Be sure to follow Tower on Twitter or sign up for their newsletter to hear about the next articles.

Merge conflicts… Nobody likes them. Some of us even fear them. But they are a fact of life when you’re working with Git, especially when you’re teaming up with other developers. In most cases, merge conflicts aren’t as scary as you might think. In this fourth part of our “Advanced Git” series we’ll talk about when they can happen, what they actually are, and how to solve them.

Advanced Git series:

  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts (You are here!)
  5. Part 5: Rebase vs. Merge (Coming soon!)
  6. Part 6: Interactive Rebase
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits

How and when merge conflicts occur

The name gives it away: a merge conflict can occur when you integrate (or “merge”) changes from a different source into your current working branch. Keep in mind that integration is not limited to just merging branches. Conflicts can also happen during a rebase or an interactive rebase, when you’re cherry picking in Git (i.e. when you choose a commit from one branch and apply it to another), when you’re running git pull or even when reapplying a stash.
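In command form, those integration points might look like this (the branch name, commit hash, and stash entry are placeholders):

git merge feature/login    # merging a branch
git rebase main            # rebasing (or an interactive rebase with -i)
git cherry-pick 0123abc    # applying a single commit from another branch
git pull origin main       # fetching and integrating remote changes
git stash pop              # reapplying a stash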

All of these actions perform some kind of integration, and that’s when merge conflicts can happen. Of course, this doesn’t mean that every one of those actions results in a merge conflict every time — thank goodness! But when exactly do conflicts occur?

Actually, Git’s merging capabilities are one of its greatest advantages: merging branches works flawlessly most of the time because Git is usually able to figure things out on its own and knows how to integrate changes.

But there are situations where contradictory changes are made — and that’s when technology simply cannot decide what’s right or wrong. These situations require a decision from a human being. For example, when the exact same line of code was changed in two commits, on two different branches, Git has no way of knowing which change you prefer. Another situation that is a bit less common: a file is modified in one branch and deleted in another one. Git will ask you what to do instead of just guessing what works best.

How to know when a merge conflict has occurred

So, how do you know a merge conflict has occurred? Don’t worry about that — Git will tell you and it will also make suggestions on how to resolve the problem. It will let you know immediately if a merge or rebase fails. For example, if you have committed changes that are in conflict with someone else’s changes, Git informs you about the problem in the terminal and tells you that the automatic merge failed:

$ git merge develop
CONFLICT (content): Merge conflict in index.html
Automatic merge failed; fix conflicts and then commit the result.

You can see that I ran into a conflict here and that Git tells me about the problem right away. Even if I had missed that message, I am reminded about the conflict the next time I type git status.

If you’re working with a Git desktop GUI like Tower, the app makes sure you don’t overlook any conflicts:

In any case: don’t worry about not noticing merge conflicts!

How to undo a merge conflict and start over

You can’t ignore a merge conflict — instead, you have to deal with it before you can continue your work. You basically have the following two options:

  • Resolve the conflict(s)
  • Abort or undo the action that caused the conflict(s)

Before we go into resolving conflicts, let’s briefly talk about how to undo and start over (it’s very reassuring to know this is possible). In many cases, this is as simple as using the --abort parameter, e.g. in commands like git merge --abort and git rebase --abort. This will undo the merge/rebase and bring back the state before the conflict occurred.

This also works when you’ve already started resolving the conflicted files and, even then, when you find yourself at a dead end, you can still undo the merge. This should give you the confidence that you really can’t mess up. You can always abort, return to a clean state, and start over.
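For reference, the undo commands for the most common integrations look like this:

git merge --abort
git rebase --abort
git cherry-pick --abort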

What merge conflicts really look like in Git

Let’s see what a conflict really looks like under the hood. It’s time to demystify those little buggers and get to know them better. Once you understand a merge conflict, you can stop worrying.

As an example, let’s look at the contents of an index.html file that currently has a conflict:

Screenshot of an open code editor with HTML markup for a navigation that contains an unordered list of links. There are three lines of text injected by Git, the first says HEAD, the second is a line of equals signs, and the last says develop.

Git is kind enough to mark the problematic areas in the file. They’re surrounded by <<<<<<< and >>>>>>>. The content after the first marker originates from our current working branch (HEAD). The line with seven equals signs (=======) separates the two conflicting changes. Finally, the changes from the other branch are displayed (develop in our example).
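As a hypothetical reconstruction of that screenshot, the conflicted part of index.html might look something like this:

<nav>
  <ul>
<<<<<<< HEAD
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
=======
    <li><a href="/">Start</a></li>
    <li><a href="/contact">Contact</a></li>
>>>>>>> develop
  </ul>
</nav>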

Your job is to clean up those lines and solve the conflict: in a text editor, in your preferred IDE, in a Git desktop GUI, or in a Diff & Merge Tool.

How to solve a conflict in Git

It doesn’t matter which tool or application you use to resolve a merge conflict — when you’re done, the file has to look exactly as you want it to look. If it’s just you, you can easily decide to get rid of a code change. But if the conflicting change comes from someone else, you might have to talk to them before you decide which code to keep. Maybe it’s yours, maybe it’s someone else’s, and maybe it’s a combination of those two.

The process of cleaning up the file and making sure it contains what you actually want doesn’t have to involve any magic. You can do this simply by opening your text editor or IDE and making your changes.

Sometimes, though, you’ll find that this is not the most efficient way. That’s when dedicated tools can save time and effort. For example, there are various Git desktop GUIs which can be helpful when you’re resolving merge conflicts.

Let’s take Tower as an example. It offers a dedicated “Conflict Wizard” that makes these otherwise abstract situations more visual. This helps to better understand where the changes are coming from, what type of modification occurred, and ultimately solve the situation:

Especially for more complicated conflicts, it can be great to have a dedicated Diff & Merge Tool at hand. It can help you understand diffs even better by offering advanced features like special formatting and different presentation modes (e.g. side-by-side, combined in a single column, etc.).

There are several Diff & Merge Tools on the market (here are some for Mac and for Windows). You can configure your tool of choice using the git config command. (Consult the tool’s documentation for detailed instructions.) In case of a conflict, you can invoke it by simply typing git mergetool. As an example, I’m using the Kaleidoscope app on my Mac:
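Generically, that setup and invocation look something like this (replace the placeholder tool name with whatever your tool’s documentation specifies):

# One-time setup: tell Git which Diff & Merge Tool to launch
git config --global merge.tool <toolname>

# When a conflict occurs, open the conflicted files in that tool
git mergetool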

After cleaning up the file — either manually in a text editor, in a Git desktop GUI, or with a Merge Tool — you can commit the file like any other change. By typing git add <filename>, you inform Git that the conflict has been resolved.

When all merge conflicts have been solved and added to the Staging Area, you simply create a regular commit. And this completes the conflict resolution.
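Using the index.html example from above, that final step might look like this:

# Mark the conflict as resolved
git add index.html

# Conclude the merge with a regular commit
git commit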

Don’t panic!

As you can see, a merge conflict is nothing to worry about and certainly no reason to panic. Once you understand what actually happened to cause the conflict, you can decide to undo the changes or resolve the conflict. Remember that you can’t break things — even if you realize you made a mistake while resolving a conflict, you can still undo it: just roll back to the commit before the great catastrophe and start over again.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Advanced Git series:

  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts (You are here!)
  5. Part 5: Rebase vs. Merge (Coming soon!)
  6. Part 6: Interactive Rebase
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits


Three Buggy React Code Examples and How to Fix Them

There’s usually more than one way to code a thing in React. And while it’s possible to create the same thing different ways, there may be one or two approaches that technically work “better” than others. I actually run into plenty of examples where the code used to build a React component is technically “correct” but opens up issues that are totally avoidable.

So, let’s look at some of those examples. I’m going to provide three instances of “buggy” React code that technically gets the job done for a particular situation, and ways it can be improved to be more maintainable, resilient, and ultimately functional.

This article assumes some knowledge of React hooks. It isn’t an introduction to hooks—you can find a good introduction from Kingsley Silas on CSS Tricks, or take a look at the React docs to get acquainted with them. We also won’t be looking at any of that exciting new stuff coming up in React 18. Instead, we’re going to look at some subtle problems that won’t completely break your application, but might creep into your codebase and can cause strange or unexpected behavior if you’re not careful.

Buggy code #1: Mutating state and props

It’s a big anti-pattern to mutate state or props in React. Don’t do this!

This is not a revolutionary piece of advice—it’s usually one of the first things you learn if you’re getting started with React. But you might think you can get away with it (because it seems like you can in some cases).

I’m going to show you how bugs might creep into your code if you’re mutating props. Sometimes you’ll want a component that will show a transformed version of some data. Let’s create a parent component that holds a count in state and a button that will increment it. We’ll also make a child component that receives the count via props and shows what the count would look like with 5 added to it.

Here’s a Pen that demonstrates a naïve approach:

This example works. It does what we want it to do: we click the increment button and it adds one to the count. Then the child component is re-rendered to show what the count would look like with 5 added on. We changed the props in the child here and it works fine! Why has everybody been telling us mutating props is so bad?

Well, what if later we refactor the code and need to hold the count in an object? This might happen if we need to store more properties in the same useState hook as our codebase grows larger.

Instead of incrementing the number held in state, we increment the count property of an object held in state. In our child component, we receive the object through props and add to the count property to show what the count would look like if we added 5.

Let’s see how this goes. Try incrementing the state a few times in this pen:

Oh no! Now when we increment the count it seems to add 6 on every click! Why is this happening? The only thing that changed between these two examples is that we used an object instead of a number!

More experienced JavaScript programmers will know that the big difference here is that primitive types such as numbers, booleans and strings are immutable and passed by value, whereas objects are passed by reference.

This means that:

  • If you put a number in a variable, assign another variable to it, then change the second variable, the first variable will not be changed.
  • If you put an object in a variable, assign another variable to it, then change the second variable, the first variable will get changed.
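Here’s that difference in plain JavaScript:

// Primitives are copied by value
let a = 1;
let b = a;
b = b + 5;
console.log(a); // 1 (unchanged)

// Objects are passed by reference
let objA = { count: 1 };
let objB = objA;
objB.count = objB.count + 5;
console.log(objA.count); // 6 (the first variable changed too)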

When the child component changes a property of the state object, it’s adding 5 to the same object React uses when updating the state. This means that when our increment function fires after a click, React uses the same object after it has been manipulated by our child component, which shows as adding 6 on every click.

The solution

There are multiple ways to avoid these problems. For a situation as simple as this, you could avoid any mutation and express the change in a render function:

function Child({state}){   return <div><p>count + 5 = {state.count + 5} </p></div> }

However, in a more complicated case, you might need to reuse state.count + 5 multiple times or pass the transformed data to multiple children.

One way to do this is to create a copy of the prop in the child, then transform the properties on the cloned data. There are a couple of different ways to clone objects in JavaScript with various tradeoffs. You can use object literal and spread syntax:

function Child({state}){ const copy = {...state};   return <div><p>count + 5 = {copy.count + 5} </p></div> }

But if there are nested objects, they will still reference the old version. Instead, you could convert the object to JSON then immediately parse it:

JSON.parse(JSON.stringify(myobject)) 

This will work for most simple object types. But if your data uses more exotic types, you might want to use a library. A popular method would be to use lodash’s cloneDeep. Here’s a Pen that shows a fixed version using object literal and spread syntax to clone the object:

One more option is to use a library like Immutable.js. If you have a rule to only use immutable data structures, you’ll be able to trust that your data won’t get unexpectedly mutated. Here’s one more example using the immutable Map class to represent the state of the counter app:

Buggy code #2: Derived state

Let’s say we have a parent and a child component. They both have useState hooks holding a count. And let’s say the parent passes its state down as a prop to the child, which the child uses to initialize its count.

function Parent(){
  const [parentCount,setParentCount] = useState(0);
  return <div>
    <p>Parent count: {parentCount}</p>
    <button onClick={()=>setParentCount(c=>c+1)}>Increment Parent</button>
    <Child parentCount={parentCount}/>
  </div>;
}

function Child({parentCount}){
  const [childCount,setChildCount] = useState(parentCount);
  return <div>
    <p>Child count: {childCount}</p>
    <button onClick={()=>setChildCount(c=>c+1)}>Increment Child</button>
  </div>;
}

What happens to the child’s state when the parent’s state changes, and the child is re-rendered with different props? Will the child state remain the same or will it change to reflect the new count that was passed to it?

We’re dealing with a function, so the child state should get blown away and replaced right? Wrong! The child’s state trumps the new prop from the parent. After the child component’s state is initialized in the first render, it’s completely independent from any props it receives.

React stores component state for each component in the tree and the state only gets blown away when the component is removed. Otherwise, the state won’t be affected by new props.

Using props to initialize state is called “derived state” and it is a bit of an anti-pattern. It removes the benefit of a component having a single source of truth for its data.

Using the key prop

But what if we have a collection of items we want to edit using the same type of child component, and we want the child to hold a draft of the item we’re editing? We’d need to reset the state of the child component each time we switch items from the collection.

Here’s an example: let’s write an app where we can list five things we’re thankful for each day. We’ll use a parent with state initialized as an empty array, which we’re going to fill up with five string statements.

Then we’ll have a child component with a text input to enter our statement.

We’re about to use a criminal level of over-engineering in our tiny app, but it’s to illustrate a pattern you might need in a more complicated project: We’re going to hold the draft state of the text input in the child component.

Lowering the state to the child component can be a performance optimization to prevent the parent re-rendering when the input state changes. Otherwise the parent component will re-render every time there is a change in the text input.

We’ll also pass down an example statement as a default value for each of the five notes we’ll write.

Here’s a buggy way to do this:

// These are going to be our default values for each of the five notes
// to give the user an idea of what they might write
const ideaList = ["I'm thankful for my friends",
                  "I'm thankful for my family",
                  "I'm thankful for my health",
                  "I'm thankful for my hobbies",
                  "I'm thankful for CSS Tricks Articles"]

const maxStatements = 5;

function Parent(){
  const [list,setList] = useState([]);

  // Handler function for when the statement is completed
  // Sets state providing a new array combining the current list and the new item
  function onStatementComplete(payload){
    setList(list=>[...list,payload]);
  }

  // Function to reset the list back to an empty array
  function reset(){
    setList([]);
  }

  return <div>
    <h1>Your thankful list</h1>
    <p>A five point list of things you're thankful for:</p>

    {/* First we list the statements that have been completed */}
    {list.map((item,index)=>{return <p>Item {index+1}: {item}</p>})}

    {/* If the length of the list is under our max statements length, we render
    the statement form for the user to enter a new statement.
    We grab an example statement from the ideaList and pass down the onStatementComplete function.
    Note: This implementation won't work as expected */}
    {list.length<maxStatements ?
      <StatementForm initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/>
      :<button onClick={reset}>Reset</button>
    }
  </div>;
}

// Our child StatementForm component. This accepts the example statement for its
// initial state and the onStatementComplete function
function StatementForm({initialStatement,onStatementComplete}){

  // We hold the current state of the input, and set the default using the initialStatement prop
  const [statement,setStatement] = useState(initialStatement);

  return <div>
    {/* On submit, we prevent default and fire the onStatementComplete function received via props */}
    <form onSubmit={(e)=>{e.preventDefault(); onStatementComplete(statement)}}>
      <label htmlFor="statement-input">What are you thankful for today?</label><br/>
      {/* Our controlled input below */}
      <input id="statement-input" onChange={(e)=>setStatement(e.target.value)} value={statement} type="text"/>
      <input type="submit"/>
    </form>
  </div>
}

There’s a problem with this: each time we submit a completed statement, the input incorrectly holds onto the submitted note in the textbox. We want to replace it with an example statement from our list.

Even though we’re passing down a different example string every time, the child remembers the old state and our newer prop is ignored. You could potentially check whether the props have changed on every render in a useEffect, and then reset the state if they have. But that can cause bugs when different parts of your data use the same values and you want to force the child state to reset even though the prop remains the same.

The solution

If you need a child component where the parent needs the ability to reset the child on demand, there is a way to do it: it’s by changing the key prop on the child.

You might have seen this special key prop from when you’re rendering elements based on an array and React throws a warning asking you to provide a key for each element. Changing the key of a child element ensures React creates a brand new version of the element. It’s a way of telling React that you are rendering a conceptually different item using the same component.

Let’s add a key prop to our child component. The value is the index we’re about to fill with our statement:

<StatementForm key={list.length} initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/>

Here’s what this looks like in our list app:

Note the only thing that changed here is that the child component now has a key prop based on the array index we’re about to fill. Yet, the behavior of the component has completely changed.

Now each time we submit and finish writing our statement, the old state in the child component gets thrown away and replaced with the example statement.

Buggy code #3: Stale closure bugs

This is a common issue with React hooks. There’s previously been a CSS-Tricks article about dealing with stale props and states in React’s functional components.

Let’s take a look at a few situations where you might run into trouble. The first crops up when using useEffect. If we’re doing anything asynchronous inside of useEffect, we can get into trouble using old state or props.

Here’s an example. We need to increment a count every second. We set it up on the first render with a useEffect, providing a closure that increments the count as the first argument, and an empty array as the second argument. We’ll give it the empty array as we don’t want React to restart the interval on every render.

function Counter() {
  let [count, setCount] = useState(0);

  useEffect(() => {
    let id = setInterval(() => {
      setCount(count + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

Oh no! The count gets incremented to 1 but never changes after that! Why is this happening?

It’s to do with two things: closures and the useEffect dependency array.

Having a look at the MDN docs on closures, we can see:

A closure is the combination of a function and the lexical environment within which that function was declared. This environment consists of any local variables that were in-scope at the time the closure was created.

The “lexical environment” in which our useEffect closure is declared is inside our Counter React component. The local variable we’re interested in is count, which is zero at the time of the declaration (the first render).

The problem is, this closure is never declared again. If the count is zero at the time of declaration, it will always be zero. Each time the interval fires, it’s running a function that starts with a count of zero and increments it to 1.

So how might we get the function declared again? This is where the second argument of the useEffect call comes in. We thought we were extremely clever only starting off the interval once by using the empty array, but in doing so we shot ourselves in the foot. If we had left out this argument, the closure inside useEffect would get declared again with a new count every time.

The way I like to think about it is that the useEffect dependency array does two things:

  • It will fire the useEffect function when the dependency changes.
  • It will also redeclare the closure with the updated dependency, keeping the closure safe from stale state or props.

In fact, there’s even a lint rule to keep your useEffect instances safe from stale state and props by making sure you add the right dependencies to the second argument.

But we don’t actually want to reset our interval every time the component gets rendered either. How do we solve this problem then?

The solution

Again, there are multiple solutions to our problem here. Let’s start with the easiest: not using the count state at all and instead passing a function into our setState call:

function Counter() {
  let [count, setCount] = useState(0);

  useEffect(() => {
    let id = setInterval(() => {
      setCount(prevCount => prevCount + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

That was easy. Another option is to use the useRef hook like this to keep a mutable reference of the count:

function Counter() {
  let [count, setCount] = useState(0);
  const countRef = useRef(count)

  function updateCount(newCount){
    setCount(newCount);
    countRef.current = newCount;
  }

  useEffect(() => {
    let id = setInterval(() => {
      updateCount(countRef.current + 1);
    }, 1000);
    return () => clearInterval(id);
  },[]);

  return <h1>{count}</h1>;
}

ReactDOM.render(<Counter/>,document.getElementById("root"))

To go more in depth on using intervals and hooks you can take a look at this article about creating a useInterval in React by Dan Abramov, who is one of the React core team members. He takes a different route where, instead of holding the count in a ref, he places the entire closure in a ref.

To go more in depth on useEffect you can have a look at his post on useEffect.

More stale closure bugs

But stale closures won’t just appear in useEffect. They can also turn up in event handlers and other closures inside your React components. Let’s have a look at a React component with a stale event handler; we’ll create a scroll progress bar that does the following:

  • increases its width along the screen as the user scrolls
  • starts transparent and becomes more and more opaque as the user scrolls
  • provides the user with a button that randomizes the color of the scroll bar

We’re going to leave the progress bar outside of the React tree and update it in the event handler. Here’s our buggy implementation:

<body>
  <div id="root"></div>
  <div id="progress"></div>
</body>
function Scroller(){
  // We'll hold the scroll position in one state
  const [scrollPosition, setScrollPosition] = useState(window.scrollY);
  // And the current color in another
  const [color,setColor] = useState({r:200,g:100,b:100});

  // We assign our scroll listener on the first render
  useEffect(()=>{
    document.addEventListener("scroll",handleScroll);
    return ()=>{document.removeEventListener("scroll",handleScroll);}
  },[]);

  // A function to generate a random color. To make sure the contrast is strong enough,
  // each value has a minimum value of 100
  function onColorChange(){
    setColor({r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155});
  }

  // This function gets called on the scroll event
  function handleScroll(e){
    // First we get the value of how far down we've scrolled
    const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop;
    // Now we grab the height of the entire document
    const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight;
    // And use these two values to figure out how far down the document we are
    const percentAlong = (scrollDistance / documentHeight);
    // Grab the progress bar element and update its width
    const progress = document.getElementById("progress");
    progress.style.width = `${percentAlong*100}%`;
    // Here's where our bug is. Resetting the color here will mean the color will always
    // be using the original state and never get updated
    progress.style.backgroundColor = `rgba(${color.r},${color.g},${color.b},${percentAlong})`;
    setScrollPosition(percentAlong);
  }

  return <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}>
    <button onClick={onColorChange}>Change color</button>
    <span className="percent">{Math.round(scrollPosition * 100)}%</span>
  </div>
}

ReactDOM.render(<Scroller/>,document.getElementById("root"))

Our bar gets wider and increasingly more opaque as the page scrolls. But if you click the change color button, our randomized colors are not affecting the progress bar. We’re getting this bug because the closure is affected by component state, and this closure is never being re-declared so we only get the original value of the state and no updates.

You can see how setting up closures that call external APIs using React state, or component props might give you grief if you’re not careful.

The solution

Again, there are multiple ways to fix this problem. We could keep the color state in a mutable ref which we could later use in our event handler:

const [color,setColor] = useState({r:200,g:100,b:100});
const colorRef = useRef(color);

function onColorChange(){
  const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155};
  setColor(newColor);
  colorRef.current = newColor;
  progress.style.backgroundColor = `rgba(${newColor.r},${newColor.g},${newColor.b},${scrollPosition})`;
}

This works well enough but it doesn’t feel ideal. You may need to write code like this if you’re dealing with third-party libraries and you can’t find a way to pull their API into your React tree. But by keeping one of our elements out of the React tree and updating it inside of our event handler, we’re swimming against the tide.

This is a simple fix though, as we’re only dealing with the DOM API. An easy way to refactor this is to include the progress bar in our React tree and render it in JSX allowing it to reference the component’s state. Now we can use the event handling function purely for updating state.

function Scroller(){
  const [scrollPosition, setScrollPosition] = useState(window.scrollY);
  const [color,setColor] = useState({r:200,g:100,b:100});

  useEffect(()=>{
    document.addEventListener("scroll",handleScroll);
    return ()=>{document.removeEventListener("scroll",handleScroll);}
  },[]);

  function onColorChange(){
    const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155};
    setColor(newColor);
  }

  function handleScroll(e){
    const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop;
    const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight;
    const percentAlong = (scrollDistance / documentHeight);
    setScrollPosition(percentAlong);
  }

  return <>
    <div className="progress" id="progress"
      style={{backgroundColor:`rgba(${color.r},${color.g},${color.b},${scrollPosition})`,width: `${scrollPosition*100}%`}}></div>
    <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}>
      <button onClick={onColorChange}>Change color</button>
      <span className="percent">{Math.round(scrollPosition * 100)}%</span>
    </div>
  </>
}

That feels better. Not only have we removed the chance for our event handler to get stale, we’ve also converted our progress bar into a self contained component which takes advantage of the declarative nature of React.

Also, for a scroll indicator like this, you might not even need JavaScript. Take a look at the up-and-coming @scroll-timeline CSS feature, or an approach using a gradient from Chris’ book on the greatest CSS tricks!

Wrapping up

We’ve had a look at three different ways you can create bugs in your React applications and some ways to fix them. It can be easy to look at counter examples which follow a happy path and don’t show subtleties in the APIs that might cause problems.

If you still find yourself needing to build a stronger mental model of what your React code is doing, here’s a list of resources which can help:



Your Team is Not “Them”

This post was written for engineering managers, but anyone is welcome to read it.

Let’s talk for a moment about how we talk about our teams. This might not seem like something that needs a whole article dedicated to it, but it’s actually quite crucial. The way that we refer to our teams sends signals: to stakeholders, to your peers, to the team itself, and even to ourselves. In addressing how we speak about our teams, we’ll also talk about accountability.

I have noticed shared similarities in those folks I consider good managers whose teams deliver well, and those who don’t. It starts with how they communicate about their teams.

Your team is “we”

There can be a perception that, as the manager of an organization, you are in control at all times. Part of that control can invariably be perceived as how you appear to be in charge, how competent you are, or how you personally perform. Because of that, some bad behaviors can arise, not due to malice, but due to fear. For this reason, it can be tempting to take credit for success and avoid credit when there is failure.

The irony is that the more that you try to hold on to these external perceptions, the more it will slip away. Why? Because the problems you are solving as a manager really aren’t about you.

Your team is “we”. You are a driving force of that team, no matter how high up the hierarchy chain. What happens on that team is your responsibility. When you speak about your org, you should include yourself in the statement.

When your team succeeds in something though, then you can praise them and leave yourself out of it. Here’s an example:

They really pulled this project over the line, despite the incredibly tight project timeline. Everyone showed up and was driven throughout the engagement. They did a fantastic job.

However, if the team failed at something, the pronoun is then I:

I didn’t recognize how tight this turnaround was and failed to prioritize the team’s time well. I need to reconvene with everyone so we can come up with a better plan.

And never, ever them:

They didn’t adhere to this tight timeline. They just weren’t able to get this project over the line.

Do you see how the last example shirks responsibility for what occurred? Too often I will hear managers relieve themselves of their duties when shit hits the fan, and that is exactly when a manager needs to step up, and dive in to the problems that are their responsibility.


The wider organization

There is another piece of this too, and it impacts how your team operates. It’s that your job is not to be the ambassador of who you manage and think of every other group as separate. You’re part of a larger system. A company is composed of groups, but those groups can only be successful if they’re working together, not if they are protecting their own org at all costs.

I admit I didn’t fully understand the depth of this until I read Patrick Lencioni’s great book The Advantage, thanks to Dalia Havens, a peer at Netlify. In the book, Lencioni describes organizational health, not “being smart,” as the biggest key to success. Plenty of smart people with good ideas build companies and see them fail. Success lies in being able to work together.

Fundamentally, other groups at the company are not separate from your group; rather, you’re all part of one whole. The Leadership Team is also a team, and should be treated as your team. How you speak about this team is equally important.

As such, when we talk about successes and failures of any groups, these should also be shared. There should be a sense that you’re all working towards a common goal together, and every group contributes to it. Within a leadership team there should be trust and vulnerability to own their part so that the whole organization can operate at its best.

And, yes, the leadership team as well

You may see where I’m going with this: when you talk about the leadership team, this is “we” too. You can’t speak to your team about decisions that were made at a table with your peers and boss and say “they decided something you don’t agree with” even if you don’t agree. You were there, ideally you took part in that decision, when you talk about that team, presenting them as “we” is important as well.

Why? Because as a manager, our job is to try as much as we can to drive balance and clarity. It’s confusing and disorienting to hear a manager talk about a leadership team they are on as though they aren’t a part of it and not take accountability for what’s happening there. Your reports themselves can’t effect change at that level, so if you don’t own your involvement in the leadership group, you can demoralize your staff and make them feel distrustful of other parts of the company. This can have an effect where folks demonize other teams and their initiatives, which as we discussed is ultimately unhealthy.

Saying “we” holds you accountable to your team for leadership decisions that you are a part of, which is how it should be. If people on your team have issues with the direction, it’s also your responsibility to own that conversation and next steps, as a liaison to the leadership team.

There are of course, some small instances when this might not be appropriate. Something that really goes against your core values that you fought strongly against can make this untenable. I would say those instances should ideally be very infrequent, or unfortunately you may need to pursue another place to work.

Speaking about the Leadership Team in Practice

Here’s how this works in practice, using an example of conveying a decision at the leadership level to the people who report to you:

The leadership team decided that we need to ship at least 3 features this quarter so I guess that’s what we have to figure out to do.

Versus:

One of the key OKRs this quarter is that we as a company need to double the signups to our platform. We’ve done some calculations that show we can almost certainly get there by shipping 3 features, so let’s all talk about what we can do within our group to make that possible. If you’re curious, we can chat through what initiatives other groups are doing to support this as well.

The first is not just passive, but demotivating. I have made the mistake of using this approach when I want to be liked by my employees and for them to think of me as a peer. But we’re not peers, I have a responsibility to them.

You’ll note in the second approach, we also explained the reasoning behind the decision. I’ve noticed personally that when I have to hold myself accountable to the decision, I care a bit more that people understand the reasoning behind it. This is a very good thing for the morale on your team! Which is arguably one of your most important jobs.

The last line in the second approach also opens up discussion: since you’re taking ownership of the decision, you’re also owning that you know about other pieces of the puzzle, and showing a willingness to dive in with your team.

What if you make a mistake?

We all do! Management can be difficult and it’s impossible to be perfect all the time. Try not to beat yourself up, but perhaps show a bit more thoughtfulness next time. I’ve made lots of mistakes as well. It’s not a stick to beat up yourself or others, but a lesson learned to be as mindful as possible and promote a better working environment.


We communicate to our teams, peers, and stakeholders whether or not we’re taking responsibility as a true leader in these moments. We communicate whether we’ll approach a problem with humility, and a desire to collaborate and improve. This may seem to be a detail, but it’s a powerful piece of leading an organization.



Sticky Headers: 5 Ways to Make Them Better

Page Laubheimer says that if you’re going to do a sticky header…

  1. Keep it small.
  2. Visually contrast it with the rest of the page.
  3. If it’s going to move, keep it minimal. (I’d say, respect prefers-reduced-motion.)
  4. Consider “partially persistent headers.” (Jemima Abu calls it a Smart Navbar.)
  5. Actually, maybe don’t even do it.

I generally like the term “sticky” header, because it implies you should use position: sticky for them, which I think you should. It used to be done with position: fixed, but that was trickier to pull off since a fixed header is taken out of the document flow. Using sticky positioning reserves that space automatically, without JavaScript or magic numbers.
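A sticky header often needs little more than this (assuming a .site-header element; the z-index is only there to keep the header above the scrolling content):

.site-header {
  position: sticky;
  top: 0;
  z-index: 1; /* keep the header above the content that scrolls beneath it */
}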




Web Frameworks: Why You Don’t Always Need Them

Richard MacManus explaining Daniel Kehoe’s approach to building websites:

There are three key web technologies underpinning Kehoe’s approach:

  • ES6 Modules: JavaScript ES6 can support import modules, which are also supported by browsers.
  • Module CDNs: JavaScript modules can now be downloaded from third-party content delivery networks (CDNs).
  • Custom HTML elements: Developers can now create custom HTML tags, via Web Components.

Using no build process and only features that are built into the browser still buys you a pretty powerful setup. You can still use stuff off npm. You can still get templating. You can still build with components. You still get isolation where needed.
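Here’s a rough sketch of what that approach can look like (the CDN URL, package, and tag name are just examples):

<!-- index.html: no build step, everything runs in the browser -->
<script type="module">
  // Load a library straight from a module CDN
  import confetti from "https://cdn.skypack.dev/canvas-confetti";

  // Define a custom HTML element with Web Components
  class PartyButton extends HTMLElement {
    connectedCallback() {
      this.innerHTML = "<button>Party!</button>";
      this.querySelector("button").addEventListener("click", () => confetti());
    }
  }
  customElements.define("party-button", PartyButton);
</script>

<party-button></party-button>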

I’d say today you’re:

  • Giving up some DX (hot module reloading, JSX, framework doodads)
  • Gaining some DX (can jump into project and just start working)
  • Giving up some performance (no tree shaking, loads of network requests)
  • Widening your hiring pool (more people know core technologies than specific tools)

But it’s not hard to imagine a tomorrow where we give up less and gain more, making the tools we use today less necessary. I’m quite sure we’ll always still find a way to jam more tools into what we’re doing. Hammer something something nail.




The Best Font Loading Strategies and How to Execute Them

Zach Leatherman wrote up a comprehensive list of font loading strategies that has been widely shared in the web development field. I took a look at this list before, but got so scared (and confused) that I decided not to do anything at all. I didn’t know how to begin loading fonts the best way, and I didn’t want to feel like an idiot going through this list.

Today, I gave myself time to sit down and figure it out. In this article, I want to share with you the best loading strategies and how to execute all of them.

The best strategies

Zach recommends two strategies in his article:

  1. FOUT with Class – Best approach for most situations. (This works whether we use a font-hosting company or host our own fonts.)
  2. Critical FOFT – Most performant approach. (This only works if we host our own fonts.)

Before we dive into these two strategies, we need to clear up the acronyms so you understand what Zach is saying.

FOIT, FOUT, FOFT

The acronyms are as follows:

  • FOIT means flash of invisible text. When web fonts are loading, we hide text so users don’t see anything. We only show the text when web fonts are loaded.
  • FOUT means flash of unstyled text. When web fonts are loading, we show users a system font. When the web font gets loaded, we change the text back to the desired web font.
  • FOFT means flash of faux text. This one is complicated and warrants more explanation. We’ll talk about it in detail when we hit the FOFT section.

Self-hosted fonts vs. cloud-hosted fonts

There are two main ways to host fonts:

  1. Use a cloud provider.
  2. Self-host the fonts.

How we load fonts differs greatly depending on which option we choose.

Loading fonts with cloud-hosted fonts

It’s often easier to use cloud-hosted fonts. The provider will give us a link to load the fonts. We can simply plunk this link into our HTML and we’ll get our web font. Here’s an example where we load web fonts from Adobe Fonts (previously known as Typekit).

<link rel="stylesheet" href="https://use.typekit.net/your-kit-id.css">

Unfortunately, this isn’t the best approach. The stylesheet link blocks rendering: if loading the web font hangs, users are left staring at a blank page while they wait.

Typekit used to provide JavaScript that loads fonts asynchronously. It’s a pity they don’t show this JavaScript version anymore. (The code still works, though I have no idea when, or if, it will stop working.)
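For context, the general shape of that kind of asynchronous loader (a sketch, not Typekit’s actual script) is to append the stylesheet from JavaScript so the browser can usually keep rendering while the CSS downloads:

// Append the font stylesheet from JavaScript instead of a blocking <link>.
// The kit URL is the same placeholder used above.
const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = 'https://use.typekit.net/your-kit-id.css';
document.head.appendChild(link);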

Loading fonts from Google Fonts is slightly better because it provides font-display: swap. Here’s an example where we load Lato from Google Fonts. (The display=swap parameter triggers font-display: swap).

<link
  href="https://fonts.googleapis.com/css2?family=Lato:ital,wght@0,400;0,700;1,400;1,700&display=swap"
  rel="stylesheet"
/>

Loading fonts with self-hosted fonts

You can only self-host your fonts if you have the license to do so. Since fonts can be expensive, most people choose to use a cloud provider instead.

There’s a cheap way to get fonts, though. Design Cuts runs occasional font deals where you can get insanely high-quality fonts for just $29 per bundle. Each bundle can contain up to 12 fonts. I managed to get classics like Clarendon, DIN, Futura, and a whole slew of fonts to play around with by camping on their newsletter for these deals.

If we want to self-host fonts, we need to understand two more concepts: @font-face and font-display: swap.

@font-face

@font-face lets us declare fonts in CSS. If we want to self-host fonts, we need to use @font-face to declare our fonts.

In this declaration, we can specify four things:

  • font-family: This tells CSS (and JavaScript) the name of our font.
  • src: The path to the font files so they can be loaded.
  • font-weight: The font weight. Defaults to 400 if omitted.
  • font-style: Whether to italicize the font. Defaults to normal if omitted.

For src, we need to specify several font formats. For a practical level of browser support, we can use woff2 and woff.

Here’s an example where we load Lato via @font-face.

@font-face {
  font-family: Lato;
  src: url('path-to-lato.woff2') format('woff2'),
       url('path-to-lato.woff') format('woff');
}

@font-face {
  font-family: Lato;
  src: url('path-to-lato-bold.woff2') format('woff2'),
       url('path-to-lato-bold.woff') format('woff');
  font-weight: 700;
}

@font-face {
  font-family: Lato;
  src: url('path-to-lato-italic.woff2') format('woff2'),
       url('path-to-lato-italic.woff') format('woff');
  font-style: italic;
}

@font-face {
  font-family: Lato;
  src: url('path-to-lato-bold-italic.woff2') format('woff2'),
       url('path-to-lato-bold-italic.woff') format('woff');
  font-weight: 700;
  font-style: italic;
}

If you don’t have woff2 or woff files, you can upload your font files (OpenType or TrueType) to a font-face generator to get them.

Next, we declare the web font in a font-family property.

html {   font-family: Lato, sans-serif; }

When browsers parse an element with the web font, they trigger a download for the web font.

font-display: swap

font-display takes one of four possible values: auto, swap, fallback, and optional. swap instructs browsers to display the fallback text until the web font gets loaded. In other words, swap triggers FOUT for self-hosted fonts. You can read about the other values in the CSS-Tricks almanac.
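As an aside, the CSS Font Loading API we lean on later exposes the same idea: in newer browsers, the FontFace constructor accepts a display descriptor. A minimal sketch, with a placeholder file path:

if ('fonts' in document) {
  // Declare Lato from JavaScript with swap-like behavior.
  const lato = new FontFace(
    'Lato',
    "url('path-to-lato.woff2') format('woff2')",
    { weight: '400', display: 'swap' }
  );
  document.fonts.add(lato);
  lato.load();
}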

Browser support for font-display: swap is pretty good nowadays so we can use it in our projects.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 60, Firefox 58, IE No, Edge 79, Safari 11.1

Mobile / Tablet: Android Chrome 88, Android Firefox 85, Android 81, iOS Safari 11.3-11.4

FOUT vs. FOUT with Class

FOUT means flash of unstyled text. You always want FOUT over FOIT (flash of invisible text) because it’s better for users to read words written with system fonts compared to words written with invisible ink. We mentioned earlier that font-display: swap gives you the ability to use FOUT natively.

FOUT with Class gives you the same results—FOUT—but it uses JavaScript to load the fonts. The mechanics are as follows:

  • First: Load system fonts.
  • Second: Load web fonts via JavaScript.
  • Third: When web fonts are loaded, add a class to the <html> tag.
  • Fourth: Switch the system font with the loaded web font.

Zach recommends loading fonts via JavaScript even though font-display: swap enjoys good browser support. In other words, Zach recommends FOUT with Class over @font-face + font-display: swap.

He recommends FOUT with Class because of these three reasons:

  1. We can group repaints.
  2. We can adapt to user preferences.
  3. We can skip font-loading altogether if users have a slow connection.

I’ll let you dive deeper into the reasons in another article. While writing this article, I found a fourth reason to prefer FOUT with Class: We can skip font-loading when users already have the font loaded in their system. (We do this with sessionStorage as we’ll see below.)

FOUT with Class (for cloud-hosted fonts)

First, we want to load our fonts as usual from our cloud-hosting company. Here’s an example where I load Lato from Google Fonts:

<head>
  <link
    href="https://fonts.googleapis.com/css2?family=Lato:ital,wght@0,400;0,700;1,400;1,700&display=swap"
    rel="stylesheet"
  />
</head>

Next, we want to load fonts via JavaScript. We’ll inject a script into the <head> section since the code footprint is small, and it’s going to be asynchronous anyway.

<head>
  <link
    href="https://fonts.googleapis.com/css2?family=Lato:ital,wght@0,400;0,700;1,400;1,700&display=swap"
    rel="stylesheet"
  />
  <script src="js/load-fonts.js"></script>
</head>

In load-fonts.js, we want to use the CSS Font Loading API to load the font. Here, we can use Promise.all to load all fonts simultaneously. When we do this, we’re grouping repaints.

The code looks like this:

if ('fonts' in document) {
  Promise.all([
    document.fonts.load('1em Lato'),
    document.fonts.load('700 1em Lato'),
    document.fonts.load('italic 1em Lato'),
    document.fonts.load('italic 700 1em Lato')
  ]).then(_ => {
    document.documentElement.classList.add('fonts-loaded')
  })
}

When fonts are loaded, we want to change the body text to the web font. We can do this via CSS by using the .fonts-loaded class.

/* System font before web fonts get loaded */
body {
  font-family: sans-serif;
}

/* Use the web font when it gets loaded */
.fonts-loaded body {
  font-family: Lato, sans-serif;
}

Pay attention here: We need to use the .fonts-loaded class with this approach.

We cannot write the web font directly in the <body>‘s font-family; doing this will trigger fonts to download, which means you’re using @font-face + font-display: swap. It also means the JavaScript is redundant.

/* DO NOT DO THIS */
body {
  font-family: Lato, sans-serif;
}

If users visit additional pages on the site, they already have the fonts loaded in their browser. We can skip the font-loading process (to speed things up) by using sessionStorage.

if (sessionStorage.fontsLoaded) {
  document.documentElement.classList.add('fonts-loaded')
} else {
  if ('fonts' in document) {
    Promise.all([
      document.fonts.load('1em Lato'),
      document.fonts.load('700 1em Lato'),
      document.fonts.load('italic 1em Lato'),
      document.fonts.load('italic 700 1em Lato')
    ]).then(_ => {
      document.documentElement.classList.add('fonts-loaded')
      // Remember for other pages in this session
      sessionStorage.fontsLoaded = true
    })
  }
}

If we want to optimize the JavaScript for readability, we can use an early return pattern to reduce indentation.

function loadFonts () {
  if (sessionStorage.fontsLoaded) {
    document.documentElement.classList.add('fonts-loaded')
    return
  }

  if ('fonts' in document) {
    Promise.all([
      document.fonts.load('1em Lato'),
      document.fonts.load('700 1em Lato'),
      document.fonts.load('italic 1em Lato'),
      document.fonts.load('italic 700 1em Lato')
    ]).then(_ => {
      document.documentElement.classList.add('fonts-loaded')
      // Remember for other pages in this session
      sessionStorage.fontsLoaded = true
    })
  }
}

loadFonts()

FOUT with Class (for self-hosted fonts)

First, we want to load our fonts as usual via @font-face. The font-display: swap property is optional since we’re loading fonts via JavaScript.

@font-face {
  font-family: Lato;
  src: url('path-to-lato.woff2') format('woff2'),
       url('path-to-lato.woff') format('woff');
}

/* Other @font-face declarations */

Next, we load the web fonts via JavaScript.

if ('fonts' in document) {
  Promise.all([
    document.fonts.load('1em Lato'),
    document.fonts.load('700 1em Lato'),
    document.fonts.load('italic 1em Lato'),
    document.fonts.load('italic 700 1em Lato')
  ]).then(_ => {
    document.documentElement.classList.add('fonts-loaded')
  })
}

Then we want to change the body text to the web font via CSS:

/* System font before the web font is loaded */
body {
  font-family: sans-serif;
}

/* Use the web font when it loads */
.fonts-loaded body {
  font-family: Lato, sans-serif;
}

Finally, we skip font loading for repeat visits.

if (sessionStorage.fontsLoaded) {
  document.documentElement.classList.add('fonts-loaded')
} else {
  if ('fonts' in document) {
    Promise.all([
      document.fonts.load('1em Lato'),
      document.fonts.load('700 1em Lato'),
      document.fonts.load('italic 1em Lato'),
      document.fonts.load('italic 700 1em Lato')
    ]).then(_ => {
      document.documentElement.classList.add('fonts-loaded')
      // Remember for other pages in this session
      sessionStorage.fontsLoaded = true
    })
  }
}

CSS Font Loader API vs. FontFaceObserver

The CSS Font Loader API has pretty good browser support, but it’s a pretty cranky API.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 35, Firefox 41, IE No, Edge 79, Safari 10

Mobile / Tablet: Android Chrome 88, Android Firefox 85, Android 81, iOS Safari 10.0-10.2

So, if you need to support older browsers (like IE 11 and below), or if you find the API weird and unwieldy, you might want to use Bram Stein’s FontFaceObserver. It’s super lightweight, so there’s not much harm.

The code looks like this. (It’s much nicer compared to the CSS Font Loader API).

new FontFaceObserver('Lato')
  .load()
  .then(_ => {
    document.documentElement.classList.add('fonts-loaded')
  })

Make sure to use fontfaceobserver.standalone.js if you intend to use it to load fonts in browsers that don’t support Promises.
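Since the strategies above wait on all four Lato faces before swapping, you’d do the same with FontFaceObserver by passing weight and style descriptors and grouping the promises. A minimal sketch:

// Observe all four Lato faces, then group the repaint, mirroring the
// Promise.all pattern used with the CSS Font Loading API above.
var regular = new FontFaceObserver('Lato');
var bold = new FontFaceObserver('Lato', { weight: 700 });
var italic = new FontFaceObserver('Lato', { style: 'italic' });
var boldItalic = new FontFaceObserver('Lato', { weight: 700, style: 'italic' });

Promise.all([
  regular.load(),
  bold.load(),
  italic.load(),
  boldItalic.load()
]).then(function () {
  document.documentElement.classList.add('fonts-loaded');
});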

FOFT

FOFT means flash of faux text. The idea here is we split font loading into three stages:

  • Step 1: Use fallback font when web fonts aren’t loaded yet.
  • Step 2: Load the Roman (also called “book” or “regular”) version of the font first. This replaces most of the text. Bold and italics will be faked by the browser (hence “faux text”).
  • Step 3: Load the rest of the fonts

Note: Zach also calls this Standard FOFT.

Using Standard FOFT

First, we load the Roman font.

@font-face {
  font-family: LatoInitial;
  src: url('path-to-lato.woff2') format('woff2'),
       url('path-to-lato.woff') format('woff');
  unicode-range: U+0041-005A, U+0061-007A; /* A-Z, a-z */
}

.fonts-loaded-1 body {
  font-family: LatoInitial;
}

if ('fonts' in document) {
  document.fonts.load('1em LatoInitial')
    .then(_ => {
      document.documentElement.classList.add('fonts-loaded-1')
    })
}

Then, we load other fonts.

Pay attention here: we’re loading Lato again, but this time, we set font-family to Lato instead of LatoInitial.

/* Load this first */
@font-face {
  font-family: LatoInitial;
  src: url('path-to-lato.woff2') format('woff2'),
       url('path-to-lato.woff') format('woff');
  unicode-range: U+0041-005A, U+0061-007A; /* A-Z, a-z */
}

/* Load these afterwards */
@font-face {
  font-family: Lato;
  src: url('path-to-lato.woff2') format('woff2'),
       url('path-to-lato.woff') format('woff');
  unicode-range: U+0041-005A, U+0061-007A; /* A-Z, a-z */
}

/* Other @font-face rules for different weights and styles */

.fonts-loaded-1 body {
  font-family: LatoInitial;
}

.fonts-loaded-2 body {
  font-family: Lato;
}

if ('fonts' in document) {
  document.fonts.load('1em LatoInitial')
    .then(_ => {
      document.documentElement.classList.add('fonts-loaded-1')

      Promise.all([
        document.fonts.load('400 1em Lato'),
        document.fonts.load('700 1em Lato'),
        document.fonts.load('italic 1em Lato'),
        document.fonts.load('italic 700 1em Lato')
      ]).then(_ => {
        document.documentElement.classList.add('fonts-loaded-2')
      })
    })
}

Again, we can optimize it for repeat views.

Here, we can add fonts-loaded-2 to the <html> straightaway since fonts are already loaded.

function loadFonts () {
  if (sessionStorage.fontsLoaded) {
    document.documentElement.classList.add('fonts-loaded-2')
    return
  }

  if ('fonts' in document) {
    document.fonts.load('1em LatoInitial')
      .then(_ => {
        document.documentElement.classList.add('fonts-loaded-1')

        Promise.all([
          document.fonts.load('400 1em Lato'),
          document.fonts.load('700 1em Lato'),
          document.fonts.load('italic 1em Lato'),
          document.fonts.load('italic 700 1em Lato')
        ]).then(_ => {
          document.documentElement.classList.add('fonts-loaded-2')

          // Optimization for repeat views
          sessionStorage.fontsLoaded = true
        })
      })
  }
}

Critical FOFT

The “critical” part comes from “critical CSS” (I believe), where we load only the essential CSS before loading the rest. We do this because it improves performance.

When it comes to typography, the only critical characters are the letters A to Z, both uppercase and lowercase. We can limit a font to just those characters with unicode-range.

When we create this subset, we can also create a separate font file with the necessary characters.

Here’s what the @font-face declaration looks like:

@font-face {
  font-family: LatoSubset;
  src: url('path-to-optimized-lato.woff2') format('woff2'),
       url('path-to-optimized-lato.woff') format('woff');
  unicode-range: U+0041-005A, U+0061-007A; /* A-Z, a-z */
}

We load this subset first.

.fonts-loaded-1 body {
  font-family: LatoSubset;
}

if ('fonts' in document) {
  document.fonts.load('1em LatoSubset')
    .then(_ => {
      document.documentElement.classList.add('fonts-loaded-1')
    })
}

And we load other font files later.

.fonts-loaded-2 body {
  font-family: Lato;
}

if ('fonts' in document) {
  document.fonts.load('1em LatoSubset')
    .then(_ => {
      document.documentElement.classList.add('fonts-loaded-1')

      Promise.all([
        document.fonts.load('400 1em Lato'),
        document.fonts.load('700 1em Lato'),
        document.fonts.load('italic 1em Lato'),
        document.fonts.load('italic 700 1em Lato')
      ]).then(_ => {
        document.documentElement.classList.add('fonts-loaded-2')
      })
    })
}

Again, we can optimize it for repeat views:

function loadFonts () {
  if (sessionStorage.fontsLoaded) {
    document.documentElement.classList.add('fonts-loaded-2')
    return
  }

  if ('fonts' in document) {
    document.fonts.load('1em LatoSubset')
      .then(_ => {
        document.documentElement.classList.add('fonts-loaded-1')

        Promise.all([
          document.fonts.load('400 1em Lato'),
          document.fonts.load('700 1em Lato'),
          document.fonts.load('italic 1em Lato'),
          document.fonts.load('italic 700 1em Lato')
        ]).then(_ => {
          document.documentElement.classList.add('fonts-loaded-2')

          // Optimization for repeat views
          sessionStorage.fontsLoaded = true
        })
      })
  }
}

Critical FOFT variants

Zach proposed two additional Critical FOFT variants:

  • Critical FOFT with Data URI
  • Critical FOFT with Preload

Critical FOFT with data URIs

In this variant, we load the critical font (the subsetted roman font) with data URIs instead of a normal link. Here’s what it looks like. (And it’s scary.)

@font-face {   font-family: LatoSubset;   src: url("data:application/x-font-woff;charset=utf-8;base64,d09GRgABAAAAABVwAA0AAAAAG+QAARqgAAAAAAAAAAAAAAAAAAAAAAAAAABHREVGAAABMAAAABYAAAAWABEANkdQT1MAAAFIAAACkQAAA9wvekOPT1MvMgAAA9wAAABcAAAAYNjUqmVjbWFwAAAEOAAAADgAAABEAIcBBGdhc3AAAARwAAAACAAAAAgAAAAQZ2x5ZgAABHgAAA8EAAAUUN1x8mZoZWFkAAATfAAAADYAAAA2DA2UbWhoZWEAABO0AAAAHgAAACQPOga/aG10eAAAE9QAAADIAAAA2PXwFPVsb2NhAAAUnAAAAG4AAABuhQSA721heHAAABUMAAAAGgAAACAAOgBgbmFtZQAAFSgAAAA0AAAANAI2Codwb3N0AAAVXAAAABMAAAAg/3QAegABAAAADAAAAAAAAAACAAEAAQA1AAEAAHicTZO/SxthGMe/d4k2NFbSFE2Maaq2tJ4/qtE4dwnBoUgoHUq5pWBBaCv0h4OCS2MLGUuXIhlKhwxFMnVwcAvB4SiSQSRkOEK9xan/wdvPRYQSnrzv3fu8n8/7vO97siRd1z0tyS6WHj/V8OsXHzY1rCjvZYzCcevVy3ebioW9fkRl99sYUepn5vTZWtOhdW7v6MJas+aIDeujdW5d2GV7x/4VSUaKkSf8ipFN4rK/EdnjaQ9KDuKArimuId1QQjeV1C2NaFQppTWmjMaV1W1N6K7ua1qOZjSreeW1rAJzouZUMSLOc4I2SYyYbY2aY6XMltLmpzLmmfLmQAViSDajcbOinDnUHWKCmDZNOcQM/VnaOdp52kUigaOBowG/Ab8Bvwy7BHsd9gNIXUhdSF0IXWZ38X3RErkF2hjO9zhLyprfZPfI7pHdI7tHdk+DZITrrsH1NMabDHPHWcUg9jb2NvY29jZZHtWdsjthJVHmxIi4CeuvkVHDwrkYB4uDxdGUeUSFLhW6GB0qdLE6VOjqoXlOlS7rXWW9b7Vs3rDmVa2YT5yOzS6O8b+Fx8Pj4VnH4+Hx8FTxVPFU8VQ1ybqnyJ02dVx1XFVcdVxVXHVcX7XAyhfp580JLg/XCa4DbqJtvkP/BrUO1YfqQ/Uh1iD5UHwI7fCG4okRCSJJ/L//kzCvzmABbt4cUdcxniNulM1NuDrNyx27PNGs2YXiQnGhuDjLVFGhigo0lyoqEF2qqLCGXSqoQN6H/IMqtqHv9+9iC3ILagtqC2IHYgdiB0oHQofZf5h5xowzRej5MP7y5PMVpNinNL28CaA2eRtwBllmDcBqwmrCCm9peEOb/VtzwEjASMBIwEjASPAPOZBmnAAAAHicY2BmEWWcwMDKwMI6i9WYgYFRHkIzX2SoZmLgYGbiZ2ViYmJhZmJewMCwPoAhwZsBCkoqA3wYHBh4fzOxFf4rZGBgj2Scp8DAMBkkx8LLuhtIKTAwAQBlVA2xeJxjYGBgYmBgYAZiESDJCKZZGAyANAcQguQUGKIYqv7/B7McGRL/////8P/B/7vBasEAAOVfC4UAAQAB//8AD3ichVgJVFRnln7/e/VqoahX26uFWqmNYpEqoCiKHYRiK4tNggIiICAIgigqRhHbBQVcA+0yUUIE3JOo7aQx3WQhiUI66U4653TrxE5m5vSkz0x3O2cS0/GcBt5z/ldVEHTszDkUVf+r/9z73+9+97v3LwRFghAEb8EnEQEiQxAHcACTMcyKGTACmIABC7MSgBOErvgY9bysNnAHqE2HOaYQwELN/6mQ8WV8fHI2R02CenpEokUNaLmrLDJPhWDILQRhvQ2tkogeiYR2xYY4HSojCZSjA8ybyWKIS0ed8WEmg3Px0y3wu747e1KN7g05o+Pu3qluegqkle8qtoyO03cA+4XuUuvwBfo7fNJZf7wqoXGVWxZ6ZWDt2Y7UEWtufXJn7zHL8hrXvp0IApDqJ9/g4fivERuCSOMlCWZHnFzBgSEZbSyTkYAn0aEO6NaV4IjTAzZ8Zg5Dq9s+AKLxsb+uWc6RSAS6CJe3Mavrw2PFxf2/bMvaUFloFYtERFEV/eTnr9L0xHr0q0tANr1p/ao1QQKp1qAhS0/fPzhw/1Sh0BBnEgrKN2xvnwYkgjJY4NcgFsFIiB8JllwiI1HoNQCCDYWxo5YxEPTehg3v0Y/H6HGwtmuqz+Ppm+qix/HJlg/oRxcv0o/ebxktGvp9X9/vTxbBGBmMk6Fdvs+qzBB43cL6qUw0mZpGp/DJUVp5juaPLdnNW7rbv7eKugR3Uk/GqIEF7OogdvGL2OlQBYdtNVpt2FPgiSFV0oBBTGB+/Fo/BPLXV5/a225PFGtEpCpjze7y3b865i0+cXdHSkNFkfU/pErwgevF/uE15+nHU23oV1eA7G67KiYnqkoD+EREmLr0zL2DRx78U0mwPJQEboVoC/Xr8JRwqQ9JXwx4xULEMAr4MsH/tz7C5B99NP8XfJLqRI/P5qDDVCPcfR8GMwJ3Y/7d92cYssLnaU++QR/A5woEcaUD+AfJwcTjhJsYahrTUA6GyuSSdIe7pcAyndr1+uaGYF2yViyXhK/qq8buzGd2f3goj8FqBp6oCtoy+n044hJcIB2kAADLRgcUDOFQK7BhUUA8c3eF1SYD/6WIMFylPpdb5NooBZr/ulQtFXNpnE+GGEOgJaqHVKARKhk1oDQE8SwaqoAdJOahecFiAU7V+THA/gY94gEMZLdm0G4Y2cPLiP9b9tXF80DSA3gKGKTvUBCseHg+dhQwWuFzmfjWT9UaAZgj5EFBCgJ8K9Cqh66jKM1TRWttKorGMHwySDH/YohTo41XYX1K3mwOK0Eeq5pvtdmwl9UO+dzHS7IiW/RpAwyWjJD43TSrQ0TYSgy93qgMFc7fQFF8UkDOXVfHKlnC2RwRySpVxZJzf4G1UvPkWzYOuSdFrIvsWywXhmSuAPl8VQPJWNPyLggaGwN8X+l8PzZGP3635RNP/1TXjqn+goL+qR1dU/0e9KuLgJzp6JihH8I6+u+Z9vYZIL3Yd+9kcfHJe31990+VlJyCdEGZfLLOwViIhVr1q1YUCAF+r2FRYAb87Minfdn5h3/T+/Bh+f6K6DduPMQnM3dcaW56rdtN/QH9MLp0s/voCGOvhv6c7YHxaJCYp+OBSuhTWsCI0o/HFVnSmQ92XrAkKWk6eU/O/x9h8+T5A7F0K6kCIRLZj0SKw/wiSsT0lD7LGdJCyoJnYh458klvZuzavhcsGvD6zQQVfUAcEXHmt6t7K6PfeO0LfDKx9czaor62FaQ8nPpFJBocIqOG0e8jC1uXHzjIMDPlyTfYdxCJNATJAEzNLcbLdsanoz8INBrQGLavTxjDUpK5CqHcVba9PK/DG5Fav6d3T0Nq6tarm7o/9sbwZCKxPbcxN7spx5TWwHyVlrXn9vbjf71RwxfZXHZrXn1a9gvJ4ZGuit51RUOb3UWeGqHIFGkyp5XFpK1Mi
oxOXNVTXXe1J68ZnlELs38WYsKBuXIy+mgSa1k9tGUaJy9fnn2IMxp04cn/4PFwD+SHlBGMQBgL2mFDL9x1NgyuWXOiwXm39PQXfX0PzpSiGdju+UPVZzvS0zperoafe3/yxUhV1cgDaI8L89AM7UmgvWcLFpi4r6pDueBVQsnnyQnwCsekepW69s/4pEo2971p5bJlK00svlgHZQ1aIhCEq4SWwp9jiVnKl9gFvsYOTMRFtYkN7FwBh0PwQCXbrEpSGzlgtUTIDuKCGNysOkcn3KKP8oNwHk4fvsV4nmdpMkym5SpsXqxjVk63MQ0uJFqoDrWhkWSYaG40cB6chOdRQ37xgO9EAXXk+anP5jDsJ8Dfac1dOckDP4EEaIUZfYdWgcd3FCFcuovDp/fxlDL0EforQkC9KQ9BQwgxlUnhSgJtlkupDwhlAEPUN208B0OYRO51tZ4PSkgNn6+WgSJeqOom9ae7MBaSmjZn6EMzTGiySDd/jHoLzWdOXgF5cNM3EyEOZ7wLwBlFRnKAQVZBYhnzv2PFyubfx3KL9GrW0GiBST/XOeardcjwE6xQJAJJYmodNkhmqjH6SGwPUF4m9i2YjmlbrHmXk8Bq3Kf/drO5KbsqO0Yp0XANq2Z2ru6rtpeqQ3FZROGq+pTU1iLbDXV0mjmmJC9Tv+dmRxzA0jorklilO7uNEUapMLlsZXLT4Gpqg0hdZ0kOJw3u5uKIlDCx1Oww/Asr1JHLnHH4ySOWHf8akSMWf/8IHMtfalYO2+TSwScJfhlic4ZvL7/UVjXY7ErbdnlD47F4Hjc8K/ZFz+Cw2d2QXnkoEf+aOrlijfvQ9L6td18qK8ytsc66XZ+9t35obdRKD0TS++QRdpKlY2ZA6XOr3AcAM4T5n3ozd090tV/JjOGKREKzqzjZu604PKpoc072qiSLRMl3uD/evO5Kdy7K2XbnBAQiiy9Q6UMSmoaqaoYa40OtenF2mTdvYJqJ1QN9f7kQ64J3X1+yB7z5E5DgEtvgku1J23appbbfOdFm5Yuyr3VUDjW53ja56zMq+xLjXiwYHEaxrXeHypYnoxmzmrA9pevcB2f2rR+Eka4Aj93OTxmfTLz/CuM1I9ELPhl02SaDDluM1J98gx9ib1bPm1vWjmzNlKioIjR21Q5PTkNejEQR7DDWtHUmtb3Z6/klGm9016f3nkE17W8fLs7oemNTlLZ+aF1MqAXGG5FqleYf++yPGa2FkS8z7IX1hl7AP0d0jIo5/OLKoA7bPoFqgV/XrDagHnesPbAyN4UFDDq9vTBRC5bRD97qlKux6ysaK4/UxsrWkNwQV1V2be/8EaxTgMvh6IE46MPYY5YecrwQqUYQBZwoYLtn2O5/Y0CG1SIEcj+poM78IOa2pX2OBb+SLi7DjGzHuvJl8bAThRauTtkwuMq9PQaoorpMKc3HywuWGwwpDT37e+pTsnp+vqXztc1J582eDq9nW2lUVMG61o645IJ0XVJ5gqs8Sbvty20NhRtDyZwkRYw9ShR1ota7e7Vdp8028sTZpd7dFXYxaVdYLGIWT+Goys/evS4l2ttQZEpdplLHuiOs9hABzuZqitF/txW69HpXoa12yxYG2dMQgO8ho2RP1Q4zVoUxxSI+PZE92lq+f3X07Y0dpUdTYHGM5JUnNQ9WUm3oya79RVkUC3KEaYcD+D044yiWcsSPD9NTDspMdpXabpbJzHa12maSWSbwMnWMmSTNMXDNvNtmP2IJ5r5D0Ccz9BGfNZLRfWxhmk1YbEhhz3pI4qKEiCshSrInDLmbi4yu8Wfd0efrBVzgKWNVzV1Kbyux8dnvPOMdYjEMNVIEsdD9w8k3zD9FiIdv20m1GOQJ9apL9H5CI5HoBeDUuCKUoF8T6vUO/Ov5Ab4Y1IpkdKtYHRRsIGlELAVX5QJaxKB+Df6rg54Cc/y1CfzrWY0/G2ypX1sY/3qfxMImhy3NCvFUftqsQUTS8ZqWA1p5bkWDY+XeSvtEa1N0aZp5Yn2de2sMSxDWmle1vSFxpTPE2ThUy+Rt1x59+tp05lNPd37G/NwCD6BnxVLPumc9Cd2XNv1ABmh684rqZ8jgVymWFdqSMnPYs7kKCFSgj3hy9t3u7Ly9NydnL/O+L+ftsMLOorNnzpwt6iwMQzm7po94vUemd3XfOerxHL2zq3qw0fnpW7/4zNn4U8bTaXqUFQ77k18Pl547oP4h4CntZ2KAOri5Esp/eufllrr+eCiDL70SkH56FG8L21Van3Noxif+GSm0E61cov4wNnoU+3LRY0AcXAm+m8g/VODGY67goPC2iazLG5cqcOzORQXOX14SNnsS/N1bvVSBs1yfLtRosM+n/zayQAvY2oyB3GzQyVkC53BrVK5WwQkhUp21e6NYArlmrH0XKe5XkR2tVBu0lAH7uRFmJvVHJlZ4K/4/A2tGFFchlUQkl2U4S5wqm3dd0zqvLbbmcEXTeEpEcIgwPLHIafPEa2zeuqY6r82xfrC2/eamHIFQa9Ko7emmqMRwXWhk5trslE1lscsTs+HkplEqIxJCw51WnTEirTIjf2clbOcAiYY32U/wGsTAKL1L6rvjMfJOypd21TAnM3REX4tEFeJqEEHfz4zTRujlXCLYaT+Y193bI1Jgr+TLQaaQpM/1UoNZGUKpULx6WdzAPrRbymDaA5H4kiXw6RWjK3JfypZ20p6J9vaSY6lM6yTcl9vhrWgZeBE9RW3s2ufNRqm57yD51yQ2DTHW4KTLwqG150+8ktNyDRtwCDGHKyUAn62WHacrL8L0iKj1Fo/Z7LGg50RyKH3+CRMqgND3Kw9jSeq3JA1MvEuXzKUGswFm6sXgBZh4SaHBZ7gEzpXx/4CryCJSg9/ji3m4iPcJrpUO0D8bVAf9iRvExoO4f+brBwL+c0ymHAt6XkiSQqrOnGswFoT6T2NC31PGKJWxSirLhPhvEtifYYw/zL7SBP9BFmdf5hc2LdhIj99QkFyaJ/m3IJnwCj0ONt5gJt85jvDbIDj4SsEsIaC7tRqQT4hpMfVHJQFO6xQ0vBL4EIA1zYV+5M/Dkmn5UmJEqWIDOU/AZiZ8gq2VDFLv8jTavndYAoWQ2qpYplRGK9CjIvns+2QIOMdYhUMw1gKtCn3zb0JgAGYzE3C+IBh1Up9jKMGnPkMTvAKzCP32RJHUTFDkceR/ATzlBB4AAQAAAAEaoLKse0pfDzz1AB8IAAAAAADSfZgxAAAAANJ9mDH/x/6KCBgF5AAAAAgAAgAAAAAAAHicY2BkYGCP/JfEwMCh9v84kJRgAIqgADMAZcoEDQAAeJwVjC9rQlEYxn875z3Hg8gNhiGosCA2g5hkiMUg4gcwiGFxaVwMl8GiLAzDyk1GLYsnGez3IxgNq/sGIvO94eF5nt/7x/wxefgC90bV9YjunaX3RDmoPrVftNeIZs3Zbhm5lEJaxMqSWHL/wkp+KUp3XZ2NeJYjbXdmrz9D6JK4jioQ5MZCEt3P2NkTc/WZ9JmbSFMeGUhKbmpsTPgvlO80//hv8pKrZvKqrjd2SG4zxuZK
T/mHNKj7JxIJtDUnNjK9AyQsLMUAAAAAAAAAKwBrALEA3QD0AQkBVgFtAXkBnwHYAecCIQJHAocCrwL8AzQDhgOYA8AD5QQtBGIEigSlBPcFLwVtBacF6QYYBpYGuQbgBxwHUQddB5oHwAf1CC4IaAiNCN4JEgk5CV4JqwnfCgoKKAAAeJxjYGRgYDBjiGVgZgABEI+JAQkAAA92AJsAAAAAAAIAHgADAAEECQABAAgAAAADAAEECQACAA4ACABMAGEAdABvAFIAZQBnAHUAbABhAHJ4nGNgZgCD/4UMVQxYAAAsjgHuAA==") format("woff");   /* ... */ }

This code looks scary, but there’s no need to be scared. The hardest part here is generating the data URI, and CSS-Tricks has us covered here.

Critical FOFT with preload

In this variant, we add a <link rel="preload"> that points to the critical font file. Here’s what it looks like:

<link rel="preload" href="path-to-roman-subset.woff2" as="font" type="font/woff2" crossorigin>

We should only preload one font here. If we load more than one, we can adversely affect the initial load time.

In the strategy list, Zach also mentioned he prefers using a data URI over the preload variant. He only prefers it because browser support for preload used to be bad. Today, I feel that browser support is decent enough to choose preloading over a data URI.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 50, Firefox 85, IE No, Edge 79, Safari 11.1

Mobile / Tablet: Android Chrome 88, Android Firefox 85, Android 81, iOS Safari 11.3-11.4

Final note from Zach

Chris ran this article by Zach, and Zach wished he had prioritized a JavaScript-free approach in his original article.

I think the article is good but I think my article that its based off of is probably a little dated in a few ways. I wish it prioritized no-JS approaches more when you’re only using one or two font files (or more but only 1 or 2 of each typeface). The JS approaches are kind of the exception nowadays I think (unless you’re using a lot of font files or a cloud provider that doesn’t support font-display: swap)

This switches the verdict a tiny bit, which I’m going to summarize in the next section.

Which font loading strategy to use?

If you use a cloud-hosted provider:

  • Use font-display: swap if the host provides it.
  • Otherwise, use FOUT with Class.

If you host your web fonts, you have a few choices:

  1. @font-face + font-display: swap
  2. FOUT with Class
  3. Standard FOFT
  4. Critical FOFT

Here’s how to choose between them:

  • Choose @font-face + font-display: swap if you’re starting out and don’t want to mess with JavaScript. It’s the simplest of them all. Also choose this option if you only use one or two font files for each typeface.
  • Choose Standard FOFT if you’re ready to use JavaScript, but don’t want to do the extra work of subsetting the Roman font.
  • Choose a Critical FOFT variant if you want to go all the way for performance.

That’s it! I hope you found all of this useful!

If you loved this article, you may like other articles I wrote. Consider subscribing to my newsletter 😉. Also, feel free to reach out to me if you have questions. I’ll try my best to help!



Two Issues Styling the Details Element and How to Solve Them

In the not-too-distant past, even basic accordion-like interactions required JavaScript event listeners or some CSS… trickery. And, depending on the solution used, editing the underlying HTML could get complicated.

Now, the <details> and <summary> elements (which combine to form what’s called a “disclosure widget”) have made creation and maintenance of these components relatively trivial.

At my job, we use them for things like frequently asked questions.

Pretty standard question/answer format

There are a couple of issues to consider

Because expand-and-collapse interactivity is already baked into the <details> and <summary> HTML tags, you can now make disclosure widgets without any JavaScript or CSS. But you still might want some. Left unstyled, <details> disclosure widgets present us with two issues.

Issue 1: The <summary> cursor

Though the <summary> section invites interaction, the element’s default cursor is a text selection icon rather than the pointing finger you may expect:

We get the text cursor but might prefer the pointer to indicate interaction instead.

Issue 2: Nested block elements in <summary>

Nesting a block-level element (e.g. a heading) inside a <summary> element pushes that content down below the arrow marker, rather than keeping it inline:

Block-level elements won’t share space with the summary marker.

The CSS Reset fix

To remedy these issues, we can add the following two styles to the reset section of our stylesheets:

details summary {
  cursor: pointer;
}

details summary > * {
  display: inline;
}

Read on for more on each issue and its respective solution.

Changing the <summary> cursor value

When users hover over an element on a page, we always want them to see a cursor “that reflects the expected user interaction on that element.”

We touched briefly on the fact that, although <summary> elements are interactive (like a link or form button), its default cursor is not the pointing finger we typically see for such elements. Instead, we get the text cursor, which we usually expect when entering or selecting text on a page.

To fix this, switch the cursor’s value to pointer:

details summary {    cursor: pointer; }

Some notable sites already include this property when they style <details> elements. The MDN Web Docs page on the element itself does exactly that. GitHub also uses disclosure widgets for certain items, like the actions to watch, star and fork a repo.

GitHub uses cursor: pointer on the <summary> element of its disclosure widget menus. 

I’m guessing the default cursor: text value was chosen to indicate that the summary text can (along with the rest of a disclosure widget’s content) be selected by the user. But, in most cases, I feel it’s more important to indicate that the <summary> element is interactive.

Summary text is still selectable, even after we’ve changed the cursor value from text to pointer. Note that changing the cursor only affects appearance, and not its functionality.

Displaying nested <summary> contents inline

Inside each <summary> section of the FAQ entries I shared earlier, I usually enclose the question in an appropriate heading tag (depending on the page outline):

<details>
  <summary>
    <h3>Will my child's 504 Plan be implemented?</h3>
  </summary>
  <p>Yes. Similar to the Spring, case managers will reach out to students.</p>
</details>

Nesting a heading inside <summary> can be helpful for a few reasons:

  • Consistent visual styling. I like my FAQ questions to look like other headings on my pages.
  • Using headings keeps the page structure valid for users of Internet Explorer and pre-Chromium versions of Edge, which don’t support <details> elements. (In these browsers, such content is always visible, rather than interactive.)
  • Proper headings can help users of assistive technologies navigate within pages. (That said, headings within <summary> elements pose a unique case, as explained in detail below. Some screen readers interpret these headings as what they are, but others don’t.)

Headings vs. buttons

Keep in mind that the <summary> element is a bit of an odd duck. It operates like a button in many ways. In fact, it even has implicit role=button ARIA mapping. But, very much unlike buttons, headings are allowed to be nested directly inside <summary> elements.

This poses us — and browser and assistive technology developers — with a contradiction:

  • Headings are permitted in <summary> elements to provide in-page navigational assistance.
  • Buttons strip the semantics out of anything (like headings) nested within them.

Unfortunately, assistive technologies are inconsistent in how they’ve handled this situation. Some screen-reading technologies, like NVDA and Apple’s VoiceOver, do acknowledge headings inside <summary> elements. JAWS, on the other hand, does not.

What this means for us is that, when we place a heading inside a <summary>, we can style the heading’s appearance. But we cannot guarantee our heading will actually be interpreted as a heading!

In other words, it probably doesn’t hurt to put a heading there. It just may not always help.

Inline all the things

When using a heading tag (or another block element) directly inside our <summary>, we’ll probably want to change its display style to inline. Otherwise, we’ll get some undesired wrapping, like the expand/collapse arrow icon displayed above the heading, instead of beside it.

We can use the following CSS to apply a display value of inline to every heading — and to any other element nested directly inside the <summary>:

details summary > * {    display: inline; }

A couple notes on this technique. First, I recommend using inline, and not inline-block, as the line wrapping issue still occurs with inline-block when the heading text extends beyond one line.

Second, rather than changing the display value of the nested elements, you might be tempted to replace the <summary> element’s default display: list-item value with display: flex. At least I was! However, if we do this, the arrow marker will disappear. Whoops!

Bonus tip: Excluding Internet Explorer from your styles

I mentioned earlier that Internet Explorer and pre-Chromium (a.k.a. EdgeHTML) versions of Edge don’t support <details> elements. So, unless we’re using polyfills for these browsers, we may want to make sure our custom disclosure widget styles aren’t applied for them. Otherwise, we end up with a situation where all our inline styling garbles the element.

Inline <summary> headings could have odd or undesirable effects in Internet Explorer and EdgeHTML.

Plus, the <summary> element is no longer interactive when this happens, meaning the cursor’s default text style is more appropriate than pointer.

If we decide that we want our reset styles to target only the appropriate browsers, we can add a feature query that prevents IE and EdgeHTML from ever having our styles applied. Here’s how we do that using @supports to detect a feature only those browsers support:

@supports not (-ms-ime-align: auto) {
  details summary {
    cursor: pointer;
  }

  details summary > * {
    display: inline;
  }

  /* Plus any other <details>/<summary> styles you want IE to ignore. */
}

IE actually doesn’t support feature queries at all, so it will ignore everything in the above block, which is fine! EdgeHTML does support feature queries, but it too will not apply anything within the block, as it is the only browser engine that supports -ms-ime-align.

The main caveat here is that there are also a few older versions of Chrome (namely 12-27) and Safari (macOS and iOS versions 6-8) that do support <details> but don’t support feature queries. Using a feature query means that these browsers, which account for about 0.06% of global usage (as of January 2021), will not apply our custom disclosure widget styles, either.

Using a @supports selector(details) block, instead of @supports not (-ms-ime-align: auto), would be an ideal solution. But selector queries have even less browser support than property-based feature queries.

Final thoughts

Once we’ve got our HTML structure set and our two CSS reset styles added, we can spruce up all our disclosure widgets however else we like. Even some simple border and background color styles can go a long way for aesthetics and usability. Just know that customizing the <summary> markers can get a little complicated!



SVG, Favicons, and All the Fun Things We Can Do With Them

Favicons are the little icons you see in your browser tab. They help you understand which site is which when you’re scanning through your browser’s bookmarks and open tabs. They’re a neat part of internet history that are capable of performing some cool tricks.

One very new trick is the ability to use SVG as a favicon. It’s something that most modern browsers support, with more support on the way.

Here’s the code for how to add favicons to your site:

<link rel="icon" type="image/svg+xml" href="/favicon.svg"> <link rel="alternate icon" href="/favicon.ico"> <link rel="mask-icon" href="/safari-pinned-tab.svg" color="#ff8a01">

If a browser doesn’t support an SVG favicon, it will ignore the first link element declaration and continue on to the second. This ensures that all browsers that support favicons can enjoy the experience.

You may also notice the alternate attribute value in the rel declaration on the second line. This programmatically communicates to the browser that the .ico favicon is specifically an alternate presentation.

Following the favicons is a line of code that loads another SVG image, one called safari-pinned-tab.svg. This supports Safari’s pinned tab functionality, which existed before other browsers had SVG favicon support. There are additional files you can add here to enhance your site for different apps and services, but more on that in a bit.

Here’s more detail on the current level of SVG favicon support:

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 80, Firefox 41, IE No, Edge 80, Safari TP

Mobile / Tablet: Android Chrome 80, Android Firefox No, Android No, iOS Safari 13.4

Why SVG?

You may be questioning why this is needed. The .ico file format has been around forever and can support images up to 256×256 pixels in size. Here are three answers for you.

Ease of authoring

It’s a pain to make .ico files. The format is proprietary to Microsoft, so you’ll need specialized tools to make them. SVG is an open standard, so you can create and edit these files without any further tooling or platform lock-in.

Future-proofing

Retina? 5k? 6k? When we use a resolution-agnostic SVG file for a favicon, we guarantee that our favicons will look crisp on future devices, regardless of how large their displays get.

Performance

SVGs are usually very small files, especially when compared to their raster image counterparts — even more-so if you optimize them beforehand. By only using a 16×16 pixel favicon as a fallback for browsers that don’t support SVG, we provide a combination that enjoys a high degree of support with a smaller file size to boot. 

This might seem a bit extreme, but when it comes to web performance, every byte counts!

Tricks

Another cool thing about SVG is we can embed CSS directly in it. This means we can do fun things like dynamically adjust them with JavaScript, provided the SVG is declared inline and not embedded using an img element.

<svg  version="1.1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">   <style>     path { fill: #272019; }   </style>   <!-- etc. --> </svg>

Since SVG favicons are embedded using the link element, they can’t really be modified using JavaScript. We can, however, use things like emoji and media queries.

Emoji

Lea Verou had a genius idea about using emoji inside of SVG’s text element to make a quick favicon with a transparent background that holds up at small sizes.

In response, Chris Coyier whipped up a neat little demo that lets you play around with the concept.
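If you want to try the emoji idea without opening a graphics editor, one small sketch is to build the SVG as a string and swap it into an existing <link rel="icon"> as a data URI (this assumes such a link element is already in the page):

// Swap the favicon for an emoji-in-SVG data URI.
const svg = `<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <text y=".9em" font-size="90">🐱</text>
</svg>`;

document.querySelector('link[rel="icon"]')
  .setAttribute('href', 'data:image/svg+xml,' + encodeURIComponent(svg));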

Dark Mode support

Both Thomas Steiner and Mathias Bynens independently stumbled across the idea that you can use the prefers-color-scheme media query to provide support for dark mode. This work is built off of Jake Archibald’s exploration of SVG and media queries.

<svg width="128" height="128" xmlns="http://www.w3.org/2000/svg">   <style>     path { fill: #000000; }     @media (prefers-color-scheme: dark) {       path { fill: #ffffff; }     }   </style>   <path d="M111.904 52.937a1.95 1.95 0 00-1.555-1.314l-30.835-4.502-13.786-28.136c-.653-1.313-2.803-1.313-3.456 0L48.486 47.121l-30.835 4.502a1.95 1.95 0 00-1.555 1.314 1.952 1.952 0 00.48 1.99l22.33 21.894-5.28 30.918c-.115.715.173 1.45.768 1.894a1.904 1.904 0 002.016.135L64 95.178l27.59 14.59c.269.155.576.232.883.232a1.98 1.98 0 001.133-.367 1.974 1.974 0 00.768-1.894l-5.28-30.918 22.33-21.893c.518-.522.71-1.276.48-1.99z" fill-rule="nonzero"/> </svg>

For supporting browsers, this code means our star-shaped SVG favicon will change its fill color from black to white when dark mode is activated. Pretty neat!

Other media queries

Dark mode support got me thinking: if SVGs can support prefers-color-scheme, what other things can we do with them? While the support for Level 5 Media Queries may not be there yet, here’s some ideas to consider:

Mockup of four SVG favicon treatments. The first treatment is a pink star with a tab title of, “SVG Favicon.” The second treatment is a hot pink star with a tab title of, “Light Level SVG Favicon.” The third treatment is a light pink star with a tab title of, “Inverted Colors SVG Favicon.” The fourth treatment is a black star with a tab title of, “High Contrast Mode SVG Favicon.” The tabs are screen captures from Microsoft Edge, with the browser chrome updating to match each specialized mode.
A mockup of how these media query-based adjustments could work.

Keep it crisp

Another important aspect of good favicon design is making sure they look good in the small browser tab area. The secret to this is making the paths of the vector image line up to the pixel grid, the guide a computer uses to turn SVG math into the bitmap we see on a screen. 

Here’s a simplified example using a square shape:

A crisp orange square on a white background. There is also a faint grid of gray horizontal and vertical lines that represent the pixel grid. Screenshot from Figma.

When the vector points of the square align to the pixel grid of the artboard, the antialiasing effect a computer uses to smooth out the shapes isn’t needed. When the vector points aren’t aligned, we get a “smearing” effect:

A blurred orange square on a white background. There is also a faint grid of gray horizontal and vertical lines that represent the pixel grid. Screenshot from Figma.

A vector point’s position can be adjusted on the pixel grid by using a vector editing program such as Figma, Sketch, Inkscape, or Illustrator. These programs export SVGs as well. To adjust a vector point’s location, select each node with a precision selection tool and drag it into position.

Some more complicated icons may need to be simplified in order to look good at such a small size. If you’re looking for a good primer on this, Jeremy Frank wrote a really good two-part article over at Viget.

Go the extra mile

In addition to favicons, there are a bunch of different (and unfortunately proprietary) ways to use icons to enhance a site’s experience. These include things like the aforementioned pinned tab icon for Safari¹, chat app unfurls, a pinned Windows start menu tile, social media previews, and home screen launchers.

If you’re looking for a great place to get started with these kinds of enhancements, I really like realfavicongenerator.net.

Icon output from realfavicongenerator.net arranged in a grid using CSS-Trick’s logo. There are two rows of five icons: android-chrome-192x192.png, android-chrome-384x384.png, apple-touch-icon.png, favicon-16x16.png, favicon-32x32.png, mstile-150x150.png, safari-pinned-tab.svg, favicon.ico, browserconfig.xml, and site.webmanifest.
It’s a lot, but it guarantees robust support.

A funny thing about the history of the favicon: Internet Explorer was the first browser to support them and they were snuck in at the 11th hour by a developer named Bharat Shyam:

As the story goes, late one night, Shyam was working on his new favicon feature. He called over junior project manager Ray Sun to take a look.

Shyam commented, “This is good, right? Check it in?”, requesting permission to check the code into the Internet Explorer codebase so it could be released in the next version. Sun didn’t think too much of it, the feature was cool and would clearly give IE an edge. So he told Shyam to go ahead and add it. And just like that, the favicon made its way into Internet Explorer 5, which would go on to become one of the largest browser releases the web has ever seen.

The next day, Sun was reprimanded by his manager for letting the feature get by so quickly. As it turns out, Shyam had specifically waited until later in the day, knowing that a less experienced Program Manager would give him a pass. But by then, the code had been merged in. Incidentally, you’d be surprised just how many relatively major browser features have snuck their way into releases like this.

From How We Got the Favicon by Jay Hoffmann

I’m happy to see the platform throw a little love at favicons. They’ve long been one of my favorite little design details, and I’m excited that they’re becoming more reactive to users’ needs. If you have a moment, why not sneak an SVG favicon into your project the same way Bharat Shyam did way back in 1999.


¹ I haven’t been able to determine if Safari is going to implement SVG favicon support, but I hope they do. Has anyone heard anything?


The Rising Complexity of JAMstack Sites and How to Manage Them

When you add anything with user-generated content or dynamic data to a static site, the complexity of the build process can become comparable to launching a monolithic CMS. How can we add rich content to static sites without stitching together multiple third-party services?

For people in the development community, static site generators are a popular alternative to traditional content management systems (CMS) like WordPress. By comparison, static sites are usually lightweight, highly configurable, fast, easy to use and can be deployed almost anywhere.

With static websites, no code is generated on the server; we’ve replaced databases and server-side code with APIs and build processes.

This has become known as the JAMstack, which stands for JavaScript, APIs and Markup. I have a strong preference for JAMstack sites because I feel more in control of the output than I do when working with the often large and monolithic CMSs I’ve sometimes had to use on client projects.

Despite my enthusiasm, I’m often disheartened by the steep complexity curve I typically encounter about halfway through a JAMstack project. Normally the first few weeks are incredibly liberating. It’s easy to get started, there is good visible progress, everything feels lean and fast. Over time, as more features are added, the build steps become more complex, multiple APIs are added, and suddenly everything feels slow. In other words, the development experience begins to suffer.

It usually looks something like this:

A hand-drawn chart showing complexity of a project over time. It shows a complexity curve that rises steeply at the end.

One of the reasons for this steep rise in complexity is there are limits to the type of data that markdown can easily represent. Relationships are one example where static sites struggle. Relationships between pages or collections of assets (such as an image gallery) can only be represented by markdown in inefficient ways. It requires significant preprocessing to resolve anything more complicated than a simple set of tags or categories. If you’ve ever had to do it, you will also know the authoring experience of managing relationships in markdown isn’t ideal.
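To make that preprocessing concrete, here’s a rough sketch (not taken from any particular project) of resolving a “related posts” relationship at build time in an Eleventy config, assuming each post lists related slugs in its front matter:

// .eleventy.js: resolve front-matter relationships at build time.
// Assumes posts carry something like `related: ["other-post-slug"]`.
module.exports = function (eleventyConfig) {
  eleventyConfig.addCollection('postsWithRelated', (collectionApi) => {
    const posts = collectionApi.getFilteredByTag('post');
    return posts.map((post) => {
      post.data.relatedPosts = (post.data.related || [])
        .map((slug) => posts.find((p) => p.fileSlug === slug))
        .filter(Boolean);
      return post;
    });
  });
};

It works, but every new relationship means more build-time glue code like this, which is exactly the complexity creep described above.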

User-generated content is another area that can cause a steep rise in the complexity of static sites. Adding features like comments, ratings, likes or any other kind of dynamic content will require third-party services — each has its own account that needs to be managed, not to mention that adding third-party scripts can have a negative impact of page performance.

If a service doesn’t match your specific requirements, sometimes it’s possible to cobble solutions together using generic platforms like Google Forms or AirTable.

The end result is we’ve outsourced the database, fragmented the content management experience and stitched together a bundle of compromises. That’s a stark contrast from the initial ease of setting up and deploying a JAMstack site.

Although this complexity curve is not unique to JAMstack projects, adding rich features to markdown-driven sites is far more difficult than we care to admit. What happened!? A lack of complexity was one of the reasons we favored JAMstack in the first place.

We did that thing that web developers do. We moved the complexity from one space into another and congratulated ourselves 😂. Not having complexity on the server-side is good for front-end performance, but there is little incentive to optimize any further once we do this. Ridiculous build times and complicated tool-chains have become an acceptable reality for modern front-end web development.

JAMstack Plus

Before I come across as sounding too critical, I should make it clear that I absolutely love static site generators. I think they are a perfect solution for many simple sites and you should still use them. However, I feel like a simple content management layer that I own and can configure is preferable to:

  • poor content management experiences,
  • complicated integration of third-party services, and
  • inefficient build processes.

I want to combine the benefits of a CMS with static site generators.

And it seems I’m not the only one who has reached this conclusion.

The solution doesn’t need to be another third-party service or require abandoning static sites entirely. You can use a personalized content management layer and unified API to enrich a static site. It might not be as hard as you think.

The first step is to create an API for your site. You can use any headless CMS, but the challenge I’ve had with many options is they make a lot of assumptions about the type of content you want. You might not want the CMS to manage pages and posts, but rather use it to store comments or images. I find this particularly difficult with WordPress. I often feel like I’m forcing a blogging platform to be just the service I need.

The new version of KeystoneJS (Keystone 5) is an excellent alternative to more opinionated content management systems. It’s made up of tiny independent components, so you only add the parts you need. This means it doesn’t feel like modifying a blogging platform. Instead it’s like creating a personalized mini-CMS and API to work specifically with your site.
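To give a sense of how small such a service can be, here’s a rough sketch (not the starter kit’s actual code; the list and field names are made up) of a Keystone 5 app that manages nothing but comments:

// A tiny Keystone 5 service that only knows about comments.
// Package names follow Keystone 5; the adapter choice is an assumption.
const { Keystone } = require('@keystonejs/keystone');
const { Text } = require('@keystonejs/fields');
const { GraphQLApp } = require('@keystonejs/app-graphql');
const { MongooseAdapter } = require('@keystonejs/adapter-mongoose');

const keystone = new Keystone({ adapter: new MongooseAdapter() });

keystone.createList('Comment', {
  fields: {
    postSlug: { type: Text, isRequired: true },
    body: { type: Text, isMultiline: true, isRequired: true },
  },
});

module.exports = { keystone, apps: [new GraphQLApp()] };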

I call this approach JAMstack Plus.

To help you get started with this idea I’ve created two projects:

  1. Supermaya, a starter kit for the static site generator Eleventy.
  2. Keystone JAMstack Plus, a blog enrichment platform.

Introducing Supermaya

The first project I want to share with you is Supermaya, an Eleventy starter kit designed to help add rich features to a blog or website without a complicated build process.

It comes with all the “blog standard” features, including:

  • Posts and Pages
  • Pagination
  • Tags
  • RSS feed
  • Service worker
  • Lazy loading images
  • Critical CSS (if enabled)

It also has considerate and accessible markup. If deployed correctly, it should get full scores on a lighthouse audit out-of-the-box:

Supermaya scores 100% on Lighthouse tests.

I didn’t build Supermaya specifically as a platform to add user-generated content to static sites. Instead, I started it because I was not satisfied with the way existing static site generators integrate with other build tools. That’s why all the pre-processing steps in Supermaya are built into Eleventy itself. This includes the compilation of SCSS and JavaScript. Unifying the compilation steps eliminates the need for build tools like Grunt, Gulp or Webpack running in parallel.

After this, I realized the other reason for increasing complexity on JAMstack sites was integration with third-party services, usually for user-generated content. To solve this, Supermaya has optional tie-ins with a Keystone JAMstack Plus starter-kit, which makes it easier to add user-generated content and other rich features.

You can deploy both Keystone and Supermaya together and connect them at the same time by following the instructions during installation. This will deploy Keystone to Heroku and Supermaya to Netlify, as well as configure your admin user and API URL.

Rich features are added with progressive enhancement, so if the API cannot be reached or there is a server error, the site will continue to function without noticeable degradation or delays for users.
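In code, that progressive enhancement can be as simple as fetching after the page renders and quietly giving up on failure. A minimal sketch (the endpoint and data attributes are hypothetical):

// Enhance a post page with a clap count from the API.
// If the API is down, the static page is left untouched.
const el = document.querySelector('[data-clap-count]');

if (el) {
  fetch(`https://your-api.example.com/claps?post=${encodeURIComponent(el.dataset.post)}`)
    .then((res) => (res.ok ? res.json() : Promise.reject(res.status)))
    .then((data) => { el.textContent = data.count; })
    .catch(() => { /* API unreachable: keep the static content */ });
}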

JAMstack Plus starter kit

The Keystone JAMstack Plus starter kit allows you to add rich features to a blog including:

  • Comments
  • Claps
  • Reading list, and
  • Logins

Just like Supermaya, it can be used on its own. After it’s deployed, you’ll get access to an admin interface that allows you to create and manage content. You’ll also get a GraphQL endpoint that can be connected to Supermaya.

It’s configured with the intention of being a headless CMS for user-generated content. It expects pages and posts to be managed by a static site generator. However, with a little work — and following the examples in Supermaya — you can connect any front-end to the GraphQL API.
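Talking to that GraphQL endpoint from any front end is just a POST request. A minimal sketch with a made-up query shape (the real schema depends on the lists you define in Keystone):

// Fetch comments for a post from the GraphQL API.
// The endpoint URL and field names are assumptions.
async function fetchComments(slug) {
  const res = await fetch('https://your-api.example.com/admin/api', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `query ($slug: String!) {
        allComments(where: { postSlug: $slug }) { body }
      }`,
      variables: { slug },
    }),
  });
  const { data } = await res.json();
  return data.allComments;
}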

I’d encourage you to modify the starter-kit: Add additional features or provide content for pages directly from Keystone. If you add features that could be used by the rest of the community contribute back to the starter-kit and we can make it easier for everyone to add rich features to their sites without the need for third-party services.

Note: The automatic deploy will deploy to a free instance of Heroku. This will sleep periodically if not used which can result in slow API response times after periods of inactivity. You can upgrade to a hobby instance to avoid this.

Consider owning your own data

JAMstack and servers are not incompatible. There’s always a server (usually multiple) — it’s just a question of who owns it. If you are using any kind of third-party service, the chances are they own your account information, your content and collect user data.

Sometimes this might be an acceptable compromise compared with the overhead of deploying and managing a back-end service, but when the complexity of stitching together several APIs becomes comparable to a CMS, I believe managing a tiny configurable service that you own, can provide a better experience for users, developers and content managers. It also provides a solid platform for websites to grow beyond purely static content into more complicated and varying types of applications.

I don’t think JAMstack should be defined by pushing all the complexity into the front-end build process or by compromising on developer and user experience. Instead, I think JAMstack should focus on providing lean, configurable static front-ends. These can be connected to APIs to provide user-generated data and content management services. There is no reason not to own and manage these services if it provides the best outcome.
