
Single Element Loaders: Going 3D!

For this fourth and final article of our little series on single-element loaders, we are going to explore 3D patterns. When creating a 3D element, it’s hard to imagine that just one HTML element is enough to simulate something like all six faces of a cube. But maybe we can get away with something more cube-like instead by showing only the front three sides of the shape — it’s totally possible, and that’s what we’re going to do together.


The split cube loader

Here is a 3D loader where a cube is split into two parts, made with only a single element:

Each half of the cube is made using a pseudo-element:

Cool, right?! We can use a conic gradient with CSS clip-path on the element’s ::before and ::after pseudos to simulate the three visible faces of a 3D cube. Negative margin is what pulls the two pseudos together to overlap and simulate a full cube. The rest of our work is mostly animating those two halves to get neat-looking loaders!

Let’s check out a visual that explains the math behind the clip-path points used to create this cube-like element:

We have our variables and an equation, so let’s put those to work. First, we’ll establish our variables and set the sizing for the main .loader element:

.loader {
  --s: 150px; /* control the size */
  --_d: calc(0.353 * var(--s)); /* 0.353 = sin(45deg) / 2 */

  width: calc(var(--s) + var(--_d));
  aspect-ratio: 1;
  display: flex;
}

Nothing too crazy so far. We have a 150px square that’s set up as a flexible container. Now we establish our pseudos:

.loader::before,
.loader::after {
  content: "";
  flex: 1;
}

Those are two halves in the .loader container. We need to paint them in, so that’s where our conic gradient kicks in:

.loader::before,
.loader::after {
  content: "";
  flex: 1;
  background:
    conic-gradient(from -90deg at calc(100% - var(--_d)) var(--_d),
      #fff 135deg, #666 0 270deg, #aaa 0);
}

The gradient is there, but it looks weird. We need to clip it to the element:

.loader::before,
.loader::after {
  content: "";
  flex: 1;
  background:
    conic-gradient(from -90deg at calc(100% - var(--_d)) var(--_d),
      #fff 135deg, #666 0 270deg, #aaa 0);
  clip-path:
    polygon(var(--_d) 0, 100% 0, 100% calc(100% - var(--_d)), calc(100% - var(--_d)) 100%, 0 100%, 0 var(--_d));
}

Let’s make sure the two halves overlap with a negative margin:

.loader::before {
  margin-right: calc(var(--_d) / -2);
}

.loader::after {
  margin-left: calc(var(--_d) / -2);
}

Now let’s make ‘em move!

.loader::before,
.loader::after {
  /* same as before */
  animation: load 1.5s infinite cubic-bezier(0, .5, .5, 1.8) alternate;
}

.loader::after {
  /* same as before */
  animation-delay: -.75s;
}

@keyframes load {
  0%, 40%   { transform: translateY(calc(var(--s) / -4)); }
  60%, 100% { transform: translateY(calc(var(--s) / 4)); }
}

Here’s the final demo once again:

The progress cube loader

Let’s use the same technique to create a 3D progress loader. Yes, still only one element!

We’re simulating the cube exactly the same way we did before, other than changing the loader’s height and aspect ratio. The animation relies on a surprisingly easy technique: we update the width of the left side while the right side fills the remaining space, thanks to flex-grow: 1.

The first step is to add some transparency to the right side using opacity:
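In the full code we’ll get to below, that’s a single declaration on the loader’s right half:

.loader::after {
  opacity: 0.4; /* the "empty" half of the cube */
}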

This simulates the effect that one side of the cube is filled in while the other is empty. Then we update the color of the left side. To do that, we either update the three colors inside the conic gradient or we do it by adding a background color with a background-blend-mode:

.loader::before {
  background-color: #CC333F; /* control the color here */
  background-blend-mode: multiply;
}

This trick allows us to update the color only once. The color we set blends with the three shades in the conic gradient (#fff, #666, and #aaa) to create three new shades of our color, even though we’re only using one color value. Color trickery!

Let’s animate the width of the loader’s left side:
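A naive first attempt might animate from nothing to full width, like this sketch:

@keyframes load {
  0%   { width: 0; }
  100% { width: 100%; }
}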

Oops, the animation is a bit strange at the beginning! Notice how it sort of starts outside of the cube? This is because we’re starting the animation at the 0% width. But due to the clip-path and negative margin we’re using, what we need to do instead is start from our --_d variable, which we used to define the clip-path points and the negative margin:

@keyframes load {
  0%, 5%    { width: var(--_d); }
  95%, 100% { width: 100%; }
}

That’s a little better:

But we can make this animation even smoother. Did you notice we’re missing a little something? Let me show you a screenshot to compare what the final demo should look like with that last demo:

It’s the bottom face of the cube! Since the second element is transparent, we should be able to see the bottom face of that rectangle, as you can see in the left example. It’s subtle, but it should be there!

We can add a gradient to the main element and clip it like we did with the pseudos:

background: linear-gradient(#fff1 0 0) bottom / 100% var(--_d) no-repeat;

Here’s the full code once everything is pulled together:

.loader {
  --s: 100px; /* control the size */
  --_d: calc(0.353 * var(--s)); /* 0.353 = sin(45deg) / 2 */

  height: var(--s);
  aspect-ratio: 3;
  display: flex;
  background: linear-gradient(#fff1 0 0) bottom / 100% var(--_d) no-repeat;
  clip-path: polygon(var(--_d) 0, 100% 0, 100% calc(100% - var(--_d)), calc(100% - var(--_d)) 100%, 0 100%, 0 var(--_d));
}
.loader::before,
.loader::after {
  content: "";
  clip-path: inherit;
  background:
    conic-gradient(from -90deg at calc(100% - var(--_d)) var(--_d),
      #fff 135deg, #666 0 270deg, #aaa 0);
}
.loader::before {
  background-color: #CC333F; /* control the color here */
  background-blend-mode: multiply;
  margin-right: calc(var(--_d) / -2);
  animation: load 2.5s infinite linear;
}
.loader::after {
  flex: 1;
  margin-left: calc(var(--_d) / -2);
  opacity: 0.4;
}

@keyframes load {
  0%, 5%    { width: var(--_d); }
  95%, 100% { width: 100%; }
}

That’s it! We just used a clever technique that uses pseudo-elements, conic gradients, clipping, background blending, and negative margins to get, not one, but two sweet-looking 3D loaders with nothing more than a single element in the markup.

More 3D

We can still go further and simulate an infinite number of 3D cubes using one element — yes, it’s possible! Here’s a grid of cubes:

This demo and the following demos are unsupported in Safari at the time of writing.

Crazy, right? Now we’re creating a repeated pattern of cubes made using a single element… and no pseudos either! I won’t go into fine detail about the math we are using (there are very specific numbers in there) but here is a figure to visualize how we got here:

We first use a conic-gradient to create the repeating cube pattern. The repetition of the pattern is controlled by four variables:

  • --size: True to its name, this controls the size of each cube.
  • --m: This represents the number of columns.
  • --n: This is the number of rows.
  • --gap: This is the gap, or distance, between the cubes.
.cube {
  --size: 40px;
  --m: 4;
  --n: 5;
  --gap: 10px;

  aspect-ratio: var(--m) / var(--n);
  width: calc(var(--m) * (1.353 * var(--size) + var(--gap)));
  background:
    conic-gradient(from -90deg at var(--size) calc(0.353 * var(--size)),
      #249FAB 135deg, #81C5A3 0 270deg, #26609D 0) /* update the colors here */
    0 0 / calc(100% / var(--m)) calc(100% / var(--n));
}

Then we apply a mask layer using another pattern with the same size. This is the trickiest part of this idea. Using a combination of a linear-gradient and a conic-gradient, we cut out a few parts of our element to keep only the cube shapes visible.

.cube {
  /* etc. */
  mask:
    linear-gradient(to bottom right,
      #0000 calc(0.25 * var(--size)),
      #000 0 calc(100% - calc(0.25 * var(--size)) - 1.414 * var(--gap)),
      #0000 0),
    conic-gradient(from -90deg at right var(--gap) bottom var(--gap), #000 90deg, #0000 0);
  mask-size: calc(100% / var(--m)) calc(100% / var(--n));
  mask-composite: intersect;
}

The code may look a bit complex but thanks to CSS variables all we need to do is to update a few values to control our matrix of cubes. Need a 10⨉10 grid? Update the --m and --n variables to 10. Need a wider gap between cubes? Update the --gap value. The color values are only used once, so update those for a new color palette!

Now that we have another 3D technique, let’s use it to build variations of the loader by playing around with different animations. For example, how about a repeating pattern of cubes sliding infinitely from left to right?

This loader defines four cubes in a single row. That means our --m value is 4 and --n is equal to 1. In other words, we no longer need those variables!

Instead, we can work with the --size and --gap variables in a grid container:

.loader {
  --size: 70px;
  --gap: 15px;

  width: calc(3 * (1.353 * var(--size) + var(--gap)));
  display: grid;
  aspect-ratio: 3;
}

This is our container. We have four cubes, but only want to show three in the container at a time so that we always have one sliding in as one is sliding out. That’s why we are factoring the width by 3 and have the aspect ratio set to 3 as well.

Let’s make sure that our cube pattern is set up for the width of four cubes. We’re going to do this on the container’s ::before pseudo-element:

.loader::before {
  content: "";
  width: calc(4 * 100% / 3);
  /*
    Code to create four cubes
  */
}

Now that we have four cubes in a three-cube container, we can justify the cube pattern to the end of the grid container to overflow it, showing the last three cubes:

.loader {
  /* same as before */
  justify-content: end;
}

Here’s what we have so far, with a red outline to show the bounds of the grid container:

Now all we have to do is move the pseudo-element to the right by adding our animation:

@keyframes load {
  to { transform: translate(calc(100% / 4)); }
}

Did you get the trick of the animation? Let’s finish this off by hiding the overflowing cube pattern and adding a touch of masking to create that fading effect at the start and the end:

.loader {
  --size: 70px;
  --gap: 15px;

  width: calc(3 * (1.353 * var(--size) + var(--gap)));
  display: grid;
  justify-items: end;
  aspect-ratio: 3;
  overflow: hidden;
  mask: linear-gradient(90deg, #0000, #000 30px calc(100% - 30px), #0000);
}

We can make this a lot more flexible by introducing a variable, --n, to set how many cubes are displayed in the container at once. And since the total number of cubes in the pattern should be one more than --n, we can express that as calc(var(--n) + 1).
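Here’s a sketch of the key changes, assuming the same setup as above, where the loader shows --n cubes while the pattern holds one extra:

.loader {
  --n: 3; /* number of visible cubes */
  width: calc(var(--n) * (1.353 * var(--size) + var(--gap)));
  aspect-ratio: var(--n);
}
.loader::before {
  /* the pattern holds one extra cube */
  width: calc((var(--n) + 1) * 100% / var(--n));
}
@keyframes load {
  to { transform: translate(calc(100% / (var(--n) + 1))); }
}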

Here’s the full thing:

OK, one more 3D loader that’s similar but has the cubes changing color in succession instead of sliding:

We’re going to rely on an animated background with background-blend-mode for this one:

.loader {
  /* ... */
  background:
    linear-gradient(#ff1818 0 0) 0% / calc(100% / 3) 100% no-repeat,
    /* ... */;
  background-blend-mode: multiply;
  /* ... */
  animation: load steps(3) 1.5s infinite;
}
@keyframes load {
  to { background-position: 150%; }
}

I’ve removed the superfluous code used to create the same layout as the last example, but with three cubes instead of four. What I am adding here is a gradient defined with a specific color that blends with the conic gradient, just as we did earlier for the progress cube loader.

From there, it’s animating the background gradient’s background-position as a three-step animation to make the cubes blink colors one at a time.

If you are not familiar with the values I am using for background-position and the background syntax, I highly recommend one of my previous articles and one of my Stack Overflow answers. You will find a very detailed explanation there.

Can we make the number of cubes a variable?

Yes, I do have a solution for that, but I’d like you to take a crack at it rather than embedding it here. Take what we have learned from the previous example and try to do the same with this one — then share your work in the comments!

Variations galore!

Like the other three articles in this series, I’d like to leave you with some inspiration to go forth and create your own loaders. Here is a collection that includes the 3D loaders we made together, plus a few others to get your imagination going:

That’s a wrap

I sure do hope you enjoyed spending time making single-element loaders with me these past few weeks. It’s crazy that we started with a seemingly simple spinner and then gradually added new pieces to work ourselves all the way up to 3D techniques that still only use a single element in the markup. This is exactly what CSS looks like when we harness its powers: scalable, flexible, and reusable.

Thanks again for reading this little series! I’ll sign off by reminding you that I have a collection of more than 500 loaders if you’re looking for more ideas and inspiration.




Single Element Loaders: The Bars

We’ve looked at spinners. We’ve looked at dots. Now we’re going to tackle another common pattern for loaders: bars. And we’re going to do the same thing in this third article of the series as we have the others by making it with only one element and with flexible CSS that makes it easy to create variations.


Let’s start with not one, not two, but 20 examples of bar loaders.

What?! Are you going to detail each one of them? That’s too much for an article!

It might seem like that at first glance! But all of them rely on the same code structure, and we only update a few values to create variations. That’s all the power of CSS. We don’t learn how to create one loader; we learn different techniques that allow us to create as many loaders as we want using merely the same code structure.

Let’s make some bars!

We start by defining the dimensions for them using width (or height) with aspect-ratio to maintain proportion:

.bars {
  width: 45px;
  aspect-ratio: 1;
}

We sort of “fake” three bars with a linear gradient on the background — very similar to how we created dot loaders in Part 2 of this series.

.bars {
  width: 45px;
  aspect-ratio: 1;
  --c: no-repeat linear-gradient(#000 0 0); /* we define the color here */
  background:
    var(--c) 0%   50%,
    var(--c) 50%  50%,
    var(--c) 100% 50%;
  background-size: 20% 100%; /* 20% * (3 bars + 2 spaces) = 100% */
}

The above code will give us the following result:

Like the other articles in this series, we are going to deal with a lot of background trickery. So, if you ever feel like we’re jumping around too fast or feel you need a little more detail, please do check those out. You can also read my Stack Overflow answer where I give a detailed explanation on how all this works.

Animating the bars

We either animate the element’s size or position to create the bar loader. Let’s animate the size by defining the following animation keyframes:

@keyframes load {
  0%   { background-size: 20% 100%, 20% 100%, 20% 100%; }  /* 1 */
  33%  { background-size: 20% 10% , 20% 100%, 20% 100%; }  /* 2 */
  50%  { background-size: 20% 100%, 20% 10% , 20% 100%; }  /* 3 */
  66%  { background-size: 20% 100%, 20% 100%, 20% 10%;  }  /* 4 */
  100% { background-size: 20% 100%, 20% 100%, 20% 100%; }  /* 5 */
}

See what’s happening there? Between 0% and 100%, the animation changes the background-size of the element’s background gradient. Each keyframe sets three background sizes (one for each gradient).

And here’s what we get:

Can you start to imagine all the possible variations we can get by playing with different animation configurations for the sizes or the positions?

Let’s fix the size to 20% 50% and update the positions this time:

.loader {
  width: 45px;
  aspect-ratio: .75;
  --c: no-repeat linear-gradient(#000 0 0);
  background:
    var(--c),
    var(--c),
    var(--c);
  background-size: 20% 50%;
  animation: load 1s infinite linear;
}
@keyframes load {
  0%   { background-position: 0% 100%, 50% 100%, 100% 100%; } /* 1 */
  20%  { background-position: 0% 50% , 50% 100%, 100% 100%; } /* 2 */
  40%  { background-position: 0% 0%  , 50% 50% , 100% 100%; } /* 3 */
  60%  { background-position: 0% 100%, 50% 0%  , 100% 50%;  } /* 4 */
  80%  { background-position: 0% 100%, 50% 100%, 100% 0%;   } /* 5 */
  100% { background-position: 0% 100%, 50% 100%, 100% 100%; } /* 6 */
}

…which gets us another loader!

You’ve probably got the trick by now. All you need is to define a timeline that you translate into a keyframe. By animating the size, the position — or both! — there’s an infinite number of loader possibilities at our fingertips.

And once we get comfortable with such a technique we can go further and use a more complex gradient to create even more loaders.

Except for the last two examples in that demo, all of the bar loaders use the same underlying markup and styles with different combinations of animations. Open the code and try to visualize each frame independently; you’ll see how relatively trivial it is to make dozens — if not hundreds — of variations.

Getting fancy

Remember the mask trick we did with the dot loaders in the second article of this series? We can do the same here!

If we apply all the above logic inside the mask property, we can use any background configuration to add a fancy coloration to our loaders.

Let’s take one demo and update it:

All I did was swap all the background-* properties for mask-* and add a gradient coloration. As simple as that, and yet we get another cool loader.
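Here’s a sketch of that swap applied to the first bar demo, with a placeholder gradient for the coloration:

.loader {
  width: 45px;
  aspect-ratio: 1;
  --c: no-repeat linear-gradient(#000 0 0);
  /* the bars move into the mask... */
  mask:
    var(--c) 0%   50%,
    var(--c) 50%  50%,
    var(--c) 100% 50%;
  mask-size: 20% 100%;
  /* ...freeing the background for any coloration we like */
  background: linear-gradient(45deg, #F54888, #4086FF); /* placeholder colors */
  animation: load 1s infinite linear;
}
/* the keyframes get the same treatment: mask-size instead of background-size */
@keyframes load {
  33% { mask-size: 20% 10% , 20% 100%, 20% 100%; }
  50% { mask-size: 20% 100%, 20% 10% , 20% 100%; }
  66% { mask-size: 20% 100%, 20% 100%, 20% 10%;  }
}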

So there is no difference between the dots and the bars?

No difference! I wrote two different articles to cover as many examples as possible, but in both I am relying on the same techniques:

  1. Gradients to create the shapes (dots or bars or maybe something else)
  2. Animating background-size and/or background-position to create the loader animation
  3. Applying mask to add a touch of color

Rounding the bars

Let’s try something different this time where we can round the edges of our bars.

Using one element and its ::before and ::after pseudos, we define three identical bars:

.loader {
  --s: 100px; /* control the size */

  display: grid;
  place-items: center;
  place-content: center;
  margin: 0 calc(var(--s) / 2); /* 50px */
}
.loader::before,
.loader::after {
  content: "";
  grid-area: 1/1;
}
.loader,
.loader::before,
.loader::after {
  height: var(--s);
  width: calc(var(--s) / 5); /* 20px */
  border-radius: var(--s);
  transform: translate(calc(var(--_i, 0) * 200%));
}
.loader::before { --_i: -1; }
.loader::after  { --_i:  1; }

That gives us three bars, this time without relying on a linear gradient:

Now the trick is to fill in those bars with a lovely gradient. To simulate a continuous gradient, we need to play with background properties. In the above figure, the green area defines the area covered by the loader. That area should be the size of the gradient and, if we do the math, it’s equal to multiplying both sides labeled S in the diagram, or background-size: var(--s) var(--s).

Since our elements are individually placed, we need to update the position of the gradient inside each one to make sure all of them overlap. This way, we’re simulating one continuous gradient even though it’s really three of them.

For the main element (placed at the center), the background needs to be at the center. We use the following:

.loader {
  /* etc. */
  background: linear-gradient() 50% / var(--s) var(--s);
}

For the pseudo-element on the left, we need the background on the left:

.loader::before {
  /* etc. */
  background: linear-gradient() 0% / var(--s) var(--s);
}

And for the pseudo on the right, the background needs to be positioned to the right:

.loader::after {
  background: linear-gradient() 100% / var(--s) var(--s);
}

Using the same CSS variable, --_i, that we used for the translate, we can write the code like this:

.loader {
  --s: 100px; /* control the size */
  --c: linear-gradient(/* etc. */); /* control the coloration */

  display: grid;
  place-items: center;
  place-content: center;
}
.loader::before,
.loader::after {
  content: "";
  grid-area: 1/1;
}
.loader,
.loader::before,
.loader::after {
  height: var(--s);
  width: calc(var(--s) / 5);
  border-radius: var(--s);
  background: var(--c) calc(50% + var(--_i, 0) * 50%) / var(--s) var(--s);
  transform: translate(calc(var(--_i, 0) * 200%));
}
.loader::before { --_i: -1; }
.loader::after  { --_i:  1; }

Now all we have to do is animate the height and add some delays! Here are three examples where all that’s different are the colors and sizes:
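As a rough sketch of the kind of animation at play (the duration and delay values here are just examples):

.loader, .loader::before, .loader::after {
  animation: load 1s infinite alternate;
}
.loader::before { animation-delay: -0.2s; }
.loader::after  { animation-delay: -0.4s; }

@keyframes load {
  to { height: calc(var(--s) / 2); } /* each bar shrinks on its own timeline */
}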

Wrapping up

I hope so far you are feeling super encouraged by all the powers you have to make complex-looking loading animations. All we need is one element, either gradients or pseudos to draw the bars, then some keyframes to move things around. That’s the entire recipe for getting an endless number of possibilities, so go out and start cooking up some neat stuff!

Until the next article, I will leave you with a funny collection of loaders where I am combining the dots and the bars!




Single Element Loaders: The Dots

We’re looking at loaders in this series. More than that, we’re breaking down some common loader patterns and how to re-create them with nothing more than a single div. So far, we’ve picked apart the classic spinning loader. Now, let’s look at another one you’re likely well aware of: the dots.

Dot loaders are all over the place. They’re neat because they usually consist of three dots that sort of look like a text ellipsis (…) that dances around.

Article series

  • Single Element Loaders: The Spinner
  • Single Element Loaders: The Dots — you are here
  • Single Element Loaders: The Bars — coming June 24
  • Single Element Loaders: Going 3D — coming July 1

Our goal here is to make this same thing out of a single div element. In other words, there is no one div per dot or individual animations for each dot.

That example of a loader up above is made with a single div element, a few CSS declarations, and no pseudo-elements. I am combining two techniques using CSS background and mask. And when we’re done, we’ll see how animating a background gradient helps create the illusion of each dot changing colors as they move up and down in succession.

The background animation

Let’s start with the background animation:

.loader {
  width: 180px; /* this controls the size */
  aspect-ratio: 8/5; /* maintain the scale */
  background:
    conic-gradient(red   50%, blue   0) no-repeat, /* top colors */
    conic-gradient(green 50%, purple 0) no-repeat; /* bottom colors */
  background-size: 200% 50%;
  animation: back 4s infinite linear; /* applies the animation */
}

/* define the animation */
@keyframes back {
  0%,                        /* X    Y ,  X     Y  */
  100% { background-position: 0%   0%  , 0%   100%; }
  25%  { background-position: 100% 0%  , 0%   100%; }
  50%  { background-position: 100% 0%  , 100% 100%; }
  75%  { background-position: 0%   0%  , 100% 100%; }
}

I hope this looks pretty straightforward. What we’ve got is a 180px-wide .loader element that shows two conic gradients sporting hard color stops between two colors each — the first gradient is red and blue along the top half of the .loader, and the second gradient is green and purple along the bottom half.

The way the loader’s background is sized (200% wide), we only see one of those colors in each half at a time. Then we have this little animation that pushes the position of those background gradients left, right, and back again forever and ever.

When dealing with background properties — especially background-position — I always refer to my Stack Overflow answer where I am giving a detailed explanation on how all this works. If you are uncomfortable with CSS background trickery, I highly recommend reading that answer to help with what comes next.

In the animation, notice that the first layer has Y=0% (placed at the top) while X changes from 0% to 100%. For the second layer, X does the same while Y=100% (placed at the bottom).

Why use a conic-gradient() instead of a linear-gradient()?

Good question! Intuitively, we would use a linear gradient to create a two-color gradient like this:

linear-gradient(90deg, red 50%, blue 0)

But we can also reach for the same result using a conic-gradient() — and with less code. We reduce the code and also learn a new trick in the process!

Sliding the colors left and right is a nice way to make it look like we’re changing colors, but it might be better if we instantly change colors instead — that way, there’s no chance of a loader dot flashing two colors at the same time. To do this, let’s change the animation’s timing function from linear to steps(1):
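That’s a one-word change in the animation shorthand:

.loader {
  /* same as before */
  animation: back 4s infinite steps(1);
}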

The loader dots

If you followed along with the first article in this series, I bet you know what comes next: CSS masks! What makes masks so great is that they let us sort of “cut out” parts of a background in the shape of another element. So, in this case, we want to make a few dots, show the background gradients through the dots, and cut out any parts of the background that are not part of a dot.

We are going to use radial-gradient() for this:

.loader {
  width: 180px;
  aspect-ratio: 8/5;
  mask:
    radial-gradient(#000 68%, #0000 71%) no-repeat,
    radial-gradient(#000 68%, #0000 71%) no-repeat,
    radial-gradient(#000 68%, #0000 71%) no-repeat;
  mask-size: 25% 40%; /* the size of our dots */
}

There’s some duplicated code in there, so let’s make a CSS variable to slim things down:

.loader {
  width: 180px;
  aspect-ratio: 8/5;
  --_g: radial-gradient(#000 68%, #0000 71%) no-repeat;
  mask: var(--_g), var(--_g), var(--_g);
  mask-size: 25% 40%;
}

Cool cool. But now we need a new animation that helps move the dots up and down between the animated gradients.

.loader {
  /* same as before */
  animation: load 2s infinite;
}

@keyframes load {      /* X    Y ,   X    Y ,   X    Y   */
  0%     { mask-position: 0% 0%  , 50% 0%  , 100% 0%;   } /* all of them at the top */
  16.67% { mask-position: 0% 100%, 50% 0%  , 100% 0%;   }
  33.33% { mask-position: 0% 100%, 50% 100%, 100% 0%;   }
  50%    { mask-position: 0% 100%, 50% 100%, 100% 100%; } /* all of them at the bottom */
  66.67% { mask-position: 0% 0%  , 50% 100%, 100% 100%; }
  83.33% { mask-position: 0% 0%  , 50% 0%  , 100% 100%; }
  100%   { mask-position: 0% 0%  , 50% 0%  , 100% 0%;   } /* all of them at the top */
}

Yes, that’s a total of three radial gradients in there, all with the same configuration and the same size — the animation will update the position of each one. Note that the X coordinate of each dot is fixed. The mask-position is defined such that the first dot is at the left (0%), the second one at the center (50%), and the third one at the right (100%). We only update the Y coordinate from 0% to 100% to make the dots dance.

Dot loader dots with labels showing their changing positions.

Here’s what we get:

Now, combine this with our gradient animation and magic starts to happen:

Dot loader variations

The CSS variable we made in the last example makes it all that much easier to swap in new colors and create more variations of the same loader. For example, different colors and sizes:

What about another movement for our dots?

Here, all I did was update the animation to consider different positions, and we get another loader with the same code structure!

The animation technique I used for the mask layers can also be used with background layers to create a lot of different loaders with a single color. I wrote a detailed article about this. You will see that from the same code structure we can create different variations by simply changing a few values. I am sharing a few examples at the end of the article.

Why not a loader with one dot?

This one should be fairly easy to grok as I am using the same technique, but with simpler logic:

Here is another example of a loader where I am also animating a radial-gradient, combined with CSS filters and mix-blend-mode to create a blobby effect:

If you check the code, you will see that all I am really doing there is animating the background-position, exactly like we did with the previous loader, but adding a dash of background-size to make it look like the blob gets bigger as it absorbs dots.

If you want to understand the magic behind that blob effect, you can refer to these interactive slides (Chrome only) by Ana Tudor because she covers the topic so well!

Here is another dot loader idea, this time using a different technique:

This one is only 10 CSS declarations and a keyframe. The main element and its two pseudo-elements have the same background configuration with one radial gradient. Each one creates one dot, for a total of three. The animation moves the gradient from top to bottom using different delays for each dot.

Oh, and take note of how this demo uses CSS Grid. This allows us to leverage the grid’s default stretch alignment so that both pseudo-elements cover the whole area of their parent. No need for sizing! Push them around a little with translate() and we’re all set.
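As a minimal sketch of that structure (the size and offsets are placeholder values):

.loader {
  width: 100px; /* example size */
  aspect-ratio: 3;
  display: grid; /* default stretch alignment sizes the pseudos */
}
.loader::before,
.loader::after {
  content: "";
  grid-area: 1/1; /* share the parent's single cell, stacked */
  /* no width or height needed: they stretch to cover the parent */
}
.loader::before { transform: translate(-33%); }
.loader::after  { transform: translate(33%); }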

More examples!

Just to drive the point home, I want to leave you with a bunch of additional examples that are really variations of what we’ve looked at. As you view the demos, you’ll see that the approaches we’ve covered here are super flexible and open up tons of design possibilities.

Next up…

OK, so we covered dot loaders in this article and spinners in the last one. In the next article of this four-part series, we’ll turn our attention to another common type of loader: the bars. We’ll take a lot of what we learned so far and see how we can extend them to create yet another single element loader with as little code and as much flexibility as possible.

Article series

  • Single Element Loaders: The Spinner
  • Single Element Loaders: The Dots — you are here
  • Single Element Loaders: The Bars — coming June 24
  • Single Element Loaders: Going 3D — coming July 1


Single Element Loaders: The Spinner

Making CSS-only loaders is one of my favorite tasks. It’s always satisfying to look at those infinite animations. And, of course, there are lots of techniques and approaches to make them — no need to look further than CodePen to see just how many. In this article, though, we will see how to make a single element loader while writing as little code as possible.

I have made a collection of more than 500 single div loaders and in this four-part series, I am going to share the tricks I used to create many of them. We will cover a huge number of examples, showing how small adjustments can lead to fun variations, and how little code we need to write to make it all happen!

Single-Element Loaders series:

  1. Single Element Loaders: The Spinner — you are here
  2. Single Element Loaders: The Dots — coming June 17
  3. Single Element Loaders: The Bars — coming June 24
  4. Single Element Loaders: Going 3D — coming July 1

For this first article, we are going to create one of the more common loader patterns, spinning bars:

Here’s the approach

A trivial implementation for this loader is to create one element for each bar wrapped inside a parent element (for nine total elements), then play with opacity and transform to get the spinning effect.

My implementation, though, requires only one element:

<div class="loader"></div>

…and 10 CSS declarations:

.loader {
  width: 150px; /* control the size */
  aspect-ratio: 1;
  display: grid;
  mask: conic-gradient(from 22deg, #0003, #000);
  animation: load 1s steps(8) infinite;
}
.loader,
.loader::before {
  --_g: linear-gradient(#17177c 0 0) 50%; /* update the color here */
  background:
    var(--_g)/34% 8%  space no-repeat,
    var(--_g)/8%  34% no-repeat space;
}
.loader::before {
  content: "";
  transform: rotate(45deg);
}
@keyframes load {
  to { transform: rotate(1turn); }
}

Let’s break that down

At first glance, the code may look strange, but you will see that it’s simpler than you might think. The first step is to define the dimensions of the element. In our case, it’s a 150px square. We can put aspect-ratio to use so the element stays square no matter what.

.loader {
  width: 150px; /* control the size */
  aspect-ratio: 1; /* make height equal to width */
}

When building CSS loaders, I always try to have one value for controlling the overall size. In this case, it’s the width and all the calculations we cover will refer to that value. This allows me to change a single value to control the loader. It’s always important to be able to easily adjust the size of our loaders without the need to adjust a lot of additional values.

Next, we will use gradients to create the bars. This is the trickiest part! Let’s use one gradient to create two bars like the below:

background: linear-gradient(#17177c 0 0) 50%/34% 8% space no-repeat;
Showing a space between two gradient lines for a single element loader.

Our gradient is defined with one color and two color stops. The result is a solid color with no fading or transitions. The size is equal to 34% wide and 8% tall. It’s also placed in the center (50%). The trick is the use of the keyword value space — this duplicates the gradient, giving us two total bars.

From the specification:

The image is repeated as often as will fit within the background positioning area without being clipped and then the images are spaced out to fill the area. The first and last images touch the edges of the area.

I am using a width equal to 34%, which means we cannot have more than two bars (3 * 34% is greater than 100%), but with two bars we will have empty space (100% - 2 * 34% = 32%). That space is placed in the center between the two bars. In other words, we use a width for the gradient that is between 33% and 50% to make sure we have at least two bars with a little bit of space between them. The value space is what correctly places them for us.

We do the same and make a second, similar gradient to get two more bars at the top and bottom, which gives us a background property value of:

background:
  linear-gradient(#17177c 0 0) 50%/34% 8%  space no-repeat,
  linear-gradient(#17177c 0 0) 50%/8%  34% no-repeat space;

We can optimize that using a CSS variable to avoid repetition:

--_g: linear-gradient(#17177c 0 0) 50%; /* update the color here */
background:
  var(--_g)/34% 8%  space no-repeat,
  var(--_g)/8%  34% no-repeat space;

So, now we have four bars and, thanks to CSS variables, we can write the color value once which makes it easy to update later (like we did with the size of the loader).

To create the remaining bars, let’s tap into the .loader element and its ::before pseudo-element to get four more bars for a grand total of eight in all.

.loader {
  width: 150px; /* control the size */
  aspect-ratio: 1;
  display: grid;
}
.loader,
.loader::before {
  --_g: linear-gradient(#17177c 0 0) 50%; /* update the color here */
  background:
    var(--_g)/34% 8%  space no-repeat,
    var(--_g)/8%  34% no-repeat space;
}
.loader::before {
  content: "";
  transform: rotate(45deg);
}

Note the use of display: grid. This allows us to rely on the grid’s default stretch alignment to make the pseudo-element cover the whole area of its parent; thus, there’s no need to specify a dimension on it — another trick that reduces the code and saves us from dealing with a lot of values!

Now let’s rotate the pseudo-element by 45deg to position the remaining bars. Hover the following demo to see the trick:

Setting opacity

What we’re trying to do is create the impression that there is one bar that leaves a trail of fading bars behind it as it travels a circular path. What we need now is to play with the transparency of our bars to make that trail, which we are going to do with CSS mask combined with a conic-gradient as follows:

mask: conic-gradient(from 22deg,#0003,#000);

To better see the trick, let’s apply this to a full-colored box:

The transparency of the red color gradually increases clockwise. When we apply this to our loader, we get bars with different levels of opacity:

Radial gradient plus spinner bars equals spinner bars with gradients.

In reality, each bar appears to fade because it’s masked by a gradient and falls between two semi-transparent colors. It’s hardly noticeable as the animation runs, so it’s almost like saying that all the bars have the same color, each with a different level of opacity.

The rotation

Let’s apply a rotation animation to get our loader. Note that we need a stepped animation, not a continuous one; that’s why I am using steps(8). 8 is nothing but the number of bars, so that value can be changed depending on how many bars are in use.

.loader {
  animation: load 3s steps(8) infinite;
}

/* Same as before: */
@keyframes load {
  to { transform: rotate(1turn); }
}

That’s it! We have our loader with only one element and a few lines of CSS. We can easily control its size and color by adjusting one value.

Since we only used the ::before pseudo-element, we can add four more bars by using ::after to end with 12 bars in total and almost the same code:

We update the rotation of our pseudo-elements to use 30deg and 60deg instead of 45deg, while using a twelve-step animation rather than an eight-step one. I also decreased the height of the bars to 5% instead of 8% to make them a little thinner.

Notice, too, that we have grid-area: 1/1 on the pseudo-elements. This allows us to place them in the same area as one another, stacked on top of each other.
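Putting those tweaks together, here’s a sketch of what changes from the eight-bar version (assuming the same --_g gradient variable and mask as before):

.loader {
  animation: load 1s steps(12) infinite; /* 12 steps for 12 bars */
}
.loader, .loader::before, .loader::after {
  /* same gradients as before, but 5% thick instead of 8% */
  background:
    var(--_g)/34% 5%  space no-repeat,
    var(--_g)/5%  34% no-repeat space;
}
.loader::before,
.loader::after {
  content: "";
  grid-area: 1/1; /* stack both pseudos on top of the parent */
}
.loader::before { transform: rotate(30deg); }
.loader::after  { transform: rotate(60deg); }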

Guess what? We can reach for the same loader using another implementation:

Can you figure out the logic behind the code? Here is a hint: the opacity is no longer handled with a CSS mask, but inside the gradient, together with the opacity property.

Why not dots instead?

We can totally do that:

If you check the code, you will see that we’re now working with a radial gradient instead of a linear one. Otherwise, the concept is exactly the same where the mask creates the impression of opacity, but we made the shapes as circles instead of lines.

Below is a figure to illustrate the new gradient configuration:

Showing placement of dots in the single-element loader.

If you’re using Safari, note that the demo may be buggy. That’s because Safari currently lacks support for the at syntax in radial gradients. But we can reconfigure the gradient a bit to overcome that:

.loader,
.loader::before,
.loader::after {
  background:
    radial-gradient(
      circle closest-side,
      currentColor 90%,
      #0000 98%
    )
    50% -150% / 20% 80% repeat-y,
    radial-gradient(
      circle closest-side,
      currentColor 90%,
      #0000 98%
    )
    -150% 50% / 80% 20% repeat-x;
}

More loader examples

Here is another idea for a spinner loader similar to the previous one.

For this one, I am only relying on background and mask to create the shape (no pseudo-elements needed). I am also defining the configuration with CSS variables to be able to create a lot of variations from the same code — another example of the power of CSS variables. I wrote another article about this technique if you want more details.

Note that some browsers still rely on a -webkit- prefix for mask-composite with its own set of values, and will not display the spinner in the demo.

I have another one for you:

For this one, I am using background-color to control the color, and using mask and mask-composite to create the final shape:

Different steps for applying a mask to an element in the shape of a circle.

Before we end, here are some more spinning loaders I made a while back. I am relying on different techniques but still using gradients, masks, pseudo-elements, etc. It could be a good exercise to figure out the logic of each one and learn new tricks at the same time. That said, if you have any questions about them, the comment section is down below.

Wrapping up

See, there’s so much we can do in CSS with nothing but a single div, a couple of gradients, pseudo-elements, and variables. It seems like we created a whole bunch of different spinning loaders, but they’re all basically the same thing with slight modifications.

This is only the beginning. In this series, we will be looking at more ideas and advanced concepts for creating CSS loaders.

Single-Element Loaders series:

  1. Single Element Loaders: The Spinner — you are here
  2. Single Element Loaders: The Dots — coming June 17
  3. Single Element Loaders: The Bars — coming June 24
  4. Single Element Loaders: Going 3D — coming July 1


The Single Page App Morality Play

Baldur Bjarnason brings some baby bear porridge to the discussion of Single Page App (SPA) vs. Multi Page App (MPA).

Single-Page-Apps can be fantastic. Most teams will mess them up because most teams operate in dysfunctional organisations. Multi-Page-Apps can also be fantastic, both in highly functional organisations that can apply them when and where they are appropriate and in dysfunctional ones, as they enforce a limit to project scope.

Both approaches can be good and bad. Baldur makes the point that management plays the biggest role in ensuring either outcome.

Another truth: there are an awful lot of projects out there that aren’t all-in on either approach, but are a mixture. I feel that. I also feel like there is a strong desire to declare a winner or have a simple checklist to decide which approach to go for, and that just ain’t gonna happen because the landscape is too shifty and there are just too many factors. Technology: it’s complicated.

A recent episode of Web Rush went into all this as well:




Developers and Designers Work on a Single Source of Truth with UXPin

(This is a sponsored post.)

There is a conversation that has been percolating for as long as I’ve been in the web design and development industry. It’s centered around the conflict between design tools and development tools. The final product of web design is often a mockup. The old joke was that web developers make websites and web designers make paintings of websites. That disconnect is a source of immense friction. Which is the source of truth?

What if there really could be a single source of truth? What if the design tool worked on the same exact code as the production website? The latest chapter in this epic conversation is UXPin.

Let’s set up the facts so you can see this all play out.

UXPin is an in-browser design tool.

UXPin is a powerful design tool with all the features you’d expect, particularly focused on digital screen-based design.

The fact that it is in-browser is extra great here. Designing websites… on a website is an obvious and natural fit. It means what you are looking at is how it’s going to look. This is particularly important with typography! The implementer of this card component can see the exact colors (in the right formats) being used, as well as the exact pixel dimensions, etc.

This is laid out nicely by Ania Kubów in a video about UXPin.


Over a decade ago, Jason Santa Maria thought a lot about what a next-gen design tool would look like. Could we just use the browser directly?

I don’t think the browser is enough. A web designer jumping into the browser before tackling the creative and messaging problems is akin to an architect hammering pieces of wood together and then measuring afterwards. The imaginative process is cut short by the tools at hand; and it’s that imagination—or spark—at the beginning of a design that lays the path for everything that follows.

Jason Santa Maria, “A Real Web Design Application”

Perhaps not the browser directly, but a design tool within a browser, that could be the best of both worlds:

An application like this could change the process of web design considerably. Most importantly, it wouldn’t be a proxy application that we use to simulate the way webpages look—it would already speak the language of the web. It would truly be designing in the browser.

It’s so cool to see this play out in a way that aligns with what great designers envisioned before it was possible, and with new aspects that melt with today’s technological possibilities.

You can work on your own React components within UXPin.

This is where the source of truth magic can happen. It’s one thing if a design tool can output a React (or any other framework) component. That’s a neat trick. But it’s likely to be a one-way trip. Components in real-world projects are full of other things that aren’t entirely the domain of design. Perhaps a component uses a hook to return the current user’s permissions and disable a button if they don’t have access. The disabled button has an element of design to it, but most of that code does not.
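For instance, something along these lines, where usePermissions is a hypothetical hook standing in for whatever the app really uses:

function SaveButton() {
  // hypothetical hook returning the current user's permissions
  const { canSave } = usePermissions();
  // the disabled state is a design concern; the permission check is not
  return <button disabled={!canSave}>Save</button>;
}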

It’s impractical to have a design tool that can’t respect the other code in that component and just leave it alone. Put simply, it’s not really a design tool if it exports components but can’t import them.

This is where UXPin Merge comes in.

Now, fair is fair, this is going to take a little work to set up. It might just be a couple of hours, or it might take a few days for a complete design system. UXPin only works with React and uses a webpack configuration to integrate it.

Once you’ve got it going, the components you use in UXPin are very literally the components you use to build your production website.

It’s pretty impressive really, to see a design tool digest pre-built components and allow them to be used on an entirely new canvas for prototyping.

They’ve got lots of help for you on getting this going on your project, including:

As it should, it’s likely to influence how you build components.

Components tend to have props, and props control things like design and content inside. UXPin gives you a UI for the props, meaning you have total control over the component.

<LineChart
  barColor="green"
  height="200"
  width="500"
  showXAxis="false"
  showYAxis="true"
  data={[ ... ]}
/>

Knowing that, you might give yourself a prop interface for your components that provides you with lots of design control. For example, integrating theme switching.
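For example, a component might expose a theme prop so a designer can flip themes right from the props panel. A sketch (not UXPin-specific code):

function Card({ theme = "light", title, children }) {
  // the theme prop maps to a class the stylesheet understands
  return (
    <div className={`card card--${theme}`}>
      <h2>{title}</h2>
      {children}
    </div>
  );
}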

This is all even faster with Storybook.

Another awfully popular tool in JavaScript-components-land to view your components is Storybook. It’s not a design tool like UXPin—it’s more like a zoo for your components. You might already have it set up, or you might find value in using Storybook as well.

The great news? UXPin Merge works together awesomely with Storybook. It makes integration super quick and easy. Plus, it then supports any framework, like Angular, Svelte, Vue, etc., in addition to React.

Look how fast:

UXPin CEO Marcin Treder had a strong vision:

What if designers could use the very same components used by engineers and they’re all stored in a shared design system (with accurate documentation and tests)? Many of the frustrating and expensive misunderstandings between designers and engineers would stop happening.

And a plan:

  1. Connect to any Git repo.
  2. Learn about all the components in that repo.
  3. Make those components available to the UXPin visual editor.
  4. Watch for any changes to the repo and reflect those changes in the visual editor.
  5. Let designers design and deliver accurate specs to developers for using those components.

And that’s what they’ve pulled off here.



Implementing a single GraphQL across multiple data sources

(This is a sponsored post.)

In this article, we will discuss how we can apply schema stitching across multiple Fauna instances. We will also discuss how to combine other GraphQL services and data sources with Fauna in one graph.

What is Schema Stitching?

Schema stitching is the process of creating a single GraphQL API from multiple underlying GraphQL APIs.

Where is it useful?

While building large-scale applications, we often break down various functionalities and business logic into micro-services, which ensures a separation of concerns. However, there will come a time when our client applications need to query data from multiple sources. The best practice is to expose one unified graph to all your client applications, which could be challenging because we do not want to end up with a tightly coupled, monolithic GraphQL server. If you are using Fauna, each database has its own native GraphQL API. Ideally, we would want to leverage Fauna’s native GraphQL as much as possible and avoid writing application-layer code. But if we are using multiple databases, our front-end application would have to connect to multiple GraphQL instances. Such an arrangement creates tight coupling, which we want to avoid in favor of one unified GraphQL server.

To remedy these problems, we can use schema stitching. Schema stitching allows us to combine multiple GraphQL services into one unified schema. In this article, we will discuss:

  1. Combining multiple Fauna instances into one GraphQL service
  2. Combining Fauna with other GraphQL APIs and data sources
  3. How to build a serverless GraphQL gateway with AWS Lambda

Combining multiple Fauna instances into one GraphQL service

First, let’s take a look at how we can combine multiple Fauna instances into one GraphQL service. Imagine we have three Fauna database instances: Product, Inventory, and Review. Each is independent of the others. Each has its own graph (we will refer to them as subgraphs). We want to create a unified graph interface and expose it to the client applications. Clients will be able to query any combination of the downstream data sources.

We will call the unified graph interface our gateway service. Let’s go ahead and write this service.

We’ll start with a fresh node project. We will create a new folder. Then navigate inside it and initiate a new node app with the following commands.

mkdir my-gateway
cd my-gateway
npm init --yes

Next, we will create a simple express GraphQL server. Let’s go ahead and install the express, express-graphql, and graphql packages with the following command.

npm i express express-graphql graphql --save

Creating the gateway server

We will create a file called gateway.js. This is our main entry point to the application. We will start by creating a very simple GraphQL server.

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
  hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP((req) => ({
    schema,
    rootValue,
    graphiql: true,
  })),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

In the code above we created a bare-bone express-graphql server with a sample query and a resolver. Let’s test our app by running the following command.

node gateway.js

Navigate to http://localhost:4000/graphql and you will be able to interact with the GraphQL playground.

Creating Fauna instances

Next, we will create three Fauna databases. Each of them will act as a GraphQL service. Let’s head over to fauna.com and create our databases. I will name them Product, Inventory, and Review.

Once the databases are created we will generate admin keys for them. These keys are required to connect to our GraphQL APIs.

Let’s create three distinct GraphQL schemas and upload them to the respective databases. Here’s how our schemas will look.

# Schema for Inventory database
type Inventory {
  name: String
  description: String
  sku: Float
  availableLocation: [String]
}

# Schema for Product database
type Product {
  name: String
  description: String
  price: Float
}

# Schema for Review database
type Review {
  email: String
  comment: String
  rating: Float
}

Head over to the respective databases, select GraphQL from the sidebar, and import the schema for each database.

Now we have three GraphQL services running on Fauna. We can go ahead and interact with these services through the GraphQL playground inside Fauna. Feel free to enter some dummy data if you are following along. It will come in handy later while querying multiple data sources.

Setting up the gateway service

Next, we will combine these into one graph with schema stitching. To do so, back in our gateway.js file, we will use a couple of libraries from graphql-tools to stitch the graphs.

Let’s go ahead and install these dependencies on our gateway server.

npm i @graphql-tools/schema @graphql-tools/stitch @graphql-tools/wrap cross-fetch --save 

In our gateway, we are going to create a new generic function called makeRemoteExecutor. This function is a factory function that returns another function. The returned asynchronous function will make the GraphQL query API call.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema, print } = require('graphql'); // print is needed by makeRemoteExecutor
const { fetch } = require('cross-fetch'); // so is fetch

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  };
}

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
  hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    return {
      schema,
      rootValue,
      graphiql: true,
    };
  }),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

As you can see above, makeRemoteExecutor takes two arguments. The url argument specifies the remote GraphQL URL, and the token argument specifies the authorization token.

We will create another function called makeGatewaySchema. In this function, we will make the proxy calls to the remote GraphQL APIs using the previously created makeRemoteExecutor function.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { introspectSchema } = require('@graphql-tools/wrap');
const { stitchSchemas } = require('@graphql-tools/stitch');
const { fetch } = require('cross-fetch');
const { print } = require('graphql');

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  };
}

async function makeGatewaySchema() {

  const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQZPUejACQ2xuvfi50APAJ397hlGrTjhdXVta');
  const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');
  const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');

  return stitchSchemas({
    subschemas: [
      {
        schema: await introspectSchema(reviewExecutor),
        executor: reviewExecutor,
      },
      {
        schema: await introspectSchema(productExecutor),
        executor: productExecutor
      },
      {
        schema: await introspectSchema(inventoryExecutor),
        executor: inventoryExecutor
      }
    ],

    typeDefs: 'type Query { heartbeat: String! }',
    resolvers: {
      Query: {
        heartbeat: () => 'OK'
      }
    }
  });
}

// ...

We are using the makeRemoteExecutor function to make our remote GraphQL executors. We have three remote executors here, one pointing to each of the Product, Inventory, and Review services. As this is a demo application, I have hardcoded the admin API keys from Fauna directly in the code. Avoid doing this in a real application; these secrets should never be exposed in code. Please use environment variables or a secret manager to pull in these values at runtime.

As you can see from the code above, we are returning the output of the stitchSchemas function from @graphql-tools. The function accepts a property called subschemas, in which we can pass an array of all the subgraphs we want to fetch and combine. We are also using a function called introspectSchema from graphql-tools, which uses the executor to run an introspection query against each downstream service and build a local copy of its schema. The executor itself is what the gateway then uses to proxy requests to the downstream services.

You can learn more about these functions on the graphql-tools documentation site.

Finally, we need to call the makeGatewaySchema function. We can remove the previously hardcoded schema from our code and replace it with the newly stitched schema.

// gateway.js

// ...

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return {
      schema,
      context: { authHeader: req.headers.authorization },
      graphiql: true,
    }
  }),
);

// ...

When we restart our server and go back to localhost, we will see that queries and mutations from all the Fauna instances are available in our GraphQL playground.

Let’s write a simple query that will fetch data from all Fauna instances simultaneously.
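Here is what such a query might look like. The heartbeat field comes from the gateway itself; the other root fields are hypothetical, since their actual names depend on the collections defined in each Fauna database’s schema:

{
  heartbeat
  allProducts {
    data {
      name
    }
  }
  allReviews {
    data {
      comment
    }
  }
}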

Stitch third party GraphQL APIs

We can stitch third-party GraphQL APIs into our gateway as well. For this demo, we are going to stitch the SpaceX open GraphQL API with our services.

The process is the same as above. We create a new executor and add it to our subschemas array.

// ...

async function makeGatewaySchema() {
  const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdRZVpACRMEEM1GKKYQxH2Qa4TzLKusTW2gN');
  const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdSdXiACRGmgJgAEgmF_ZfO7iobiXGVP2NzT');
  const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdR0kYACRWKJJUUwWIYoZuD6cJDTvXI0_Y70');

  const spacexExecutor = await makeRemoteExecutor('https://api.spacex.land/graphql/')

  return stitchSchemas({
    subschemas: [
      {
        schema: await introspectSchema(reviewExecutor),
        executor: reviewExecutor,
      },
      {
        schema: await introspectSchema(productExecutor),
        executor: productExecutor,
      },
      {
        schema: await introspectSchema(inventoryExecutor),
        executor: inventoryExecutor,
      },
      {
        schema: await introspectSchema(spacexExecutor),
        executor: spacexExecutor,
      },
    ],
    typeDefs: 'type Query { heartbeat: String! }',
    resolvers: {
      Query: {
        heartbeat: () => 'OK'
      }
    }
  });
}

// ...

Deploying the gateway

To make this a true serverless solution, we should deploy our gateway to a serverless function. For this demo, I am going to deploy the gateway to an AWS Lambda function. Netlify and Vercel are two other alternatives to AWS Lambda.

I am going to use the serverless framework to deploy the code to AWS. Let’s install the dependencies for it.

npm i -g serverless # if you don't have the serverless framework installed already
npm i serverless-http body-parser --save

Next, we need to make a configuration file called serverless.yaml.

# serverless.yaml

service: my-graphql-gateway

provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1

functions:
  app:
    handler: gateway.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

Inside the serverless.yaml, we define information such as the cloud provider, the runtime, and the path to our Lambda function. Feel free to take a look at the official documentation for the serverless framework for more in-depth information.

We will need to make some minor changes to our code before we can deploy it to AWS.

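Here is a minimal sketch of those changes, assuming the handler export matches the gateway.handler path declared in serverless.yaml (everything else stays the same):

// gateway.js

const serverless = require('serverless-http');
const bodyParser = require('body-parser');

// ... same imports, makeRemoteExecutor, and makeGatewaySchema as before ...

const app = express();

// Parse JSON request bodies before they reach the GraphQL middleware
app.use(bodyParser.json());

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return {
      schema,
      context: { authHeader: req.headers.authorization },
      graphiql: true,
    }
  }),
);

// Instead of app.listen(), export a Lambda-compatible handler
module.exports.handler = serverless(app);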

Notice the code above: we added the body-parser library to parse the JSON request body, and the serverless-http library. Wrapping the Express app instance with the serverless function takes care of all the underlying Lambda configuration.

We can run the following command to deploy this to AWS Lambda.

serverless deploy

This will take a minute or two to deploy. Once the deployment is complete we will see the API URL in our terminal.

Make sure you put /graphql at the end of the generated URL. (i.e. https://gy06ffhe00.execute-api.us-east-1.amazonaws.com/dev/graphql).
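We can verify the deployment by querying the gateway’s heartbeat field from the terminal. The URL below is a placeholder; use the one generated for your own deployment:

curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query":"{ heartbeat }"}' \
  https://<your-api-id>.execute-api.us-east-1.amazonaws.com/dev/graphql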

There you have it. We have achieved complete serverless nirvana 😉. We are now running three Fauna instances independent of each other stitched together with a GraphQL gateway.

Feel free to check out the code for this article here.

Conclusion

Schema stitching is one of the most popular solutions for breaking down monoliths and achieving separation of concerns between data sources. However, there are other solutions, such as Apollo Federation, which works in much the same way. If you would like to see an article like this on Apollo Federation, please let us know in the comments section. That’s it for today. See you next time.



From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo

I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.

As the code was reformatted time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”

Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.

Stage 1: Single repo

The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.

It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.

Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.

I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.

A single repository hosting all our code.

Issues with the single repo

While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.

So it was time to start coding PHP the right way.

Stage 2: Multirepo

Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.

Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we are unable to have multiple PHP packages, each of them with its own composer.json hosted on the same repo.

As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.

In the multirepo, each package is hosted on its own repo.

Issues with the multirepo

Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.

The reason that the project was split into so many packages is because I also decoupled the code from WordPress (so that these could also be used with other CMSs), for which every package must be very granular, dealing with a single goal.

Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.

In my case, I ended up not using feature branches at all, simply pointing every package to the dev-master version of its dependencies (i.e. I was not versioning packages). I wouldn’t be surprised to learn that this is common practice more often than not.
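For illustration, here is roughly what that looked like in each package’s composer.json, with every dependency pinned to dev-master (the package names are real ones from the project):

{
  "require": {
    "getpop/root": "dev-master",
    "getpop/component-model": "dev-master"
  }
}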

There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos, and running git commit -m "some message" on the project executes git commit -m "some message" on every repo, keeping them in sync with each other.

However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.

So, it was time to bring all packages to the same repo.

Stage 3: Monorepo

The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.

The monorepo hosts multiple packages.

As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (at one repo per package) to publish them to Packagist for distribution and consumption.

The monorepo hosts the source code, multiple repos distribute it.

Switching to the Monorepo

Switching to the monorepo approach involved the following steps:

First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.
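For instance, the paths of some packages mentioned below look like this under that hierarchy:

layers/
├── Engine/
│   └── packages/
│       ├── component-model/
│       └── engine/
└── GraphQLAPIForWP/
    └── plugins/
        └── graphql-api-for-wp/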

The monorepo layers indicate the broader project.

Then, I copied all source code from all repos (getpop/engine, getpop/component-model, etc.) to the corresponding location for that package in the monorepo (i.e. layers/Engine/packages/engine, layers/Engine/packages/component-model, etc).

I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.

Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.

The downstream repo’s “READ ONLY” is located in the repo description.

I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:

{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}

…which returned a list like this:

{   "data": {     "repositoryOwner": {       "repositories": {         "nodes": [           {             "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=",             "name": "hooks",             "description": "Contracts to implement hooks (filters and actions) for PoP"           },           {             "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=",             "name": "root",             "description": "Declaration of dependencies shared by all PoP components"           },           {             "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=",             "name": "engine",             "description": "Engine for PoP"           }         ]       }     }   } }

From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk="
      description: "[READ ONLY] Engine for PoP"
    }
  ) {
    repository {
      description
    }
  }
}

Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.
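For reference, splitting out a single package by hand would look something like this (the branch name is just illustrative):

git subtree split --prefix=layers/Engine/packages/engine -b engine-only
git push git@github.com:getpop/engine.git engine-only:master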

I chose Monorepo builder, which is written in PHP. I like this tool because I can customize it with my own functionality. Other popular tools are the Git Subtree Splitter (written in Go) and Git Subsplit (bash script).

What I like about the Monorepo

I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.

The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.

To generate a WordPress plugin for distribution, I had created a generate_plugins.yml workflow that triggers when creating a release. With the monorepo, I have adapted it to generate not just one, but multiple plugins, configured via PHP through a custom command in plugin-config-entries-json, and invoked like this in GitHub Actions:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

This way, I can generate my GraphQL API plugin and other plugins hosted in the monorepo all at once. The configuration defined via PHP is this one.

class PluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API for WordPress
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
        'zip_file' => 'graphql-api.zip',
        'main_file' => 'graphql-api.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'graphql-api-for-wp-dist',
      ],
      // GraphQL API - Extension Demo
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
        'zip_file' => 'graphql-api-extension-demo.zip',
        'main_file' => 'graphql-api-extension-demo.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'extension-demo-dist',
      ],
    ];
  }
}

When creating a release, the plugins are generated via GitHub Actions.

This figure shows plugins generated when a release is created.

If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.

Issues with the Monorepo

I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.

That said, there are downsides to the monorepo:

  • We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries using MIT and the plugin using GPLv2. So, I decided to simplify it using the lowest common denominator between them, which is GPLv2.
  • There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors that were attracted to a specific project can easily get confused.
  • When tagging the code, all packages are versioned independently with that tag whether their particular code was updated or not. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
  • The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
The issues board can become chaotic without labels that are associated with projects.

All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.

I’m planning to create a “PRO” version of my plugin, which I plan to host in a private repo. However, the code in a repo is either all public or all private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the possibility of configuring it via PHP. And I want to keep it DRY, avoiding copy/pastes.

It was time to switch to the multi-monorepo.

Stage 4: Multi-monorepo

The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:

The upstream monorepo is contained within the downstream monorepo.

This approach satisfies my requirements by:

  • having the public repo leoloso/PoP be the upstream monorepo, and
  • creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
A private monorepo can access the files from a public monorepo.

leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):

This figure shows how the public monorepo is embedded within the private monorepo in the GitHub project.

Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.
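Embedding the upstream repo is a standard Git submodule operation, along these lines:

git submodule add https://github.com/leoloso/PoP submodules/PoP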

In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:

class PluginDataSource extends UpstreamPluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API PRO
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
        'zip_file' => 'graphql-api-pro.zip',
        'main_file' => 'graphql-api-pro.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-pro-dist',
      ],
      // GraphQL API Extensions
      // Google Translate
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
        'zip_file' => 'graphql-api-google-translate.zip',
        'main_file' => 'graphql-api-google-translate.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-google-translate-dist',
      ],
      // Events Manager
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
        'zip_file' => 'graphql-api-events-manager.zip',
        'main_file' => 'graphql-api-events-manager.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-events-manager-dist',
      ],
    ];
  }
}

GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.

To copy the workflows over, a simple Composer script can do:

{   "scripts": {     "copy-workflows": [       "php -r "copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');"",       "php -r "copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');""     ]   } }

Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:

composer copy-workflows

Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:

This figure shows the PRO plugins generated in GitHub Actions.

I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.

Issues with the multi-monorepo

First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.

When doing git checkout, the downstream monorepo must pass the --recurse-submodules option so as to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

As a result, we have to input submodules: recursive to the downstream workflow, but not to the upstream one even though they both use the same source file.

To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP receive the value for submodules via an environment variable, CHECKOUT_SUBMODULES, like this:

env:   CHECKOUT_SUBMODULES: "";  jobs:   provide_data:     steps:       - uses: actions/checkout@v2         with:           submodules: $ {{ env.CHECKOUT_SUBMODULES }} 

The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:

env:   CHECKOUT_SUBMODULES: "recursive" 

(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)
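A sed version of that replacement could look something like this, assuming GNU sed and targeting the workflow file copied earlier:

sed -i 's/CHECKOUT_SUBMODULES: ""/CHECKOUT_SUBMODULES: "recursive"/' .github/workflows/generate_plugins.yml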

This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. The contributors would also be completely unaware of those leakages (and they couldn’t be blamed for that). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full setup. While I share access to the public repo, I am the only one accessing the private one.

As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.

I guess I need to add a warning!

env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""

Wrapping up

My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:

  • It started as a single repo, hosting a monolithic app.
  • It became a multirepo when splitting the app into packages.
  • It was switched to a monorepo to better manage all the packages.
  • It was upgraded to a multi-monorepo to share files with a private monorepo.

Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.

Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all private plugins together. If I ever need to grant contractors access to a specific private plugin, while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.

I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.



How We Improved the Accessibility of Our Single Page App Menu

I recently started working on a Progressive Web App (PWA) for a client with my team. We’re using React with client-side routing via React Router, and one of the first elements that we made was the main menu. Menus are a key component of any site or app. That’s really how folks get around, so making it accessible was a super high priority for the team.

But in the process, we learned that making an accessible main menu in a PWA isn’t as obvious as it might sound. I thought I’d share some of those lessons with you and how we overcame them.

As far as requirements go, we wanted a menu that users could not only navigate using a mouse, but using a keyboard as well, the acceptance criteria being that a user should be able to tab through the top-level menu items, and the sub-menu items that would otherwise only be visible if a user with a mouse hovered over a top-level menu item. And, of course, we wanted a focus ring to follow the elements that have focus.

The first thing we had to do was update the existing CSS that was set up to reveal a sub-menu when a top-level menu item is hovered. We were previously using the visibility property, changing between visible and hidden on the parent container’s hovered state. This works fine for mouse users, but for keyboard users, focus doesn’t automatically move to an element that is set to visibility: hidden (the same applies for elements that are given display: none). So we removed the visibility property, and instead used a very large negative position value:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px; /* Kick it off the page instead of hiding visibility */
}

.menu-item:hover .sub-menu {
  left: 0;
}

This works perfectly fine for mouse users. But for keyboard users, the sub menu still wasn’t visible even though focus was within that sub menu! In order to make the sub-menu visible when an element within it has focus, we needed to make use of :focus and :focus-within on the parent container:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px;
}

.menu-item:hover .sub-menu,
.menu-item:focus .sub-menu,
.menu-item:focus-within .sub-menu {
  left: 0;
}

This updated code allows the sub-menus to appear as each of the links within that menu gets focus. As soon as focus moves to the next sub-menu, the first one hides and the second becomes visible. Perfect! We considered this task complete, so a pull request was created and it was merged into the main branch.

But then we used the menu ourselves the next day in staging to create another page and ran into a problem. Upon selecting a menu item—regardless of whether it’s a click or a tab—the menu itself wouldn’t hide. Mouse users would have to click off to the side in some white space to clear the focus, and keyboard users were completely stuck! They couldn’t hit the esc key to clear focus, nor any other key combination. Instead, keyboard users would have to press the tab key enough times to move the focus through the menu and onto another element that didn’t cause a large drop down to obscure their view.

The reason the menu would stay visible is because the selected menu item retained focus. Client-side routing in a Single Page Application (SPA) means that only a part of the page will update; there isn’t a full page reload.

There was another issue we noticed: it was difficult for a keyboard user to use our “Jump to Content” link. Web users typically expect that pressing the tab key once will highlight a “Jump to Content” link, but our menu implementation broke that. We had to come up with a pattern to effectively replicate the “focus clearing” that browsers would otherwise give us for free on a full page reload.

The first option we tried was the easiest: Add an onClick prop to React Router’s Link component, calling document.activeElement.blur() when a link in the menu is selected:

const Menu = () => {
  const clearFocus = () => {
    document.activeElement.blur();
  }

  return (
    <ul className="menu">
      <li className="menu-item">
        <Link to="/" onClick={clearFocus}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={clearFocus}>Products</Link>
        <ul className="sub-menu">
          <li>
            <Link to="/products/tops" onClick={clearFocus}>Tops</Link>
          </li>
          <li>
            <Link to="/products/bottoms" onClick={clearFocus}>Bottoms</Link>
          </li>
          <li>
            <Link to="/products/accessories" onClick={clearFocus}>Accessories</Link>
          </li>
        </ul>
      </li>
    </ul>
  );
}

This approach worked well for “closing” the menu after an item is clicked. However, if a keyboard user pressed the tab key after selecting one of the menu links, then the next link would become focused. As mentioned earlier, pressing the tab key after a navigation event would ideally focus on the “Jump to Content” link first.

At this point, we knew we were going to have to programmatically force focus to another element, preferably one that’s high up in the DOM. That way, when a user starts tabbing after a navigation event, they’ll arrive at or near the top of the page, similar to a full page reload, making it much easier to access the jump link.

We initially tried to force focus on the <body> element itself, but this didn’t work as the body isn’t something the user can interact with. There wasn’t a way for it to receive focus.

The next idea was to force focus on the logo in the header, as this itself is just a link back to the home page and can receive focus. However, in this particular case, the logo was below the “Jump To Content” link in the DOM, which means that a user would have to shift + tab to get to it. No good.

We finally decided that we had to render an interact-able element, for example, an anchor element, in the DOM, at a point that’s above the “Jump to Content” link. This new anchor element would be styled so that it’s invisible and that users are unable to focus on it using “normal” web interactions (i.e. it’s taken out of the normal tab flow). When a user selects a menu item, focus would be programmatically forced to this new anchor element, which means that pressing tab again would focus directly on the “Jump to Content” link. It also meant that the sub-menu would immediately hide itself once a menu item is selected.

const App = () => {
  const focusResetRef = React.useRef();

  const handleResetFocus = () => {
    focusResetRef.current.focus();
  };

  return (
    <Fragment>
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      <a href="#main" className="jump-to-content-a11y-styles">Jump To Content</a>
      <Menu onSelectMenuItem={handleResetFocus} />
      ...
    </Fragment>
  )
}

Some notes on this new “Focus Reset” anchor element:

  • href is set to javascript:void(0) so that if a user manages to interact with the element, nothing actually happens. For example, if a user presses the return key immediately after selecting a menu item, that will trigger the interaction. In that instance, we don’t want the page to do anything, or the URL to change.
  • tabIndex is set to -1 so that a user can’t “normally” move focus to this element. It also means that the first time a user presses the tab key upon loading a page, this element won’t be focused, but the “Jump To Content” link instead.
  • style simply moves the element out of the viewport. Setting position: fixed ensures it’s taken out of the document flow, so there isn’t any vertical space allocated to the element.
  • aria-hidden tells screen readers that this element isn’t important, so they don’t announce it to users.

But we figured we could improve this even further! Let’s imagine we have a mega menu, and the menu doesn’t hide automatically when a mouse user clicks a link. That’s going to cause frustration. A user will have to precisely move their mouse to a section of the page that doesn’t contain the menu in order to clear the :hover state, and therefore allow the menu to close.

What we need is to “force clear” the hover state. We can do that with the help of React and a clearHover class:

// Menu.jsx
const Menu = (props) => {
  const { onSelectMenuItem } = props;
  const [clearHover, setClearHover] = React.useState(false);

  const closeMenu = () => {
    onSelectMenuItem();
    setClearHover(true);
  }

  React.useEffect(() => {
    let timeout;
    if (clearHover) {
      timeout = setTimeout(() => {
        setClearHover(false);
      }, 0); // Adjust this timeout to suit the application's needs
    }
    return () => clearTimeout(timeout);
  }, [clearHover]);

  return (
    <ul className={`menu ${clearHover ? "clearHover" : ""}`}>
      <li className="menu-item">
        <Link to="/" onClick={closeMenu}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={closeMenu}>Products</Link>
        <ul className="sub-menu">
          {/* Sub Menu Items */}
        </ul>
      </li>
    </ul>
  );
}

This updated code hides the menu immediately when a menu item is clicked. It also hides immediately when a keyboard user selects a menu item. Pressing the tab key after selecting a navigation link moves the focus to the “Jump to Content” link.

At this point, our team had updated the menu component to a point where we were super happy. Both keyboard and mouse users get a consistent experience, and that experience follows what a browser does by default for a full page reload.

Our actual implementation is slightly different from the example here, so that we can use the pattern on other projects. We put it into a React Context, with the Provider set to wrap the Header component, and the Focus Reset element automatically added just before the Provider’s children. That way, the element is placed before the “Jump to Content” link in the DOM hierarchy. It also allows us to access the focus reset function with a simple hook, instead of having to prop drill it.
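Here is a rough sketch of that Context-based setup. All of the names are illustrative rather than our production code:

const FocusResetContext = React.createContext(() => {});

const FocusResetProvider = ({ children }) => {
  const focusResetRef = React.useRef();
  const resetFocus = React.useCallback(() => {
    focusResetRef.current.focus();
  }, []);

  return (
    <FocusResetContext.Provider value={resetFocus}>
      {/* Rendered before children, so it sits above the "Jump to Content" link */}
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      {children}
    </FocusResetContext.Provider>
  );
};

// Any descendant can grab the reset function without prop drilling
const useFocusReset = () => React.useContext(FocusResetContext);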

We have created a Code Sandbox that allows you to play with the three different solutions we covered here. You’ll definitely see the pain points of the earlier implementation, and then see how much better the end result feels!

We would love to hear feedback on this implementation! We think it’s going to work well, but it hasn’t been released into the wild yet, so we don’t have definitive data or user feedback. We’re certainly not a11y experts, just doing our best with what we do know, and are very open and willing to learn more on the topic.



A Whole Website in a Single HTML File

I can’t stop thinking about this site. It looks like pretty standard fare: a website with links to different pages. Nothing to write home about, except that… the whole website is contained within a single HTML file.

What about clicking the navigation links, you ask? Each link merely shows and hides certain parts of the HTML.

<section id="home">   <!-- home content goes here --> </section> <section id="about">   <!-- about page goes here --> </section>

Each <section> is hidden with CSS:

section { display: none; }

Each link in the main navigation points to an anchor on the page:

<a href="#home">Home</a> <a href="#about">About</a>

And once you click a link, the <section> for that particular link is displayed via:

section:target { display: block; }

See that :target pseudo-class? That’s the magic! Sure, it’s been around for years, but this is a clever way to use it for sure. Most times, it’s used to highlight the anchor on the page once an anchor link to it has been clicked. That’s a handy way to help the user know where they’ve just jumped to.
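That more familiar use can be as simple as a generic rule like this one, which highlights whatever element was just jumped to:

:target {
  background: lightyellow;
}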

Anyway, using :target like this is super smart stuff! It ends up looking like just a regular website when you click around:


