
How I Used the WAAPI to Build an Animation Library

The Web Animations API lets us construct animations and control their playback with JavaScript. The API opens the browser’s animation engine to developers and was designed to underlie implementations of both CSS animations and transitions, leaving the door open to future animation effects. It is one of the most performant ways to animate on the Web, letting the browser make its own internal optimizations without hacks, coercion, or window.requestAnimationFrame().

With the Web Animations API, we can move interactive animations from stylesheets to JavaScript, separating presentation from behavior. We no longer need to rely on DOM-heavy techniques such as writing CSS properties and scoping classes onto elements to control playback direction. And unlike pure, declarative CSS, JavaScript also lets us dynamically set values from properties to durations. For building custom animation libraries and creating interactive animations, the Web Animations API might be the perfect tool for the job. Let’s see what it can do!
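As a point of reference before any library gets involved, here is a minimal sketch of the raw API using the standard Element.animate() method; the .box selector and the keyframe values are just placeholders.

// Animate an element with plain WAAPI and keep a handle for playback control
const box = document.querySelector(".box");

const animation = box.animate(
  [
    { transform: "translateX(0px)", opacity: 1 },
    { transform: "translateX(300px)", opacity: 0.5 }
  ],
  { duration: 2000, easing: "ease-in-out", fill: "both" }
);

// Playback is controlled from JavaScript, not from a CSS class
animation.pause();
animation.playbackRate = 2;
animation.play();
animation.finished.then(() => console.log("Done!"));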

For the rest of this article, I will sometimes refer to the Web Animations API as WAAPI. When looking for resources, searching for “Web Animation API” can lead you astray, so, to make resources easier to find, I feel we should adopt the term WAAPI; tell me what you think in the comments below.

This is the library I made with the WAAPI

@okikio/animate is an animation library for the modern web. It was inspired by animateplus and animejs; it focuses on performance and developer experience, and uses the Web Animations API to deliver butter-smooth animations at a small size, weighing in at ~5.79 KB (minified and gzipped).

The story behind @okikio/animate

In 2020, I decided to make a more efficient PJAX library, similar to Rezo Zero’s Starting Blocks project, but with the ease of use of barbajs. I felt Starting Blocks was easier to extend with custom functionality, and could be made smoother, faster, and easier to use.

Note: if you don’t know what a PJAX library is, I suggest checking out MoOx/pjax; in short, PJAX allows for smooth transitions between pages by using fetch requests and swapping out DOM elements.

Over time my intent shifted, and I started noticing how often sites featured on awwwards.com used PJAX but butchered the natural experience of the site and browser. Many of the sites looked cool at first glance, but actual usage often told a different story: scrollbars were overridden, prefetching was too eager, and there was little preparation for people without fast internet connections, powerful CPUs, and/or GPUs. So, I decided to progressively enhance the library I was going to build. I started what I call the “native initiative,” stored in the GitHub repo okikio/native: a means of introducing all the cool and modern features in a highly performant, compliant, and lightweight way.

For the native initiative I designed the PJAX library @okikio/native; while testing it on an actual project, I ran into the Web Animations API and realized there were no libraries that took advantage of it, so I developed @okikio/animate to create a browser-compliant animation library. (Note: this was in 2020, around the same time use-web-animations by wellyshen was being developed. If you are using React and need some quick animate.css-like effects, use-web-animations is a good fit.) At first, it was supposed to be a simple wrapper but, little by little, I built on it and it’s now at 80% feature parity with more mature animation libraries.

Note: you can read more on the native initiative as well as the @okikio/native library in the GitHub repo okikio/native. Also, okikio/native is a monorepo, with @okikio/native and @okikio/animate being sub-packages within it.

Where @okikio/animate fits into this article

The Web Animations API is very open in design. It is functional on its own, but it’s not the most developer-friendly or intuitive API, so I developed @okikio/animate to act as a wrapper around the WAAPI and bring the features you know and love from other, more mature animation libraries (with some new features included) to the high-performance Web Animations API. Give the project’s README a read for much more context.

Now, let’s get started

@okikio/animate creates animations by creating new instances of Animate (a class that acts as a wrapper around the Web Animations API).

import { Animate } from "@okikio/animate";

new Animate({
  target: [/* ... */],
  duration: 2000,
  // ...
});

The Animate class receives a set of targets to animate and creates a list of WAAPI Animation instances, alongside a main animation. The main animation is a small Animation instance that animates a non-visible element and exists as a way of tracking the progress of the animations on the various target elements. The Animate class then plays each target element’s Animation instance, including the main animation, to create smooth animations.

The main animation is there to ensure accuracy across the different browser vendors’ implementations of WAAPI. The main animation is stored in Animate.prototype.mainAnimation, while each target element’s Animation instance is stored in a WeakMap, with its KeyframeEffect as the key. You can access the animation for a specific target using Animate.prototype.getAnimation(el).
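As a rough sketch of how that surfaces in practice (the .div selector and the playback calls are purely illustrative; getAnimation(el) is the method mentioned above):

import { animate } from "@okikio/animate";

// animate() returns the Animate instance described above
const animateInstance = animate({
  target: ".div",
  translateX: [0, 300],
  duration: 2000,
});

// Look up the underlying WAAPI Animation for one specific target element...
const el = document.querySelector(".div");
const waapiAnimation = animateInstance.getAnimation(el);

// ...and drive it with the standard WAAPI playback methods
waapiAnimation.pause();
waapiAnimation.play();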

You don‘t need to fully understand the prior sentences, but they will aid your understanding of what @okikio/animate does. If you want to learn more about how WAAPI works, check out MDN, or if you would like to learn more about the @okikio/animate library, I’d suggest checking out the okikio/native project on GitHub.

Usage, examples and demos

Creating a new instance of Animate every time you want an animation gets tedious, so I created the animate function, which creates a new Animate instance every time it’s called.

import animate from "@okikio/animate";
// or
import { animate } from "@okikio/animate";

animate({
  target: [/* ... */],
  duration: 2000,
  // ...
});

When using the @okikio/animate library to create animations you can do this:

import animate from "@okikio/animate";

// Or, if you installed it via the script tag:
// const { animate } = window.animate;

(async () => {
  let [options] = await animate({
    target: ".div",

    // Units are added automatically for transform CSS properties
    translateX: [0, 300],
    duration: 2000, // In milliseconds
    speed: 2,
  });

  console.log("The Animation is done...");
})();

You can also play with a demo with playback controls:

Try out Motion Path:

Try different types of Motion by changing the Animation Options:

I also created a complex demo page with polyfills:

You can find the source code for this demo in the animate.ts and animate.pug files in the GitHub repo. And, yes, the demo uses Pug, and is a fairly complex setup. I highly suggest looking at the README as a primer for getting started.

The native initiative uses Gitpod, so if you want to play with the demo, I recommend clicking the “Open in Gitpod” link since the entire environment is already set up for you — there’s nothing to configure.

You can also check out some more examples in this CodePen collection I put together. For the most part, you can port your code from animejs to @okikio/animate with few-to-no issues.

I should probably mention that @okikio/animate supports both the target and targets keywords for setting animation targets. @okikio/animate will merge both lists of targets into one and use Sets to remove any repeated targets. @okikio/animate also supports functions as animation options, so you can use staggering similar to animejs. (Note: the order of arguments is different; read more in the “Animation Options & CSS Properties as Methods” section of the README file.)

Restrictions and limitations

@okikio/animate isn’t perfect; nothing really is, and seeing as the Web Animation API is a living standard constantly being improved, @okikio/animate itself still has lots of space to grow. That said, I am constantly trying to improve it and would love your input so please open a new issue, create a pull request or we can have a discussion over at the GitHub project.

The first limitation is that it doesn’t really have a built-in timeline. There are a few reasons for this:

  1. I ran out of time. I am still only a student and don’t have lots of time to develop all the projects I want to.
  2. I didn’t think a formal timeline was needed, as async/await programming was supported. Also, I added timelineOffset as an animation option, should anyone ever need to create something similar to the timeline in animejs.
  3. I wanted to make @okikio/animate as small as possible.
  4. With group effects and sequence effects coming soon, I thought it would be best to leave the package small until an actual need comes up. On that note, I highly suggest reading Daniel C. Wilson’s series on the WAAPI, particularly the fourth installment that covers group effects and sequence effects.

Another limitation of @okikio/animate is that it lacks support for custom easings, like spring, elastic, etc. But check out Jake Archibald’s proposal for an easing worklet. He discusses multiple standards that are currently in discussion; I prefer his proposal, as it’s the easiest to implement, not to mention the most elegant of the bunch. In the meantime, I’m taking inspiration from Kirill Vasiltsov’s article on spring animations with WAAPI, and I am planning to build something similar into the library.

The last limitation is that @okikio/animate originally only supported automatic units on transform functions, e.g. translateX, translate, scale, skew, etc. This is no longer the case as of @okikio/animate@2.2.0, but there are still some limitations on CSS properties that support color. Check the GitHub release for more detail.

For example:

animate({
  targets: [".div", document.querySelectorAll(".el")],

  // By default, "px" will be applied
  translateX: 300,
  left: 500,
  margin: "56 70 8em 70%",

  // "deg" will be applied to rotate instead of "px"
  rotate: 120,

  // No units will be auto-applied
  color: "rgb(25, 25, 25)",
  "text-shadow": "25px 5px 15px rgb(25, 25, 25)"
});

Looking to the future

Some future features, like ScrollTimeline, are right around the corner. I don’t think anyone actually knows when it will be released, but since ScrollTimeline is already available in Chrome Canary 92, I think it’s safe to say the chances of a release in the near future look pretty good.

I built the timeline animation option into @okikio/animate to future-proof it. Here’s an example:
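Here is a rough sketch of the idea. The timeline animation option is the part @okikio/animate provides; the ScrollTimeline constructor options below are an assumption based on the experimental spec as it stood around Chrome Canary 92 and may not match what ships in current browsers.

import { animate } from "@okikio/animate";

// Assumption: an experimental ScrollTimeline driven by the page's scroll position
const scrollTimeline = new ScrollTimeline({
  source: document.scrollingElement,
});

animate({
  target: ".progress-bar",
  width: ["0%", "100%"],
  timeline: scrollTimeline, // the future-proofing hook in @okikio/animate
});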

Thanks to Bramus for the demo inspiration! Also, you may need the Canary version of Chrome or need to turn on Experimental Web Platform features in Chrome Flags to view this demo. It seems to work just fine on Firefox, though, so… 🤣.

If you want to read more on the ScrollTimeline, Bramus wrote an excellent article on it. I would also suggest reading the Google Developers article on Animation Worklets.

My hope is to make the library smaller. It’s currently ~5.79 KB which seems high, at least to me. Normally, I would use a bundlephobia embed but that has trouble bundling the project, so if you want to verify the size, I suggest using bundle.js.org because it actually bundles the code locally on your browser. I specifically built it for checking the bundle size of @okikio/animate, but note it’s not as accurate as bundlephobia.

Polyfills

One of the earlier demos shows polyfills in action. You are going to need web-animations-next.min.js from web-animations-js to support timelines and other modern features, like the KeyframeEffect constructor.

The polyfill uses JavaScript to test if the KeyframeEffect is supported and, if it isn’t, the polyfill loads and does its thing. Just avoid adding async/defer to the polyfill, or it will not work the way you expect. You’ll also want to polyfill Map, Set, and Promise.

<html>
  <head>
    <!-- Async -->
    <script src="https://cdn.polyfill.io/v3/polyfill.min.js?features=default,es2015,es2018,Array.prototype.includes,Map,Set,Promise" async></script>

    <!-- NO Async/Defer -->
    <script src="./js/webanimation-polyfill.min.js"></script>
  </head>
  <body>
    <!-- Content -->
  </body>
</html>

And if you’re building for ES6+, I highly recommend using esbuild for transpiling, bundling, and minifying. For ES5, I suggest using esbuild (with minify off), TypeScript (with a target of ES5), and terser; as of now, this is the fastest setup to transpile to ES5, and it’s faster and more reliable than Babel. See the Gulpfile from the demo for more details.
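For the ES6+ case, a build script can be as small as this. It is only a sketch of esbuild’s JavaScript API with assumed file paths (src/index.ts, dist/index.js), not the demo’s actual Gulpfile setup.

// build.js — minimal esbuild setup for an ES6+ bundle
const { build } = require("esbuild");

build({
  entryPoints: ["src/index.ts"], // assumed entry file
  bundle: true,
  minify: true,
  target: "es2017",
  format: "esm",
  outfile: "dist/index.js",
}).catch(() => process.exit(1));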

Conclusion

@okikio/animate is a wrapper around the Web Animation API (WAAPI) that allows you to use all the features you love from animejs and other animation libraries, in a small and concise package. So, what are your thoughts after reading about it? Is it something you think you’ll reach for when you need to craft complex animations? Or, even more important, is there something that would hold you back from using it? Leave a comment below or join the discussion on Github Discussions.


This article originally appeared on dev.to; it also appeared on hackernoon.com and hashnode.com.



How I Used Brotli to Get Even Smaller CSS and JavaScript Files at CDN Scale

The HBO sitcom Silicon Valley hilariously followed Pied Piper, a team of developers with startup dreams to create a compression algorithm so powerful that high-quality streaming and file storage concerns would become a thing of the past.

In the show, Google is portrayed by the fictional company Hooli, which is after Pied Piper’s intellectual property. The funny thing is that, while being far from a startup, Google does indeed have a powerful compression engine in real life called Brotli.

This article is about my experience using Brotli at production scale. Despite being computationally expensive and unfeasible for on-the-fly compression at its highest level, Brotli is actually very economical and saves cost on many fronts, especially when compared with gzip or with lower Brotli compression levels (which we’ll get into).

Brotli’s beginning…

In 2015, Google published a blog post announcing Brotli and released its source code on GitHub. The pair of developers who created Brotli also created Google’s Zopfli compression two years earlier. But where Zopfli leveraged existing compression techniques, Brotli was written from the ground-up and squarely focused on text compression to benefit static web assets, like HTML, CSS, JavaScript and even web fonts.

At that time, I was working as a freelance website performance consultant. I was really excited for the 20-26% improvement Brotli promised over Zopfli. Zopfli in itself is a dense implementation of the deflate compressor compared with zlib’s standard implementation, so the claim of up to 26% was quite impressive. And what’s zlib? It’s essentially the same as gzip.

So what we’re looking at is the next generation of Zopfli, which is an offshoot of zlib, which is essentially gzip.

A story of disappointment

It took a few months for major CDN players to support Brotli, but meanwhile it was seeing widespread adoption in tools, services, browsers and servers. However, the 26% denser compression that Brotli promised was never reflected in production. Some CDNs set a lower compression level internally, while others only supported Brotli at the origin, meaning they served it only if it was enabled manually at the origin.

Server support for Brotli was pretty good, but to achieve high compression levels, it required rolling your own pre-compression code or using a server module to do it for you — which is not always an option, especially in the case of shared hosting services.

This was really disappointing for me. I wanted to compress every last possible byte for my clients’ websites in a drive to make them faster, but using pre-compression and allowing clients to update files on demand simultaneously was not always easy.

Taking matters into my own hands

I started building my own performance optimization service for my clients.

I had several tricks that could significantly speed up websites. The service categorized all the optimizations into three groups: “Content,” “Delivery,” and “Cache” optimizations. I had Brotli in mind for the content optimization part of the service for compressible resources.

Like other compression formats, Brotli comes in different levels of power. Brotli’s max level is exactly like the max volume of the guitar amps in This is Spinal Tap: it goes to 11.

Brotli:11, or Brotli compression level 11, can offer a significant reduction in the size of compressible files, but has a substantial trade-off: it is painfully slow and not feasible for on-demand compression the way gzip is. It costs significantly more in terms of CPU time.

In my benchmarks, Brotli:11 takes several hundred milliseconds to compress a single minified jQuery file. So, the only way to offer Brotli:11 to my clients was to use it for pre-compression, leaving me to figure out a way to cache files at the server level. Luckily we already had that in place. The only problem was the fear that Brotli could kill all our processing resources.
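To make the trade-off concrete, here is a minimal sketch of Brotli:11 pre-compression using Node’s built-in zlib module; the file paths are placeholders and this is not the exact code my service runs.

// Pre-compress a static asset at Brotli quality 11 (Node.js)
const fs = require("fs");
const zlib = require("zlib");

const source = fs.readFileSync("./public/jquery.min.js"); // placeholder path

const compressed = zlib.brotliCompressSync(source, {
  params: {
    [zlib.constants.BROTLI_PARAM_QUALITY]: 11, // slow, but maximum density
    [zlib.constants.BROTLI_PARAM_SIZE_HINT]: source.length,
  },
});

// Store the result next to the original and serve it when the client
// sends "Accept-Encoding: br"
fs.writeFileSync("./public/jquery.min.js.br", compressed);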

Maybe that’s why Pied Piper had to continue rigging its servers for more power.

I put my fears aside and built Brotli:11 as a configurable server option. This way, clients could decide whether enabling it was worth the computing cost.

It’s slow, but gradually pays off

Among several other optimizations, the service for my clients also offers geographic content delivery; in other words, it has a built-in CDN.

Of the several tricks I tried when taking matters into my own hands, one was to combine a public (open-source) CDN and a private CDN on a single host so that websites could enjoy the benefits of a shared browser cache for public resources without incurring a separate DNS lookup and connection cost for a second public host. I wanted to avoid this extra connection cost because it has a significant impact on mobile users. Also, combining more and more resources on a single host helps get the most out of HTTP/2 features, like multiplexing.

I enabled the public CDN and turned on Brotli:11 pre-compression for all compressible resources, including CSS, JavaScript, SVG, and TTF, among other types of files. The overhead of compression did indeed increase on first request of each resource — but after that, everything seemed to run smoothly. Brotli has over 90% browser support and pretty much all the requests hitting my service now use Brotli.

I was happy. Clients were happy. But I didn’t have numbers. I started analyzing the impact of enabling this high density compression on public resources. For this, I recorded file transfer sizes of several popular libraries — including jQuery, Bootstrap, React, and other frameworks — that used common compression methods implemented by other CDNs and found that Brotli:11 compression was saving around 21% compared to other compression formats.

It’s important to note that some of the other public CDNs I compared were already using Brotli, but at lower compression levels. So, the 21% extra compression was really satisfying for me. This number is based on a very small subset of libraries, but it is not off by a big margin, as I was seeing this much gain on all of the websites that I tested.

Here is a graphical representation of the savings.

[Chart] A vertical bar chart comparing jQuery, Bootstrap, D3.js, Ant Design, Semantic UI, Font Awesome, React, Three.js, Bulma and Vue before and after Brotli:11 compression; the Brotli:11 bar is always smaller.

You can see the raw data in the table below. Note that the savings for CSS libraries are much more prominent than what JavaScript libraries get.

Library       | Original    | Avg. of Common Compression (A) | Brotli:11 (B) | Savings: (A) / (B) − 1
Ant Design    | 1,938.99 KB | 438.24 KB                      | 362.82 KB     | 20.79%
Bootstrap     | 152.11 KB   | 24.20 KB                       | 17.30 KB      | 39.88%
Bulma         | 186.13 KB   | 23.40 KB                       | 19.30 KB      | 21.24%
D3.js         | 236.82 KB   | 74.51 KB                       | 65.75 KB      | 13.32%
Font Awesome  | 1,104.04 KB | 422.56 KB                      | 331.12 KB     | 27.62%
jQuery        | 86.08 KB    | 30.31 KB                       | 27.65 KB      | 9.62%
React         | 105.47 KB   | 33.33 KB                       | 30.28 KB      | 10.07%
Semantic UI   | 613.78 KB   | 91.93 KB                       | 78.25 KB      | 17.48%
three.js      | 562.75 KB   | 134.01 KB                      | 114.44 KB     | 17.10%
Vue.js        | 91.48 KB    | 33.17 KB                       | 30.58 KB      | 8.47%
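The last column is simply the ratio of the two compressed sizes. Here is that arithmetic for the jQuery row:

// Savings formula from the table: (A) / (B) - 1
const commonCompression = 30.31; // KB, column (A), jQuery
const brotli11 = 27.65;          // KB, column (B), jQuery

const savings = (commonCompression / brotli11 - 1) * 100;
console.log(`${savings.toFixed(2)}%`); // "9.62%"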

The results are great, which is what I expected. But what about the overall impact of using Brotli:11 at scale? Turns out that using Brotli:11 for all public resources reduces cost all around:

  • The smaller file sizes are expected to result in lower TLS overhead. That said, it is not easily measurable, nor is it significant for my service because modern CPUs are very fast at encryption. Still, I believe there is some tiny and repeated saving on account of encryption for every request as smaller files encrypt faster.
  • It reduces the bandwidth cost. The 21% savings I got across the board is the case in point. And, remember, savings are not a one-time thing. Each request counts as cost, so the 21% savings is repeated time and again, creating a snowball savings for the cost of bandwidth. 
  • We only cache hot files in memory at edge servers. Due to the widespread browser support for Brotli, these hot files are mostly encoded by Brotli and their small size lets us fit more of them in available memory.
  • Visitors, especially those on mobile devices, enjoy reduced data transfer. This results in less battery use and savings on data charges. That’s a huge win that gets passed on to the users of our clients!

This is all so good. The cost we save per request is not significant, but considering we have a near-zero cache miss rate for public resources, we can easily amortize the initial high cost of compression over the next several hundred requests. After that, we’re looking at a lifetime benefit of reduced overhead.

It doesn’t end there

With the mix of public and private CDNs that we introduced as part of our performance optimization service, we wanted to make sure clients could set lower compression levels for resources that change frequently (like custom CSS and JavaScript) on the private CDN, and automatically switch to the public CDN, with its pre-configured Brotli:11, for open-source resources that change less often. This way, our clients get the highest compression ratio on resources that change less often, while still enjoying good compression ratios, along with instant purges and updates, for the resources that change frequently.

This all is done smoothly and seamlessly using our integration tools. The added benefit of this approach for clients is that the bandwidth on the public CDN is totally free with unprecedented performance levels.

Try it yourself!

In testing on a typical website, using aggressive compression can easily shave around 50 KB off the page load. If you want to play with the free public CDN and enjoy smaller CSS and JavaScript, you are welcome to use our PageCDN service. Here are snippets for some of the most-used libraries:

<!-- jQuery 3.5.0 -->
<script src="https://pagecdn.io/lib/jquery/3.5.0/jquery.min.js" crossorigin="anonymous" integrity="sha256-xNzN2a4ltkB44Mc/Jz3pT4iU1cmeR0FkXs4pru/JxaQ="></script>

<!-- FontAwesome 5.13.0 -->
<link href="https://pagecdn.io/lib/font-awesome/5.13.0/css/all.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-h20CPZ0QyXlBuAw7A+KluUYx/3pK+c7lYEpqLTlxjYQ=">

<!-- Ionicons 4.6.3 -->
<link href="https://pagecdn.io/lib/ionicons/4.6.3/css/ionicons.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-UUDuVsOnvDZHzqNIznkKeDGtWZ/Bw9ZlW+26xqKLV7c=">

<!-- Bootstrap 4.4.1 -->
<link href="https://pagecdn.io/lib/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-L/W5Wfqfa0sdBNIKN9cG6QA5F2qx4qICmU2VgLruv9Y=">

<!-- React 16.13.1 -->
<script src="https://pagecdn.io/lib/react/16.13.1/umd/react.production.min.js" crossorigin="anonymous" integrity="sha256-yUhvEmYVhZ/GGshIQKArLvySDSh6cdmdcIx0spR3UP4="></script>

<!-- Vue 2.6.11 -->
<script src="https://pagecdn.io/lib/vue/2.6.11/vue.min.js" crossorigin="anonymous" integrity="sha256-ngFW3UnAN0Tnm76mDuu7uUtYEcG3G5H1+zioJw3t+68="></script>

Our PHP library automatically switches between the private and public CDN if you need it to. The same feature is implemented seamlessly in our WordPress plugin, which automatically loads public resources over the public CDN. Both of these tools allow full access to the free public CDN. Libraries for JavaScript, Python, and Ruby are not yet available. If you contribute any such library to our public CDN, I will be happy to list it in our docs.

Additionally, you can use our search tool to immediately find a corresponding resource on the public CDN by supplying a URL of a resource on your website. If none of these tools work for you, then you can check the relevant library page and pick the URLs you want.

Looking toward the future

We started by hosting only the most popular libraries in order to prevent malware spread. However, things are changing rapidly and we add new libraries as our users suggest them to us. You are welcome to suggest your favorite ones, too. If you still want to link to a public or private GitHub repo that is not yet available on our public CDN, you can use our private CDN to connect to a repo and import all new releases as they appear on GitHub, then apply your own aggressive optimizations before delivery.

What do you think?

Everything we covered here is solely based on my personal experience working with Brotli compression at CDN scale. It just happens to be an introduction to my public CDN as well. We are still a small service and our client websites are only in the hundreds. Still, at this scale the aggressive compression seems to pay off.

I achieved high quality results for my clients and now you can use this free service for your websites as well. And, if you like it, please leave feedback at my email and recommend it to others.


Let’s Make One of Those Fancy Scrolling Animations Used on Apple Product Pages

Apple is well-known for the sleek animations on their product pages. For example, as you scroll down the page, products may slide into view, MacBooks fold open and iPhones spin, all while showing off the hardware, demonstrating the software and telling interactive stories of how the products are used.

Just check out this video of the mobile web experience for the iPad Pro:

Source: Twitter

A lot of the effects that you see there aren’t created in just HTML and CSS. What then, you ask? Well, it can be a little hard to figure out. Even using the browser’s DevTools won’t always reveal the answer, as it often can’t see past a <canvas> element.

Let’s take an in-depth look at one of these effects to see how it’s made so we can recreate it in our own projects. Specifically, let’s replicate the AirPods Pro product page and the shifting light effect in the hero image.

The basic concept

The idea is to create an animation just like a sequence of images in rapid succession. You know, like a flip book! No complex WebGL scenes or advanced JavaScript libraries are needed.

By synchronizing each frame to the user’s scroll position, we can play the animation as the user scrolls down (or back up) the page.

Start with the markup and styles

The HTML and CSS for this effect are very easy, as the magic happens inside the <canvas> element, which we control with JavaScript by giving it an ID.

In CSS, we’ll give our document a height of 100vh and make our <body> 5⨉ taller than that to give ourselves the necessary scroll length to make this work. We’ll also match the background color of the document with the background color of our images.

The last thing we’ll do is position the <canvas>, center it, and limit the max-width and height so it does not exceed the dimensions of the viewport.

html {
  height: 100vh;
}

body {
  background: #000;
  height: 500vh;
}

canvas {
  position: fixed;
  left: 50%;
  top: 50%;
  max-height: 100vh;
  max-width: 100vw;
  transform: translate(-50%, -50%);
}

Right now, we are able to scroll down the page (even though the content does not exceed the viewport height) and our <canvas> stays at the top of the viewport. That’s all the HTML and CSS we need.

Let’s move on to loading the images.

Fetching the correct images

Since we’ll be working with an image sequence (again, like a flip book), we’ll assume the file names are numbered sequentially in ascending order (i.e. 0001.jpg, 0002.jpg, 0003.jpg, etc.) in the same directory.

We’ll write a function that returns the file path with the number of the image file we want, based off of the user’s scroll position.

const currentFrame = index => (
  `https://www.apple.com/105/media/us/airpods-pro/2019/1299e2f5_9206_4470_b28e_08307a42f19b/anim/sequence/large/01-hero-lightpass/${index.toString().padStart(4, '0')}.jpg`
)

Since the image number is an integer, we’ll need to turn it into a string and use padStart(4, '0') to prepend zeros in front of our index until we reach four digits to match our file names. So, for example, passing 1 into this function gives us the path ending in 0001.jpg.
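For example, with the Apple URL above (shortened here for readability):

currentFrame(1);   // "https://www.apple.com/.../01-hero-lightpass/0001.jpg"
currentFrame(42);  // "https://www.apple.com/.../01-hero-lightpass/0042.jpg"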

That gives us a way to handle image paths. Here’s the first image in the sequence drawn on the <canvas> element:
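Here is a hedged sketch of that drawing code. The canvas id and the hard-coded dimensions are assumptions for illustration; match them to your own markup and image size.

// Assumes <canvas id="hero-lightpass"></canvas> exists in the markup
const canvas = document.getElementById("hero-lightpass");
const context = canvas.getContext("2d");

const frameCount = 148;
const img = new Image();

// Dimensions of the source frames (assumed values; use your image size)
canvas.width = 1158;
canvas.height = 770;

img.src = currentFrame(1);
img.onload = () => context.drawImage(img, 0, 0);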

As you can see, the first image is on the page. At this point, it’s just a static file. What we want is to update it based on the user’s scroll position. And we don’t merely want to load one image file and then swap it out by loading another image file. We want to draw the images on the <canvas> and update the drawing with the next image in the sequence (but we’ll get to that in just a bit).

We already made the function to generate the image file path based on the number we pass into it, so what we need to do now is track the user’s scroll position and determine the corresponding image frame for that scroll position.

Connecting images to the user’s scroll progress

To know which number we need to pass (and thus which image to load) in the sequence, we need to calculate the user’s scroll progress. We’ll make an event listener to track that and handle some math to calculate which image to load.

We need to know:

  • Where scrolling starts and ends
  • The user’s scroll progress (i.e. a percentage of how far the user is down the page)
  • The image that corresponds to the user’s scroll progress

We’ll use scrollTop to get the vertical scroll position of the element, which in our case happens to be the top of the document. That will serve as the starting point value. We’ll get the end (or maximum) value by subtracting the window height from the document scroll height. From there, we’ll divide the scrollTop value by the maximum value the user can scroll down, which gives us the user’s scroll progress.

Then we need to turn that scroll progress into an index number that corresponds with the image numbering sequence for us to return the correct image for that position. We can do this by multiplying the progress number by the number of frames (images) we have. We’ll use Math.floor() to round that number down and wrap it in Math.min() with our maximum frame count so it never exceeds the total number of frames.

// `html` refers to the root element, i.e. document.documentElement
const html = document.documentElement;

window.addEventListener('scroll', () => {
  const scrollTop = html.scrollTop;
  const maxScrollTop = html.scrollHeight - window.innerHeight;
  const scrollFraction = scrollTop / maxScrollTop;
  const frameIndex = Math.min(
    frameCount - 1,
    Math.floor(scrollFraction * frameCount)
  );
});

Updating <canvas> with the correct image

We now know which image we need to draw as the user’s scroll progress changes. This is where the magic of  <canvas> comes into play. <canvas> has many cool features for building everything from games and animations to design mockup generators and everything in between!

One API that pairs nicely with <canvas> is requestAnimationFrame, which lets us schedule drawing updates in step with the browser’s rendering in a way we couldn’t if we were working with straight image files instead. This is why I went with a <canvas> approach instead of, say, an <img> element or a <div> with a background image.

requestAnimationFrame runs our drawing code in sync with the browser’s refresh rate, and canvas rendering is typically hardware-accelerated by the device’s video card or integrated graphics. In other words, we’ll get super smooth transitions between frames with no image flashes!

Let’s call this function in our scroll event listener to swap images as the user scrolls up or down the page. requestAnimationFrame takes a callback argument, so we’ll pass a function that will update the image source and draw the new image on the <canvas>:

requestAnimationFrame(() => updateImage(frameIndex + 1))

We’re bumping up the frameIndex by 1 because, while the image sequence starts at 0001.jpg, our scroll progress calculation actually starts at 0. This ensures that the two values are always aligned.

The callback function we pass to update the image looks like this:

const updateImage = index => {
  img.src = currentFrame(index);
  context.drawImage(img, 0, 0);
}

We pass the frameIndex into the function. That sets the image source with the next image in the sequence, which is drawn on our <canvas> element.

Even better with image preloading

We’re technically done at this point. But, come on, we can do better! For example, scrolling quickly results in a little lag between image frames. That’s because every new image sends off a new network request, requiring a new download.

We should try preloading the images so they don’t require new network requests as the user scrolls. That way, each frame is already downloaded, making the transitions that much faster, and the animation that much smoother!

All we’ve gotta do is loop through the entire sequence of images and load ‘em up:

const frameCount = 148;

const preloadImages = () => {
  // Use <= so the last frame (0148.jpg) is preloaded as well
  for (let i = 1; i <= frameCount; i++) {
    const img = new Image();
    img.src = currentFrame(i);
  }
};

preloadImages();

Demo!

A quick note on performance

While this effect is pretty slick, it’s also a lot of images. 148 to be exact.

No matter how much we optimize the images, or how speedy the CDN that serves them is, loading hundreds of images will always result in a bloated page. Let’s say we have multiple instances of this on the same page. We might get performance stats like this:

1,609 requests, 55.8 megabytes transferred, 57.5 megabytes resources, load time of 30.45 seconds.

That might be fine for a high-speed internet connection without tight data caps, but we can’t say the same for users without such luxuries. It’s a tricky balance to strike, but we have to be mindful of everyone’s experience — and how our decisions affect them.

A few things we can do to help strike that balance include:

  • Loading a single fallback image instead of the entire image sequence
  • Creating sequences that use smaller image files for certain devices
  • Allowing the user to enable the sequence, perhaps with a button that starts and stops the sequence

Apple employs the first option. If you load the AirPods Pro page on a mobile device connected to a slow 3G connection and, hey, the performance stats start to look a whole lot better:

8 out of 111 requests, 347 kilobytes of 2.6 megabytes transferred, 1.4 megabytes of 4.5 megabytes resources, load time of one minute and one second.

Yeah, it’s still a heavy page. But it’s a lot lighter than what we’d get without any performance considerations at all. That’s how Apple is able to get so many complex sequences onto a single page.


Further reading

If you are interested in how these image sequences are generated, a good place to start is the Lottie library by Airbnb. The docs take you through the basics of generating animations with After Effects while providing an easy way to include them in projects.


Alpine.js: The JavaScript Framework That’s Used Like jQuery, Written Like Vue, and Inspired by TailwindCSS

We have big JavaScript frameworks that tons of people already use and like, including React, Vue, Angular, and Svelte. Do we need another JavaScript library? Let’s take a look at Alpine.js and you can decide for yourself. Alpine.js is for developers who aren’t looking to build a single page application (SPA). It’s lightweight (~7kB gzipped) and designed to write markup-driven client-side JavaScript.

The syntax is borrowed from Vue and Angular directives. That means it will feel familiar if you’ve worked with those before. But, again, Alpine.js is not designed to build SPAs, but rather to enhance your templates with a little bit of JavaScript.

For example, here’s an Alpine.js demo of an interactive “alert” component.

The alert message is two-way bound to the input using x-model="msg". The “level” of the alert message is set using a reactive level property. The alert displays when both msg and level have a value.

It’s like a replacement for jQuery and JavaScript, but with declarative rendering

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Since Alpine.js is less than a year old, it can make assumptions about DOM APIs that jQuery cannot. Let’s briefly draw a comparison between the two.

Querying vs. binding

The bulk of jQuery’s size and features comes in the shape of a cross-browser compatibility layer over imperative DOM APIs — this is usually referred to as jQuery Core and sports features that can query the DOM and manipulate it.

The Alpine.js answer to jQuery core is a declarative way to bind data to the DOM using the x-bind attribute binding directive. It can be used to bind any attribute to reactive data on the Alpine.js component. Alpine.js, like its declarative view library contemporaries (React, Vue), provides x-ref as an escape hatch to directly access DOM elements from JavaScript component code when binding is not sufficient (e.g. when integrating a third-party library that needs to be passed a DOM Node).

Handling events

jQuery also provides a way to handle, create and trigger events. Alpine.js provides the x-on directive and the $event magic value, which allows JavaScript functions to handle events. To trigger (custom) events, Alpine.js provides the $dispatch magic property, which is a thin wrapper over the browser’s Event and dispatchEvent APIs.

Effects

One of jQuery’s key features is its effects, or rather, its ability to write easy animations. Where we might use the slideUp, slideDown, fadeIn and fadeOut methods in jQuery to create effects, Alpine.js provides a set of x-transition directives, which add and remove classes throughout the element’s transition. That’s largely inspired by the Vue Transition API.

Also, jQuery’s Ajax client has no prescriptive equivalent in Alpine.js; you can reach for the Fetch API or a third-party HTTP library (e.g. axios, ky, superagent) instead.

Plugins

It’s also worth calling out jQuery plugins. There is no comparison to that (yet) in the Alpine.js ecosystem. Sharing Alpine.js components is relatively simple, usually a case of copy and paste. The JavaScript in Alpine.js components is “just functions” and tends not to access Alpine.js itself, making components relatively straightforward to share by including them on different pages with a script tag. Any magic properties are added when Alpine initializes or are passed into bindings, like $event in x-on bindings.
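For instance, a shareable component can be a plain function defined in a regular script file and referenced from x-data in the markup. This is a sketch of that pattern; the dropdown() name and its fields are illustrative, not from any particular library.

// dropdown.js — loaded with a plain <script> tag and reused across pages
function dropdown() {
  return {
    open: false,                          // reactive state read by bindings
    toggle() { this.open = !this.open; },
    close() { this.open = false; },
  };
}
// In the markup it would be wired up with x-data="dropdown()".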

There are currently no examples of Alpine.js extensions, although there are a few issues and pull requests to add “core” events that hook into Alpine.js from other libraries. There are also discussions happening about the ability to add custom directives. The stance from Alpine.js creator Caleb Porzio seems to be to base API decisions on the Vue APIs, so I would expect any future extension point to be inspired by what Vue.js provides.

Size

Alpine.js is lighter weight than jQuery, coming in at 21.9kB minified — 7.1kB gzipped — compared to jQuery at 87.6kB minified — 30.4kB minified and gzipped. Only 23% the size!

Most of that is likely due to the way Alpine.js focuses on providing a declarative API for the DOM (e.g. attribute binding, event listeners and transitions).

Bundlephobia breaks down the two

For the sake of comparison, Vue comes in at 63.5kB minified (22.8kB gzipped). How can Alpine.js come in lighter despite its API being equivalent to Vue’s? Alpine.js does not implement a Virtual DOM. Instead, it directly mutates the DOM while exposing the same declarative API as Vue.

Let’s look at an example

Alpine is compact because application code is declarative in nature and is declared via templates. For example, here’s a Pokemon search page using Alpine.js:

This example shows how a component is set up using x-data and a function that returns the initial component data, methods, and x-init to run that function on load.

Bindings and event listeners in Alpine.js use a syntax that’s strikingly similar to Vue templates.

  • Alpine: x-bind:attribute="expression" and x-on:eventName="expression", shorthand is :attribute="expression" and @eventName="expression" respectively
  • Vue: v-bind:attribute="expression" and v-on:eventName="expression", shorthand is :attribute="expression" and @eventName="expression" respectively

Rendering lists is achieved with x-for on a template element and conditional rendering with x-if on a template element.

Notice that Alpine.js doesn’t provide a full templating language, so there’s no interpolation syntax (e.g. {{ myValue }} in Vue.js, Handlebars and AngularJS). Instead, binding dynamic content is done with the x-text and x-html directives (which map directly to underlying calls to Node.innerText and Node.innerHTML).

An equivalent example using jQuery is an exercise you’re welcome to take on, but the classic style includes several steps:

  • Imperatively bind to the button click using $('button').click(/* callback */).
  • Within this “click callback” get the input value from the DOM, then use it to call the API.
  • Once the call has completed, the DOM is updated with new nodes generated from the API response.

If you’re interested in a side by side comparison of the same code in jQuery and Alpine.js, Alex Justesen created the same character counter in jQuery and in Alpine.js.

Back in vogue: HTML-centric tools

Alpine.js takes inspiration from TailwindCSS. The Alpine.js introduction on the repository describes it as “Tailwind for JavaScript.”

Why is that important?

One of Tailwind’s selling points is that it “provides low-level utility classes that let you build completely custom designs without ever leaving your HTML.” That’s exactly what Alpine does. It works inside HTML, so there is no need to work inside of JavaScript templates the way we would in Vue or React. Many of the Alpine examples cited in the community don’t even use script tags at all!

Let’s look at one more example to drive the difference home. Here’s an accessible navigation menu in Alpine.js that uses no script tags whatsoever.

This example leverages aria-labelledby and aria-controls outside of Alpine.js (with id references). Alpine.js makes sure the “toggle” element (which is a button), has an aria-expanded attribute that’s true when the navigation is expanded, and false when it’s collapsed. This aria-expanded binding is also applied to the menu itself and we show/hide the list of links in it by binding to hidden.

Being markup-centric means that Alpine.js and TailwindCSS examples are easy to share. All it takes is a copy-paste into HTML that is also running Alpine.js/TailwindCSS. No crazy directories full of templates that compile and render into HTML!

Since HTML is a fundamental building block of the web, Alpine.js is ideal for augmenting server-rendered (Laravel, Rails, Django) or static sites (Hugo, Hexo, Jekyll). Integrating data with this sort of tooling can be as simple as outputting some JSON into the x-data="{}" binding. The affordance of passing some JSON from your backend/static site template straight into the Alpine.js component avoids building “yet another API endpoint” that simply serves a snippet of data required by a JavaScript widget.

Client-side without the build step

Alpine.js is designed to be used as a direct script include from a public CDN. Its developer experience is tailored for that. That’s why it makes for a great jQuery comparison and replacement: it’s dropped in and eliminates a build step.

While it’s not traditionally used this way, the bundled version of Vue can be linked up directly. Sarah Drasner has an excellent write-up showing examples of jQuery substituted with Vue. However, if you use Vue without a build step, you’re actively opting out of:

  • the Vue CLI
  • single file components
  • smaller/more optimized bundles
  • a strict CSP (Content Security Policy) since Vue inline templates evaluate expressions client-side

So, yes, while Vue boasts a buildless implementation, its developer experience is really dependent on the Vue CLI. The same could be said about Create React App for React, and the Angular CLI. Going build-less strips those frameworks of their best qualities.

There you have it! Alpine.js is a modern, CDN-first library that brings declarative rendering in a small payload — all without the build step and templates that other frameworks require. The result is an HTML-centric approach that not only resembles a modern-day jQuery but is a great substitute for it as well.

If you’re looking for a jQuery replacement that’s not going to force you into an SPA architecture, then give Alpine.js a go! Interested? You can find out more on Alpine.js Weekly, a free weekly roundup of Alpine.js news and articles.


Understanding How Reducers are Used in Redux

A reducer is a function that determines changes to an application’s state. It uses the action it receives to determine this change. We have tools, like Redux, that help manage an application’s state changes in a single store so that they behave consistently.

Why do we mention Redux when talking about reducers? Redux relies heavily on reducer functions that take the previous state and an action in order to execute the next state.

We’re going to focus squarely on reducers in this post. Our goal is to get comfortable working with the reducer function so that we can see how it is used to update the state of an application — and ultimately understand the role reducers play in a state manager, like Redux.

What we mean by “state”

State changes are based on a user’s interaction, or even something like a network request. If the application’s state is managed by Redux, the changes happen inside a reducer function — this is the only place where state changes happen. The reducer function makes use of the initial state of the application and something called action, to determine what the new state will look like.

If we were in math class, we could say:

initial state + action = new state

In terms of an actual reducer function, that looks like this:

const contactReducer = (state = initialState, action) => {
  // Do something
}

Where do we get that initial state and action? Those are things we define.

The state parameter

The state parameter that gets passed to the reducer function has to be the current state of the application. In this case, we’re calling that our initialState because it will be the first (and current) state and nothing will precede it.

contactReducer(initialState, action)

Let’s say the initial state of our app is an empty list of contacts and our action is adding a new contact to the list.

const initialState = {
  contacts: []
}

That creates our initialState, which is equal to the state parameter we need for the reducer function.

The action parameter

An action is an object that contains two keys and their values. The state update that happens in the reducer is always dependent on the value of action.type. In this scenario, we are demonstrating what happens when the user tries to create a new contact. So, let’s define the action.type as NEW_CONTACT.

const action = {
  type: 'NEW_CONTACT',
  payload: {
    name: 'John Doe',
    location: 'Lagos Nigeria',
    email: 'johndoe@example.com'
  }
}

There is typically a payload value that contains what the user is sending; it is what gets used to update the state of the application. It is important to note that action.type is required, but action.payload is optional. Making use of payload brings a level of structure to how the action object looks.

Updating state

The state is meant to be immutable, meaning it shouldn’t be changed directly. To create an updated state, we can make use of Object.assign or opt for the spread operator.

Object.assign

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return Object.assign({}, state, {
        contacts: [
          ...state.contacts,
          action.payload
        ]
      })
    default:
      return state
  }
}

In the above example, we made use of Object.assign() to make sure that we do not change the state value directly. Instead, it allows us to return a new object filled with the state that is passed to it and the payload sent by the user.

To make use of Object.assign(), it is important that the first argument is an empty object. Passing the state as the first argument will cause it to be mutated, which is what we’re trying to avoid in order to keep things consistent.

The spread operator

The alternative to Object.assign() is to make use of the spread operator, like so:

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    default:
      return state
  }
}

This ensures that the incoming state stays intact as we append the new item to the bottom.

Working with a switch statement

Earlier, we noted that the update that happens depends on the value of action.type. The switch statement conditionally determines the kind of update we’re dealing with, based on the value of the action.type.

That means that a typical reducer will look like this:

const addContact = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    case 'UPDATE_CONTACT':
      return {
        // Handle contact update
      }
    case 'DELETE_CONTACT':
      return {
        // Handle contact delete
      }
    case 'EMPTY_CONTACT_LIST':
      return {
        // Handle contact list
      }
    default:
      return state
  }
}

It’s important that we return state as our default for when the value of action.type specified in the action object does not match what we have in the reducer — say, if, for some unknown reason, the action looks like this:

const action = {
  type: 'UPDATE_USER_AGE',
  payload: {
    age: 19
  }
}

Since we don’t have this kind of action type, we’ll want to return what we have in the state (the current state of the application) instead. All that means is we’re unsure of what the user is trying to achieve at the moment.
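To see both branches in one place, here is a quick sketch that runs the spread-operator version of contactReducer from above against a known action and an unknown one; the contact values are just placeholders.

const initialState = { contacts: [] };

// Known action type: the reducer returns a new state with the contact appended
const stateAfterAdd = contactReducer(initialState, {
  type: 'NEW_CONTACT',
  payload: { name: 'John Doe', location: 'Lagos Nigeria', email: 'johndoe@example.com' }
});
console.log(stateAfterAdd.contacts.length); // 1

// Unknown action type: the default branch returns the current state untouched
const stateAfterUnknown = contactReducer(stateAfterAdd, {
  type: 'UPDATE_USER_AGE',
  payload: { age: 19 }
});
console.log(stateAfterUnknown === stateAfterAdd); // true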

Putting everything together

Here’s a simple example of how I implemented the reducer function in React.

See the Pen “reducer example” by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

You can see that I didn’t make use of Redux, but this is very much the same way Redux uses reducers to store and update state changes. The primary state update happens in the reducer function, and the value it returns sets the updated state of the application.

Want to give it a try? You can extend the reducer function to allow the user to update the age of a contact. I’d like to see what you come up with in the comment section!

Understanding the role that reducers play in Redux should give you a better understanding of what happens underneath the hood. If you are interested in reading more about using reducers in Redux, it’s worth checking out the official documentation.
