
How I Used Brotli to Get Even Smaller CSS and JavaScript Files at CDN Scale

The HBO sitcom Silicon Valley hilariously followed Pied Piper, a team of developers with startup dreams of creating a compression algorithm so powerful that high-quality streaming and file storage concerns would become a thing of the past.

In the show, Google is portrayed by the fictional company Hooli, which is after Pied Piper’s intellectual property. The funny thing is that Google, while far from a startup, really does have a powerful compression engine in real life called Brotli.

This article is about my experience using Brotli at production scale. Although its highest setting is expensive and all but unfeasible for on-the-fly compression, Brotli is actually very economical and saves cost on many fronts, especially when compared with gzip or with lower Brotli compression levels (which we’ll get into).

Brotli’s beginning…

In 2015, Google published a blog post announcing Brotli and released its source code on GitHub. The pair of developers who created Brotli also created Google’s Zopfli compression two years earlier. But where Zopfli leveraged existing compression techniques, Brotli was written from the ground up and squarely focused on text compression to benefit static web assets, like HTML, CSS, JavaScript and even web fonts.

At that time, I was working as a freelance website performance consultant. I was really excited for the 20-26% improvement Brotli promised over Zopfli. Zopfli in itself is a dense implementation of the deflate compressor compared with zlib’s standard implementation, so the claim of up to 26% was quite impressive. And what’s zlib? It’s essentially the same as gzip.

So what we’re looking at is the next generation of Zopfli, which is an offshoot of zlib, which is essentially gzip.

A story of disappointment

It took a few months for major CDN players to support Brotli, but meanwhile it was seeing widespread adoption in tools, services, browsers and servers. However, the 26% denser compression that Brotli promised was rarely reflected in production. Some CDNs set a lower compression level internally, while others supported Brotli only if it was enabled manually at the origin.

Server support for Brotli was pretty good, but achieving high compression levels required rolling your own pre-compression code or using a server module to do it for you — which is not always an option, especially in the case of shared hosting services.

This was really disappointing for me. I wanted to compress every last possible byte for my clients’ websites in a drive to make them faster, but using pre-compression and allowing clients to update files on demand simultaneously was not always easy.

Taking matters into my own hands

I started building my own performance optimization service for my clients.

I had several tricks that could significantly speed up websites. The service grouped all of the optimizations into three categories: “Content,” “Delivery,” and “Cache.” I had Brotli in mind for the content optimization part of the service for compressible resources.

Like other compression formats, Brotli comes in different levels of power. Brotli’s max level is exactly like the max volume of the guitar amps in This is Spinal Tap: it goes to 11.

Brotli:11, or Brotli compression level 11, can offer a significant reduction in the size of compressible files, but it has a substantial trade-off: it is painfully slow and, unlike gzip, not feasible for on-demand compression because it costs significantly more CPU time.

In my benchmarks, Brotli:11 takes several hundred milliseconds to compress a single minified jQuery file. So, the only way to offer Brotli:11 to my clients was to use it for pre-compression, leaving me to figure out a way to cache files at the server level. Luckily we already had that in place. The only problem was the fear that Brotli could kill all our processing resources.
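For context, here is a rough sketch of that kind of pre-compression using Node’s built-in zlib bindings. The file names are just placeholders, not my actual build code:

const fs = require('fs');
const zlib = require('zlib');

// Read a minified asset and pre-compress it at Brotli's maximum level (11)
const source = fs.readFileSync('jquery.min.js');

const compressed = zlib.brotliCompressSync(source, {
  params: {
    [zlib.constants.BROTLI_PARAM_QUALITY]: 11,
    [zlib.constants.BROTLI_PARAM_SIZE_HINT]: source.length
  }
});

// Store the result next to the original so it can be served later without re-compressing
fs.writeFileSync('jquery.min.js.br', compressed);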

Maybe that’s why Pied Piper had to continue rigging its servers for more power.

I put my fears aside and built Brotli:11 as a configurable server option. This way, clients could decide whether enabling it was worth the computing cost.

It’s slow, but gradually pays off

Among several other optimizations, the service for my clients also offers geographic content delivery; in other words, it has a built-in CDN.

Of the several tricks I tried when taking matters into my own hands, one was to combine public CDN (or open-source CDN) and private CDN on a single host so that websites can enjoy the benefits of shared browser cache of public resources without incurring separate DNS lookup and connection cost for that public host. I wanted to avoid this extra connection cost because it has significant impact for mobile users. Also, combining more and more resources on a single host can help get the most of HTTP/2 features, like multiplexing.

I enabled the public CDN and turned on Brotli:11 pre-compression for all compressible resources, including CSS, JavaScript, SVG, and TTF, among other types of files. The overhead of compression did indeed increase on first request of each resource — but after that, everything seemed to run smoothly. Brotli has over 90% browser support and pretty much all the requests hitting my service now use Brotli.
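Serving those pre-compressed files then comes down to content negotiation on the Accept-Encoding header. This isn’t my actual edge code, just a minimal Node sketch of the idea, with placeholder paths:

const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  const file = './assets/jquery.min.js'; // placeholder path
  const acceptsBrotli = /\bbr\b/.test(req.headers['accept-encoding'] || '');

  if (acceptsBrotli && fs.existsSync(file + '.br')) {
    // Send the Brotli:11 file that was compressed ahead of time
    res.writeHead(200, {
      'Content-Type': 'application/javascript',
      'Content-Encoding': 'br',
      'Vary': 'Accept-Encoding'
    });
    fs.createReadStream(file + '.br').pipe(res);
  } else {
    res.writeHead(200, { 'Content-Type': 'application/javascript', 'Vary': 'Accept-Encoding' });
    fs.createReadStream(file).pipe(res);
  }
}).listen(8080);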

I was happy. Clients were happy. But I didn’t have numbers. So I started analyzing the impact of enabling this high-density compression on public resources. I recorded the transfer sizes of several popular libraries — including jQuery, Bootstrap, React, and other frameworks — as served with the common compression methods used by other CDNs, and found that Brotli:11 was saving around 21% compared to those other compression formats.

It’s important to note that some of the other public CDNs I compared were already using Brotli, but at lower compression levels. So, the 21% extra compression was really satisfying for me. The number is based on a small subset of libraries, but it isn’t far off the mark: I was seeing roughly the same gain on all of the websites I tested.

Here is a graphical representation of the savings.

[Bar chart: before-and-after file sizes for jQuery, Bootstrap, D3.js, Ant Design, Semantic UI, Font Awesome, React, Three.js, Bulma and Vue; the Brotli:11 bar is always smaller.]

You can see the raw data below. Note that the savings for CSS are much more prominent than for JavaScript. The last column shows how much larger the averaged common compression output (A) is than the Brotli:11 output (B); for Bootstrap, for example, 24.20 ÷ 17.30 − 1 ≈ 39.9%.

Library | Original | Avg. of Common Compression (A) | Brotli:11 (B) | (A) ÷ (B) − 1
Ant Design | 1,938.99 KB | 438.24 KB | 362.82 KB | 20.79%
Bootstrap | 152.11 KB | 24.20 KB | 17.30 KB | 39.88%
Bulma | 186.13 KB | 23.40 KB | 19.30 KB | 21.24%
D3.js | 236.82 KB | 74.51 KB | 65.75 KB | 13.32%
Font Awesome | 1,104.04 KB | 422.56 KB | 331.12 KB | 27.62%
jQuery | 86.08 KB | 30.31 KB | 27.65 KB | 9.62%
React | 105.47 KB | 33.33 KB | 30.28 KB | 10.07%
Semantic UI | 613.78 KB | 91.93 KB | 78.25 KB | 17.48%
three.js | 562.75 KB | 134.01 KB | 114.44 KB | 17.10%
Vue.js | 91.48 KB | 33.17 KB | 30.58 KB | 8.47%

The results are great, which is what I expected. But what about the overall impact of using Brotli:11 at scale? Turns out that using Brotli:11 for all public resources reduces cost all around:

  • The smaller file sizes are expected to result in lower TLS overhead. That said, it is not easily measurable, nor is it significant for my service because modern CPUs are very fast at encryption. Still, I believe there is some tiny and repeated saving on account of encryption for every request as smaller files encrypt faster.
  • It reduces the bandwidth cost. The 21% savings I got across the board is the case in point. And, remember, savings are not a one-time thing. Each request counts as cost, so the 21% savings is repeated time and again, creating a snowball savings for the cost of bandwidth. 
  • We only cache hot files in memory at edge servers. Due to the widespread browser support for Brotli, these hot files are mostly encoded by Brotli and their small size lets us fit more of them in available memory.
  • Visitors, especially those on mobile devices, enjoy reduced data transfer. This results in less battery use and savings on data charges. That’s a huge win that gets passed on to the users of our clients!

This is all so good. The cost we save per request is not significant, but considering we have a near-zero cache miss rate for public resources, we can easily amortize the initial high cost of compression over the next several hundred requests. After that, we’re looking at a lifetime benefit of reduced overhead.

It doesn’t end there

With the mix of public and private CDNs that we introduced as part of our performance optimization service, clients can set lower compression levels on the private CDN for resources that change frequently over time (like custom CSS and JavaScript) and automatically switch to the public CDN, with its pre-configured Brotli:11, for open-source resources that change less often. This way, our clients get the highest compression ratio on resources that change rarely, while still enjoying good compression ratios plus instant purges and updates for the resources they change frequently.

This is all done smoothly and seamlessly using our integration tools. The added benefit of this approach for clients is that the bandwidth on the public CDN is totally free, with unprecedented performance levels.

Try it yourself!

Testing on a typical website, this aggressive compression can easily shave around 50 KB off the page load. If you want to play with the free public CDN and enjoy smaller CSS and JavaScript, you are welcome to use our PageCDN service. Here are some of the most-used libraries, ready for you to use:

<!-- jQuery 3.5.0 -->
<script src="https://pagecdn.io/lib/jquery/3.5.0/jquery.min.js" crossorigin="anonymous" integrity="sha256-xNzN2a4ltkB44Mc/Jz3pT4iU1cmeR0FkXs4pru/JxaQ="></script>

<!-- FontAwesome 5.13.0 -->
<link href="https://pagecdn.io/lib/font-awesome/5.13.0/css/all.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-h20CPZ0QyXlBuAw7A+KluUYx/3pK+c7lYEpqLTlxjYQ=">

<!-- Ionicons 4.6.3 -->
<link href="https://pagecdn.io/lib/ionicons/4.6.3/css/ionicons.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-UUDuVsOnvDZHzqNIznkKeDGtWZ/Bw9ZlW+26xqKLV7c=">

<!-- Bootstrap 4.4.1 -->
<link href="https://pagecdn.io/lib/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-L/W5Wfqfa0sdBNIKN9cG6QA5F2qx4qICmU2VgLruv9Y=">

<!-- React 16.13.1 -->
<script src="https://pagecdn.io/lib/react/16.13.1/umd/react.production.min.js" crossorigin="anonymous" integrity="sha256-yUhvEmYVhZ/GGshIQKArLvySDSh6cdmdcIx0spR3UP4="></script>

<!-- Vue 2.6.11 -->
<script src="https://pagecdn.io/lib/vue/2.6.11/vue.min.js" crossorigin="anonymous" integrity="sha256-ngFW3UnAN0Tnm76mDuu7uUtYEcG3G5H1+zioJw3t+68="></script>

Our PHP library automatically switches between the private and public CDN if you need it to. The same feature is implemented seamlessly in our WordPress plugin, which automatically loads public resources over the public CDN. Both of these tools allow full access to the free public CDN. Libraries for JavaScript, Python, and Ruby are not yet available. If you contribute any such library to our public CDN, I will be happy to list it in our docs.

Additionally, you can use our search tool to immediately find a corresponding resource on the public CDN by supplying a URL of a resource on your website. If none of these tools work for you, then you can check the relevant library page and pick the URLs you want.

Looking toward the future

We started by hosting only the most popular libraries in order to prevent malware spread. However, things are changing rapidly and we add new libraries as our users suggest them to us. You are welcome to suggest your favorite ones, too. If you want to link to a public or private GitHub repo that is not yet available on our public CDN, you can use our private CDN to connect to that repo, import all new releases as they appear on GitHub, and then apply your own aggressive optimizations before delivery.

What do you think?

Everything we covered here is solely based on my personal experience working with Brotli compression at CDN scale. It just happens to be an introduction to my public CDN as well. We are still a small service and our client websites are only in the hundreds. Still, at this scale the aggressive compression seems to pay off.

I achieved high quality results for my clients and now you can use this free service for your websites as well. And, if you like it, please leave feedback at my email and recommend it to others.


Let’s Make One of Those Fancy Scrolling Animations Used on Apple Product Pages

Apple is well-known for the sleek animations on their product pages. For example, as you scroll down the page, products may slide into view, MacBooks fold open and iPhones spin, all while showing off the hardware, demonstrating the software and telling interactive stories of how the products are used.

Just check out this video of the mobile web experience for the iPad Pro:

Source: Twitter

A lot of the effects that you see there aren’t created in just HTML and CSS. What then, you ask? Well, it can be a little hard to figure out. Even using the browser’s DevTools won’t always reveal the answer, as it often can’t see past a <canvas> element.

Let’s take an in-depth look at one of these effects to see how it’s made so we can recreate some of these magical effects in our own projects. Specifically, let’s replicate the AirPods Pro product page and the shifting light effect in the hero image.

The basic concept

The idea is to create an animation just like a sequence of images in rapid succession. You know, like a flip book! No complex WebGL scenes or advanced JavaScript libraries are needed.

By synchronizing each frame to the user’s scroll position, we can play the animation as the user scrolls down (or back up) the page.

Start with the markup and styles

The HTML and CSS for this effect are very light, as the magic happens inside the <canvas> element, which we control with JavaScript by giving it an ID.

In CSS, we’ll give our document a height of 100vh and make our <body> 5⨉ taller than that to give ourselves the necessary scroll length to make this work. We’ll also match the background color of the document with the background color of our images.

The last thing we’ll do is position the <canvas>, center it, and limit the max-width and height so it does not exceed the dimensions of the viewport.

html {
  height: 100vh;
}

body {
  background: #000;
  height: 500vh;
}

canvas {
  position: fixed;
  left: 50%;
  top: 50%;
  max-height: 100vh;
  max-width: 100vw;
  transform: translate(-50%, -50%);
}

Right now, we are able to scroll down the page (even though the content does not exceed the viewport height) and our <canvas> stays at the top of the viewport. That’s all the HTML and CSS we need.

Let’s move on to loading the images.

Fetching the correct images

Since we’ll be working with an image sequence (again, like a flip book), we’ll assume the file names are numbered sequentially in ascending order (i.e. 0001.jpg, 0002.jpg, 0003.jpg, etc.) in the same directory.

We’ll write a function that returns the file path with the number of the image file we want, based off of the user’s scroll position.

const currentFrame = index => (
  `https://www.apple.com/105/media/us/airpods-pro/2019/1299e2f5_9206_4470_b28e_08307a42f19b/anim/sequence/large/01-hero-lightpass/${index.toString().padStart(4, '0')}.jpg`
)

Since the image number is an integer, we’ll need to turn it in to a string and use padStart(4, '0') to prepend zeros in front of our index until we reach four digits to match our file names. So, for example, passing 1 into this function will return 0001.

That gives us a way to handle image paths. Here’s the first image in the sequence drawn on the <canvas> element:
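A rough sketch of the setup behind it (the canvas ID and the image dimensions here are assumptions rather than the demo’s exact values) looks like this:

const canvas = document.getElementById('hero-canvas'); // hypothetical ID
const context = canvas.getContext('2d');

const frameCount = 148; // total number of frames in the sequence

// Size the canvas to match the source images (assumed dimensions)
canvas.width = 1158;
canvas.height = 770;

// Draw the first frame as soon as it loads
const img = new Image();
img.src = currentFrame(1);
img.onload = () => context.drawImage(img, 0, 0);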

As you can see, the first image is on the page. At this point, it’s just a static file. What we want is to update it based on the user’s scroll position. And we don’t merely want to load one image file and then swap it out by loading another image file. We want to draw the images on the <canvas> and update the drawing with the next image in the sequence (but we’ll get to that in just a bit).

We already made the function to generate the image filepath based on the number we pass into it so what we need to do now is track the user’s scroll position and determine the corresponding image frame for that scroll position.

Connecting images to the user’s scroll progress

To know which number we need to pass (and thus which image to load) in the sequence, we need to calculate the user’s scroll progress. We’ll make an event listener to track that and handle some math to calculate which image to load.

We need to know:

  • Where scrolling starts and ends
  • The user’s scroll progress (i.e. a percentage of how far the user is down the page)
  • The image that corresponds to the user’s scroll progress

We’ll use scrollTop to get the vertical scroll position of the element, which in our case happens to be the top of the document. That will serve as the starting point value. We’ll get the end (or maximum) value by subtracting the window height from the document scroll height. From there, we’ll divide the scrollTop value by the maximum value the user can scroll down, which gives us the user’s scroll progress.

Then we need to turn that scroll progress into an index number that corresponds with the image numbering sequence for us to return the correct image for that position. We can do this by multiplying the progress number by the number of frames (images) we have. We’ll use Math.floor() to round that number down and wrap it in Math.min() with our maximum frame count so it never exceeds the total number of frames.
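For example, if the document can scroll 4,000px past the viewport (the maximum value) and the user has scrolled 1,000px, the scroll progress is 0.25; with 148 frames, that gives a frame index of Math.min(147, Math.floor(0.25 × 148)) = 37.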

const html = document.documentElement; // the root element we measure scroll against

window.addEventListener('scroll', () => {
  const scrollTop = html.scrollTop;
  const maxScrollTop = html.scrollHeight - window.innerHeight;
  const scrollFraction = scrollTop / maxScrollTop;
  const frameIndex = Math.min(
    frameCount - 1,
    Math.floor(scrollFraction * frameCount)
  );
});

Updating <canvas> with the correct image

We now know which image we need to draw as the user’s scroll progress changes. This is where the magic of  <canvas> comes into play. <canvas> has many cool features for building everything from games and animations to design mockup generators and everything in between!

One of the tools that pairs well with it is a method called requestAnimationFrame, which works with the browser to schedule updates to <canvas> in a way we couldn’t easily do if we were swapping straight image files. This is why I went with a <canvas> approach instead of, say, an <img> element or a <div> with a background image.

requestAnimationFrame matches the browser’s refresh rate, and <canvas> rendering is typically hardware-accelerated by the device’s video card or integrated graphics. In other words, we’ll get super smooth transitions between frames — no image flashes!

Let’s call this function in our scroll event listener to swap images as the user scrolls up or down the page. requestAnimationFrame takes a callback argument, so we’ll pass a function that will update the image source and draw the new image on the <canvas>:

requestAnimationFrame(() => updateImage(frameIndex + 1))

We’re bumping up the frameIndex by 1 because, while the image sequence starts at 0001.jpg, our scroll progress calculation actually starts at 0. This ensures that the two values are always aligned.

The callback function we pass to update the image looks like this:

const updateImage = index => {
  img.src = currentFrame(index);
  context.drawImage(img, 0, 0);
}

We pass the frameIndex into the function. That sets the image source with the next image in the sequence, which is drawn on our <canvas> element.

Even better with image preloading

We’re technically done at this point. But, come on, we can do better! For example, scrolling quickly results in a little lag between image frames. That’s because every new image sends off a new network request, requiring a new download.

We should try preloading the images instead of letting each frame trigger a fresh network request. That way, each frame is already downloaded, making the transitions that much faster, and the animation that much smoother!

All we’ve gotta do is loop through the entire sequence of images and load ‘em up:

const frameCount = 148;

const preloadImages = () => {
  // Preload every frame in the sequence (1 through frameCount)
  for (let i = 1; i <= frameCount; i++) {
    const img = new Image();
    img.src = currentFrame(i);
  }
};

preloadImages();

Demo!

A quick note on performance

While this effect is pretty slick, it’s also a lot of images. 148 to be exact.

No matter how much we optimize the images, or how speedy the CDN that serves them is, loading hundreds of images will always result in a bloated page. Let’s say we have multiple instances of this on the same page. We might get performance stats like this:

1,609 requests, 55.8 megabytes transferred, 57.5 megabytes resources, load time of 30.45 seconds.

That might be fine for a high-speed internet connection without tight data caps, but we can’t say the same for users without such luxuries. It’s a tricky balance to strike, but we have to be mindful of everyone’s experience — and how our decisions affect them.

A few things we can do to help strike that balance include:

  • Loading a single fallback image instead of the entire image sequence
  • Creating sequences that use smaller image files for certain devices
  • Allowing the user to enable the sequence, perhaps with a button that starts and stops the sequence

Apple employs the first option. If you load the AirPods Pro page on a mobile device connected to a slow 3G connection, the performance stats start to look a whole lot better:

8 out of 111 requests, 347 kilobytes of 2.6 megabytes transferred, 1.4 megabytes of 4.5 megabytes resources, load time of one minute and one second.

Yeah, it’s still a heavy page. But it’s a lot lighter than what we’d get without any performance considerations at all. That’s how Apple is able to get so many complex sequences onto a single page.


Further reading

If you are interested in how these image sequences are generated, a good place to start is the Lottie library by Airbnb. The docs take you through the basics of generating animations with After Effects while providing an easy way to include them in projects.


Alpine.js: The JavaScript Framework That’s Used Like jQuery, Written Like Vue, and Inspired by TailwindCSS

We have big JavaScript frameworks that tons of people already use and like, including React, Vue, Angular, and Svelte. Do we need another JavaScript library? Let’s take a look at Alpine.js and you can decide for yourself. Alpine.js is for developers who aren’t looking to build a single page application (SPA). It’s lightweight (~7kB gzipped) and designed to write markup-driven client-side JavaScript.

The syntax is borrowed from Vue and Angular directives. That means it will feel familiar if you’ve worked with those before. But, again, Alpine.js is not designed to build SPAs, but rather to enhance your templates with a little bit of JavaScript.

For example, here’s an Alpine.js demo of an interactive “alert” component.

The alert message is two-way bound to the input using x-model="msg". The “level” of the alert message is set using a reactive level property. The alert displays when both msg and level have a value.
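The demo lives in an embedded pen, but the component it describes boils down to something like this sketch (the markup here is an approximation, not the demo’s exact code):

<div x-data="{ msg: '', level: '' }">
  <input type="text" x-model="msg" placeholder="Alert message">

  <select x-model="level">
    <option value="">Choose a level…</option>
    <option value="info">Info</option>
    <option value="error">Error</option>
  </select>

  <!-- Only shown once both msg and level have a value -->
  <div x-show="msg && level" :class="level" x-text="msg"></div>
</div>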

It’s like a replacement for jQuery and JavaScript, but with declarative rendering

Alpine.js is a Vue template-flavored replacement for jQuery and vanilla JavaScript rather than a React/Vue/Svelte/WhateverFramework competitor.

Since Alpine.js is less than a year old, it can make assumptions about DOM APIs that jQuery cannot. Let’s briefly draw a comparison between the two.

Querying vs. binding

The bulk of jQuery’s size and features comes in the shape of a cross-browser compatibility layer over imperative DOM APIs — this is usually referred to as jQuery Core and sports features that can query the DOM and manipulate it.

The Alpine.js answer to jQuery core is a declarative way to bind data to the DOM using the x-bind attribute binding directive. It can be used to bind any attribute to reactive data on the Alpine.js component. Alpine.js, like its declarative view library contemporaries (React, Vue), provides x-ref as an escape hatch to directly access DOM elements from JavaScript component code when binding is not sufficient (eg. when integrating a third-party library that needs to be passed a DOM Node).
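As a quick sketch of both ideas (the names here are made up):

<div x-data="{ name: '' }">
  <!-- x-bind keeps the attribute in sync with component data -->
  <button x-bind:disabled="name.length === 0">Save</button>

  <!-- x-ref exposes the raw DOM node when you really need it -->
  <input x-model="name" x-ref="nameField">
  <button @click="$refs.nameField.focus()">Focus the input</button>
</div>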

Handling events

jQuery also provides a way to handle, create and trigger events. Alpine.js provides the x-on directive and the $event magic value, which gives JavaScript functions access to the event being handled. To trigger (custom) events, Alpine.js provides the $dispatch magic property, which is a thin wrapper over the browser’s CustomEvent and dispatchEvent APIs.
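Here’s a small sketch of dispatching and listening, with made-up event and property names:

<div x-data="{ lastMessage: '' }" @notify="lastMessage = $event.detail.message">
  <!-- $dispatch fires a bubbling CustomEvent that the parent's x-on handler picks up -->
  <button @click="$dispatch('notify', { message: 'Saved!' })">Save</button>

  <p x-text="lastMessage"></p>
</div>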

Effects

One of jQuery’s key features is its effects or, rather, its ability to write easy animations. Where we might use the slideUp, slideDown, fadeIn and fadeOut methods in jQuery to create effects, Alpine.js provides a set of x-transition directives, which add and remove classes throughout the element’s transition. That’s largely inspired by the Vue Transition API.

Also, jQuery’s Ajax client has no prescribed counterpart in Alpine.js, and it doesn’t really need one, thanks to the Fetch API or a third-party HTTP library (e.g. axios, ky, superagent).

Plugins

It’s also worth calling out jQuery plugins. There is no comparison to that (yet) in the Alpine.js ecosystem. Sharing Alpine.js components is relatively simple, usually a matter of copy and paste. The JavaScript in Alpine.js components is “just functions” and tends not to access Alpine.js itself, making components relatively straightforward to share by including them on different pages with a script tag. Any magic properties are added when Alpine initializes or are passed into bindings, like $event in x-on bindings.

There are currently no examples of Alpine.js extensions, although there are a few issues and pull requests to add “core” events that hook into Alpine.js from other libraries. There are also discussions happening about the ability to add custom directives. The stance from Alpine.js creator Caleb Porzio seems to be basing API decisions on the Vue APIs, so I would expect any future extension point to be inspired by what Vue.js provides.

Size

Alpine.js is lighter weight than jQuery, coming in at 21.9kB minified (7.1kB gzipped), compared to jQuery at 87.6kB minified (30.4kB gzipped). Only 23% the size!

Most of that is likely due to the way Alpine.js focuses on providing a declarative API for the DOM (e.g. attribute binding, event listeners and transitions).

Bundlephobia breaks down the two

For the sake of comparison, Vue comes in at 63.5kB minified (22.8kB gzipped). How can Alpine.js come in lighter despite its API being equivalent to Vue’s? Alpine.js does not implement a virtual DOM. Instead, it directly mutates the DOM while exposing the same declarative API as Vue.

Let’s look at an example

Alpine is compact because application code is declarative in nature and is declared via templates. For example, here’s a Pokemon search page using Alpine.js:

This example shows how a component is set up using x-data and a function that returns the initial component data, methods, and x-init to run that function on load.
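The demo is embedded as a pen; the component setup it describes looks roughly like the sketch below. The endpoint, function, and property names are my own stand-ins, not the demo’s actual code:

<div x-data="pokemonSearch()" x-init="load()">
  <input type="search" x-model="query" placeholder="Filter Pokemon">

  <template x-for="pokemon in results.filter(p => p.name.includes(query))" :key="pokemon.name">
    <p x-text="pokemon.name"></p>
  </template>
</div>

<script>
  // Returns the initial data and methods for the component above
  function pokemonSearch() {
    return {
      query: '',
      results: [],
      load() {
        fetch('https://pokeapi.co/api/v2/pokemon?limit=50')
          .then(res => res.json())
          .then(data => { this.results = data.results });
      }
    };
  }
</script>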

Bindings and event listeners in Alpine.js use a syntax that’s strikingly similar to Vue templates.

  • Alpine: x-bind:attribute="expression" and x-on:eventName="expression"; the shorthand is :attribute="expression" and @eventName="expression", respectively
  • Vue: v-bind:attribute="expression" and v-on:eventName="expression"; the shorthand is :attribute="expression" and @eventName="expression", respectively

Rendering lists is achieved with x-for on a template element and conditional rendering with x-if on a template element.
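For instance (the list contents here are made up):

<div x-data="{ fruits: ['apple', 'pear', 'plum'], showHeading: true }">
  <template x-if="showHeading">
    <h3>Fruit</h3>
  </template>

  <ul>
    <template x-for="fruit in fruits" :key="fruit">
      <li x-text="fruit"></li>
    </template>
  </ul>
</div>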

Notice that Alpine.js doesn’t provide a full templating language, so there’s no interpolation syntax (e.g. {{ myValue }} in Vue.js, Handlebars and AngularJS). Instead, binding dynamic content is done with the x-text and x-html directives (which map directly to underlying calls to Node.innerText and Node.innerHTML).

An equivalent example using jQuery is an exercise you’re welcome to take on, but the classic style includes several steps:

  • Imperatively bind to the button click using $('button').click(/* callback */).
  • Within this “click callback” get the input value from the DOM, then use it to call the API.
  • Once the call has completed, the DOM is updated with new nodes generated from the API response.

If you’re interested in a side by side comparison of the same code in jQuery and Alpine.js, Alex Justesen created the same character counter in jQuery and in Alpine.js.

Back in vogue: HTML-centric tools

Alpine.js takes inspiration from TailwindCSS. The Alpine.js introduction on the repository describes it as “Tailwind for JavaScript.”

Why is that important?

One of Tailwind’s selling points is that it “provides low-level utility classes that let you build completely custom designs without ever leaving your HTML.” That’s exactly what Alpine does: it works inside HTML, so there is no need to work inside of JavaScript templates the way we would in Vue or React. Many of the Alpine examples cited in the community don’t even use script tags at all!

Let’s look at one more example to drive the difference home. Here’s an accessible navigation menu in Alpine.js that uses no script tags whatsoever.

This example leverages aria-labelledby and aria-controls outside of Alpine.js (with id references). Alpine.js makes sure the “toggle” element (which is a button), has an aria-expanded attribute that’s true when the navigation is expanded, and false when it’s collapsed. This aria-expanded binding is also applied to the menu itself and we show/hide the list of links in it by binding to hidden.
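The demo is embedded as a pen; a stripped-down sketch of the same pattern (IDs and links are placeholders) might look like this:

<nav x-data="{ open: false }">
  <button id="nav-toggle"
          aria-controls="nav-menu"
          :aria-expanded="open ? 'true' : 'false'"
          @click="open = !open">
    Menu
  </button>

  <ul id="nav-menu"
      aria-labelledby="nav-toggle"
      :aria-expanded="open ? 'true' : 'false'"
      :hidden="!open">
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
  </ul>
</nav>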

Being markup-centric means that Alpine.js and TailwindCSS examples are easy to share. All it takes is a copy-paste into HTML that is also running Alpine.js/TailwindCSS. No crazy directories full of templates that compile and render into HTML!

Since HTML is a fundamental building block of the web, it means that Alpine.js is ideal for augmenting server-rendered (Laravel, Rails, Django) or static sites (Hugo, Hexo, Jekyll). Integrating data with this sort of tooling can be as simple as outputting some JSON into the x-data="{}" binding. The affordance of passing some JSON from your backend or static site template straight into the Alpine.js component avoids building “yet another API endpoint” that simply serves a snippet of data required by a JavaScript widget.

Client-side without the build step

Alpine.js is designed to be used as a direct script include from a public CDN. Its developer experience is tailored for that. That’s why it makes for a great jQuery comparison and replacement: it’s dropped in and eliminates a build step.

While it’s not traditionally used this way, the bundled version of Vue can be linked up directly. Sarah Drasner has an excellent write-up showing examples of jQuery substituted with Vue. However, if you use Vue without a build step, you’re actively opting out of:

  • the Vue CLI
  • single file components
  • smaller/more optimized bundles
  • a strict CSP (Content Security Policy) since Vue inline templates evaluate expressions client-side

So, yes, while Vue boasts a buildless implementation, its developer experience is really dependent on the Vue CLI. The same could be said about Create React App for React, and the Angular CLI. Going build-less strips those frameworks of their best qualities.

There you have it! Alpine.js is a modern, CDN-first  library that brings declarative rendering for a small payload — all without the build step and templates that other frameworks require. The result is an HTML-centric approach that not only resembles a modern-day jQuery but is a great substitute for it as well.

If you’re looking for a jQuery replacement that’s not going to force you into an SPA architecture, then give Alpine.js a go! Interested? You can find out more on Alpine.js Weekly, a free weekly roundup of Alpine.js news and articles.


Understanding How Reducers are Used in Redux

A reducer is a function that determines changes to an application’s state. It uses the action it receives to determine this change. We have tools, like Redux, that help manage an application’s state changes in a single store so that they behave consistently.

Why do we mention Redux when talking about reducers? Redux relies heavily on reducer functions that take the previous state and an action in order to execute the next state.

We’re going to focus squarely on reducers in this post. Our goal is to get comfortable working with the reducer function so that we can see how it is used to update the state of an application — and ultimately understand the role reducers play in a state manager, like Redux.

What we mean by “state”

State changes are based on a user’s interaction, or even something like a network request. If the application’s state is managed by Redux, the changes happen inside a reducer function — this is the only place where state changes happen. The reducer function makes use of the initial state of the application and something called action, to determine what the new state will look like.

If we were in math class, we could say:

initial state + action = new state

In terms of an actual reducer function, that looks like this:

const contactReducer = (state = initialState, action) => {
  // Do something
}

Where do we get that initial state and action? Those are things we define.

The state parameter

The state parameter that gets passed to the reducer function has to be the current state of the application. In this case, we’re calling that our initialState because it will be the first (and current) state and nothing will precede it.

contactReducer(initialState, action)

Let’s say the initial state of our app is an empty list of contacts and our action is adding a new contact to the list.

const initialState = {
  contacts: []
}

That creates our initialState, which is equal to the state parameter we need for the reducer function.

The action parameter

An action is an object that contains two keys and their values. The state update that happens in the reducer is always dependent on the value of action.type. In this scenario, we are demonstrating what happens when the user tries to create a new contact. So, let’s define the action.type as NEW_CONTACT.

const action = {
  type: 'NEW_CONTACT',
  payload: {
    name: 'John Doe',
    location: 'Lagos Nigeria',
    email: 'johndoe@example.com'
  }
}

There is typically a payload value that contains what the user is sending; it is what we use to update the state of the application. It is important to note that action.type is required, but action.payload is optional. Making use of payload brings a level of structure to how the action object looks.

Updating state

The state is meant to be immutable, meaning it shouldn’t be changed directly. To create an updated state, we can make use of Object.assign or opt for the spread operator.

Object.assign

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return Object.assign({}, state, {
        contacts: [
          ...state.contacts,
          action.payload
        ]
      })
    default:
      return state
  }
}

In the above example, we made use of the Object.assign() to make sure that we do not change the state value directly. Instead, it allows us to return a new object which is filled with the state that is passed to it and the payload sent by the user.

To make use of Object.assign(), it is important that the first argument is an empty object. Passing the state as the first argument will cause it to be mutated, which is what we’re trying to avoid in order to keep things consistent.

The spread operator

The alternative to Object.assign() is to make use of the spread operator, like so:

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    default:
      return state
  }
}

This ensures that the incoming state stays intact as we append the new item to the bottom.
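To see the pieces working together, here is a quick run of the reducer with the initial state and action from earlier:

const initialState = { contacts: [] };

const action = {
  type: 'NEW_CONTACT',
  payload: { name: 'John Doe', location: 'Lagos Nigeria', email: 'johndoe@example.com' }
};

const newState = contactReducer(initialState, action);

console.log(newState.contacts.length);     // 1: the new contact was appended
console.log(initialState.contacts.length); // 0: the original state was left untouched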

Working with a switch statement

Earlier, we noted that the update that happens depends on the value of action.type. The switch statement conditionally determines the kind of update we’re dealing with, based on the value of the action.type.

That means that a typical reducer will look like this:

const addContact = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    case 'UPDATE_CONTACT':
      return {
        // Handle contact update
      }
    case 'DELETE_CONTACT':
      return {
        // Handle contact delete
      }
    case 'EMPTY_CONTACT_LIST':
      return {
        // Handle contact list
      }
    default:
      return state
  }
}

It’s important that we return state as our default for when the value of action.type specified in the action object does not match what we have in the reducer — say, if for some unknown reason, the action looks like this:

const action = {
  type: 'UPDATE_USER_AGE',
  payload: {
    age: 19
  }
}

Since we don’t have this kind of action type, we’ll want to return what we have in the state (the current state of the application) instead. All that means is we’re unsure of what the user is trying to achieve at the moment.

Putting everything together

Here’s a simple example of how I implemented the reducer function in React.

See the Pen “reducer example” by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.
You can see that I didn’t make use of Redux, but this is very much the same way Redux uses reducers to store and update state changes. The primary state update happens in the reducer function, and the value it returns sets the updated state of the application.

Want to give it a try? You can extend the reducer function to allow the user to update the age of a contact. I’d like to see what you come up with in the comment section!

Understanding the role that reducers play in Redux should give you a better understanding of what happens underneath the hood. If you are interested in reading more about using reducers in Redux, it’s worth checking out the official documentation.
