
The `ping` attribute on anchor links

I didn’t know this was a thing until Stefan Judis’s post:

<a href="https://www.stefanjudis.com/popular-posts/"     ping="https://www.stefanjudis.com/tracking/">Read popular posts</a>

You give an anchor link a URL via a ping attribute, and when the link is clicked, the browser sends a POST request (with the literal body PING) to that URL. The request headers include a Ping-To key carrying the href value of the link.
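Stripped down, the request the browser fires off looks roughly like this (exact headers vary by browser and cross-origin rules):

POST /tracking/ HTTP/1.1
Host: www.stefanjudis.com
Content-Type: text/ping
Ping-From: https://the-linking-page.example/
Ping-To: https://www.stefanjudis.com/popular-posts/

PING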

Why? Data. Wouldn’t it be nice to know what off-site links people are clicking on your website?

Even if you have Google Analytics installed, you don’t get that data by default. You’d have to write something custom or use something like their autotrack plugin with the outboundLinkTracker. Whatever you do, it is non-trivial, as in order to work, it has to:

  1. Have JavaScript intervene
  2. Prevent the default action of the link (going to the website)
  3. Track the event (send a ping somewhere)
  4. Then tell the browser to actually go to the website (window.location = …)

That’s because running a bit of JavaScript to ping your tracking service is unreliable on an off-site click. Your site is unloading and the new site loading as fast as possible, meaning that ping might not get a chance to run.
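For illustration, a bare-bones version of that dance might look something like this (the /tracking endpoint and data-outbound attribute are made up for the example):

const trackedLinks = document.querySelectorAll("a[data-outbound]");
trackedLinks.forEach((link) => {
  link.addEventListener("click", (evt) => {
    evt.preventDefault(); // 2. stop the browser from navigating away
    // 3. send the ping; sendBeacon is designed to outlive the page unload
    navigator.sendBeacon("/tracking", link.href);
    window.location = link.href; // 4. then actually go to the website
  });
});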

Presumably, with the ping attribute, you don’t have to do this little JavaScript dance of sending the ping before the page unloads — it should “just work.” That’s a big plus for this technique. It’s so cool when complex ideas move down to a lower level of the platform, where they work more easily and more reliably.

There are heaps of downsides, though. You don’t exactly get a nice API for sending along metadata. The best you can do is append a query string for extra data (e.g., was the link in the footer, or was it in a blog post?). But perhaps the biggest limitation is that it only works on anchor links. If you were building a really serious event tracking system, you’d want it to be useful for any type of event (not just clicks) on any type of element (not just links). So it’s not terribly surprising that this remains almost entirely JavaScript’s domain.
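For example, with a made-up tracking endpoint:

<a href="https://example.com/"
   ping="https://yoursite.com/tracking?placement=footer&post=123">
  An outbound link
</a>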


Comparing the New Generation of Build Tools

A bunch of new developer tools have landed in the past year, and they’re biting at the heels of the tools that have dominated front-end development over the last few years: webpack, Babel, Rollup, Parcel, and create-react-app.

These new tools aren’t designed to perform the exact same function; each has different goals and different features for getting there. Despite their differences, these tools do share a common goal: improving the developer experience.


Specifically, I’d like to evaluate each one, outlining what they do, why we need them, and their use cases. I realize that comparisons aren’t always fair. Again, it’s not like any of the things we’re looking at in this article are direct competitors. In fact, Snowpack and Vite actually use esbuild under the hood for certain tasks. Our goal is more to get a better view of the landscape of developer tools that run tasks to make our jobs easier. This way, we see what options are out there and how they stack up, so we can make the best choices when we need them.

Of course, all of this will be colored by my experience using React and Preact. I’m more familiar with these libraries, but we’ll look at their support for other front-end frameworks too.

There have been a whole lot of great articles, streams and podcasts about these new developer tools. There are a couple of ShopTalk Show episodes I’d recommend for more context: Episode 454 discusses Vite and Episode 448 features the creators of wmr and Snowpack. Something that stands out from these episodes is that a huge amount of work has gone into building these tools to modernize our developer environments.

Why are these tools all arriving now?

In part, I think these tools are arriving as a reaction to JavaScript tooling fatigue — something captured nicely in this article about learning JavaScript back in 2016. They also fill a missing middle ground between writing a single vanilla JavaScript file, and having to download 200 megabytes of tooling dependencies before you’ve written a line of your own code. They come batteries-included without the dependency list, and are part of a trend of collapsing layers in the JavaScript ecosystem.

Snowpack, Vite, and wmr have all been enabled by native JavaScript modules in the browser. Back in 2018, Firefox 60 was released with ECMAScript 2015 modules enabled by default. Since then, all major browser engines have supported native JavaScript modules. Node.js also shipped with native JavaScript modules in November 2019. We’re still finding out what possibilities native JavaScript modules unlock today in 2021.

How are these different from existing tools?

Whether we use webpack, Rollup, or Parcel for a development server, the tool bundles our entire codebase from our source code and a node_modules folder, runs these through build processes — like Babel, TypeScript, or PostCSS — then pushes the bundled code to our browser. This all takes work, and can slow development servers to a crawl in larger codebases, even after all the work that’s gone into caching and optimizing.

Snowpack, Vite, and wmr development servers don’t follow this model. Instead, they wait until the browser finds an import statement and makes an HTTP request for the module. Only after this request is made will the tool apply transforms to the requested module and any leaf nodes in the module’s import tree, then serve these to the browser. This speeds things up a lot as there’s less work in the process of pushing to a dev server.

You’ll notice esbuild missing from this picture. It’s a bundler first and foremost. It doesn’t side-step bundling the way the other tools do. Instead, esbuild processes code extremely fast by avoiding expensive transformations, leveraging parallelization and using the Go language.

The experiment

I took one of the example apps from the React docs and rebuilt it with each tool covered in this article. The project I went with was Snap Shot by Yogita Verma. Here’s a link to the original repo, and a link to my repo with the four versions of Snap Shot, each using a different build tool. We’ll compare the output of each build step later. Rebuilding this app allowed me to test out the developer experience of pulling some pretty standard React dependencies into the tools, including React Router and axios.

Comparable features

Before we get into the specifics of each individual tool, they all support the following features out of the box (to varying degrees):

  • First-class support for native JavaScript modules
  • TypeScript compilation (but not type checking)
  • JSX
  • Plugin API for extensibility
  • A built-in development server
  • CSS bundling and support for CSS-in-JS libraries

All of these tools can compile TypeScript into JavaScript, but will do so even if there are type errors. For proper type checking, you would need to install TypeScript and run tsc --noEmit against your project, or alternatively, use editor plugins to watch for type errors.
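One way to wire that up is a separate npm script (a sketch, assuming TypeScript is installed as a dev dependency):

// package.json
"scripts": {
  "typecheck": "tsc --noEmit"
}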

OK, let’s take look at each tool.

esbuild

esbuild was created by Evan Wallace (CTO of Figma). Its main feature is that it provides a build step 10×-100× faster than Node-based bundlers (by their own benchmarks). It doesn’t provide many of the developer conveniences you might find in something like create-react-app. But there are more and more esbuild starters popping up that fill those gaps, including create-react-app-esbuild, estrella and Snowpack, which uses esbuild for its build step.

esbuild is very new. It hasn’t yet reached a 1.0 version and isn’t quite ready for production use — but it’s not far off. It gives you intuitive JavaScript and command line APIs with smart defaults.

Use cases

esbuild is a complete game-changer in the bundler world. It’s going to be most useful in large codebases where the speed difference between esbuild and node bundlers gets multiplied. When esbuild hits 1.0 it’s going to be very useful in big production sites, and will save teams a whole lot of time waiting for builds to complete. Unfortunately, big production sites will have to wait until esbuild becomes stable. In the meantime it’ll just be good to add some speed to your bundling in side projects.

esbuild’s lightning-fast speed will be a bonus for any kind of work that you’re doing. Less time spent waiting for builds to run is always going to be good for developer experience! That said, if you’re prototyping quick applications, you might want to start with something more high-level than esbuild — otherwise, you’ll need to spend some time pulling in dependencies and configuring your environment before you get the conveniences we expect in the JavaScript ecosystem. Also, if you want to minimize the size of your bundle as much as possible, you may want to use Rollup and Terser, which will produce slightly smaller bundle sizes.

Setup

I decided to start a React project in esbuild in a naïve way: npm installing esbuild, React and ReactDOM. I created a src/app.jsx file and a dist/index.html file. Then, I used the following command to compile the app into a dist/bundle.js file:

./node_modules/.bin/esbuild src/app.jsx --bundle --platform=browser --outfile=dist/bundle.js

When I hosted and opened index.html in the browser, I was met with the “white screen of death” and an “Uncaught ReferenceError: process is not defined” console error. Both the docs and the CLI explain exactly what you need to do to prevent this but it might be a bit of a “gotcha” for beginners, because it requires an extra argument when bundling React:

--define:process.env.NODE_ENV="production"

Or, if you’re calling esbuild in an npm script, write it like this to escape the quotes:

--define:process.env.NODE_ENV=\"production\"

This define argument is needed for any library bundled for the browser that expects node environment variables. Vue 2.0 also expects these. You won’t have the same problem with Preact because it doesn’t expect any environment variables and ships ready for the browser by default.

After I ran the command with the define argument, my “Hello world” React app was working perfectly. JSX works out of the box with .jsx files. That said, React needs to be manually imported, and JSX is then converted to React.createElement calls. However, there are ways to add auto imports in JSX and/or configure the JSX transform for Preact.
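For instance, esbuild ships --jsx-factory and --jsx-fragment flags that point the transform at Preact, and its inject feature can make the factory available everywhere without manual imports. A sketch (the preact-shim.js filename is my own):

./node_modules/.bin/esbuild src/app.jsx --bundle --jsx-factory=h --jsx-fragment=Fragment --inject:./preact-shim.js --outfile=dist/bundle.js

// preact-shim.js: injected so that h and Fragment resolve in every file
export { h, Fragment } from 'preact';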

Usage

esbuild provides a --serve option for a development server. This bypasses the filesystem and serves modules straight from memory, ensuring that the browser doesn’t pull older versions of modules. However, it doesn’t include live/hot reloading, so you will find yourself refreshing the browser after saving, which isn’t an ideal experience.

I decided to use the newly-released watch feature. This tells esbuild to recompile code every time a source file is saved. But we still need a server to see our saved changes. We can pull in a development server package, such as Luke Jackson’s servor:

npm install servor --save-dev

Then we can use esbuild’s JavaScript API to start a server and run esbuild’s watch mode at the same time. Let’s create a file at the root of our project called watch.js:

// watch.js
const esbuild = require("esbuild");
const servor = require("servor");

esbuild.build({
  // pass any options to esbuild here...
  entryPoints: ["src/app.jsx"],
  outdir: "dist",
  define: { "process.env.NODE_ENV": '"production"' },
  watch: true,
});

async function serve() {
  console.log("running server from: http://localhost:8080/");
  await servor({
    // pass any options to servor here...
    browser: true,
    root: "dist",
    port: 8080,
  });
}

serve();

Now run node watch.js in the command line. This gives us a nice dev server, though again, it doesn’t give us hot module replacement or fast refresh (i.e., your client-side state won’t be preserved). But this was enough for my testing needs.

Even though we’re rebundling our entire application every time we save a file, we’d need to have a pretty massive application before esbuild slows down. After I set up this tooling, I was getting instant feedback from changes. My computer uses an intel i7 from 2012, so it certainly isn’t a top-of-the-line machine.

If you need a preconfigured version of esbuild with live reload and some React defaults, you can clone this repo.

Supported files

esbuild can import CSS in JavaScript if that’s your style. It will compile CSS into an output file with the same name as your main output JavaScript file. It can also bundle CSS @import statements by default. There is no support for CSS Modules, but there are plans for it.
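So, assuming a styles.css next to the entry point from earlier, something like this is all it takes:

// app.jsx
import "./styles.css"; // emitted as dist/bundle.css alongside dist/bundle.js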

There is a growing community of plugins for esbuild. For example, there are plugins available for Vue single file components, and Svelte components.

esbuild works with JSON files and can bundle them into JavaScript modules without any configuration.

It can also import images in JavaScript with the option to either convert them into data URLs or copying them into an output folder. This behavior isn’t enabled by default, but you can add the following in your esbuild config object to enable either option:

loader: { '.png': 'dataurl' } // Converts to a data URL in the JS bundle
loader: { '.png': 'file' }    // Copies to the output folder

Code splitting appears to be a work in progress, but is mostly there in the ESM output format, and it does look like it is a priority for the project. It’s also worth mentioning that tree-shaking is built into esbuild by default and can’t be turned off.

Production build

Using the “minify” and “bundle” options in your esbuild command won’t create a bundle quite as small as a Rollup/Terser pipeline. This is because esbuild sacrifices some bundle size optimization to get through your code in as few passes as possible. However, the difference may be pretty negligible, and worth it for the increase in bundling speed, depending on your project. In my clone of the Snap Shot application, esbuild created a bundle of 177 KB, which isn’t a lot more than the 165 KB produced by Vite, which uses Rollup and Terser.

Overall

esbuild
Templates for multiple front end frameworks ❌
Hot module replacement development server ❌
Streaming imports ❌
Preconfigured production build ❌
Automatic PostCSS and preprocessor conversion ❌
HTM transform ❌
Rollup plugin support ❌
Size on disk (default install) 7.34 MB

esbuild is an extremely powerful tool. But it might be difficult if you’re used to zero-config setups. If you need more, then you might want to take a look at the next tool, Snowpack, which uses esbuild.

Snowpack

Snowpack is a build tool by the creators of Skypack and Pika. It provides an awesome development server and was created with an “unbundled development” philosophy. To quote the documentation: “You should be able to use a bundler because you want to, and not because you need to.”

By default, Snowpack’s build step doesn’t bundle files into a single package but provides unbundled esmodules that run in the browser. esbuild is actually included in there as a dependency, but the idea is to use JavaScript modules and only bundle with esbuild when it’s needed.

Snowpack has some pretty slick documentation, including a list of guides for using it with JavaScript frameworks, and a bunch of templates for them. Some of the guides are still a work in progress, but others like the one for React are nice and clear. It also looks like Snowpack treats Svelte as a first-class citizen. I actually first heard about Snowpack from Rich Harris’s “Futuristic Web Development” talk at Svelte Summit 2020. That said, the upcoming Svelte meta-framework SvelteKit was supposed to be powered by Snowpack but has since switched to Vite (which we’ll review next).

Use cases

Snowpack is a good choice if you want to double down on unbundled deployment. You may be writing source code with a small number of modules. This would mean you’re not creating a big request waterfall with an unbundled build. If you don’t need the added complexity and technical debt of bundling, then Snowpack is a great choice. A good use case would be if you’re incrementally adopting a front-end framework into a server-rendered or static application. You’d be pulling in as little tooling as possible from the node ecosystem but you’d still be getting the benefits of declarative frontend frameworks.

Secondly, I’d argue that Snowpack is a great wrapper around esbuild. If you want to try out esbuild but also want a development server and pre-written templates for front-end frameworks, then you can’t go wrong with Snowpack. Enable esbuild in the build step of your Snowpack config and you’re good to go.

As things currently stand, I’d argue that Snowpack wouldn’t be the best replacement for a zero configuration tool like create-react-app because you’ll need to pull in plugins and configure them yourself if you have a big application and need a super-fancy optimized production-ready build step.

Setup

Let’s start a project with Snowpack by jumping into the command line:

mkdir snowpackproject
cd snowpackproject
npm init # fill with defaults
npm install snowpack

Now, let’s add the following to package.json:

// package.json
"scripts": {
  "start": "snowpack dev",
  "build": "snowpack build"
},

Next, we’ll create a configuration file:

// Mac or Linux
touch snowpack.config.js

// Windows
new-item snowpack.config.js

I think the most magical part of Snowpack comes when setting one innocent-looking key value pair in the configuration file. Paste this into the configuration file, for example:

// snowpack.config.js
module.exports = {
  packageOptions: {
    "source": "remote",
  }
};

source: remote enables something called streaming imports. Streaming imports enable Snowpack to bypass npm installation by converting bare imports (e.g., import React from 'react';) into CDN imports from Skypack.

Moving ahead, let’s make an index.html file:

<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Snowpack streaming imports</title>
</head>
<body>
  <div id="root"></div>
  <!-- Note the type="module". This is important for JavaScript module imports. -->
  <script type="module" src="app.js"></script>
</body>
</html>

And, finally, we’ll add an app.jsx file:

// app.jsx
import React from 'react'
import ReactDOM from 'react-dom'

const App = () => {
  return <h1>Welcome to Snowpack streaming imports!</h1>
}

ReactDOM.render(<App />, document.getElementById('root'));

Note that we didn’t npm install React or ReactDOM at any stage. But if we start up the Snowpack developer server like this:

./node_modules/.bin/snowpack dev

…our app still works!

Instead of pulling from a node_modules folder, Snowpack pulls the npm package down from Skypack, a CDN that hosts the npm registry pre-optimized to work in the browser, and then serves it from a ./_snowpack/pkg URL.
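In other words, the bare import specifier gets rewritten on its way to the browser. Roughly:

// What we write in app.jsx:
import React from 'react';

// What the dev server actually serves (approximately):
import React from './_snowpack/pkg/react.js';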

Usage

This is a big step away from Node/npm-based workflow. What we’re actually looking at is a new CDN/JavaScript module-based workflow.

If, however, we leave our app as it is and run a production build, Snowpack throws an error. This is because it needs to know which versions of React and ReactDOM to use when building. You can fix this by writing to a snowpack.deps.json, which can be created automatically by running the following:

./node_modules/.bin/snowpack add react
./node_modules/.bin/snowpack add react-dom

That won’t download the package from npm, but it will record the version of the packages used for Snowpack builds.

One caveat is that we miss out on developer error messages, as Skypack will ship the production version of packages.

Even if we aren’t using streaming imports, the Snowpack development server bundles each dependency from node_modules into one JavaScript file per dependency, converts those files to native JavaScript modules, then serves them to the browser. This means the browser can cache these scripts and only re-request them if they’ve changed. The development server automatically refreshes on save, but doesn’t preserve the client-side state. All dependencies from npm seemed to work out of the box, regardless of whether they were using legacy module formats or node APIs (such as the infamous process.env we had trouble with in esbuild).

Preserving client-side state in React requires react-refresh, which requires a few Babel packages of its own as dependencies. These aren’t included by default, but are available using the more maximal React template. The template pulls in react-refresh, Prettier, Chai, and React Testing Library, for a total Node dependency package weighing in at 80 MB:

npx create-snowpack-app my-react-project --template @snowpack/app-template-react

Supported files

JSX is supported, but again, only with .jsx files by default. Snowpack automatically detects whether React or Preact is being used, and decides accordingly which render function to use for the JSX transform. However, if we want to customize the JSX further than this, we’d need to pull in Babel via their plugin. There is also a Snowpack plugin available for Vue single file components and, of course, for Svelte components. Further, Snowpack compiles TypeScript, but for type checking we need the TypeScript plugin.
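As a sketch, pulling in that Babel plugin means npm-installing @snowpack/plugin-babel and registering it in the config (your Babel settings then live in a regular .babelrc):

// snowpack.config.js
module.exports = {
  plugins: ["@snowpack/plugin-babel"],
};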

CSS can be imported into JavaScript and is tossed into the document <head> at runtime. CSS modules are also supported out of the box for scoping, as long as the files have the .module.css extension.

Imported JSON files will be cast into a JavaScript module with an object as a default export. Snowpack supports images and copies them into the production folder. Going along with its unbundled philosophy, Snowpack does not include images as data URLs in the bundle.

Production build

The default snowpack build command basically copies the exact source file structure into an output folder. For files that compile to JavaScript (e.g. TypeScript, JSX, JSON, .vue, .svelte), it transforms each individual file into a separate browser-friendly JavaScript module.

This works fine, but isn’t great for production, as it could cause a big waterfall of requests if the source code is split into a lot of files. In the Snap Shot application, I ended up with 184 KB of source files, which would then request another 105 KB of dependencies from Skypack, making for a pretty huge waterfall.

However, Snowpack pulls esbuild as a dependency and we can enable esbuild to bundle, minify and compile our code by adding an “optimize” object to the Snowpack config:

// snowpack.config.js
module.exports = {
  optimize: {
    bundle: true,
    minify: true,
    target: 'es2018',
  },
};

This runs the code using the optimization features provided by esbuild, so by just adding these options we could get the same build we had earlier on with esbuild.

Since esbuild hasn’t reached 1.0 yet, Snowpack recommends using either the webpack or Rollup plugin for production builds, both of which need to be configured.

Overall

Snowpack provides a lightweight developer experience with a full-featured development server, detailed documentation, and easy-to-install templates. You are left to decide whether you want to bundle your application and how you want to do so. If you want a tool that provides both a dev server and a more opinionated build step, you might want to take a look at Vite, the next tool on our list.

Snowpack
Templates for multiple front end frameworks ✅
Hot module replacement development server ✅ (when using templates)
Streaming imports ✅
Preconfigured production build ❌
Automatic PostCSS and preprocessor conversion ❌
HTM transform ❌
Rollup plugin support ✅ (when using snowpack-plugin-rollup-bundle for the build step)
Size on disk (default install) 16 MB

Vite

Vite is developed by Vue creator (and Hades speedrunner) Evan You. Where esbuild concentrates on the build step and Snowpack concentrates on the development server, Vite provides both: a full development server and an optimized build command using Rollup.

Use cases

If you want a serious create-react-app or Vue CLI competitor, Vite is the closest one in the bunch because it comes with batteries-included features. The lightning-fast development server and zero-config optimized production build mean you can get from zero to production without any configuration. Vite is a tool that could be used in both a tiny side project and a big production application. A good use case for Vite would be any sizeable single-page app.

Why wouldn’t you use Vite? Vite is an opinionated tool and you might disagree with its opinions. You might not want to use Rollup for your build (we’ve been talking about how fast esbuild is), or you might want your tooling to give you the full power of Babel, eslint and the ecosystem of webpack loaders out of the box.

Also, if you want a zero-config server-side rendering meta-framework, you’d be better off staying with webpack-based frameworks, like Nuxt.js and Next.js, until the story for Vite server-side rendering is more complete.

Setup

Vite has more opinionated defaults than esbuild and Snowpack. Its documentation is clear and detailed. We get full support for Vue with Evan being the creator and all, so Vite is a definite happy path for Vue developers. That said, Vite can be used with any front-end framework and even provides a list of templates to get you started.

Usage

Vite’s development server is pretty powerful. Vite pre-bundles all of a project’s dependencies together into a single native JavaScript module with esbuild, then serves it up with a heavily cached HTTP header. This means no time is wasted on compiling, serving or requesting imported dependencies after the first page load. Vite also provides clear error messaging, printing the exact block of code and the line numbers to troubleshoot. Again with Vite, I didn’t have any issues pulling in dependencies that used node APIs or legacy formats. They all seemed to be shimmed into a browser-acceptable esmodule.

Vite’s React and Vue templates both pull in plugins that enable hot module replacement. The Vue template pulls in a Vue plugin for single file components, and a Vue plugin for JSX. The React template pulls in the react-refresh plugin. Either way, both will give you hot module replacement and client-side state preservation. Sure, they add a few more dependencies, including Babel packages, but Babel isn’t actually necessary when using JSX in Vite. By default, JSX works the same way as in esbuild — it converts to React.createElement calls. It won’t automatically import React, but this behavior can be configured.
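For the curious, that configuration lives on Vite’s esbuild options. A sketch pointing the JSX transform at Preact and auto-injecting the factory:

// vite.config.js
export default {
  esbuild: {
    jsxFactory: 'h',
    jsxFragment: 'Fragment',
    jsxInject: `import { h, Fragment } from 'preact'`,
  },
};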

And while we’re at it, Vite doesn’t support streaming imports like Snowpack and wmr do. That means npm-installing dependencies as usual.

One cool thing is that Vite includes experimental support for server-side rendering. Pick your framework of choice and generate static HTML that ships directly to the client. At the moment, it looks like we need to construct this architecture on our own, but still, this looks like a good opportunity for meta-frameworks to be built on top of Vite. Evan You already has a work in progress called VitePress, a replacement for VuePress with the benefits of using Vite. And SvelteKit has also added Vite to its dependency list. It looks like CSS code-splitting was part of the reason SvelteKit made the switch to Vite.

Supported files

For CSS, Vite provides the most features out of all of the tools that we are looking at. It supports bundling CSS imports as well as CSS modules. But we can also npm install PostCSS plugins and create a postcss.config.js file, and Vite will automatically start applying these transforms to CSS.

We can install and use CSS preprocessors — simply npm install the preprocessor and give the file the right extension (e.g., .scss), and Vite will start applying the corresponding preprocessor. And, as we said in the overview, Vite supports CSS code-splitting.
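For example, getting Sass going should be as simple as:

npm install -D sass

After that, a plain import './styles.scss' from JavaScript just works.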

Image imports default to a public URL, but we’re also able to load them into the bundle as strings by using a ?raw parameter at the end of the URL string.

JSON files can be imported in the source and converted into an esmodule exporting a single object. We can also use a named import, and Vite will look at the root fields of the JSON file to find the import and tree-shake the rest.
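A quick sketch of that tree-shaking behavior:

// Only the `version` field ends up in the bundle;
// the rest of package.json is dropped from the build.
import { version } from './package.json';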

Production build

Vite uses Rollup for a preconfigured production build with a bunch of optimizations. It intentionally provides a zero-config build which should be enough for most use cases.

The build comes with the Rollup features we expect: bundling, minification and tree shaking. But we also get extras, like code-splitting dynamic imports and something called “asynchronous chunk loading,” which is a fancy way of saying that if we request a JavaScript module that imports another module, the build is pre-optimized to load both at the same time (asynchronously).
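As a sketch, a dynamic import like the one below gets split into its own chunk (the Chart module is made up), and that chunk’s own imports are fetched in parallel with it rather than one after the other:

const button = document.querySelector('button');
button.addEventListener('click', async () => {
  // Chart.js becomes a separate chunk, loaded on demand
  const { default: Chart } = await import('./Chart.js');
  new Chart();
});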

Running Vite’s default build with the Snap Shot app, I ended up with one 5 KB JavaScript file and one 160 KB JavaScript file (for a grand total of 165 KB), and all the CSS in the project was automatically minified down to a tiny 2.71 KB file.

Overall

The opinionated nature of Vite makes it a serious competitor with our current tooling. A lot of work has been done to make the developer experience really seamless and make production-ready builds out of the box.

Vite
Templates for multiple front end frameworks ✅
Hot module replacement development server ✅ (when using templates)
Streaming imports ❌
Preconfigured production build ✅
Automatic PostCSS and preprocessor conversion ✅
HTM transform ❌
Rollup plugin support ✅
Size on disk (default install) 17.1 MB

wmr

Like Vite, wmr is another opinionated build tool that provides both a development server and a build step. It was built by the creator of Preact, Jason Miller, so it’s definitely a happy path for Preact developers. Jason Miller explained the thinking behind wmr when he appeared as a guest on the JS Party podcast:

Preact is tiny and it’s really good if you want to do a lightweight project. Where is our tooling for that? We have a webpack-based tool that’s used in production by a bunch of high profile sites, but that’s the heavyweight tool. Where’s the prototyping tool? That was the one hand. The other hand is myself and a bunch of others who happened to be on the Preact team; we had been kind of on the sidelines of the bundler ecosystem for a little while, prodding people, trying to get consensus on a direction that we can move in to further this idea of writing modern code and shipping modern code.

This tells us that wmr is all about writing and shipping modern code, enabling lighter tooling in a project.

You might be wondering what wmr stands for? Nothing! The names “Web Modules Runtime” and “Wet Module Replacement” were floated, but it’s a fake abbreviation, similar to npm.

wmr is built with the same ruthless bundle size purging as Preact, so it’s tiny — weighing a mere 2.6 MB — and contains exactly zero npm dependencies. Still, it manages to pack in a whole lot of really awesome features including a hot-module-replacing development server and an optimized production build.

Use cases

I would use wmr if I were looking to create a prototype using Preact as fast as possible. There’s no configuration and it only takes seconds to download. It feels like using a supercharged static file server. With TypeScript, an optimized build step, and static HTML rendering, wmr offers everything needed to ship small-to-medium-sized applications. Its small size is also great for quickly trying out a library or demoing an idea.

wmr may not be the tool for you if you’re not using Preact, React or vanilla JavaScript. The Preact team has yet to provide templates for other frameworks. The documentation also isn’t as detailed as the other tools we’ve looked at. This means the further you stray from the happy path, the more you’ll dig into the source. So, I can’t recommend it if a lot of customization is needed.

Setup

If you’re using Preact, there is absolutely zero setup needed, except for a quick npm install. To use React with wmr instead of Preact, there are currently two steps to take. First, alias htm/preact to htm/react, and react to es-react, in your package.json:

"alias": {   "htm/preact": "htm/react",   "react": "es-react" },

Then add imports from es-react into your components:

// ReactDOM is only needed on the root render
import { React, ReactDOM } from 'es-react';

This means we aren’t actually using the normal React package you might be used to, but instead pulling in React from es-react. This is because wmr relies on packages being compatible with native JavaScript modules. React does not use native modules by default, instead using an older style of module called UMD modules. es-react is a package that pulls in React but provides exports that are compatible with the web platform.

This illustrates wmr’s philosophy of using native primitives of the web platform as opposed to using tooling to sidestep and abstract it away.

Another option could be to use Skypack imports in our application, which is also pre-optimized to work in the browser:

import React from 'https://cdn.skypack.dev/react';
import ReactDOM from 'https://cdn.skypack.dev/react-dom';

wmr expects that you’re writing modern code which runs in the browser, which may mean that you’ll need to do some configuration if you pull in dependencies that use node APIs or legacy module systems. To get the Snap Shot app working I needed to dive into node modules and convert a library or two to use native JavaScript module syntax. This might slow you down if you are using older libraries. The Preact ecosystem is all optimized to run in the browser and shouldn’t require any massaging. This is another reason to stick with the Preact happy path in wmr.

There are plugins for wmr. It exposes a plugin API that supports Rollup plugins for the build step. There are more and more wmr-specific examples in the docs, including a plugin that minifies HTML, and one that features filesystem-based routing.

wmr supports different frameworks, but there aren’t any pre-built templates for them. And at first I found it rather difficult to configure the JSX transform. All that said, Jason has confirmed there are plans to make JSX more configurable and that wmr is intended to be framework-agnostic. JSX is planned to work out of the box in regular JavaScript files.

Usage

To get started you can either run this command in the command line:

npm init wmr your-project-name

Or alternatively, you can run these commands to manually build up your application:

npm init -y
npm install wmr
mkdir public
touch public/index.html
touch public/index.js

Then add a script import in the body of your index.html (again, make sure to use type="module"):

<script type="module" src="./index.js"></script>  

Now you can write a Preact hello world into your index.js file:

import { render } from 'preact';

render(<h1>Hello World!</h1>, document.body);

And finally start your development server:

node_modules/.bin/wmr

Now we have a full hot module replacement development server which will immediately respond to any changes to our source-code.

wmr uses a tool called htm when transforming JSX, which provides some awesome benefits. Let’s say we’re writing a counter using Preact in wmr and make a mistake:

import { render } from 'preact';
import { useState } from 'preact/hooks';

function App() {
  const [count, setCount] = useState(0)
  return <>
    {/* note the misspelled "cout" below — this line has the bug */}
    <button onClick={() => { setCount(cout + 5) }}>Click to add 5 to count</button>
    <p>count: {count}</p>
  </>
}

render(<App />, document.body);

count is misspelled as cout in the onClick handler function, so running this results in an error. Usually, we would have to rely on our tooling and a source map to gather information about where the bug is located, but wmr takes a different approach. With htm, we get as close to native JSX in the browser as possible by using tagged template literals. So, where writing React or Preact code usually looks like this:

<MyComponent>I am JSX. I am not actually valid JavaScript</MyComponent>

…htm looks more like this:

html`<${MyComponent}>I am about as close to JSX as you can get while still being able to run in the browser</${MyComponent}>`

Now, if we’re debugging our code and open the “Sources” panel in DevTools, we should see a script that’s almost identical to what the source code looks like in the editor.

[Image: the compiled code in the DevTools “Sources” panel, nearly identical to the source]

This way, we can properly investigate where bugs are located in the browser without having to use sourcemaps. Sure, this specific example is pretty contrived, but you can see how this could be really useful because it means wmr doesn’t need source maps in your developer environment.

wmr supports streaming imports by default, so bare imports will be pulled down from the npm registry. This is done through a complex process which examines all of the source in the npm package, removes all the tests and metadata, and converts it to a single native JavaScript import. Similar to Snowpack, it’s possible to build a complex app without using npm to install anything. In fact, wmr is the first tool to support this idea.

Supported files

As far as the other types of files wmr supports, CSS files can be imported in JavaScript, and CSS modules are supported as well.

There isn’t any built-in support for either Vue single file components or Svelte components. However, wmr’s build step works with Rollup plugins, and the development server can be configured with Polka/Express middleware, so it’s possible to use these to convert imports into Vue and Svelte components. In fact, I wrote a small plugin for Vue single file components to show how this might be done.

We can’t import images into JavaScript as data URLs in wmr without a plugin. Instead, we need to import them using a syntactically correct JavaScript method. So, if we have, say, a picture of a dog in our public folder we might include it in a Preact component like this:

function Dog() {   return <img src={new URL('./dog.jpg', import.meta.url)} alt="dog hanging out"></img> }

And once the build step runs, the image is copied and accessible from the distribution folder. There is hot module replacement for images in the development server, so changes with images are immediately reflected in the browser.

One more note on file support: JSON can be imported, and is converted into a JavaScript object for use. But when actually building an application, we’d need the Rollup JSON plugin.

Production build

wmr provides a production build step that includes bundling, minification and tree-shaking without any additional dependencies. Having a look at the source of wmr, it looks like Rollup and Terser are used under the hood, and minified versions of them are included in the wmr package. The wmr bundle for the Snap Shot app was 164 KB, just a tiny bit smaller than the total size of the two JavaScript files created by Vite.

There is also a way to configure wmr in such a way that it renders an application to static HTML and hydrates it in the browser, using preact-iso. This means wmr can be used as a meta-framework for Preact, similar to Next.js.

Overall

I love the experience of using wmr to prototype both React and Preact applications. It feels great to get started with a tool that is ridiculously small but provides developer conveniences that get close to matching heavyweight bundlers.

wmr
Templates for multiple front end frameworks ❌
Hot module replacement development server ✅
Streaming imports ✅
Preconfigured production build ✅
Automatic PostCSS and preprocessor conversion ❌
HTM transform ✅
Rollup plugin support ✅
Size on disk (default install) 2.57 MB

Feature comparison

We just covered a lot of ground! Rather than scrolling up and down this article to compare results, I’ve compiled everything here to see how the tools stack up when they’re side by side. I’ve even included additional comparisons for features that we didn’t explicitly cover.

Use cases

Tool | Use case
esbuild | Large codebases. Not ready for production yet.
Snowpack | Small applications that don’t need bundling, or applications where you want to choose which bundler you use. Also good for incrementally adopting JavaScript frameworks in server-rendered apps.
Vite | A replacement for Vue CLI/create-react-app for producing single page applications. This is the happy path for Vue.
wmr | Prototypes. Good for small- to medium-size apps and can be used for either single page or server-rendered apps. This is the happy path for Preact.

Setup

Feature | esbuild | Snowpack | Vite | wmr
Templates for multiple front end frameworks | ❌ | ✅ | ✅ | ❌
Size on disk (default install) | 7.34 MB | 16 MB | 17.1 MB | 2.57 MB
Zero-config production build | ❌ | ❌ | ✅ | ✅
HMR development server with zero config | ❌ | ❌ | ❌ | ✅
process.env handling for node packages | ❌ | ✅ | ✅ | ❌

Development server

Feature | esbuild | Snowpack | Vite | wmr
Hot module replacement | ❌ | ✅ | ✅ | ✅
CSS hot replacement | ❌ | ✅ | ✅ | ✅
npm dependency pre-bundling | ❌ | ✅ | ✅ | ✅
Browser error messaging | ❌ | ❌ | ✅ | ❌
HTM transform | ❌ | ❌ | ❌ | ✅

Production build

Feature | esbuild | Snowpack | Vite | wmr
Output bundle size of the Snap Shot app | 177 KB | 184 KB of multiple JavaScript files, plus 105 KB of Skypack CDN dependencies | 165 KB (one 5 KB file and one 160 KB file) | 164 KB
Go-based bundling | ✅ | ✅ (when esbuild is used in the build step) | ❌ | ❌
Preconfigured production build | ❌ | ❌ | ✅ | ✅
Asynchronous chunk loading | ❌ | ❌ | ✅ | ✅
Rollup plugin support | ❌ | ✅ | ✅ | ✅

Other features

Feature | esbuild | Snowpack | Vite | wmr
Streaming imports | ❌ | ✅ | ❌ | ✅
Server-side rendering | ❌ | ❌ | ✅ (experimental) | ✅
CSS Modules | ❌ | ✅ | ✅ | ✅
Automatic PostCSS and preprocessor conversion | ❌ | ❌ | ✅ | ❌

Wrapping up

I’m super excited about building JavaScript applications with any and all of the tools we just looked at. Whether we’re writing a small side project or a big production site, all these tools speed up feedback loops and increase productivity. They’ve opened up the gates to ask what’s necessary in the JavaScript ecosystem and whether we can start losing the cruft brought by legacy modules and browsers. These tools are going to lower the barrier-to-entry for new developers by providing a leaner, faster developer environment with fewer abstractions between the code that gets written and the code that runs in the browser.

If you’re getting sick of waiting for your dependencies to download and your build steps to run, I’d recommend giving this new generation of tooling a try.

Further reading

Other new JavaScript tooling to check out

  • Rome – A full toolchain including linting, compiling, bundling, test-running and formatting
  • SWC – A rust-based JavaScript/TypeScript compiler
  • Deno – A runtime for JavaScript and TypeScript (similar to Node.js)


CSS Is, In Fact, Awesome

You’ve seen the iconic image. Perhaps some of what makes that image so iconic is that people see what they want to see in it. If you see it as a critique of CSS being silly, weird, or confusing, you can see that in the image. If you see it as CSS being powerful and flexible, you’ve got that too. That’s what Jim Nielsen is saying here, reacting to a presentation by Hidde de Vries:

This is the power of CSS. It gives you options. Use them or don’t.

Want it to overflow visibly? It can. Want it to lop off overflowing content? It can. Want it to stretch? It can. Want it to ellipse? It can. Want it to wrap or not wrap? It can. Want to scale the type to fit? It can. If you love CSS, this is probably exactly why.

Mandy Michael has a great thread on this from a few years back:

Brandon Smith wrote about all this a few years back as well. I remain chuffed that Eric Meyer asked the original creator of the image, Steve Frank of Panic, about it and Steve once stopped by to explain the real origin:

It was 2009 and I’d spent what seemed like hours trying to do something in CSS that I already knew I could do in seconds with tables. I was trying really hard to do it with CSS because that’s what you’re supposed to do, but I just wasn’t very good at it (spoiler alert: I’m still not very good at it).

I do have a slightly better grasp on the concept of overflow now, but at the time it just blew my mind that someone thought the default behavior should be to just have the text honk right out of the box, instead of just making the box bigger like my nice, sensible tables had always done.

Anyway, I just had this moment of pure frustration and, instead of solving the problem properly, I spent 5 minutes creating a snarky mug and went back to using tables. Because that’s my signature move in times of crisis.

So, the original is indeed born out of frustration, but has nonetheless inspired many love letters to CSS. It has also certainly earned its place in CSS infamy, right alongside Peter Griffin struggling with window blinds, as one of the most iconic CSS images ever.


SvelteKit is in public beta

Rich Harris:

Think of it as Next for Svelte. It’s a framework for building apps with Svelte, complete with server-side rendering, routing, code-splitting for JS and CSS, adapters for different serverless platforms and so on.

Great move. I find Next.js a real pleasure to work with. I’ve hit some rough edges trying to get it to do what are probably non-standard things, but even then, I was able to get past them and have had a pretty great developer experience, while producing something that I’d like to think is going to be a pretty great user experience, too.

I always want server-side rendering. I want a blessed routing solution. I want pre-made smart solutions for common tasks and elegant solutions for hard problems. Packaging something like that up for Svelte in a core project seems very smart, just as it’s smart for Vue to have Nuxt.js. Maybe even smarter, they resisted naming it Svxt.js, which was surely the right call.


Coordinating Svelte Animations With XState

This post is an introduction to XState as it might be used in a Svelte project. XState is unique in the JavaScript ecosystem. It doesn’t keep your DOM synced with your application state, nor does it help you with asynchrony or streams of data; XState helps manage your application’s state by allowing you to model it as a finite state machine (FSM).

A deep dive into state machines and formal languages is beyond the scope of this post, but Jon Bellah does that in another CSS-Tricks article. For now, think of an FSM as a flow chart. Flow charts have a number of states, represented as bubbles, and arrows leading from one state to the next, signifying a transition. State machines can have more than one arrow leading out of a state, or none at all if it’s a final state, and they can even have arrows that leave a state and point right back into that same state.

If that all sounds overwhelming, relax, we’ll get into all the details, nice and slow. For now, the high-level view is that, when we model our application as a state machine, we’ll be creating the different “states” our application can be in (get it … state machine … states?), and the events that happen and cause changes to state will be the arrows between those states. XState calls the states “states,” the arrows’ triggers “events,” and the side effects they can run “actions.”
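To put syntax to those ideas before the real example, here’s about the smallest machine there is, a hypothetical toggle (using the @xstate/fsm package we’ll meet shortly):

import { createMachine } from '@xstate/fsm';

// Two states; the TOGGLE event is the arrow between them, in both directions.
const toggleMachine = createMachine({
  id: 'toggle',
  initial: 'inactive',
  states: {
    inactive: { on: { TOGGLE: 'active' } },
    active: { on: { TOGGLE: 'inactive' } }
  }
});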

Our example

XState has a learning curve, which makes it challenging to teach. With too contrived a use case it’ll appear needlessly complex. It’s only when an application’s code gets a bit tangled that XState shines. This makes writing about it tricky. With that said, the example we’ll look at is an autocomplete widget (sometimes called autosuggest), or an input box that, when clicked, reveals a list of items to choose from, which filter as you type in the input.

For this post we’ll look at getting the animation code cleaned up. Here’s the starting point:

This is actual code from my svelte-helpers library, though with unnecessary pieces removed for this post. You can click the input and filter the items, but you won’t be able to select anything, “arrow down” through the items, hover, etc. I’ve removed all the code that’s irrelevant to this post.

We’ll be looking at the animation of the list of items. When you click the input, and the results list first renders, we want to animate it down. As you type and filter, changes to the list’s dimensions will animate larger and smaller. And when the input loses focus, or you click ESC, we animate the list’s height to zero, while fading it out, and then remove it from the DOM (and not before). To make things more interesting (and nice for the user), let’s use a different spring configuration for the opening than what we use for the closing, so the list closes a bit more quickly, or stiffly, so unneeded UX doesn’t linger on the screen too long.

If you’re wondering why I’m not using Svelte transitions to manage the animations in and out of the DOM, it’s because I’m also animating the list’s dimensions when it’s open, as the user filters, and coordinating between transitions and regular spring animations is a lot harder than simply waiting for a spring update to finish getting to zero before removing an element from the DOM. For example, what happens if the user quickly types and filters the list as it’s animating in? As we’ll see, XState makes tricky state transitions like this easy.

Scoping the Problem

Let’s take a look at the code from the example so far. We’ve got an open variable to control when the list is open, and a resultsListVisible property to control whether it should be in the DOM. We also have a closing variable that controls whether the list is in the process of closing.

On line 28, there’s an inputEngaged method that runs when the input is clicked or focused. For now, let’s just note that it sets open and resultsListVisible to true. inputChanged is called when the user types in the input, and sets open to true. This is for when the input is focused, the user hits Escape to close it, but then starts typing, so it can re-open. And, of course, the inputBlurred function runs when you’d expect, setting closing to true and open to false.

Let’s pick apart this tangled mess and see how the animations work. Note the slideInSpring and opacitySpring at the top. The former slides the list up and down, and adjusts the size as the user types. The latter fades the list out when hidden. We’ll focus mostly on the slideInSpring.

Take a look at the monstrosity of a function called setSpringDimensions. This updates our slide spring. Focusing on the important pieces, we take a few boolean properties. If the list is opening, we set the opening spring config, we immediately set the list’s width (I want the list to only slide down, not down and out), via the { hard: true } config, and then set the height. If we’re closing, we animate to zero, and, when the animation is complete, we set resultsListVisible to false (if the closing animation is interrupted, Svelte will be smart enough to not resolve the promise so the callback will never run). Lastly, this method is also called any time the size of the results list changes, i.e., as the user filters. We set up a ResizeObserver elsewhere to manage this.

Spaghetti galore

Let’s take stock of this code.

  • We have our open variable which tracks if the list is open.
  • We have the resultsListVisible variable which tracks if the list should be in the DOM (and set to false after the close animation is complete).
  • We have the closing variable that tracks if the list is in the process of closing, which we check for in the input focus/click handler so we can reverse the closing animation if the user quickly re-engages the widget before it’s done closing.
  • We also have setSpringDimensions that we call in four different places. It sets our springs depending on whether the list is opening, closing, or just resizing while open (i.e. if the user filters the list).
  • Lastly, we have a resultsListRendered Svelte action that runs when the results list DOM element renders. It starts up our ResizeObserver, and when the DOM node unmounts, sets closing to false.

Did you catch the bug? When the ESC button is pressed, I’m only setting open to false. I forgot to set closing to true and call setSpringDimensions(false, true). This bug was not purposefully contrived for this blog post! That’s an actual mistake I made when I was overhauling this widget’s animations. I could just copy and paste the code in inputBlurred over to where the Escape key is caught, or even move it to a new function and call it from both places. This bug isn’t fundamentally hard to solve, but it does increase the cognitive load of the code.

There’s a lot of things we’re keeping track of, but worst of all, this state is scattered all throughout the module. Take any piece of state described above, and use CodeSandbox’s Find feature to view all the places where that piece of state is used. You’ll see your cursor bouncing across the file. Now imagine you’re new to this code, trying to make sense of it. Think about the growing mental model of all these state pieces that you’ll have to keep track of, figuring out how it works based on all the places it exists. We’ve all been there; it sucks. XState offers a better way; let’s see how.

Introducing XState

Let’s step back a bit. Wouldn’t it be simpler to model our widget in terms of what state it’s in, with events happening as the user interacts, which cause side effects, and transitions to new states? Of course, but that’s what we were already doing; the problem is, the code is scattered everywhere. XState gives us the ability to properly model our state in this way.

Setting expectations

Don’t expect XState to magically make all of our complexity vanish. We still need to coordinate our springs, adjust the spring’s config based on opening and closing states, handle resizes, etc. What XState gives us is the ability to centralize this state management code in a way that’s easy to reason about, and adjust. In fact, our overall line count will increase a bit, as a result of our state machine setup. Let’s take a look.

Your first state machine

Let’s jump right in, and see what a bare bones state machine looks like. I’m using XState’s FSM package, which is a minimal, pared down version of XState, with a tiny 1KB bundle size, perfect for libraries (like an autosuggest widget). It doesn’t have a lot of advanced features like the full XState package, but we wouldn’t need them for our use case, and we wouldn’t want them for an introductory post like this.

The code for our state machine is below, and the interactive demo is over at CodeSandbox. There’s a lot, but we’ll go over it shortly. And to be clear, it doesn’t work yet.

const stateMachine = createMachine(
  {
    initial: "initial",
    context: {
      open: false,
      node: null
    },
    states: {
      initial: {
        on: { OPEN: "open" }
      },
      open: {
        on: {
          RENDERED: { actions: "rendered" },
          RESIZE: { actions: "resize" },
          CLOSE: "closing"
        },
        entry: "opened"
      },
      closing: {
        on: {
          OPEN: { target: "open", actions: ["resize"] },
          CLOSED: "closed"
        },
        entry: "close"
      },
      closed: {
        on: {
          OPEN: "open"
        },
        entry: "closed"
      }
    }
  },
  {
    actions: {
      opened: assign(context => {
        return { ...context, open: true };
      }),
      rendered: assign((context, evt) => {
        const { node } = evt;
        return { ...context, node };
      }),
      close() {},
      resize(context) {},
      closed: assign(() => {
        return { open: false, node: null };
      })
    }
  }
);

Let’s go from top to bottom. The initial property controls what the initial state is, which I’ve called “initial.” context is the data associated with our state machine. I’m storing a boolean for whether the results list is currently open, as well as a node object for that same results list. Next we see our states. Each state is a key in the states property. For most states, you can see we have an on property, and an entry property.

on configures events. For each event, we can transition to a new state; we can run side effects, called actions; or both. For example, when the OPEN event happens inside of the initial state, we move into the open state. When the RENDERED event happens in the open state, we run the rendered action. And when the OPEN event happens inside the closing state, we transition into the open state, and also run the resize action. The entry field you see on most states configures an action to run automatically whenever a state is entered. There are also exit actions, although we don’t need them here.

We still have a few more things to cover. Let’s look at how our state machine’s data, or context, can change. When we want an action to modify context, we wrap it in assign and return the new context from our action; if we don’t need any processing, we can just pass the new state directly to assign. If our action does not update context, i.e., it’s just for side effects, then we don’t wrap our action function in assign, and just perform whatever side effects we need.
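To make those three flavors concrete, here's a minimal, self-contained sketch. It's not taken from the widget; the machine and action names are made up purely to show assign with a static object, assign with an updater function, and a plain side-effect action:

import { createMachine, assign, interpret } from "@xstate/fsm";

// A toy machine, purely to illustrate the three kinds of actions.
const lightMachine = createMachine(
  {
    initial: "off",
    context: { flips: 0, lastEvent: null },
    states: {
      off: {
        on: { FLIP: { target: "on", actions: ["log", "record"] } }
      },
      on: {
        on: { FLIP: { target: "off", actions: ["count"] } }
      }
    }
  },
  {
    actions: {
      // assign with a static object: no processing needed
      record: assign({ lastEvent: "FLIP" }),
      // assign with an updater function: derive new context from the old
      count: assign(context => ({ ...context, flips: context.flips + 1 })),
      // a plain function: side effects only; context is untouched
      log() {
        console.log("turning on");
      }
    }
  }
);

const service = interpret(lightMachine).start();
service.send("FLIP"); // logs "turning on" and records the event
service.send("FLIP"); // increments flips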

Affecting change in our state machine

We have a cool model for our state machine, but how do we run it? We use the interpret function.

import { interpret } from "@xstate/fsm";

const stateMachineService = interpret(stateMachine).start();

Now stateMachineService is our running state machine, on which we can invoke events to force our transitions and actions. To fire an event, we call send, passing the event name, and then, optionally, the event object. For example, in our Svelte action that runs when the results list first mounts in the DOM, we have this:

stateMachineService.send({ type: "RENDERED", node });

That’s how the rendered action gets the node for the results list. If you look around the rest of the AutoComplete.svelte file, you’ll see all the ad hoc state management code replaced with single line event dispatches. In the event handler for our input click/focus, we run the OPEN event. Our ResizeObserver fires the RESIZE event. And so on.
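For example, the resize and close dispatches look something like this. (A sketch; the handler names here are illustrative, not necessarily what's in the actual component.)

// The results list's ResizeObserver just forwards a RESIZE event:
const resizeObserver = new ResizeObserver(() => {
  stateMachineService.send("RESIZE");
});

// And dismissing the widget is a single dispatch (illustrative handler):
function inputBlurred() {
  stateMachineService.send("CLOSE");
}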

Let’s pause for a moment and appreciate what XState gives us for free here. Here’s the handler that ran when our input was clicked or focused, before we added XState:

function inputEngaged(evt) {
  if (closing) {
    setSpringDimensions();
  }
  open = true;
  resultsListVisible = true;
}

Before, we were checking to see if we were closing, and if so, forcing a re-calculation of our sliding spring. Otherwise we opened our widget. But what happened if we clicked on the input when it was already open? The same code re-ran. Fortunately that didn’t really matter. Svelte doesn’t care if we re-set open and resultsListVisible to the values they already held. But those concerns disappear with XState. The new version looks like this:

function inputEngaged(evt) {
  stateMachineService.send("OPEN");
}

If our state machine is already in the open state, and we fire the OPEN event, then nothing happens, since there’s no OPEN event configured for that state. And that special handling for when the input is clicked when the results are closing? That’s also handled right in the state machine config — notice how the OPEN event tacks on the resize action when it’s run from the closing state.

And, of course, we’ve fixed the ESC key bug from before. Now, pressing the key simply fires the CLOSE event, and that’s that.
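In code, that handler can be as small as this (a sketch; the actual keydown wiring in the component may differ):

function keydown(evt) {
  if (evt.key === "Escape") {
    stateMachineService.send("CLOSE");
  }
}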

Finishing up

The ending is almost anti-climactic. We need to take all of the work we were doing before, and simply move it to the right place among our actions. XState does not remove the need for us to write code; it only provides a structured, clear place to put it.

{
  actions: {
    opened: assign({ open: true }),
    rendered: assign((context, evt) => {
      const { node } = evt;
      const dimensions = getResultsListDimensions(node);
      itemsHeightObserver.observe(node);
      opacitySpring.set(1, { hard: true });
      Object.assign(slideInSpring, SLIDE_OPEN);
      slideInSpring.update(prev => ({ ...prev, width: dimensions.width }), {
        hard: true
      });
      slideInSpring.set(dimensions, { hard: false });
      return { ...context, node };
    }),
    close() {
      opacitySpring.set(0);
      Object.assign(slideInSpring, SLIDE_CLOSE);
      slideInSpring
        .update(prev => ({ ...prev, height: 0 }))
        .then(() => {
          stateMachineService.send("CLOSED");
        });
    },
    resize(context) {
      opacitySpring.set(1);
      slideInSpring.set(getResultsListDimensions(context.node));
    },
    closed: assign(() => {
      itemsHeightObserver.unobserve(resultsList);
      return { open: false, node: null };
    })
  }
}

Odds and ends

Our animation state is in our state machine, but how do we get it out? We need the open state to control our results list rendering, and, while not used in this demo, the real version of this autosuggest widget needs the results list DOM node for things like scrolling the currently highlighted item into view.

It turns out our stateMachineService has a subscribe method that fires whenever there’s a state change. The callback you pass is invoked with the current state machine state, which includes a context object. But Svelte has a special trick up its sleeve: its reactive `$:` syntax doesn’t only work with component variables and Svelte stores; it also works with any object that has a subscribe method. That means we can sync with our state machine with something as simple as this:

$: ({ open, node: resultsList } = $stateMachineService.context);

Just a regular destructuring, with some parens to help things get parsed correctly.
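Outside of Svelte, you can do the same thing with a manual subscription; here's a small sketch:

// Without Svelte's store syntax, subscribe manually; the listener
// receives the current state, including its context.
const subscription = stateMachineService.subscribe(state => {
  const { open, node: resultsList } = state.context;
  console.log("open?", open, resultsList);
});

// Later, during teardown:
subscription.unsubscribe();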

One quick note here, as an area for improvement: right now, we have some actions that both perform a side effect and update state. Ideally, we should probably split these into two actions, one just for the side effect, and the other using assign for the new state. But I decided to keep things as simple as possible for this article to help ease the introduction of XState, even if a few things wound up not being quite ideal.
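For the curious, here's a sketch of what that split could look like for the rendered action. The action names and side effects below are hypothetical stand-ins, not the widget's actual code:

import { createMachine, assign } from "@xstate/fsm";

const splitExample = createMachine(
  {
    initial: "open",
    context: { node: null },
    states: {
      open: {
        // Run the pure context update first, then the side effects.
        on: { RENDERED: { actions: ["assignNode", "renderedEffects"] } }
      }
    }
  },
  {
    actions: {
      // Pure: only updates context.
      assignNode: assign((context, evt) => ({ ...context, node: evt.node })),
      // Impure: only performs side effects (hypothetical stand-in).
      renderedEffects(context, evt) {
        console.log("observe and animate", evt.node);
      }
    }
  }
);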

Parting thoughts

I hope this post has sparked some interest in XState. I’ve found it to be an incredibly useful, easy-to-use tool for managing complex state. Please know that we’ve only scratched the surface. We focused on the minimal fsm package, but the full XState library is capable of a lot more than what we covered here, from nested states to first-class support for Promises, and it even has a state visualization tool! I urge you to check it out.

Happy coding!


The post Coordinating Svelte Animations With XState appeared first on CSS-Tricks.


Space Jam

It’s certainly worth noting that the Space Jam website, which made its way into umpteen conference talks as fabulous evidence of the web’s strength in backward compatibility, has been replaced. We could have seen that coming. Everything is a remake. The original was released in 1996, making the site, which they kept online, 25 years old.

Of course, you knew folks would pull out their measuring sticks. Here’s Max Böck:

Unsurprisingly, the new site is a lot heavier than the original: at 4,673KB vs. 120KB, the new site is about 39 times the size of the old one. That’s because the new site has a trailer video, high-res images and a lot more JavaScript.

That’s funny: the 25-year-old site is more than 25 times smaller.

They are both websites that exist to promote a movie, so I feel like it’s fair to call that an apples-to-apples comparison. But Max levels the playing field by adjusting for the time period, comparing the old site on a 56k modem from 1996 and the new site on a 3G mobile connection, which is about 30× faster. When you do that, the sites are nearly neck-and-neck, with the new one being 1.3 seconds faster.

You could say that whatever we’re given, we use, sort of like how building better protective gear for athletes only makes the athletes hit harder.


The post Space Jam appeared first on CSS-Tricks.


Some Articles About Accessibility I’ve Saved Recently

  • “Good news about display: contents and Chrome” — Rachel Andrew notes that the accessibility danger of using display: contents; is fixed in Chrome. The problem was that, say you had a parent div laid out as a grid, and inside it a <ul> with <li> elements that you wanted to participate on that same parent grid. We have subgrid, but it’s not really the same thing. What you want is to pretend the <ul> isn’t there at all, so the <li> elements can hang out on the grid like anything else (see the sketch after this list). The problem was that doing so wiped out the accessible semantics of the list. But no more!
  • “Grid, content re-ordering, and accessibility”  — Speaking of grids and accessibility, here’s Rachel again teaching us (through this slide deck) how it’s all-too-easy to really diverge the source order and display order of content with modern layout techniques. At the moment, the solution is essentially not to do that, but the future might hold a way for browsers to update tab order to be visually sensible when you do dramatically alter the layout.
  • “The most useful accessibility testing tools and techniques” — Artem Sapegin lists out some good ones, like eslint-plugin-jsx-a11y, storybook-addon-a11y, cypress-axe, the Contrast app, the Spectrum browser extension, and… using your tab key (lolz).
  • ButtonBuddy — Tool from Stephanie Eckles that helps generate CSS for buttons. But the real point of it is to give you colors as custom properties that satisfy color contrast guidelines.
  • “Are your Anchor Links Accessible?” — Amber Wilson goes through five iterations of an anchor link in/by a header before landing on a good one and, even then, there are questions to tackle.
  • “Don’t put pointer-events: none on form labels” — I’m a little shocked that anyone would do this at all, but it turns out it comes from Material Design’s “floating label” pattern. I think that pattern is so silly. It doesn’t actually save any space because you need the space where you float the label to anyway. Gosh.
  • “Accessible Text Labels For All” — Sara Soueidan tests real accessibility software and how it presents common interactive elements. For example, a “read more” link isn’t very useful (read more what?), and “add to cart” isn’t very useful alone (add what to cart?). You can add, for example, product names to those “add to cart” buttons, but don’t do it in the middle of the button as that can break things. Add the extra text at the end.
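As promised, here’s a quick sketch of the display: contents pattern from Rachel Andrew’s piece (the class name is made up):

.wrapper {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}
/* The <ul> generates no box of its own, so its <li> children are
   placed directly on the parent grid, and (in fixed browsers) the
   list semantics are preserved for assistive tech. */
.wrapper ul {
  display: contents;
}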

The post Some Articles About Accessibility I’ve Saved Recently appeared first on CSS-Tricks.


Jetpack Turns 10!

(This is a sponsored post.)

Ten years! That’s a huge milestone for a project, especially one that had a pretty simple goal in mind from the start: give self-hosted WordPress sites many of the same features and functionality enjoyed by hosted WordPress.com sites.

It’s a great story. The Automattic team responsible for driving social activity in WordPress saw Jetpack as a way to expand and unify social activity across all WordPress sites, and it wound up paving the way for a product that today helps more than 5 million sites with everything from security and performance to backups and integrations.

And what has Jetpack accomplished in those 10 years and 5 million sites? The numbers are staggering:

  • 122 billion blocked malicious login attempts.
    9,330,623 of those on CSS-Tricks, as we write.
  • 269 million site backups.
    Last backup of CSS-Tricks: 3 minutes ago, as we write.
  • 24 trillion images served by Jetpack CDN.
    Incredibly, a free feature of Jetpack.
  • 61.6 billion site searches.
    Try site search on this site; the latest release has really nice UI & UX improvements, like seeing the post thumbnail.
  • 50 billion related posts displayed. Related posts plugins are notoriously heavy on the server, but not when you let Jetpack do it!
  • 1.6 trillion tracked page views. That’s the kind of scale at work when WordPress powers some 40% of the web.
  • 2.6 billion shared social posts.
    The @css Twitter account runs itself thanks to Jetpack.

And you know CSS-Tricks is represented in those figures. Jetpack powers our search. We use it for real-time backups and downtime monitoring. It’s what helps us push content to Twitter and display related posts. We use its CDN. We even use a bunch of blocks it includes when we’re writing posts like this. We love Jetpack.

The Jetpack team launched a site celebrating 10 years. And, hey, for a super limited time, you can get 40% off the entire first year of any Jetpack plan.


The post Jetpack Turns 10! appeared first on CSS-Tricks.


Gaps? Gasp!

At first, there were flexboxes (the children of a display: flex container). If you wanted them to be visually separate, you had to use content justification (i.e. justify-content: space-between), margin trickery, or sometimes, both. Then along came grids (a display: grid container), and grids could have not-margin not-trickeried minimum gaps between grid cells, thanks to grid-gap. Flexboxes did not have gaps.

Now they can, thanks to the growing support of gap, the grid-gap successor that isn’t confined to grids. With gap, you can gap your grids, your flexboxes, and even your multiple columns. It’s gaptastic!

Gap with Grid

Let’s start where gap is the most robust: CSS Grid. Here’s a basic grid setup in HTML and CSS:

<section>
  <div>div</div>
  <div>div</div>
  <div>div</div>
  <div>div</div>
  <div>div</div>
  <div>div</div>
  <div>div</div>
</section>

section {
  display: grid;
  grid-template-rows: repeat(2, auto);
  grid-template-columns: repeat(4, auto);
  gap: 1em;
}
section div {
  width: 2em;
}

That places the grid cells at least 1em apart from each other. The separation distance can be greater than that, depending on other conditions beyond the scope of this post, but at a minimum they should be separated by 1em. (OK, let’s do one example: gap’s gaps are in addition to any margins on the grid cells, so if all the grid items have margin: 2px;, then the visual distance between grid cells would be at least 1em plus 4px.) By default, changes to the gap size cause the grid items to resize so that they fill their cells.

This all works because gap is actually shorthand for the properties row-gap and column-gap. The gap: 1em is interpreted as gap: 1em 1em, which is shorthand for row-gap: 1em; column-gap: 1em;. If you want different row and column gap distances, then something like gap: 0.5em 1em will do nicely.
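Spelled out, that shorthand and its longhand equivalent look like this:

/* shorthand: row gap first, then column gap */
section {
  gap: 0.5em 1em;
}

/* equivalent longhand */
section {
  row-gap: 0.5em;
  column-gap: 1em;
}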

Gap with Flexbox

Doing the same thing in a flexbox context gives you gaps, but not in quite the same way they happen in grids. Assume the same HTML as above, but this CSS instead:

section {
  display: flex;
  flex-wrap: wrap;
  gap: 1em;
}

The flexboxes are pushed apart by at least the value of gap here, and (thanks to flex-wrap) wrap to new flex lines when they run out of space inside their flex container. Changing the gap distance could lead to a change in the wrapping of the flex items, but unlike in Grid, changing gaps between flex items won’t change the sizes of the flex items. Gap changes can cause the flex wrapping to happen at different places, meaning the number of flex items per row will change, but the widths will stay the same (unless you’ve set them to grow or shrink via flex, that is).

Gap with Multi-Column

In the case of multicolumn content, there is a bit of a restriction on gap: only column gaps are used. You can declare row gaps for multicolumn if you want, but they’ll be ignored.

section {
  columns: 2;
  gap: 1em;
}

Support

Support for gap, row-gap, and column-gap is surprisingly widespread. Mozilla’s had them since version 61, Chromium since version 66, and thanks to work by Igalia’s Sergio Villar, they’re coming to Safari and Mobile Safari soon (they’re already in the technology preview builds). So if your grid, flex, or multicolumn content needs a bit more space to breathe, get ready to fall into the gap!


The post Gaps? Gasp! appeared first on CSS-Tricks.


Splitting Time Between Product and Engineering Efforts

At each company I’ve worked for, we have had a split between time spent on Product initiatives and Engineering work. The percentages always changed: sometimes 70% Product and 30% Engineering, sometimes as much as a 50/50 split. The idea is to make sure that Engineering spends a portion of its time building new features, while also ensuring time for “our own” work, such as addressing technical debt, upgrading systems, and documenting our code.

The trouble is, it’s one thing to say this at the outset, and another to make it a reality. There are some reasons that I’ve seen this model fail, not because people don’t understand in theory that it’s valuable, but because in practice, there are some common pitfalls that you have to think through. We’ll cover some of these scenarios from the perspective of an Engineering leader so that we can address a good path forward.

The issues

Some of the pointers below are reactions and planning based on what can go wrong, so let’s talk first about the challenges you can encounter if this isn’t set up right.

  • Product may have conflicts, either with the work itself or the time involved. This can strain the relationship between Product and Engineering. If they are caught by surprise, you can potentially find the boundaries of your work getting more restrictive.
  • Your engineers might not understand what’s expected of them. Parallelization of efforts can be hard to do, so building a good process can provide clarity.
  • The maintenance path should be clear: are you planning a giant system upgrade? It may affect other teams over time, and if you’re not clear about eventual ownership, it could come back to haunt you. 👻

As nice as it is to have some freedom in your engineering time, communication, planning, and clear expectations can help make sure that you avoid any of the issues outlined above.


Communication

Once you figure out what problem you would like to tackle, it’s critical to write up a small one-sheeter that you can share with stakeholders on the nature of the work, the amount of time it’s going to take, and why it’s important.

If it’s a large project, you can also scope those pieces down into GitHub/GitLab/Jira issues, and add a label for the type of work that it is. This is great because you can use whatever project management system you already use to elevate the amount of work and expectations weekly. It’s good to keep the dialogue open with your Product partners on scope and the nature of the work so they aren’t surprised by other work getting done. This will largely vary by the culture of the team and organization.

This can help provide clarity for your Engineers, too. If they understand the nature of the work and what’s expected of them, it’s easier for them to tackle the small issues that make up a whole.

You may find that it makes less sense from a focus perspective to have every engineer split time across product and engineering projects. They may instead prefer to split the work up between themselves: three people on product work for a few weeks, one person on engineering work. There are also times where everyone does need to be involved so that they have equal institutional knowledge (migrations can be like this, depending on what it is). Your mileage may vary based on the size of the team, the amount of product work, and the type of project. 

Communication helps here, too — if you’re not sure what the right path is, it can help to have a small brainstorm as a group on how you want to get this done. Just be sure you also align everyone with why the project is important as you do so.

Types of projects

There are many types of projects that you can create in your Engineering team time, and each has slightly different approaches from what I’ve seen, so let’s go over each one of them.

Tech debt

Let’s address technical debt first because that’s one of the most common pieces of work that can unlock your team. For every feature you write, if Engineering effort is slowed, you’re not only losing product development time; you’re also losing money in the form of engineering salaries spent on that lost time.

A bit of technical debt is natural, particularly at smaller companies where it makes more fiscal sense to move quickly, but there are some points where tech debt becomes crippling for development and releases, and makes a codebase unstable. Sometimes it needs to be done immediately to make sure all your engineers can work efficiently, and sometimes it’s gradual.

In a lot of cases, you learn which technical debt needs attention through a bottom-up approach: the devs working closest to the system will know the day-to-day technical debt better than Engineering Managers (EMs) typically will. The challenge as an EM is to notice the larger patterns, like when many folks complain of the same thing, rather than one dev who may have a strong opinion. Asking around before you start this type of project can help: poll people on how much time they think they’re wasting in a given week versus the prospect of an alternative.

Sometimes technical debt is a matter of a large refactor. I’ve seen this go best when people are up front about what kind of pull requests (PRs) are necessary. Do you need to update the CSS in a million spots? Or convert old class components to hooks? You probably don’t want one huge PR for all of it, but it doesn’t make sense to break the work up per-component either. Work together as a team on how much each PR will hold and what is expected of the review so you don’t create a “review hole” while the work is being done.


Innovative projects

A lot of companies will do hack week/innovation week projects where devs can work on some feature related to the company’s product untethered. It’s a great time for exploration, and I’ve seen some powerful features added to well-known applications this way. It’s also incredibly energizing for the team to see an idea of their own come to fruition.

The trouble with doing these kinds of projects in the split engineering time is that you can, at times, make the Product team feel a little slighted. Why? Well, think of things from their perspective. Their job is to put forth these features, plan carefully with stakeholders, put together roadmaps (often based on company metrics and research), and get on the Engineering schedule, usually working with a project manager. If you spend half your time working on unplanned features, you can potentially fork an existing plan for a project, go against some of the known research they have, or simply slow down the process to get a core make-it-or-break it feature they need.

The way I’ve seen this play out well is when the EM communicates up front with Product. Consider this a partnership: if Product says that a particular feature doesn’t make sense, they likely have a good reason for thinking so. If you can both hear each other out, there is likely a path forward where you both agree. 

It’s good to address their fears, too. Are they concerned that there won’t be enough time for product work? Ask your team directly how many weeks at half time they think it might take (with the expectations that things might shift once they dig in). Make it clear to everyone that you don’t expect it to be done at a break-neck pace.

Ultimately, communication is key. Ideally, these are small projects that won’t derail anything that can be done in parallel to the regular work. My suggestion is to try it with something very small first to see what bumps in the road there might be, and also build trust with Product that you’ll still get your work done and not “go rogue.”

The final piece of this is to figure out who is responsible for metrics, outcomes, and when things don’t go well. Part of the reason Product gets to decide direction is because they’re on the hook when it fails. Make sure you’re clear that as an Engineering leader, you’re taking responsibility for outcomes, both the good and the bad to maintain a good relationship.

Slow, ongoing work

This is probably the most clear-cut of any of the types of projects and will likely get the least amount of pushback from anyone. Examples of this type of work is internal documentation, tooling (if you don’t have a dedicated tools team), or small bits of maintenance.

The communication needed here is a little different from other projects, as it’s not necessarily going to be one constrained project that you ship, but rather an iterative process. Take documentation as an example: I would suggest building time for internal documentation into any feature process. 

For instance, let’s say you created a new feature that allows teams to collaborate. Not everyone across the company may know that you created a microservice for this feature that any team can use, and what parameters are expected, or how to add functionality down the road. Internal docs can be the difference between the service being used, as well as your team being asked to pair with someone every time someone needs to use it. Or worse: them trying to hack around and figure it out on their own, creating a mess of something that could have been worked on quicker and more efficiently.

Unlike the innovation projects, slow, ongoing work is typically not something folks really crave doing, so setting a process and expectations up straightaway works best. Internal documentation is a sometimes hidden but very important part of a well-functioning team. It helps with onboarding, getting everyone on the same page about system architecture, and can even help devs really solidify what they built and think through how they’re solving it.


Migrations

Migrations are handled a little differently than some other types of projects because they likely affect everyone. There is no one right way to do them, and the approach will largely depend on the type of migration: framework to framework, breaking down a monolith, or moving to a different build process or server may each call for something different. Since each one of these is likely an article of its own, let’s go through some high-level options that apply to organizing them.

  • My first suggestion is to do as much research as possible up front on whatever type of migration you’re doing. There’s no way to know everything, but you don’t want to get part-way through a process to find out something critical. This is also helpful information to share with stakeholders.
  • Are there internal debates about what direction your company should head in? Timebox a unit of time to work through the problem and make sure you have a clear decision-maker at the end. A lot of tech problems don’t have one “true” solution, so having one owner make the decision and everyone else disagree and commit can help. But you also want to give a moment for folks to have their voices heard about what gives them pause, even if they are in disagreement — they might be thinking of something you’re not.
  • Document a migration plan, both at a high level and then work through the impact on each team. This is also a great time to explain to Product why this work is important: is your codebase becoming old and can no longer play well with other libraries and tools? Did a new build process come out that could save your engineers time in a release process? Help them understand why the work is critical.
  • Be clear about maintenance and ownership. If one team migrates a build process that then causes issues for another, who’s fixing things to unblock that team? You should decide this before it happens.
  • Some migration paths allow you do things slowly over time, or team by team, or do a lot of the work up front. However, there is usually a time when it’s going to be critical and all hands on deck are needed. Unlike some of the other work that can be parallelized, you may have to work something out with Product where all other feature work is stalled for a little bit while you get the new system in place. If you work closely with them, you may find that there are times in the season where you naturally have more of a customer lull, and it could give you the breathing room you need to get this done. I’d suggest that if they’re willing to let you take Engineering time to 100% for a little while, you return the favor; and once the platform is stable, dedicate 100% of the team’s time to Product work.

Celebrate!

This final step might seem optional, but it’s a big deal in my opinion. Your team just pulled off something incredible: they parallelized efforts, they were good partners to Product, they got something done for the Engineering org at large. It’s crucial to celebrate the work like you would a launch.

The team needs to know you value this work because it’s often thankless, but very impactful. It also builds trust: knowing that this kind of work is recognized, and that it actually helps their career paths, means the team will step up the next time something hairy comes up. Celebrating what you accomplished costs very little and has great cultural impact.


The post Splitting Time Between Product and Engineering Efforts appeared first on CSS-Tricks.
