Suspense is React’s forthcoming feature that helps coordinate asynchronous actions—like data loading—allowing you to easily prevent inconsistent state in your UI. I’ll provide a better explanation of what exactly that means, along with a quick introduction of Suspense, and then go over a somewhat realistic use case, and cover some lessons learned.
The features I’m covering are still in the alpha stage, and should by no means be used in production. This post is for folks who want to take a sneak peek at what’s coming, and see what the future looks like.
A Suspense primer
One of the more challenging parts of application development is coordinating application state and how data loads. It’s common for a state change to trigger new data loads in multiple locations. Typically, each piece of data would have its own loading UI (like a “spinner”), roughly where that data lives in the application. The asynchronous nature of data loading means each of these requests can be returned in any order. As a result, not only will your app have a bunch of different spinners popping in and out, but worse, your application might display inconsistent data. If two out of three of your data loads have completed, you’ll have a loading spinner sitting on top of that third location, still displaying the old, now outdated data.
I know that was a lot. If you find any of that baffling, you might be interested in a prior post I wrote about Suspense. That goes into much more detail on what Suspense is and what it accomplishes. Just note that a few minor pieces of it are now outdated, namely, the useTransition hook no longer takes a timeoutMs value, and waits as long as needed instead.
Now let’s do a quick walkthrough of the details, then get into a specific use case, which has a few lurking gotchas.
How does Suspense work?
Fortunately, the React team was smart enough to not limit these efforts to just loading data. Suspense works via low-level primitives, which you can apply to just about anything. Let’s take a quick look at these primitives.
First up is the <Suspense> boundary, which takes a fallback prop:
<Suspense fallback={<Fallback />}>
Whenever any child under this component suspends, it renders the fallback. No matter how many children are suspending, for whatever reason, the fallback is what shows. This is one way React ensures a consistent UI: it won’t render anything until everything is ready.
But what about after things have rendered initially, and now the user changes state and loads new data? We certainly don’t want our existing UI to vanish and display our fallback; that would be a poor UX. Instead, we probably want to show a single loading spinner until all the data is ready, and then show the new UI.
The useTransition hook accomplishes this. This hook returns a function and a boolean value. We call the function and wrap our state changes. Now things get interesting. React attempts to apply our state change. If anything suspends, React sets that boolean to true, then waits for the suspension to end. When it does, it’ll try to apply the state change again. Maybe it’ll succeed this time, or maybe something else suspends instead. Whatever the case, the boolean flag stays true until everything is ready, and then, and only then, does the state change complete and get reflected in the UI.
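In code, that shape looks something like this (a minimal sketch matching the alpha API described here; setParam is the state setter we’ll see below):

const [startTransition, isPending] = useTransition();

startTransition(() => {
  // if this state change suspends, isPending stays true
  // until everything has resolved and the change is applied
  setParam((x) => x + 1);
});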
Lastly, how do we suspend? We suspend by throwing a promise. If data is requested, and we need to fetch, then we fetch—and throw a promise that’s tied to that fetch. The suspension mechanism being at a low level like this means we can use it with anything. The React.lazy utility for lazy loading components works with Suspense already, and I’ve previously written about using Suspense to wait until images are loaded before displaying a UI in order to prevent content from shifting.
Don’t worry, we’ll get into all this.
What we’re building
We’ll build something slightly different from the examples in many other posts like this. Remember, Suspense is still in alpha, so your favorite data loading utility probably doesn’t have Suspense support just yet. But that doesn’t mean we can’t fake a few things and get an idea of how Suspense works.
Let’s build an infinite loading list that displays some data, combined with some Suspense-based preloaded images. We’ll display our data, along with a button to load more. As data renders, we’ll preload the associated image, and Suspend until it’s ready.
This use case is based on actual work I’ve done on my side project (again, don’t use Suspense in production—but side projects are fair game). I was using my own GraphQL client, and this post is motivated by some of the difficulties I ran into. We’ll just fake the data loading in order to keep things simple and focus on Suspense itself, rather than any individual data loading utility.
Let’s build!
Here’s the sandbox for our initial attempt. We’re going to use it to walk through everything, so don’t feel pressured to understand all the code right now.
Our root App component renders a Suspense boundary like this:
<Suspense fallback={<Fallback />}>
Whenever anything suspends (unless the state change happened in a useTransition call), the fallback is what renders. To make things easier to follow, I made this Fallback component turn the entire UI pink, that way it’s tough to miss; our goal is to understand Suspense, not to build a quality UI.
We’re loading the current chunk of data inside of our DataList component:
const newData = useQuery(param);
Our useQuery hook is hardcoded to return fake data, including a timeout that simulates a network request. It handles caching the results and throws a promise if the data is not yet cached.
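A stripped-down sketch of that hook (not the sandbox’s exact code; fetchFakeData stands in for the simulated network request):

const cache = new Map();

function useQuery(param) {
  if (!cache.has(param)) {
    // kick off the request, and cache the pending promise
    const promise = fetchFakeData(param).then((data) => {
      cache.set(param, { data });
    });
    cache.set(param, { promise });
  }
  const entry = cache.get(param);
  if (entry.promise) {
    throw entry.promise; // suspend until the request settles
  }
  return entry.data;
}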
We’re keeping the master list of data we’re displaying in state (at least for now):
const [data, setData] = useState([]);
As new data comes in from our hook, we append it to our master list:
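// a reconstructed sketch of the sandbox's effect
useEffect(() => {
  setData((current) => current.concat(newData));
}, [newData]);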
Lastly, when the user wants more data, they click the button, which calls this:
function loadMore() {
  startTransition(() => {
    setParam((x) => x + 1);
  });
}
Finally, note that I’m using a SuspenseImg component to handle preloading the image I’m displaying with each piece of data. There are only five random images being displayed, but I’m adding a query string to ensure a fresh load for each new piece of data we encounter.
Recap
To summarize where we are at this point, we have a hook that loads the current data. The hook obeys Suspense mechanics, and throws a promise while loading is happening. Whenever that data changes, the running total list of items is updated and appended with the new items. This happens in useEffect. Each item renders an image, and we use a SuspenseImg component to preload the image, and suspend until it’s ready. If you’re curious how some of that code works, check out my prior post on preloading images with Suspense.
Let’s test
This would be a pretty boring blog post if everything worked, and don’t worry, it doesn’t. Notice how, on the initial load, the pink fallback screen shows and then quickly hides, but then is redisplayed.
When we click the button that loads more data, we see the inline loading indicator (controlled by the useTransition hook) flip to true. Then we see it flip to false before our original pink fallback shows. We were expecting never to see that pink screen again after the initial load; the inline loading indicator was supposed to show until everything was ready. What’s going on?
The problem
It’s been hiding right here in plain sight the entire time: the useEffect call where we append new data to our list.
useEffect runs when a state change is complete, i.e., a state change has finished suspending and has been applied to the DOM. That part, “has finished suspending,” is key here. We can set state in here if we’d like, but if that state change suspends, that’s a brand new suspension. That’s why we saw the pink flash on the initial load, as well as on subsequent loads when the data finished loading. In both cases, the data loading finished, and then we set state in an effect, which caused that new data to actually render and suspend again, because of the image preloads.
So, how do we fix this? On one level, the solution is simple: stop setting state in the effect. But that’s easier said than done. How do we update our running list of entries to append new results as they come in, without using an effect? You might think we could track things with a ref.
Unfortunately, Suspense comes with some new rules about refs, namely, we can’t set refs inside of a render. If you’re wondering why, remember that Suspense is all about React attempting to run a render, seeing that promise get thrown, and then discarding that render midway through. If we mutated a ref before that render was cancelled and discarded, the ref would still hold that changed, now-invalid value. The render function needs to be pure, without side effects. This has always been a rule with React, but it matters more now.
First, instead of storing our master list of data in state, let’s do something different: let’s store a list of pages we’re viewing. We can store the most recent page in a ref (we won’t write to it in render, though), and we’ll store an array of all currently-loaded pages in state.
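Sketched out, the revised approach looks like this (the names are mine):

const currentPage = useRef(1);
const [pages, setPages] = useState([1]);

function loadMore() {
  startTransition(() => {
    currentPage.current += 1; // written in the click handler, never during render
    setPages((all) => [...all, currentPage.current]);
  });
}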
The tricky part, however, is turning those page numbers into actual data. What we certainly cannot do is loop over those pages and call our useQuery hook; hooks cannot be called in a loop. What we need is a new, non-hook-based data API. Based on a very unofficial convention I’ve seen in past Suspense demos, I’ll name this method read(). It is not going to be a hook. It returns the requested data if it’s cached, or throws a promise otherwise. For our fake data loading hook, no real changes were necessary; I simply copied and pasted the hook, then renamed it. But for an actual data loading utility library, authors will likely need to do some work to expose both options as part of their public API. In my GraphQL client referenced earlier, there is indeed both a useSuspenseQuery hook, and also a read() method on the client object.
With this new read() method in place, the final piece of our code is trivial:
const data = pages.flatMap((page) => read(page));
We’re taking each page, and requesting the corresponding data with our read() method. If any of the pages are uncached (which really should only be the last page in the list) then a promise is thrown, and React suspends for us. When the promise resolves, React attempts the prior state change again, and this code runs again.
Don’t let the flatMap call confuse you. That does the exact same thing as map except it takes each result in the new array and, if it itself is an array, “flattens” it.
The result
With these changes in place, everything works as we expected it to when we started. Our pink loading screen shows once on the initial load; then, on subsequent loads, the inline loading state shows until everything is ready.
Parting thoughts
Suspense is an exciting update that’s coming to React. It’s still in the alpha stages, so don’t try to use it anywhere that matters. But if you’re the kind of developer who enjoys taking a sneak peek at upcoming things, then I hope this post provided you some good context and info that’s useful when this releases.
I fell in love with coding the moment I created my first CSS :hover effect. Years later, that initial bite into interactivity on the web led me to a new goal: making a game.
Those early moments playing with :hover were nothing special, or even useful. I remember making a responsive grid of blue squares (made with float, if that gives you an idea of the timeline), each of which turned orange when the cursor moved over them. I spent what felt like hours mousing over the boxes, resizing the window to watch them change size and alignment, then doing it all over again. It felt like pure magic.
What I built on the web naturally became more complex than that grid of <div> elements over the years, but the thrill of bringing something truly interactive to life always stuck with me. And as I learned more and more about JavaScript, I especially loved making games.
Sometimes it was just a CodePen demo; sometimes it was a small side project deployed on Vercel or Netlify. I loved the challenge of recreating games like color flood, hangman, or Connect Four in a browser.
After a while, though, the goal got bigger: what if I made an actual game? Not just a web app; a real live, honest-to-goodness, download-from-an-app-store game. Last August, I started working on my most ambitious project to date, and four months later, I released it to the world (read: got tired of fiddling with it): a word game app that I call Quina.
What’s the game (and what’s that name)?
The easiest way to explain Quina is: it’s Mastermind, but with five-letter words. In fact, Mastermind is actually a version of a classic pen-and-paper game; Quina is simply another variation on that same original game.
The object of Quina is to guess a secret five-letter word. After each guess, you get a clue that tells you how close your guess is to the code word. You use that clue to refine your next guess, and so on, but you only get ten total guesses; run out and you lose.
Example Quina gameplay
The name “Quina” came about because it means “five at a time” in Latin (or so Google told me, anyway). The traditional game is usually played with four-letter words, or sometimes four digits (or in the case of Mastermind, four colors); Quina uses five-letter words with no repeated letters, so it felt fitting that the game should have a name that plays by its own rules. (I have no idea how the original Latin word was pronounced, but I say it “QUINN-ah,” which is probably wrong, but hey, it’s my game, right?)
I spent my evenings and weekends over the course of about four months building the app. I’d like to spend this article talking about the tech behind the game, the decisions involved, and lessons learned in case this is a road you’re interested in traveling down yourself.
Choosing Nuxt
I’m a huge fan of Vue, and wanted to use this project as a way to expand my knowledge of its ecosystem. I considered using another framework (I’ve also built projects in Svelte and React), but I felt Nuxt hit the sweet spot of familiarity, ease of use, and maturity. (By the way, if you didn’t know or hadn’t guessed: Nuxt could be fairly described as the Vue equivalent of Next.js.)
I hadn’t gone too deep with Nuxt previously; just a couple of very small apps. But I knew Nuxt can compile to a static app, which is just what I wanted — no (Node) servers to worry about. I also knew Nuxt could handle routing as easily as dropping Vue components into a /pages folder, which was very appealing.
Plus, though Vuex (the official state management in Vue) isn’t terribly complex on its own, I appreciated the way that Nuxt adds just a little bit of sugar to make it even simpler. (Nuxt makes things easy in a variety of ways, by the way, such as not requiring you to explicitly import your components before you can use them; you can just put them in the markup and Nuxt will figure it out and auto-import as needed.)
Finally, I knew ahead of time I was building a Progressive Web App (PWA), so the fact that there’s already a Nuxt PWA module to help build out all the features involved (such as a service worker for offline capability) already packaged up and ready to go was a big draw. In fact, there’s an impressive array of Nuxt modules available for any unseen hurdles. That made Nuxt the easiest, most obvious choice, and one I never regretted.
I ended up using more of the modules as I went, including the stellar Nuxt Content module, which allows you to write page content in Markdown, or even a mixture of Markdown and Vue components. I used that feature for the “FAQs” page and the “How to Play” page as well (since writing in Markdown is so much nicer than hard-coding HTML pages).
Achieving native app feel with the web
Quina would eventually find a home on the Google Play Store, but regardless of how or where it was played, I wanted it to feel like a full-fledged app from the get-go.
To start, that meant an optional dark mode, and a setting to reduce motion for optimal usability, like many native apps have (and in the case of reduced motion, like anything with animations should have).
Under the hood, both of the settings are ultimately booleans in the app’s Vuex data store. When true, the setting renders a specific class in the app’s default layout. Nuxt layouts are Vue templates that “wrap” all of your content, and render on all (or many) pages of your app (commonly used for things like shared headers and footers, but also useful for global settings):
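<!-- layouts/default.vue — a sketch; the class and state names are placeholders -->
<template>
  <div :class="{ 'dark-mode': darkMode, 'reduce-motion': reduceMotion }">
    <Nuxt />
  </div>
</template>

<script>
import { mapState } from 'vuex';

export default {
  computed: mapState(['darkMode', 'reduceMotion']),
};
</script>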
Speaking of settings: though the web app is split into several different pages — menu, settings, about, play, etc. — the shared global Vuex data store helps to keep things in sync and feeling seamless between areas of the app (since the user will adjust their settings on one page, and see them apply to the game on another).
Every setting in the app is also synced to both localStorage and the Vuex store, which allows saving and loading values between sessions, on top of keeping track of settings as the user navigates between pages.
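A sketch of that sync (the mutation and key names are mine):

// store/index.js
export const mutations = {
  updateSetting(state, { key, value }) {
    state[key] = value;
    if (process.client) {
      localStorage.setItem(key, JSON.stringify(value));
    }
  },
};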
And speaking of navigation: moving between pages is another area where I felt there was a lot of opportunity to make Quina feel like a native app, by adding full-page transitions.
Nuxt page transitions in action
Vue transitions are fairly straightforward in general—you just write specifically-named CSS classes for your “to” and “from” transition states—but Nuxt goes a step further and allows you to set full page transitions with only a single line in a page’s Vue file:
<!-- A page component, e.g., pages/Options.vue -->
<script>
export default {
  transition: 'page-slide'
  // ... The rest of the component properties
}
</script>
That transition property is powerful; it lets Nuxt know we want the page-slide transition applied to this page whenever we navigate to or away from it. From there, all we need to do is define the classes that handle the animation, as you would with any Vue transition. Here’s my page-slide SCSS:
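// A sketch reconstructing the idea; the exact durations and distances aren't important
.page-slide {
  &-enter-active,
  &-leave-active {
    transition: opacity 0.3s, transform 0.3s;
  }

  &-enter,
  &-leave-to {
    opacity: 0;
    transform: translateY(1rem);
  }
}

.reduce-motion {
  .page-slide-enter-active,
  .page-slide-leave-active,
  .page-slide-enter,
  .page-slide-leave-to {
    transform: none !important;
  }
}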
Notice the .reduce-motion class; that’s what we talked about in the layout file just above. It prevents visual movement when the user has indicated they prefer reduced motion (either via media query or manual setting), by disabling any transform properties (which seemed to warrant usage of the divisive !important flag). The opacity is still allowed to fade in and out, however, since this isn’t really movement.
Comparing default motion (left) with reduced motion (right)
Side note on transitions and handling 404s: The transitions and routing are, of course, handled by JavaScript under the hood (Vue Router, to be exact), but I ran into a frustrating issue where scripts would stop running on idle pages (for example, if the user left the app or tab open in the background for a while). When coming back to those idle pages and clicking a link, Vue Router would have stopped running, and so the link would be treated as relative and 404.
Example: the /faq page goes idle; the user comes back to it and clicks the link to visit the /options page. The app would attempt to go to /faq/options, which of course doesn’t exist.
My solution to this was a custom error.vue page (this is a Nuxt page that automatically handles all errors), where I’d run validation on the incoming path and redirect to the end of the path.
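Roughly like this (a sketch of the approach, not the exact code):

<!-- layouts/error.vue -->
<script>
export default {
  created() {
    const segments = this.$route.path.split('/').filter(Boolean);
    const lastSegment = '/' + (segments.pop() || '');
    // redirect to just the end of the path; if that page
    // doesn't exist either, it still hits a 404
    if (lastSegment !== this.$route.path) {
      this.$router.replace(lastSegment);
    }
  },
};
</script>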
This worked for my use case because a) I don’t have any nested routes; and b) at the end of it, if the path isn’t valid, it still hits a 404.
Vibration and sound
Transitions are nice, but I also knew Quina wouldn’t feel like a native app — especially on a smartphone — without both vibration and sound.
Vibration is relatively easy to achieve in browsers these days, thanks to the Navigator API. Most modern browsers simply allow you to call window.navigator.vibrate() to give the user a little buzz or series of buzzes — or, using a very short duration, a tiny bit of tactile feedback, like when you tap a key on a smartphone keyboard.
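For example (durations are in milliseconds):

// a quick 30ms tap for tactile feedback
window.navigator.vibrate(30);

// or a pattern: vibrate, pause, vibrate
window.navigator.vibrate([100, 50, 100]);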
Obviously, you want to use vibration sparingly, for a few reasons. First, because too much can easily become a bad user experience; and second, because not all devices/browsers support it, so you need to be very careful about how and where you attempt to call the vibrate() function, lest you cause an error that shuts down the currently running script.
Personally, my solution was to set a Vuex getter to verify that the user is allowing vibration (it can be disabled from the settings page); that the current context is the client, not the server; and finally, that the function exists in the current browser. (ES2020 optional chaining would have worked here as well for that last part.)
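That getter looks something like this (a sketch; the state shape is mine):

// store/index.js
export const getters = {
  canVibrate(state) {
    return (
      state.settings.vibration &&
      process.client &&
      typeof window.navigator.vibrate === 'function'
    );
  },
};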
Side note: Checking for process.client is important in Nuxt — and many other frameworks with code that may run on Node — since window won’t always exist. This is true even if you’re using Nuxt in static mode, since the components are validated in Node during build time. process.client (and its opposite, process.server) are Nuxt niceties that just validate the code’s current environment at runtime, so they’re perfect for isolating browser-only code.
Sound is another key part of the app’s user experience. Rather than make my own effects (which would’ve undoubtedly added dozens more hours to the project), I mixed samples from a few artists who know better what they’re doing in that realm, and who offered some free game sounds online. (See the app’s FAQs for full info.)
Users can set the volume they prefer, or shut the sound off entirely. This, and the vibration, are also set in localStorage on the user’s browser as well as synced to the Vuex store. This approach allows us to set a “permanent” setting saved in the browser, but without the need to retrieve it from the browser every time it’s referenced. (Sounds, for example, check the current volume level each time one is played, and the latency of waiting on a localStorage call every time that happens could be enough to kill the experience.)
An aside on sound
It turns out that for whatever reason, Safari is extremely laggy when it comes to sound. All the clicks, boops and dings would take a noticeable amount of time after the event that triggered them to actually play in Safari, especially on iOS. That was a deal-breaker, and a rabbit hole I spent a good amount of hours despairingly tunneling down.
Fortunately, I found a library called Howler.js that solves cross-platform sound issues quite easily (and that also has a fun little logo). Simply installing Howler as a dependency and running all of the app’s sounds through it — basically one or two lines of code — was enough to solve the issue.
If you’re building a JavaScript app with synchronous sound, I’d highly recommend using Howler, as I have no idea what Safari’s issue was or how Howler solves it. Nothing I tried worked, so I’m happy just having the issue resolved easily with very little overhead or code modification.
Gameplay, history, and awards
Quina can be a difficult game, especially at first, so there are a couple of ways to adjust the difficulty of the game to suit your personal preference:
You can choose what kind of words you want to get as code words: Basic (common English words), Tricky (words that are either more obscure or harder to spell), or Random (a weighted mix of the two).
You can choose whether to receive a hint at the start of each game, and if so, how much that hint reveals.
Quina offers several different ways to play, to accommodate players of all skill levels
These settings allow players of various skill, age, and/or English proficiency to play the game on their own level. (A Basic word set with strong hints would be the easiest; Tricky or Random with no hints would be the hardest.)
“Soft hints” reveal one letter in the code word (but not its position)
While simply playing a series of one-off games with adjustable difficulty might be enjoyable enough, that would feel more like a standard web app or demo than a real, full-fledged game. So, in keeping with the pursuit of that native app feel, Quina tracks your game history, shows your play statistics in a number of different ways, and offers several “awards” for various achievements.
Quina tracks the results of all games played, your longest win streaks, and many other stats
Under the hood, each game is saved as an object that looks something like this:
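// a sketch; the real app's property names may differ
{
  won: true,
  guessesUsed: 4,
  wordType: 'tricky', // 'basic', 'tricky', or 'random'
  hint: 'none',       // which hint setting was active
}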
The app catalogues your games played (again, via Vuex state synced to localStorage) in the form of a gameHistory array of game objects, which the app then uses to display your stats — such as your win/loss ratio, how many games you’ve played, and your average guesses — as well as to show your progress towards the game’s “awards.”
This is all done easily enough with various Vuex getters, each of which utilizes JavaScript array methods, like .filter() and .reduce(), on the gameHistory array. For example, this is the getter that shows how many games the user has won while playing on the “tricky” setting:
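// in the store's getters — a sketch, assuming the game-object shape from above
trickyGamesWon(state) {
  return state.gameHistory.filter(
    (game) => game.won && game.wordType === 'tricky'
  ).length;
},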
There are many other getters of varying complexity. (The one to determine the user’s longest win streak was particularly gnarly.)
Adding awards was a matter of creating an array of award objects, each tied to a specific Vuex getter, and each with a requirement.threshold property indicating when that award was unlocked (i.e., when the value returned by the getter was high enough). Here’s a sample:
// assets/js/awards.js
export default [
  {
    title: 'Onset',
    requirement: {
      getter: 'totalGamesPlayed',
      threshold: 1,
      text: 'Play your first game of Quina',
    },
  },
  {
    title: 'Sharp',
    requirement: {
      getter: 'trickyGamesWon',
      threshold: 10,
      text: 'Win ten total games on Tricky',
    },
  },
]
From there, it’s a pretty straightforward matter of looping over the achievements in a Vue template file to get the final output, using its requirement.text property (though there’s a good deal of math and animation added to fill the gauges to show the user’s progress towards achieving the award):
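<!-- a sketch of the loop, minus the gauges and animation -->
<div v-for="award in awards" :key="award.title" class="award">
  <h3>{{ award.title }}</h3>
  <p>{{ award.requirement.text }}</p>
</div>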
There are 25 awards in all (that’s 5 × 5, in keeping with the theme) for various achievements like winning a certain number of games, trying out all the game modes, or even winning a game within your first three guesses. (That one is called “Lucky” — as an added little Easter egg, the name of each award is also a potential code word, i.e., five letters with no repeats.)
Unlocking awards doesn’t do anything except give you bragging rights, but some of them are pretty difficult to achieve. (It took me a few weeks after releasing to get them all!)
Pros and cons of this approach
There’s a lot to love about the “build once, deploy everywhere” strategy, but it also comes with some drawbacks:
Pros
You only need to deploy your store app once. After that, all updates can just be website deploys. (This is much quicker than waiting for an app store release.)
Build once. This is sorta true, but turned out to be not quite as straightforward as I thought due to Google’s payments policy (more on that later).
Everything is a browser. Your app is always running in the environment you’re used to, whether the user realizes it or not.
Cons
Event handlers can get really tricky. Since your code is running on all platforms simultaneously, you have to anticipate any and all types of user input at once. Some elements in the app can be tapped, clicked, long-pressed, and also respond differently to various keyboard keys; it can be tricky to handle all of those at once without any of the handlers stepping on each other’s toes.
You may have to split experiences. This will depend on what your app is doing, but there were some things I needed to show only for users on the Android app and others that were only for web. (I go into a little more detail on how I solved this in another section below.)
Everything is a browser. You’re not worried about what version of Android your users are on, but you are worried about what their default browser is (because the app will use their default browser behind the scenes). Typically on Android this will mean Chrome, but you do have to account for every possibility.
Logistics: turning a web app into a native app
There’s a lot of technology out there that makes the “build for the web, release everywhere” promise — React Native, Cordova, Ionic, Meteor, and NativeScript, just to name a few.
Generally, these boil down to two categories:
You write your code the way a framework wants you to (not exactly the way you normally would), and the framework transforms it into a legitimate native app;
You write your code the usual way, and the tech just wraps a native “shell” around your web tech and essentially disguises it as a native app.
The first approach may seem like the more desirable of the two (since at the end of it all you theoretically end up with a “real” native app), but I also found it comes with the biggest hurdles. Every platform or product requires you to learn its way of doing things, and that way is bound to be a whole ecosystem and framework unto itself. The promise of “just write what you know” is a pretty strong overstatement in my experience. I’d guess in a year or two a lot of those problems will be solved, but right now, you still feel a sizable gap between writing web code and shipping a native app.
On the other hand, the second approach is viable because of a thing called “TWA,” which is what makes it possible to make a website into an app in the first place.
What is a TWA app?
TWA stands for Trusted Web Activity — and since that answer is not likely to be helpful at all, let’s break that down a bit more, shall we?
A TWA app basically turns a website (or web app, if you want to split hairs) into a native app, with the help of a little UI trickery.
You could think of a TWA app as a browser in disguise. It’s an Android app without any internals, except for a web browser. The TWA app is pointed to a specific web URL, and whenever the app is booted, rather than doing normal native app stuff, it just loads that website instead — full-screen, with no browser controls, effectively making the website look and behave as though it were a full-fledged native app.
TWA requirements
It’s easy to see the appeal of wrapping up a website in a native app. However, not just any old site or URL qualifies; in order to launch your web site/app as a TWA native app, you’ll need to check the following boxes:
Your site/app must be a PWA. Google offers a validation check as part of Lighthouse, or you can check with Bubblewrap (more on that in a bit).
You must generate the app bundle/APK yourself; it’s not quite as easy as just submitting the URL of your progressive web app and having all the work done for you. (Don’t worry; we’ll cover a way to do this even if you know nothing about native app development.)
You must have a matching secure key, both in the Android app and uploaded to your web app at a specific URL.
That last point is where the “trusted” part comes in; a TWA app will check its own key, then verify that the key on your web app matches it, to ensure it’s loading the right site (presumably, to prevent malicious hijacking of app URLs). If the key doesn’t match or isn’t found, the app will still work, but the TWA functionality will be gone; it will just load the web site in a plain browser, chrome and all. So the key is extremely important to the experience of the app. (You could say it’s a key part. Sorry not sorry.)
Advantages and drawbacks of building a TWA app
The main advantage of a TWA app is that it doesn’t require you to change your code at all — no framework or platform to learn; you’re just building a website/web app like normal, and once you’ve got that done, you’ve basically got the app code done, too.
The main drawback, however, is that (despite helping to usher in the modern age of the web and JavaScript), Apple is not in favor of TWA apps; you can’t list them in the Apple App store. Only Google Play.
This may sound like a deal-breaker, but bear a few things in mind:
Remember, to list your app in the first place, it needs to be a PWA — which means it’s installable by default. Users on any platform can still add it to their device’s home screen from the browser. It doesn’t need to be in the Apple App Store to be installed on Apple devices (though it certainly misses out on the discoverability). So you could still build a marketing landing page into your app and prompt users to install it from there.
There’s also nothing to prevent you from developing a native iOS app using a completely different strategy. Even if you wanted both iOS and Android apps, as long as a web app is also part of the plan, having a TWA effectively cuts out half of that work.
Finally, while iOS has about a 50% market share in predominantly English-speaking countries and Japan, Android has well over 90% of the rest of the world. So, depending on your audience, missing out on the App Store may not be as impactful as you might think.
How to generate the Android App APK
At this point you might be saying, this TWA business sounds all well and good, but how do I actually take my site/app and shove it into an Android app?
The answer comes in the form of a lovely little CLI tool called Bubblewrap.
You can think of Bubblewrap as a tool that takes some input and options from you, and generates an Android app (specifically, an APK, one of the file formats allowed by the Google Play Store) out of the input.
Installing Bubblewrap is a little tricky, and while using it is not quite plug-and-play, it’s definitely far more within reach for an average front-end dev than any other comparable options that I found. The README file on Bubblewrap’s NPM page goes into the details, but as a brief overview:
Install Bubblewrap by running npm i -g @bubblewrap/cli (I’m assuming here you’re familiar with NPM and installing packages from it via the command line). That will allow you to use Bubblewrap anywhere.
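Once installed, point Bubblewrap at your web app’s manifest to start a new project (the URL here is a placeholder):

bubblewrap init --manifest https://your-app.com/manifest.json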
Note: the manifest.json file is required of all PWAs, and Bubblewrap needs the URL to that file, not just your app. Also be warned: depending on how your manifest file is generated, its name may be unique to each build. (Nuxt’s PWA module appends a unique UUID to the file name, for example.)
Also note that by default, Bubblewrap will validate that your web app is a valid PWA as part of this process. For some reason, when I was going through this process, the check kept coming back negative, despite Lighthouse confirming that it was in fact a fully functional progressive web app. Fortunately, Bubblewrap allows you to skip this check with the --skipPwaValidation flag.
If this is your first time using Bubblewrap, it will then ask if you want it to install the Java Development Kit (JDK) and Android Software Development Kit (SDK) for you. These two are the behind-the-scenes utilities required to generate an Android app. If you’re not sure, hit “Y” for yes.
Note: Bubblewrap expects these two development kits to exist in very specific locations, and won’t work properly if they’re not there. You can run bubblewrap doctor to verify, or see the full Bubblewrap CLI README.
After everything’s installed — assuming it finds your manifest.json file at the provided URL — Bubblewrap will ask some questions about your app.
Many of the questions are either preference (like your app’s main color) or just confirming basic details (like the domain and entry point for the app), and most will be pre-filled from your site’s manifest file.
Other questions that may already be pre-filled by your manifest include where to find your app’s various icons (to use as the home screen icon, status bar icon, etc.), what color the splash screen should be while the app is opening, and the app’s screen orientation, in case you want to force portrait or landscape. Bubblewrap will also ask if you want to request permission for your user’s geolocation, and whether you’re opting into Play Billing.
However, there are a few important questions that may be a little confusing, so let’s cover those here:
Application ID: This appears to be a Java convention, but each app needs a unique ID string that’s generally 2–3 dot-separated sections (e.g., collinsworth.quina.app). It doesn’t actually matter what this is; it’s not functional, it’s just convention. The only important thing is that you remember it, and that it’s unique. But do note that this will become part of your app’s unique Google Play Store URL. (For this reason, you cannot upload a new bundle with a previously used App ID, so make sure you’re happy with your ID.)
Starting version: This doesn’t matter at the moment, but the Play Store will require you to increment the version as you upload new bundles, and you cannot upload the same version twice. So I’d recommend starting at 0 or 1.
Display mode: There are actually a few ways that TWA apps can display your site. Here, you most likely want to choose either standalone (full-screen, but with the native status bar at the top), or fullscreen (no status bar). I personally chose the default standalone option, as I didn’t see any reason to hide the user’s status bar in-app, but you might choose differently depending on what your app does.
The signing key
The final piece of the puzzle is the signing key. This is the most important part. This key is what connects your progressive web app to this Android app. If the key the app is expecting doesn’t match what’s found in your PWA, again: your app will still work, but it will not look like a native app when the user opens it; it’ll just be a normal browser window.
There are two approaches here that are a little too complex to go into in detail, but I’ll try to give some pointers:
Generate your own keystore. You can have Bubblewrap do this, or use a CLI tool called keytool (appropriately enough), but either way: be very careful. You need to explicitly track the exact name and passwords for your keystores, and since you’re creating both on the command line, you need to be extremely careful of special characters that could mess up the whole process. (Special characters may be interpreted differently on the command line, even when input as part of a password prompt.)
Allow Google to handle your keys. This honestly isn’t dramatically simpler in my experience, but it saves some of the trouble of wrangling your own signing keys by allowing you to go into the Google Play Developer console, and download a pre-generated key for your app.
Whichever option you choose, there’s in-depth documentation on app signing here (written for Android apps, but most of it is still relevant).
The part where you get the key onto your personal site is covered in this guide to verifying Android app links. To crudely summarize: Google will look for a /.well-known/assetlinks.json file at that exact path on your site. The file needs to contain your unique key hash as well as a few other details:
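[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "collinsworth.quina.app",
    "sha256_cert_fingerprints": ["<your key's SHA-256 fingerprint>"]
  }
}]

(Swap in your own package name and the SHA-256 fingerprint of your signing key.)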
Before you get started, there are also some hurdles to be aware of on the app store side of things:
First and foremost, you need to sign up before you can publish to the Google Play Store. Eligibility costs a one-time $25 USD fee.
Once approved, know that listing an app is neither quick nor easy. It’s more tedious than difficult or technical, but Google reviews every single app and update on the store, and requires you to fill out a lot of forms and info about both yourself and your app before you can even start the review process — which itself can take many days, even if your app isn’t even public yet. (Friendly heads-up: there’s been a “we’re experiencing longer than usual review times” warning banner in the Play console dashboard for at least six months now.)
Among the more tedious parts: you must upload several images of your app in action before your review can even begin. These will eventually become the images shown in the store listing — and bear in mind that changing them will also kick off a new review, so come to the table prepared if you want to minimize turnaround time.
You also need to provide links to your app’s terms of service and privacy policy (which is the only reason my app even has them, since they’re all but pointless).
There are lots of things you can’t undo. For example, you can never change a free app to paid, even if it hasn’t publicly launched yet and/or has zero downloads. You also have to be strict on versioning and naming with what you upload, because Google doesn’t let you overwrite or delete your apps or uploaded bundles, and doesn’t always let you revert other settings in the dashboard, either. If you have a “just jump in and work out the kinks later” approach (like me), you may find yourself starting over from scratch at least once or twice.
With a few exceptions, Google has extremely restrictive policies about collecting payments in an app. When I was building, it was charging a 30% fee on all transactions (they’ve since conditionally lowered that to 15% — better, but still five times more than most other payment providers would charge). Google also forces developers (with a few exceptions) to use its own native payment platform; no opting for Square, Stripe, PayPal, etc. in-app.
Fun fact: this policy had been announced but wasn’t in effect yet while I was trying to release Quina, and it still got flagged by the reviewer for being in violation. So they definitely take this policy very seriously.
Monetization, unlockables, and getting around Google
While my goal with Quina was mostly personal — challenge myself, prove I could, and learn more about the Vue ecosystem in a complex real-world app — I had also hoped as a secondary goal that my work might be able to make a little money on the side for me and my family.
Not a lot. I never had illusions of building the next Candy Crush (nor the ethical void required to engineer an addiction-fueled micro-transaction machine). But since I had poured hundreds of hours of my time and energy into the game, I had hoped that maybe I could make something in return, even if it was just a little beer money.
Initially, I didn’t love the idea of trying to sell the app or lock its content, so I decided to add a simple “would you care to support Quina if you like it?” prompt after every so many games, and make some of the content unlockable specifically for supporters. (Word sets are limited in size by default, and some game settings are initially locked as well.) The prompt to support Quina can be permanently dismissed (I’m not a monster), and any donation unlocks everything; no tiered access or benefits.
This was all fairly straightforward to implement thanks to Stripe, even without a server; it’s all completely client-side. I just import a bit of JavaScript on the /support page, using Nuxt’s handy head function (which adds items to the <head> element specifically on the given page):
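// pages/support.vue — a sketch of the idea
export default {
  head() {
    return {
      script: [{ src: 'https://js.stripe.com/v3/' }],
    };
  },
  // ...the rest of the page component
};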
With that bit in place (along with a sprinkle of templating and logic), users can choose their donation amount — set up as products on the Stripe side — and be redirected to Stripe to complete payment, then returned when finished. For each tier, the return redirect URL is slightly different via query parameters. Vue Router parses the URL to adjust the user’s stored donation history, and unlock features accordingly.
You might wonder why I’m revealing all of this, since it exposes the system as fairly easy to reverse-engineer. The answer is: I don’t care. In fact, I added a free tier myself, so you don’t even have to go to the trouble. I decided that if somebody really wanted the unlockables but couldn’t or wouldn’t pay for whatever reason, that’s fine. Maybe they live in a situation where $3 is a lot of money. Maybe they gave on one device already. Maybe they’ll do something else nice instead. But honestly, even if their intentions aren’t good: so what?
I appreciate support, but this isn’t my living, and I’m not trying to build a dopamine tollbooth. Besides, I’m not personally comfortable with the ethical implications of using a stack of totally open-source and/or free software (not to mention the accompanying mountain of documentation, blog posts, and Stack Overflow answers written about all of it) to build a closed garden for personal profit.
So, if you like Quina and can support it: sincerely, thank you. That means a ton to me. I love to see my work being enjoyed. But if not: that’s cool. If you want the “free” option, it’s there for you.
Anyway, this whole plan hit a snag when I learned about Google Play’s new monetization policy, effective this year. You can read it yourself, but to summarize: if you make money through a Google Play app and you’re not a nonprofit, you gotta go through Google Pay and pay a hefty fee — you are not allowed to use any other payment provider.
This meant I couldn’t even list the app; it would be blocked just for having a “support” page with payments that don’t go through Google. (I suppose I probably could have gotten around this by registering a nonprofit, but that seemed like the wrong way to go about it, on a number of levels.)
My eventual solution was to charge for the app itself on Google Play, by listing it for $2.99 (rather than my previously planned price of “free”), and simply altering the app experience for Android users accordingly.
Customizing the app experience for Google Play
Fortunately enough, Android apps send a custom header with the app’s unique ID when requesting a website. Using this header, it was easy enough to differentiate the app’s experience on the web and in the actual Android app.
For each request, the app checks for the Android ID; if present, the app sets a Vuex state boolean called isAndroid to true. This state cascades throughout the app, working to trigger various conditionals to do things like hide and show various FAQ questions, and (most importantly) to hide the support page in the nav menu. It also unlocks all content by default (since the user’s already “donated” on Android, by purchasing). I even went so far as to make simple <WebOnly> and <AndroidOnly> Vue wrapper components to wrap content only meant for one of the two. (Obviously, users on Android who can’t visit the support page shouldn’t see FAQs on the topic, as an example.)
For a time while building Quina, I had Firebase set up for logins and storing user data. I really liked the idea of allowing users to play on all their devices and track their stats everywhere, rather than have a separate history on each device/browser.
In the end, however, I scrapped that idea, for a few reasons. One was complexity; it’s not easy maintaining a secure accounts system and database, even with a nice system like Firebase, and that kind of overhead isn’t something I took lightly. But mainly: the decision boiled down to security and simplicity.
At the end of the day, I didn’t want to be responsible for users’ data. Their privacy and security is guaranteed by using localStorage, at the small cost of portability. I hope players don’t mind the possibility of losing their stats from time to time if it means they have no login or data to worry about. (And hey, it also gives them a chance to earn those awards all over again.)
Plus, it just feels nice. I get to honestly say there’s no way my app can possibly compromise your security or data because it knows literally nothing about you. And also, I don’t need to worry about compliance or cookie warnings or anything like that, either.
Wrapping up
Building Quina was my most ambitious project to date, and I had as much fun designing and engineering it as I have seeing players enjoy it.
I hope this journey has been helpful for you! While getting a web app listed in the Google Play Store has a lot of steps and potential pitfalls, it’s definitely within reach for a front-end developer. I hope you take this story as inspiration, and if you do, I’m excited to see what you build with your newfound knowledge.
I spent a good chunk of my work life this year trying (in collaboration with the amazing Noam Rosenthal) to standardize a new web platform feature: a way to modify the intrinsic size and resolution of images. And hey! We did it! But boy, was it ever a learning experience.
This wasn’t my first standardization rodeo, so many of the issues we ran into, I more-or-less anticipated. Strong negative feedback from browsers. Weird, unforeseen gotchas with the underlying primitives. A complete re-think or two. What I didn’t anticipate though, was that our proposal — which, again, was “only” about modifying the default display size of images — would run afoul of the fundamental privacy and security principles of the web. Because before this year, I didn’t really understand those principles.
Let me set the table a bit. What were we trying to do?
By default, images on the web show up exactly as big as they are. Embedding an 800×600 image? Unless you stretch or shrink that image with CSS or markup, that’s exactly how large it’s going to be: 800 CSS pixels across, and 600 CSS pixels tall. That’s the image’s intrinsic (aka “natural”) size. Another way to put this is that, by default, all images on the web have an intrinsic density of 1×.
That’s all well and good, until you’re trying to serve up high-, low-, or ✨variable✨-density images, without access to CSS or HTML. This is a situation that image hosts like my employer, Cloudinary, find themselves in quite often.
So, we set out to give ourselves and the rest of the web a tool to modify the intrinsic size and resolution of images. After a couple of re-thinks, the solution that we landed on was this:
Browsers should read and apply metadata contained within image resources themselves, allowing them to declare their own intended display size and resolution.
Following in the recent footsteps of image-orientation — by default, browsers would respect and apply this metadata. But you could override it or turn it off with a little CSS (image-resolution), or markup (srcset’s x descriptors).
We felt pretty good about this. It was flexible, it built on an existing pattern, and it seemed to address all of the issues that had been raised against our previous proposals. Alas, one of the editors of the HTML spec, Anne van Kesteren, said: no. This wasn’t going to work. And image-orientation needed an urgent re-think, too. Because this pattern, where you can turn the effects of EXIF metadata on and off with CSS and HTML, would violate the “Same-Origin Policy.”
Uh… what?
Aren’t we just scaling and rotating images??
Confession time! Before all of this, I’d more or less equated the Same-Origin Policy with CORS errors, and all of the frustration that they’ve caused me over the years. Now, though, the Same-Origin Policy wasn’t just standing between me and handling a fetch, it was holding up a major work initiative. And I had to explain the situation to bosses who knew even less about security and privacy on the web than I did. Time to learn!
Here’s what I learned:
The Same-Origin Policy isn’t a single, simple, rule. And it certainly isn’t == CORS errors.
What it is, is a philosophy which has evolved over time, and has been inconsistently implemented across the web platform.
In general, what it says is: the fundamental security and privacy boundary of the web is origins. Do you share an origin with something else on the web? You can interact with it however you like. If not, though, you might have to jump through some hoops.
Why “might”? Well, a lot of cross-origin interactions are allowed, by default! Generally, when you’re making a website, you can write across origins (by sending POST requests off to whoever you please, via forms). And you can even embed cross-origin resources (iframes, images, fonts, etc) that your site’s visitors will see, right there on your website. But what you can’t do, is look at those cross-origin resources, yourself. You shouldn’t be able to read anything about a cross-origin resource, in your JavaScript, without specially-granted permission (via our old friend, CORS).
Here’s the thing that blew my mind the most, once I finally understood it: cross-origin reads are forbidden by default because, as end-users, we all see different world-wide webs, and a website shouldn’t be able to see the rest of the web through its visitors’ eyes. Individuals’ varied local browsing contexts – including, but not limited to, cookies — mean that when I go to, say, gmail.com, I’m going to see something different than you, when you enter that same URL into your address bar and hit “return.” If other websites could fire off requests to Gmail from my browser, with my cookies, and read the results, well – that would be very, very bad!
So by default: you can do lots of things with cross-origin resources. But preventing cross-origin reads is kind of the whole ballgame. Those defaults are more-or-less what people are talking about when they talk about the “Same-Origin Policy.”
How does this all relate to the intrinsic size and resolution of images?
Let’s say there’s an image URL – https://coolbank.com/hero.jpg, that happens to return a different resource depending on whether or not a user is currently logged in at coolbank.com. And let’s say that the version that shows up when you’re logged in, has some EXIF resolution info, but the version that shows up when you’re not, doesn’t. Lastly, let’s pretend that I’m an evil phisher-man, trying to figure out which bank you belong to, so I can spoof its homepage and trick you into typing your bank login info into my evil form.
So! I embed https://coolbank.com/hero.jpg on an evil page. I check its intrinsic size. I turn EXIF-sizing off, with image-resolution: none, and then check its size again. Now, even though CORS restrictions are preventing me from looking at any of the image’s pixel data, I know whether or not it contains any EXIF resolution information — I’ve been able to read a little tiny piece of that image, across origins. And now, I know whether or not you’re logged into, and have an account at, coolbank.com.
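In code, the attack would have looked something like this (a sketch under our rejected proposal; none of it works today, thankfully):

// <img src="https://coolbank.com/hero.jpg"> is embedded on the evil page
const img = document.querySelector('img');
const before = img.naturalWidth;     // intrinsic size, with EXIF sizing applied
img.style.imageResolution = 'none';  // hypothetically turn EXIF sizing off
const after = img.naturalWidth;
// before !== after → the image has EXIF resolution info → the victim is logged in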
Far-fetched? Perhaps! But the web is an unimaginably large place. And, as Jen Simmons once put it,
This one of the best & most important things about THE WEB. Go to a website, it’s safe from malware. Download an app, you are at risk. You can’t download random apps from random places. You can go to random websites, and expect to be safe. We must fight to keep the web like this. https://t.co/xKQ5vVNCaU
Browsing the web is basically going around running other people’s untrusted and potentially malicious code, willy-nilly, all day long. The principles that underlie web security and privacy — including the Same-Origin Policy — enable this safety, and must be defended absolutely. The hole we were unintentionally trying to open in the Same-Origin Policy seemed so small, at first. A few literal bits of seemingly-harmless information. But a cross-origin read, however small, is a cross-origin read, and cross-origin reads are not allowed.
How did we fix our spec? We made EXIF resolution and orientation information un-readable across origins by making it un-turn-off-able: in cross-origin contexts, EXIF modifications are always applied. An 800×600 image whose EXIF says it should be treated as 400×300 will behave exactly like a 400×300 image would, no matter what. A simple-enough solution — once we understood the problem.
As a bonus, once I really understood the Same-Origin Policy and the whys behind the web’s default security policies, a bunch of other web security pieces started to fall into place for me.
Cross-site request forgery attacks take advantage of the fact that cross-origin writes are allowed, by default. If an API endpoint isn’t careful about how it responds to POST requests, bad things can happen. Likewise, Content Security Policy allows granular control over what sorts of embeds are allowed, because again, by default, they all are, and it turns out, that opens the door to cross-site scripting attacks. And the new alphabet soup of web security features — COOP, COEP, CORP, and CORB — are all about shutting down cross-origin interactions completely, fixing some of the inconsistent ways that the Same-Origin Policy has been implemented over the years and closing down any/all possible cross-origin interaction, to achieve a rarefied state known as “cross-origin isolation”. In a world where Spectre and friends mean that cross-origin loading can be exploited to perform cross-origin reading, full cross-origin isolation is needed to guarantee safety when doing various, new, powerful things.
In short:
Security and privacy on the web are actually pretty amazing, when you think about it.
They’re a product of the platform’s default policies, which are all about restricting interactions across origins.
By default, the one thing no one should ever be able to do is read data across origins (without special permission).
The reason reads are forbidden is that we all see different webs, and attackers shouldn’t be able to see the web through potential victims’ eyes.
No ifs, ands, or buts! Any hole in the Same-Origin Policy, however small, is surface area for abuse.
In 2020, I tried to open a tiny hole in the Same-Origin Policy (oops), and then got to learn all of the above.
Here’s to a safer and more secure 2021, in every possible sense.
Caution: Terrible sense of humor ahead. We’ll talk about practical stuff, but the examples pretty much all involve zombies and silly jokes. You have been warned.
I'll be linking to individual Pens as I discuss the lessons I learned, but if you'd like to get a sense of the entire project, check out 60 days of Animation on Undead Institute. I started the project to end on August 1, 2020, coinciding with the publication of a book I wrote featuring CSS animation, humor, and zombies — because, obviously, zombies will destroy the world if you don't brandish your web skills and stop the apocalypse. Nothing puts the hurt on the horde like an HTML element on the move!
I had a few rules for myself throughout the project.
I would hand-code all CSS. (I’m a masochist.)
The user would initiate all of the animation. (I hate coming upon an animation that’s already halfway through.)
I would use JavaScript as little as possible and never for animation. (I only ended up using JavaScript once, and that was to start audio with the final animation. I have nothing against JavaScript, it’s just not what I wanted to do here.)
Lesson 1: Eighty days is a long time.
Uh, doesn't the title say "sixty" days? Yes, but my original goal was eighty days, and as day one approached with fewer than twenty animations prepared and a three-day average production time for each, I freaked out and switched to sixty days. That gave me both twenty more days until the start date and twenty fewer pieces to make.
Lesson 1A: Sixty days is still a long time.
That's a lot of animation to do with limited time, limited ideas, and even more limited artistic skills. And while I thought of dropping to thirty days, I'm glad I didn't. Sixty days stretched me and forced me to go deeper into how CSS animation — and by extension, CSS itself — works. I'm also proudest of many of the later pieces I did, as my skills increased and I had to be more innovative and think harder about how to make things interesting. Once you've used up all the easy options, the actual work and best results begin. (And yes, it ended up being sixty-two days, because I started on June 1 and wanted to do a final animation on August 1. Starting June 3 just felt icky and wrong.)
So, the real Lesson 1: stretch yourself.
Lesson 2: Interactive animations are hard, and even harder to make responsive.
If you want something to fly across the screen and connect with another element, or appear to start another element's move, you must use either all fixed units (like pixels) or all flexible units (like viewport units).
Three variables determine when and where an animated element will be at any point in an animation: duration, velocity, and distance. The duration is set in the animation property and cannot change with screen size. The animation timing function determines the velocity; screen size can't change that either. Thus, if the distance varies with screen size, the timing will be off everywhere except at one specific screen width and height.
Look at Tank!. Run the animation at wide and narrow screen sizes. While I got the timing close, if you compare the two, you’ll see that the tank is in a different place relative to the zombies when the last zombies fall.
To avoid these timing issues, you can use fixed units and a large number, like 2000 or 5000 pixels or more, so that the animation will cover the width (or height) of the screen for all but the largest monitors.
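Distilled into a minimal sketch (selector, duration, and distances are all made up for illustration):

.tank {
  animation: roll-across 8s linear forwards;
}
@keyframes roll-across {
  from { transform: translateX(-300px); }   /* start just off-screen left */
  to   { transform: translateX(2500px); }   /* end well past most right edges */
}

Because both distances are fixed, the tank covers the same ground in the same time at every screen size; wider screens simply see more of the journey.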
Lesson 3: If you want a responsive animation, put everything in (one of the) viewport units.
Going halfsies on unit proportions (e.g. setting width and height in pixels, but location and movement with viewport units) will lead to unpredictable results. Don't use both vw and vh either; pick one or the other, whichever matches the dominant orientation. Mixing vh and vw units will make your animation go "wonky," which I believe is the technical term.
Take Superbly Zomborrific, for instance. It mixes pixel, vw, and vh units. The premise is that the Super Zombie is flying upward as the “camera” follows. Super Zombie smashes into a ledge and falls as the camera continues, but you wouldn’t understand that if your screen was sufficiently tall.
That also means that if you need something to come in from the top — like I did in Nobody Here But Us Humans — you must set the vw-based height high enough to ensure that the ninja zombie isn't visible at most aspect ratios.
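Here is the rule boiled down to a sketch (illustrative values, everything in vw):

.zombie {
  width: 10vw;                             /* size scales with the screen */
  animation: shamble 6s linear forwards;
}
@keyframes shamble {
  from { transform: translateX(-10vw); }   /* just off-screen left at any width */
  to   { transform: translateX(110vw); }   /* just off-screen right at any width */
}

Since the size and the distance share one unit, the element hits the same relative landmarks at the same moments, whatever the screen width.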
Lesson 3A: Use pixel units for movements within an SVG element.
All that said, transforming elements within an SVG element should not use viewport units. SVG tags are their own proportional universe: an SVG "pixel" (really a user unit) stays proportional to all the other children of the SVG element, while viewport units do not. So transform with pixel units within an SVG element, but use viewport units everywhere else.
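In other words (selectors invented):

svg .arm { transform: translateX(20px); }   /* 20 SVG user units: scales with the drawing */
.zombie  { transform: translateX(20vw); }   /* 20% of the viewport: ignores the drawing's scale */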
Lesson 4: SVGs scale horribly at runtime.
For animations, like Oops…, I made the SVG image of the zombie scale up to five times his size, but that makes the edges fuzzy. [Shakes fist at “scalable” vector graphics.]
I learned to set an SVG's dimensions to the final size it would have at the end of the animation, then use a scale transform to shrink it down to its starting size.
In short, the revised code starts from a scaled-down version of the image and moves up to the full width and height. The browser rasterizes the SVG at its full, scale-of-1 size, so the edges are crisp and clean when the animation ends there. So instead of scaling from 1 to 5, I scaled from 0.2 to 1.
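In sketch form (sizes illustrative):

.zombie-svg {
  width: 500px;                       /* the final, full size */
  animation: grow 3s ease-out forwards;
}
@keyframes grow {
  from { transform: scale(0.2); }     /* displays at 100px to start */
  to   { transform: scale(1); }       /* crisp, untransformed edges at the end */
}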
Lesson 5: The axis isn't a universal truth.
An element’s axes stay in sync with the element, not the page. A 90-degree rotation before a translateX will change the direction of the translateX from horizontal to vertical. In Nobody Here But Us Humans… 2, I flipped the zombies using a 180-degree rotation. But positive Y values move the ninjas towards the top, and negative ones move them towards the bottom (the opposite of normal). Beware of how a rotation may affect transforms further down the line.
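A tiny illustration (selector invented):

/* The translateX runs along the element's own, now-rotated X axis */
.ninja {
  transform: rotate(90deg) translateX(100px);   /* moves down the screen, not right */
}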
Lesson 6: Separate complex animations into concentric elements to make adjustments easier.
When creating a complex animation that moves in multiple directions, adding wrapper divs, or rather parent elements, and animating each one individually will cut down on conflicting transforms and prevent you from becoming a weepy mess.
For instance, in Space Cadet, I had three different transforms going on: the zomb-o-naut bobbing up and down, a movement across the screen, and a rotation. Rather than trying to do everything in a single transform, I added two wrapping elements and did one animation on each element (I also saved my hair… at least some of it). This helped avoid the axis issues discussed in the last lesson, because I performed the rotation on the innermost element, leaving its parent's and grandparent's axes in place.
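The structure, boiled down (names invented, not the actual Pen's code):

<div class="travel">    <!-- animation 1: translateX across the screen -->
  <div class="bob">     <!-- animation 2: translateY up and down -->
    <div class="spin">zomb-o-naut here</div>  <!-- animation 3: rotate; only this element's axes turn -->
  </div>
</div>

.travel { animation: travel 10s linear forwards; }
.bob    { animation: bob 2s ease-in-out infinite alternate; }
.spin   { animation: spin 4s linear infinite; }
@keyframes travel { to { transform: translateX(110vw); } }
@keyframes bob    { to { transform: translateY(-5vw); } }
@keyframes spin   { to { transform: rotate(360deg); } }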
Lesson 7: SVG and CSS transforms are the same.
Some paths and groups and other SVG elements will already have transforms defined on them. It could be from an optimization algorithm, or perhaps it’s just how the illustration software generates the code. If a path, group, or whatever element in an SVG already has an SVG transform on it, removing that transform will reset the element, often to a bizarre location or size compared to the rest of the drawing.
Since SVG and CSS transforms are the same, any CSS transform you apply replaces the SVG transform, meaning your CSS transform will start from that bizarre location or size rather than from the location or size set in the SVG.
You can copy the transform from the SVG element to your CSS and set it as the starting position in CSS (updating it to the CSS syntax first, of course). You can then modify it in your CSS animation.
For instance, in Uhhh, Yeah…, my tribute to Office Space, Undead Lumbergh’s right upper arm (the #arm2 element) had a transform on it in the original SVG code.
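For illustration, suppose the SVG source had transform="translate(142 85) rotate(12)" on that path (values invented). Converted to CSS syntax, it becomes the animation's starting point:

#arm2 {
  transform: translate(142px, 85px) rotate(12deg);   /* mirrors the SVG transform */
  animation: wave 1s ease-in-out infinite alternate;
}
@keyframes wave {
  /* keep the translate so the arm stays attached; vary only the rotation */
  to { transform: translate(142px, 85px) rotate(-15deg); }
}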
This process is harder when the tool generating the SVG code attempts to "simplify" the transform into a matrix. While you can recreate the matrix transform by copying it into the CSS, manipulating it from there is another story. You're a better developer than me — which might be true anyway — if you can take a matrix transform and adjust it to scale, rotate, or translate in the exact way you want.
Alternatively, you can recreate the matrix transform using translation, rotation, and scaling, but if the path is complex, the likelihood that you can recreate it in a timely manner without ending up in a straitjacket is low.
The last and probably easiest option is to wrap the element in a group (<g>) tag. Add a class or ID to it for easy CSS access and transform the group itself, thus separating out the transforms as discussed in the last lesson.
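Something like this (illustrative markup and styles):

<g class="club-arm">
  <!-- leave the generated matrix alone; animate the group instead -->
  <path transform="matrix(0.97 0.26 -0.26 0.97 14 3)" d="M0 0 L40 10 L38 22 Z"/>
</g>

.club-arm {
  transform-origin: 14px 3px;                       /* wherever the joint sits */
  animation: club 1s ease-in-out infinite alternate;
}
@keyframes club {
  to { transform: rotate(25deg); }
}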
Lesson 8: Keep your sanity by using transform-origin when transforming part of an SVG
The CSS transform-origin property moves the point around which the transform happens. If you're trying to rotate an arm — like I did in Clubbin' It — your animation will look more natural if you rotate the arm from the center of the shoulder, but that path's natural transform origin is in the upper left. Use transform-origin to fix this for a smoother, more natural feel… you know, that really natural pixel-art look…
Changing the origin can also be useful when scaling, like I did in Mustachioed Oops, or when rotating mouth movements, such as the dinosaur's jaw in Super Tasty. If you don't change the origin, the transforms will use an origin point at the upper-left corner of the SVG element.
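A minimal sketch (the origin coordinates are invented; in practice, you find them by eyeballing the joint):

#arm {
  transform-origin: 30px 45px;    /* the shoulder joint, not the default upper left */
  animation: swing 0.8s ease-in-out infinite alternate;
}
@keyframes swing {
  from { transform: rotate(-20deg); }
  to   { transform: rotate(35deg); }
}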
Lesson 9: Sprite animations can be responsive
I ended up doing a lot of sprite animations for this project (i.e., where you use multiple incremental frames and switch between them fast enough that the characters seem to move). I created the frames in one wide file, added it as a background image to an element the size of a single frame, used background-size to set the background image to the width of the full strip, and hid the overflow. Then I used background-position and the steps() animation timing function to walk through the images; for example: Post-Apocalyptic Celebrations.
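The basic pattern looks like this (a hypothetical ten-frame strip, each frame 100px wide):

.sprite {
  width: 100px;                              /* one frame */
  height: 150px;
  background: url(zombie-frames.svg) 0 0 no-repeat;
  background-size: 1000px 150px;             /* the whole ten-frame strip */
}
.sprite:hover {
  animation: walk 1s steps(10) infinite;     /* one jump per frame */
}
@keyframes walk {
  to { background-position: -1000px 0; }
}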
Before the project, I always used inflexible images. I'd scale things down a little so that there would be at least a little responsive give, but I didn't think you could make a fully flexible width. However, if you use an SVG as the background image, you can then use viewport units to scale the element along with the changing screen size. The only problem is the background position, but if you use viewport units for that too, it stays in sync. Check that out in Finally, Alone with my Sandwich…
Lesson 9A: Use viewport units to set the background size of an image when creating responsive sprite animation
As I’ve learned throughout this project, using a single type of unit is almost always the way to go. Initially, I’d set my sprite’s background size using percentages. The math was easy (100% * (number of steps + 1)) and it worked fine in most cases. In longer animations, however, the exact frame tracking could be off and parts of the wrong sprite frame might display. The problem grows as more frames are added to the sprite.
I’m not sure the exact reason this causes an issue, but I believe it’s because of rounding errors that compound over the length of the sprite sheet (the amount of the shift increases with the number of frames).
For my final animation, It Ain't Over Till the Zombie Sings, I had a dinosaur open his mouth to reveal a zombie Viking singing (while lasers fired in the background, plus dancing, accordion playing, and zombies being fired from cannons, of course). Yeah, I know how to throw a party… a nerd party.
The dinosaur and Viking sequence was one of the longest sprite animations in the project. But when I used percentages to set the background size, the tracking would be off at certain sizes in Safari. By the end of the animation, part of the dinosaur's nose from a different frame would appear on the right, and a similar part of the nose would be missing on the left.
The dinosaur on the left is missing part of his left cheek and growing a new one next to his right cheek.
This was super frustrating to diagnose because it seemed to work fine in Chrome, and I'd think I'd fixed it in Safari only to look at a slightly different screen size and see the frame off again. However, if I used consistent units — i.e., vw for background-size, frame width, and background-position — everything worked fine at every size. Again, it comes down to consistent units!
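The all-vw version of that hypothetical ten-frame sprite looks like this:

.sprite {
  width: 20vw;
  height: 20vw;
  background: url(zombie-frames.svg) 0 0 no-repeat;
  background-size: 200vw 20vw;              /* ten frames, each 20vw wide */
}
.sprite:hover {
  animation: chomp 1s steps(10) infinite;
}
@keyframes chomp {
  to { background-position: -200vw 0; }
}

Frame width, background size, and background position all move in lockstep because they share a unit.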
Lesson 10: Invite people into the project.
While I learned tons of things during this process, I beat my head against the wall for most of it (often until the wall broke or my head did… I can't tell). While that's one way to do it, even if you're hard-headed, you'll still end up with a headache. Invite others into your project, whether to offer advice, point out an obvious blind spot you missed, provide feedback, help with the project, or simply encourage you to keep going when the scope is stupidly and arbitrarily large.
So let me put this lesson into practice. What are your thoughts? How will you stop the zombie hordes with CSS animation? What stupidly and arbitrarily large project will you take on to stretch yourself?
I was browsing the Svelte docs on my iPhone and came across a glaring UI bug. The notch in the REPL knob was totally out of whack. I'm always looking to contribute to open source, and I thought this would be a quick and easy fix. Turns out, there was a lot more to it than just changing one line of CSS.
Replicating the issue, setting up the local environment, and debugging it turned out to be interesting, difficult, and meaningful.
The issue
I opened my browser DevTools, thinking I’d see the same issue in the phone view. But, the bug wasn’t there. Now this is a seriously tricky CSS problem.
💡 What I learned
If you’re using Chrome on iOS as your browser, you’re still using Safari’s renderer. From Wikipedia:
Chrome uses the iOS WebKit – which is Apple’s own mobile rendering engine and components, developed for their Safari browser – therefore it is restricted from using Google’s own V8 JavaScript engine.
This is backed up by caniuse, whose note on iOS Safari points out that other iOS browsers use the same engine. Now it's clear why the issue wasn't showing up on my machine but was showing up on my phone. Different rendering engines!
Reproduce the issue locally
I pulled down the project and ran it locally. I confirmed it was still an issue by running the local code in a simulator as well as on my actual iPhone. Safari on macOS has an easy way to open up DevTools instances of each one.
This provides access to a console just like you would in the browser but for iOS Safari. I’m not going to lie, Apple’s developer experience is top notch (see what I did there? 😬).
I’m able to reproduce the issue locally now.
💡 What I learned
After pulling down the Svelte repo and looking around the code a bit, I noticed the UI and SVGs were being pulled in via a package called @sveltejs/site-kit. Great, now I need my local version of site-kit to get pulled into svelte/site so I can see changes and debug the issue.
I needed to point the node_modules in Svelte's package.json to my local copy of site-kit. This sounded like a job for a symlink. After looking through the docs without much luck, I Googled around and stumbled upon npm link. That let me see what I was doing!
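For the record, the dance goes roughly like this:

# in the local clone of site-kit: register it globally
npm link

# in the Svelte site project: swap in the local copy
npm link @sveltejs/site-kit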
I can now make local changes to site-kit and see them reflected in the Svelte project.
Solving the issue
Seriously, all this needed was a one-line change:
border: transparent;
But locating where that one line should go was not as straightforward as you'd think. Source maps on the project are still a little rough around the edges and were showing this CSS coming from the Nav.svelte component when it was really coming from another file. That would be another great way to contribute to the project!
Then you search around, learn that the issue is already being handled, and pick up a little more about how it all works. Everything now looks great on mobile and desktop.
That’s all it needed!
Let’s rewind
What started as a quick, one-line change became a bit of a journey. I had to:
Run the project and component repositories
Learn about symlinking and npm link
Contribute documentation about linking to site-kit
Learn about different browser renderers
Learn how to emulate an iOS Safari browser
Learn how to get access to its debugger
Find the issue when source maps weren’t working correctly
Fix the issue (finally!)
Working on your own, you normally don’t get to deal with issues like this, or have a large complex system you need to build a mental model of and learn. You don’t get to learn from maintainers. Most importantly, you don’t see all of the hard work that goes into building a popular technical product.
When I submitted this idea to CSS-Tricks, Chris said he had recently dealt with a similar situation. Difficult learning is durable learning. Embrace the struggle.
Never stop learning
I grabbed another issue from the Svelte project, and now I'm learning about CSSStyleSheet because there's another issue (I think) with how Safari handles keyframe animations within stylemanager.ts. And so the learning continues, down paths I never would have trodden in my day-to-day work.
When something breaks, enjoy the journey of learning the system. You’ll gain valuable insights into why that thing broke and what can be done to fix it. That’s one of the awesome benefits of contributing to open source projects and why I’d encourage you to do the same.
Here are a couple of lessons I've learned about how not to build React components. These are things I've come across over the past couple of months, and I thought they might be of interest to you if you're working on a design system, especially one with a bunch of legacy technical decisions and a lot of tech debt under the hood.
Lesson 1: Avoid child components as much as you can
One thing about working on a big design system with lots of components is that the following pattern becomes problematic real quick:
<Card>
  <Card.Header>Title</Card.Header>
  <Card.Body><p>This is some content</p></Card.Body>
</Card>
The problematic parts are those child components, Card.Body and Card.Header. This example isn’t terrible because things are relatively simple — it’s when components get more complex that things can get bonkers. For example, each child component can have a whole series of complex props that interfere with the others.
One of my biggest pain points is with our Form components. Take this:
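Imagine something along these lines (a simplified sketch, with invented component names, not our actual API):

<Form>
  <TextInput label="Name" />
  <Form.Actions>
    <Button>Cancel</Button>
    <Button>Save changes</Button>
  </Form.Actions>
</Form>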
I'm simplifying things considerably, of course, but every time an engineer wanted to place two buttons next to each other, they'd import Form.Actions, even if there wasn't a Form on the page. This meant that everything inside the Form component got imported too, which is ultimately bad for performance. It just so happens to be a bad design system implementation as well.
This also makes things extra difficult when documenting components, because now you'll have to ensure that each of these child components is documented, too.
So instead of making Form.Actions a child component, we should’ve made it a brand new component, simply: FormActions (or perhaps something with a better name like ButtonGroup). That way, we don’t have to import Form all the time and we can keep layout-based components separate from the others.
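A sketch of what that standalone component might look like (names and markup are illustrative, not our design system's real code):

import React from 'react';

type ButtonGroupProps = {
  children: React.ReactNode;
};

// A pure layout component: no Form import, no form logic to drag along.
export function ButtonGroup({ children }: ButtonGroupProps) {
  return <div className="button-group">{children}</div>;
}

// Works with or without a Form on the page:
// <ButtonGroup><Button>Cancel</Button><Button>Save</Button></ButtonGroup>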
I’ve learned my lesson. From here on out I’ll be avoiding child components altogether where I can.
Lesson 2: Make sure your props don’t conflict with one another
Mandy Michael wrote a great piece about how props can bump into one another and cause all sorts of confusing conflicts, like this TypeScript example:
The purpose of these props are to change the way the image or video is rendered within the card or if the media is rendered at all. The problem with defining them separately is that you end up with a number of flags which toggle component features, many of which are mutually exclusive. For example, you can’t have an image that fills the margins if it’s also hidden.
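Sketched out, the shape of the problem is something like this (prop names invented, not Mandy's exact code):

type CardProps = {
  showMedia?: boolean;       // render the image/video at all?
  fullBleedMedia?: boolean;  // stretch the media to the card's edges?
};

// What is <Card showMedia={false} fullBleedMedia={true} /> supposed to do?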
This was definitely a problem for a lot of the components we inherited in my team’s design systems. There were a bunch of components where boolean props would make a component behave in all sorts of odd and unexpected ways. We even had all sorts of bugs pop up in our Card component during development because the engineers wouldn’t know which props to turn on and turn off for any given effect!
In short: if we combine all of these nascent options together then we have a much cleaner API that’s easily extendable and is less likely to cause confusion in the future.
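Something like a single union-typed prop, sketched with the invented names from above:

type CardProps = {
  media?: 'hidden' | 'contained' | 'full-bleed';  // mutually exclusive by construction
};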
That’s it! I just wanted to make a quick note about those two lessons. Here’s my question for you: What have you learned when it comes to making components or working on design systems?
The merits of Git as a version control system are difficult to contest, but while Git will do a superb job in keeping track of the commits you and your teammates have made to a repository, it will not, in itself, guarantee the quality of those commits. Git will not stop you from committing code with linting errors in it, nor will it stop you from writing commit messages that convey no information whatsoever about the nature of the commits themselves, and it will, most certainly, not stop you from committing poorly formatted code.
Fortunately, with the help of Git hooks, we can rectify this state of affairs with only a few lines of code. In this tutorial, I will walk you through how to implement Git hooks that will only let you make a commit provided that it meets all the conditions that have been set for what constitutes an acceptable commit. If it does not meet one or more of those conditions, an error message will be shown that contains information about what needs to be done for the commit to pass the checks. In this way, we can keep the commit histories of our code bases neat and tidy, and in doing so make the lives of our teammates, and not to mention our future selves, a great deal easier and more pleasant.
As an added bonus, we will also see to it that code that passes all the tests is formatted before it gets committed. What is not to like about this proposition? Alright, let us get cracking.
Prerequisites
In order to be able to follow this tutorial, you should have a basic grasp of Node.js, npm and Git. If you have never heard of something called package.json and git commit -m [message] sounds like code for something super-duper secret, then I recommend that you pay this and this website a visit before you continue reading.
Our plan of action
First off, we are going to install the dependencies that make implementing pre-commit hooks a walk in the park. Once we have our toolbox, we are going to set up three checks that our commit will have to pass before it is made:
The code should be free from linting errors.
Any related unit tests should pass.
The commit message should adhere to a pre-determined format.
Then, if the commit passes all of the above checks, the code should be formatted before it is committed. An important thing to note is that these checks will only be run on files that have been staged for commit. This is a good thing, because if this were not the case, linting the whole code base and running all the unit tests would add quite an overhead time-wise.
In this tutorial, we will implement the checks discussed above for some front-end boilerplate that uses TypeScript, with Jest for the unit tests and Prettier for the code formatting. The procedure for implementing pre-commit hooks is the same regardless of the stack you are using, so by all means, do not feel compelled to jump on the TypeScript train just because I am riding it; and if you prefer Mocha to Jest, then do your unit tests with Mocha.
Installing the dependencies
First off, we are going to install Husky, which is the package that lets us do whatever checks we see fit before the commit is made. At the root of your project, run:
npm i husky --save-dev
However, as previously discussed, we only want to run the checks on files that have been staged for commit, and for this to be possible, we need to install another package, namely lint-staged:
npm i lint-staged --save-dev
Last, but not least, we are going to install commitlint, which will let us enforce a particular format for our commit messages. I have opted for one of their pre-packaged formats, namely the conventional one, since I think it encourages commit messages that are simple yet to the point. You can read more about it here.
npm install @commitlint/{config-conventional,cli} --save-dev

## If you are on a device that is running Windows
npm install @commitlint/config-conventional @commitlint/cli --save-dev
After the commitlint packages have been installed, you need to create a config that tells commitlint to use the conventional format. You can do this from your terminal using the command below:
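## A one-liner along the lines of what the commitlint docs suggest:
echo "module.exports = {extends: ['@commitlint/config-conventional']};" > commitlint.config.js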
Great! Now we can move on to the fun part, which is to say implementing our checks!
Implementing our pre-commit hooks
Below is an overview of the scripts that I have in the package.json of my boilerplate project. We are going to run two of these scripts out of the box before a commit is made, namely the lint and prettier scripts. You are probably asking yourself why we will not run the test script as well, since we are going to implement a check that makes sure any related unit tests pass. The answer is that you have to be a little bit more specific with Jest if you do not want all unit tests to run when a commit is made.
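For context, the relevant scripts look something like this (your own names and tools will differ):

"scripts": {
  "lint": "eslint ./src --ext .ts",
  "test": "jest",
  "prettier": "prettier --write ./src"
}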
As you can tell from the code we added to the package.json file below, creating the pre-commit hooks for the lint and prettier scripts does not get more complicated than telling Husky that before a commit is made, lint-staged needs to be run. Then you tell lint-staged to run the lint and prettier scripts on all staged JavaScript and TypeScript files, and that is it!
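In package.json, that amounts to something along these lines (husky 4-style configuration; the script names assume the overview above):

"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "*.{js,ts}": ["npm run lint", "npm run prettier"]
}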
At this point, if you set out to anger the TypeScript compiler by passing a string to a function that expects a number and then try to commit this code, our lint check will stop your commit in its tracks and tell you about the error and where to find it. This way, you can correct the error of your ways, and while I think that, in itself, is pretty powerful, we are not done yet!
By adding "jest --bail --coverage --findRelatedTests" to our configuration for lint-staged, we also make sure that the commit will not be made if any related unit tests do not pass. Coupled with the lint check, this is the code equivalent of wearing two safety harnesses while fixing broken tiles on your roof.
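The lint-staged configuration then grows to something like:

"lint-staged": {
  "*.{js,ts}": [
    "npm run lint",
    "jest --bail --coverage --findRelatedTests",
    "npm run prettier"
  ]
}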
What about making sure that all commit messages adhere to the commitlint conventional format? Commit messages are not files, so we can not handle them with lint-staged, since lint-staged only works its magic on files staged for commit. Instead, we have to return to our configuration for Husky and add another hook, in which case our package.json will look like so:
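"husky": {
  "hooks": {
    "pre-commit": "lint-staged",
    "commit-msg": "commitlint -E HUSKY_GIT_PARAMS"
  }
},
"lint-staged": {
  "*.{js,ts}": [
    "npm run lint",
    "jest --bail --coverage --findRelatedTests",
    "npm run prettier"
  ]
}

(That commit-msg invocation is the husky 4-era pattern from the commitlint docs; adjust it if you are on a newer major version.)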
If your commit message does not follow the commitlint conventional format, you will not be able to make your commit: so long, poorly formatted and obscure commit messages!
If you get your house in order and write some code that passes both the linting and unit test checks, and your commit message is properly formatted, lint-staged will run the Prettier script on the files staged for commit before the commit is made, which feels like the icing on the cake. At this point, I think we can feel pretty good about ourselves; a bit smug even.
Implementing pre-commit hooks is not more difficult than that, but the gains of doing so are tremendous. While I am always skeptical of adding yet another step to my workflow, using pre-commit hooks has saved me a world of bother, and I would never go back to making my commits in the dark, if I am allowed to end this tutorial on a somewhat pseudo-poetical note.
The web-platform-tests project is a massive suite of tests (over one million in total) which verify that software (mostly web browsers) correctly implement web technologies. It’s as important as it is ambitious: the health of the web depends on a plurality of interoperable implementations.
Although Bocoup has been contributing to the web-platform-tests, or "WPT," for many years, it wasn't until late in 2017 that we began collecting test results from web browsers and publishing them to wpt.fyi.
Talk about doing God’s work.
The rest of the article is about the incredible pain of scaling a test suite that big. Ultimately, Azure Pipelines proved helpful.