Month: October 2019

Build a 100% Serverless REST API with Firebase Functions & FaunaDB

Indie and enterprise web developers alike are pushing toward a serverless architecture for modern applications. Serverless architectures typically scale well, avoid the need for server provisioning and, most importantly, are easy and cheap to set up! That’s why I believe serverless is the next evolution of the cloud: it enables developers to focus on writing applications.

With that in mind, let’s build a REST API (because will we ever stop making these?) using 100% serverless technology.

We’re going to do that with Firebase Cloud Functions and FaunaDB, a globally distributed serverless database with native GraphQL.

Those familiar with Firebase know that Google’s serverless app-building tools also provide multiple data storage options: Firebase Realtime Database and Cloud Firestore. Both are valid alternatives to FaunaDB and are effectively serverless.

But why choose FaunaDB when Firestore offers a similar promise and is available with Google’s toolkit? Since our application is quite simple, it does not matter that much. The main difference is that once my application grows and I add multiple collections, then FaunaDB still offers consistency over multiple collections whereas Firestore does not. In this case, I made my choice based on a few other nifty benefits of FaunaDB, which you will discover as you read along — and FaunaDB’s generous free tier doesn’t hurt, either. 😉

In this post, we’ll cover:

  • Installing Firebase CLI tools
  • Creating a Firebase project with Hosting and Cloud Function capabilities
  • Routing URLs to Cloud Functions
  • Building three REST API calls with Express
  • Establishing a FaunaDB Collection to track your (my) favorite video games
  • Creating FaunaDB Documents, accessing them with FaunaDB’s JavaScript client API, and performing basic and intermediate-level queries
  • And more, of course!

Set Up A Local Firebase Functions Project

For this step, you’ll need Node v8 or higher. Install firebase-tools globally on your machine:

$  npm i -g firebase-tools

Then log into Firebase with this command:

$  firebase login

Make a new directory for your project, e.g. mkdir serverless-rest-api and navigate inside.

Create a Firebase project in your new directory by executing firebase init.

Choose “functions” and “hosting” when the feature bubbles appear, create a brand new Firebase project, select JavaScript as your Cloud Functions language, and answer yes (y) to the remaining options.

Once complete, enter the functions directory. This is where your code lives and where you’ll add a few npm packages.

Your API requires Express, CORS, and FaunaDB. Install them all with the following:

$  npm i cors express faunadb

Set Up FaunaDB with NodeJS and Firebase Cloud Functions

Before you can use FaunaDB, you need to sign up for an account.

When you’re signed in, go to your FaunaDB console and create your first database. Name it “Games.”

You’ll notice that you can create databases inside other databases. So you could make one database for development, one for production, or even one small database per unit test suite. For now we only need “Games,” though, so let’s continue.

Create a new database and name it “Games.”

Then tab over to Collections and create your first Collection named ‘games’. Collections will contain your documents (games, in this case) and are the equivalent of a table in other databases. Don’t worry about payment details: Fauna has a generous free tier, and the reads and writes you perform in this tutorial will definitely not exceed it. You can monitor your usage in the FaunaDB console at any time.

For the purpose of this API, make sure to name your collection ‘games’ because we’re going to be tracking your (my) favorite video games with this nerdy little API.

Create a Collection in your Games database and name it “games.”

Tab over to Security and create a new Key named “Personal Key.” There are three different types of keys: Admin, Server, and Client. An Admin key is meant to manage multiple databases, a Server key is typically what you use in a backend and allows you to manage one database, and a Client key is meant for untrusted clients such as your browser. Since we’ll be using this key to access one FaunaDB database in a serverless backend environment, choose “Server key.”

Under the Security tab, create a new Key. Name it Personal Key.

Save the key somewhere, you’ll need it shortly.

Build an Express REST API with Firebase Functions

Firebase Functions can respond directly to external HTTPS requests, and the functions pass standard Node Request and Response objects to your code — sweet. This makes Google’s Cloud Function requests accessible to middleware such as Express.

Open index.js inside your functions directory, clear out the pre-filled code, and add the following to enable Firebase Functions:

const functions = require('firebase-functions')
const admin = require('firebase-admin')

admin.initializeApp(functions.config().firebase)

Import the FaunaDB library and set it up with the secret you generated in the previous step:

admin.initializeApp(...)

const faunadb = require('faunadb')
const q = faunadb.query
const client = new faunadb.Client({
  secret: 'secrety-secret...that’s secret :)'
})

Then create a basic Express app and enable CORS to support cross-origin requests:

const client = new faunadb.Client({...})

const express = require('express')
const cors = require('cors')
const api = express()

// Automatically allow cross-origin requests
api.use(cors({ origin: true }))

You’re ready to create your first Firebase Cloud Function, and it’s as simple as adding this export:

api.use(cors({...}))

exports.api = functions.https.onRequest(api)

This creates a cloud function named “api” and passes all requests directly to your api Express server.

Routing an API URL to a Firebase HTTPS Cloud Function

If you deployed right now, your function’s public URL would be something like this: https://project-name.firebaseapp.com/api. That’s a clunky name for an access point if I do say so myself (and I did because I wrote this… who came up with this useless phrase?)

To remedy this predicament, you will use Firebase’s Hosting options to re-route URL globs to your new function.

Open firebase.json and add the following section immediately below the “ignore” array:

"ignore": [...], "rewrites": [   {     "source": "/api/v1**/**",     "function": "api"   } ]

This setting assigns all /api/v1/... requests to your brand new function, making it reachable from a domain that humans won’t mind typing into their text editors.
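For reference, the complete firebase.json might end up looking something like this. This is a sketch based on the default Hosting scaffold; your generated file may differ slightly:

{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      {
        "source": "/api/v1**/**",
        "function": "api"
      }
    ]
  }
}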

With that, you’re ready to test your API. Your API that does… nothing!

Respond to API Requests with Express and Firebase Functions

Before you run your function locally, let’s give your API something to do.

Add this simple route to your index.js file right above your export statement:

api.get(['/api/v1', '/api/v1/'], (req, res) => {
  res
    .status(200)
    .send(`<img src="https://media.giphy.com/media/hhkflHMiOKqI/source.gif">`)
})

exports.api = ...

Save your index.js file, open up your command line, and change into the functions directory.

If you installed Firebase globally, you can run your project by entering the following: firebase serve.

This command runs both the hosting and function environments from your machine.

If Firebase is installed locally in your project directory instead, open package.json and remove the --only functions parameter from your serve command, then run npm run serve from your command line.

Visit localhost:5000/api/v1/ in your browser. If everything was set up just right, you will be greeted by a gif from one of my favorite movies.

And if it’s not one of your favorite movies too, I won’t take it personally but I will say there are other tutorials you could be reading, Bethany.

Now you can leave the hosting and functions emulator running. They will automatically update as you edit your index.js file. Neat, huh?

FaunaDB Indexing

To query data in your games collection, FaunaDB requires an Index.

Indexes generally optimize query performance across all kinds of databases, but in FaunaDB, they are mandatory and you must create them ahead of time.
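To give a feel for what that looks like, here is a sketch of creating an index by hand in the Fauna shell with FQL’s CreateIndex function (the console UI we use later does the equivalent):

CreateIndex({
  name: "all_games",
  source: Collection("games")
})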

As a developer just starting out with FaunaDB, this requirement felt like a digital roadblock.

“Why can’t I just query data?” I grimaced as the right side of my mouth tried to meet my eyebrow.

I had to read the documentation and become familiar with how Indexes and the Fauna Query Language (FQL) actually work; whereas Cloud Firestore creates Indexes automatically and gives me stupid-simple ways to access my data. What gives?

Typical databases just let you do what you want, and if you do not stop and think, “Is this performant?” or “How many reads will this cost me?” you might have a problem in the long run. Fauna prevents this by requiring an index whenever you query.
As I created complex queries with FQL, I began to appreciate the level of understanding I had when I executed them. Firestore, by contrast, just gives you free candy and hopes you never ask where it came from, abstracting away all concerns (such as performance and, more importantly, costs).

Basically, FaunaDB has the flexibility of a NoSQL database coupled with the performance awareness one expects from a relational SQL database.

We’ll see more examples of how and why in a moment.

Adding Documents to a FaunaDB Collection

Open your FaunaDB dashboard and navigate to your games collection.

In here, click NEW DOCUMENT and add the following BioShock titles to your collection:

{   "title": "BioShock",   "consoles": [     "windows",     "xbox_360",     "playstation_3",     "os_x",     "ios",     "playstation_4",     "xbox_one"   ],   "release_date": Date("2007-08-21"),   "metacritic_score": 96 }  {   "title": "BioShock 2",   "consoles": [     "windows",     "playstation_3",     "xbox_360",     "os_x"   ],   "release_date": Date("2010-02-09"),   "metacritic_score": 88 }{   "title": "BioShock Infinite",   "consoles": [     "windows",     "playstation_3",     "xbox_360",     "os_x",     "linux"   ],   "release_date": Date("2013-03-26"),   "metacritic_score": 94 }

As with other NoSQL databases, the documents are JSON-style text blocks with the exception of a few Fauna-specific objects (such as Date used in the “release_date” field).

Now switch to the Shell area and clear your query. Paste the following:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Var("ref")))

And click the “Run Query” button. You should see a list of three items: references to the documents you created a moment ago.

In the Shell, clear out the query field, paste the query provided, and click “Run Query.”

It’s a bit of a mouthful, but here’s what the query is doing.

Index("all_games") creates a reference to the all_games index which Fauna generated automatically for you when you established your collection.These default indexes are organized by reference and return references as values. So in this case we use the Match function on the index to return a Set of references. Since we do not filter anywhere, we will receive every document in the ‘games’ collection.

The set that was returned from Match is then passed to Paginate. This function, as you would expect, adds pagination functionality (forward, backward, skip ahead). Lastly, you pass the result of Paginate to Map, which, much like its software counterpart, lets you perform an operation on each element in a Set and return an array; in this case, it simply returns ref (the reference id).

As we mentioned before, the default index only returns references. The Lambda operation that we fed to Map pulls this ref field from each entry in the paginated set. The result is an array of references.

Now that you have a list of references, you can retrieve the data behind the reference by using another function: Get.

Wrap Var("ref") with a Get call and re-run your query, which should look like this:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Get(Var("ref"))))

Instead of a reference array, you now see the contents of each video game document.

Wrap Var("ref") with a Get function, and re-run the query.

Now that you have an idea of what your game documents look like, you can start creating REST calls, beginning with a POST.

Create a Serverless POST API Request

Your first API call is straightforward and shows off how Express combined with Cloud Functions allows you to serve all routes through one method.

Add this below the previous (and impeccable) API call:

api.get(['/api/v1', '/api/v1/'], (req, res) => {...})

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {
  let addGame = client.query(
    q.Create(q.Collection('games'), {
      data: {
        title: req.body.title,
        consoles: req.body.consoles,
        metacritic_score: req.body.metacritic_score,
        release_date: q.Date(req.body.release_date)
      }
    })
  )
  addGame
    .then(response => {
      res.status(200).send(`Saved! ${response.ref}`)
      return
    })
    .catch(reason => {
      // Express has no res.error(); send a 500 with the failure reason instead
      res.status(500).send(reason)
    })
})

Please look past the lack of input sanitization for the sake of this example (all employees must sanitize inputs before leaving the work-room).

But as you can see, creating new documents in FaunaDB is easy-peasy.

The q object acts as a query builder interface that maps one-to-one with FQL functions (find the full list of FQL functions here).

You perform a Create, pass in your collection, and include data fields that come straight from the body of the request.

client.query returns a Promise, the success-state of which provides a reference to the newly-created document.
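If you prefer async/await over .then chains, the same flow could be sketched like this (a trimmed-down version of the route above, saving only the title):

api.post(['/api/v1/games', '/api/v1/games/'], async (req, res) => {
  try {
    // client.query resolves with the new document, ref included
    const response = await client.query(
      q.Create(q.Collection('games'), { data: { title: req.body.title } })
    )
    res.status(200).send(`Saved! ${response.ref}`)
  } catch (reason) {
    res.status(500).send(reason)
  }
})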

And to make sure it’s working, you return the reference to the caller. Let’s see it in action.

Test Firebase Functions Locally with Postman and cURL

Use Postman or cURL to make the following request against localhost:5000/api/v1/games to add Halo: Combat Evolved to your list of games (or whichever Halo is your favorite, but absolutely not 4, 5, Reach, Wars, Wars 2, Spartan…)

$  curl http://localhost:5000/api/v1/games -X POST -H "Content-Type: application/json" -d '{"title":"Halo: Combat Evolved","consoles":["xbox","windows","os_x"],"metacritic_score":97,"release_date":"2001-11-15"}'

If everything went right, you should see a reference coming back with your request and a new document show up in your FaunaDB console.

Now that you have some data in your games collection, let’s learn how to retrieve it.

Retrieve FaunaDB Records Using a REST API Request

Earlier, I mentioned that every FaunaDB query requires an Index and that Fauna prevents you from doing inefficient queries. Since our next query will return games filtered by a game console, we can’t simply use a traditional `where` clause since that might be inefficient without an index. In Fauna, we first need to define an index that allows us to filter.

To filter, we need to specify which terms we want to filter on. And by terms, I mean the fields of the document you expect to search on.

Navigate to Indexes in your FaunaDB Console and create a new one.

Name it games_by_console, set data.consoles as the only term (since we will filter on consoles), and set data.title and ref as values. Values are indexed by range, but they are also simply the values that will be returned by the query. Indexes are, in that sense, a bit like views: you can create an index that returns a different combination of fields, and each index can have different security.

To minimize request overhead, we’ve limited the response data (e.g. values) to titles and the reference.
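If you would rather script the index than click through the console, the equivalent FQL is roughly this (a sketch; the field paths assume the document shape used above):

CreateIndex({
  name: "games_by_console",
  source: Collection("games"),
  terms: [{ field: ["data", "consoles"] }],
  values: [{ field: ["data", "title"] }, { field: ["ref"] }]
})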

Your screen should resemble this one:

Under indexes, create a new index named games_by_console using the parameters above.

Click “Save” when you’re ready.

With your Index prepared, you can draft up your next API call.

I chose to represent consoles as a directory path where the console identifier is the sole parameter, e.g. /api/v1/console/playstation_3, not necessarily best practice, but not the worst either — come on now.

Add this API request to your index.js file:

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {...})

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {
  let findGamesForConsole = client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('games_by_console'), req.params.name.toLowerCase())),
      q.Lambda(['title', 'ref'], q.Var('title'))
    )
  )
  findGamesForConsole
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      // Express has no res.error(); send a 500 instead
      res.status(500).send(error)
    })
})

This query looks similar to the one you used in the Shell to retrieve all games, but with a slight modification: note how your Match function now has a second parameter (req.params.name.toLowerCase()), which is the console identifier that was passed in through the URL.

The Index you made a moment ago, games_by_console, has one Term in it (the consoles array), and that Term corresponds to the second parameter we provided to Match. Basically, the Match function searches the index for the string you pass as its second argument. The next interesting bit is the Lambda function. Your first encounter with Lambda featured a single string as Lambda’s first argument, “ref.”

However, the games_by_console Index returns two fields per result: the two values you specified earlier when you created the Index (data.title and ref). So we receive a paginated set containing tuples of titles and references, but we only need the titles. When your set contains multiple values, the parameter of your Lambda is an array. The array parameter above (`['title', 'ref']`) says that the first value is bound to the variable title and the second to the variable ref. These variables can then be retrieved later in the query using Var('title'). In this case, both “title” and “ref” were returned by the index, and your Map with the Lambda function maps over this list of results and returns only the title of each game.

In Fauna, the composition of queries happens before they are executed. When you write const query = q.Match(q.Index('games_by_console')), the variable just contains a query; nothing has been executed yet. Only when you pass the query to client.query(query) will it execute. You can even pass JavaScript variables into other Fauna FQL functions to compose larger queries. This is a big benefit of querying in Fauna versus the chained asynchronous queries required by Firestore. If you have ever tried to dynamically generate very complex queries in SQL, you will also appreciate the composability and less declarative nature of FQL.
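For example, you can build query fragments with plain JavaScript functions and only execute the final composition (a sketch using the q builder from earlier):

// Nothing runs yet; these helpers just build FQL expressions
const gamesForConsole = name => q.Match(q.Index('games_by_console'), name)
const firstTitles = set =>
  q.Map(q.Paginate(set, { size: 5 }), q.Lambda(['title', 'ref'], q.Var('title')))

// Only this line actually sends a query to Fauna
client.query(firstTitles(gamesForConsole('xbox'))).then(console.log)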

Save index.js and test out your API with this:

$  curl http://localhost:5000/api/v1/console/xbox

{"data":["Halo: Combat Evolved"]}

Neat, huh? But Match only returns documents whose fields are exact matches, which doesn’t help the user looking for a game whose title they can barely recall.

Although Fauna does not offer fuzzy searching via indexes (yet), we can provide similar functionality by making an index on all words in the string. Or, if we want really flexible fuzzy searching, we can use the filter syntax. Note that this is not necessarily a good idea from a performance or cost point of view… but hey, we’ll do it because we can and because it is a great example of how flexible FQL is!

Filtering FaunaDB Documents by Search String

The last API call we are going to construct will let users find titles by name. Head back into your FaunaDB Console, select INDEXES and click NEW INDEX. Name the new Index games_by_title and leave the Terms empty; you won’t be needing them.

Rather than rely on Match to compare the title to the search string, you will iterate over every game in your collection to find titles that contain the search query.

Remember how we mentioned that indexes are a bit like views? In order to filter on title, we need to include `data.title` as a value returned by the Index. Since we are using Filter on the results of Match, we have to make sure that Match returns the title so we can work with it.

Add data.title and ref as Values, and compare your screen to mine:

Create another index called games_by_title using the parameters above.

Click “Save” when you’re ready.

Back in index.js, add your fourth and final API call:

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {...})

api.get(['/api/v1/games/', '/api/v1/games'], (req, res) => {
  let findGamesByName = client.query(
    q.Map(
      q.Paginate(
        q.Filter(
          q.Match(q.Index('games_by_title')),
          q.Lambda(
            ['title', 'ref'],
            q.GT(
              q.FindStr(
                q.LowerCase(q.Var('title')),
                req.query.title.toLowerCase()
              ),
              -1
            )
          )
        )
      ),
      q.Lambda(['title', 'ref'], q.Get(q.Var('ref')))
    )
  )
  findGamesByName
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      // Express has no res.error(); send a 500 instead
      res.status(500).send(error)
    })
})

Take a big breath, because I know there are many brackets (Lisp programmers will love this), but once you understand the components, the full query is quite easy to follow.

Let’s begin with the first new function you spot: Filter. It is very similar to the filter you encounter in programming languages: it reduces an Array or Set to a subset based on the result of a Lambda function.

In this Filter, you exclude any game titles that do not contain the user’s search query.

You do that by comparing the result of FindStr (a string-finding function similar to JavaScript’s indexOf) to -1. A non-negative value means FindStr discovered the user’s query in a lowercased version of the game’s title.

And the result of this Filter is passed to Map, where each document is retrieved and placed in the final result output.

Now you may have thought the obvious: performing a string comparison across four entries is cheap. Two million…? Not so much.

This is an inefficient way to perform a text search, but it will get the job done for the purpose of this example. (Maybe we should have used ElasticSearch or Solr for this?) Well, in that case, FaunaDB is quite perfect as the central system that keeps your data safe and feeds it into a search engine, thanks to the temporal aspect that allows you to ask Fauna: “Hey, give me the changes since timestamp X.” So you could set up ElasticSearch next to it and use FaunaDB (push messages are coming soon) to update it whenever there are changes. Whoever has done this once knows how hard it is to keep such an external search index up to date and correct; FaunaDB makes it quite easy.
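As a rough sketch of that temporal angle — the exact pagination options are an assumption on my part, so check Fauna’s docs for Paginate’s event parameters, and lastSyncedTs is a hypothetical cursor you would persist between syncs:

// Ask for the set's change events since the last sync,
// e.g. to keep an external ElasticSearch index up to date
client.query(
  q.Paginate(q.Match(q.Index('all_games')), {
    events: true,       // return change events instead of the set itself
    after: lastSyncedTs // hypothetical: a timestamp saved from your previous sync
  })
)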

Test the API by searching for “Halo”:

$  curl "http://localhost:5000/api/v1/games?title=halo"

Don’t You Dare Forget This One Firebase Optimization

A lot of Firebase Cloud Functions code snippets make one terribly wrong assumption: that each function invocation is independent of another.

In reality, Firebase Function instances can remain “hot” for a short period of time, prepared to execute subsequent requests.

This means you should lazy-load your variables and cache the results to help reduce computation time (and money!) during peak activity. Here’s how:

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  ... // dump the existing code here
}

exports.api = functions.https.onRequest(api)
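Filled in, that pattern might look like this sketch (the same code as before, just guarded so a warm instance reuses what it already initialized; the secret is a placeholder):

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  functions = require('firebase-functions')
  admin = require('firebase-admin')
  admin.initializeApp(functions.config().firebase)

  faunadb = require('faunadb')
  q = faunadb.query
  client = new faunadb.Client({ secret: 'your-server-key' }) // placeholder secret

  express = require('express')
  cors = require('cors')
  api = express()
  api.use(cors({ origin: true }))

  // ...your routes here
}

exports.api = functions.https.onRequest(api)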

Deploy Your REST API with Firebase Functions

Finally, deploy both your functions and hosting configuration to Firebase by running firebase deploy from your shell.

Without a custom domain name, refer to your Firebase subdomain when making API requests, e.g. https://{project-name}.firebaseapp.com/api/v1/.

What Next?

FaunaDB has made me a conscientious developer.

When using other schemaless databases, I start off with great intentions by treating documents as if I instantiated them with a DDL (strict types, version numbers, the whole shebang).

While that keeps me organized for a short while, standards soon fall in favor of speed, and my documents splinter, leaving outdated formatting and zombie data behind.

By forcing me to think about how I query my data, which Indexes I need, and how to best manipulate that data before it returns to my server, FaunaDB keeps me conscious of my documents.

To aid me in remaining forever organized, my catalog of Indexes (in the FaunaDB Console) helps me keep track of everything my documents offer.

And by incorporating this wide range of arithmetic and linguistic functions right into the query language, FaunaDB encourages me to maximize efficiency and keep a close eye on my data-storage policies. Considering the affordable pricing model, I’d sooner run 10k+ data manipulations on FaunaDB’s servers than on a single Cloud Function.

For those reasons and more, I encourage you to take a peek at those functions and consider FaunaDB’s other powerful features.


Comparing the Different Types of Native JavaScript Popups

JavaScript has a variety of built-in popup APIs that display special UI for user interaction. Famously:

alert("Hello, World!");

The UI for this varies from browser to browser, but generally you’ll see a little window pop up front and center in a very show-stopping way that contains the message you just passed. Here’s Firefox and Chrome:

Native popups in Firefox (left) and Chrome (right). Note the additional UI preventing additional dialogs in Firefox from triggering it more than once. You can also see how Chrome is pinned to the top of the window.

There is one big problem you should know about up front

JavaScript popups are blocking.

The entire page essentially stops when a popup is open. You can’t interact with anything on the page while one is open — that’s kind of the point of a “modal” but it’s still a UX consideration you should be keenly aware of. And crucially, no other main-thread JavaScript is running while the popup is open, which could (and probably is) unnecessarily preventing your site from doing things it needs to do.
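You can see the blocking for yourself with a couple of logs:

console.log("before the alert");
alert("Nothing else runs until you dismiss me.");
// This line only executes once the user clicks OK
console.log("after the alert");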

Nine times out of ten, you’d be better off architecting things so that you don’t have to use such heavy-handed stop-everything behavior. Native JavaScript alerts are also implemented by browsers in such a way that you have zero design control. You can’t control *where* they appear on the page or what they look like when they get there. Unless you absolutely need the complete blocking nature of them, it’s almost always better to use a custom user interface that you can design to tailor the experience for the user.

With that out of the way, let’s look at each one of the native popups.

window.alert();

window.alert("Hello World");  <button onclick="alert('Hello, World!');">Show Message</button>  const button = document.querySelectorAll("button"); button.addEventListener("click", () => {   alert("Text of button: " + button.innerText); });

See the Pen alert("Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: Displaying a simple message or debugging the value of a variable.

How it works: This function takes a string and presents it to the user in a popup with a single button labeled “OK.” You can only change the message, not any other aspect, like what the button says.

The Alternative: Like the other alerts, if you have to present a message to the user, it’s probably better to do it in a way that’s tailor-made for what you’re trying to do.

If you’re trying to debug the value of a variable, consider console.log("Value of variable:", variable); and looking in the console.

window.confirm();

window.confirm("Are you sure?");  <button onclick="confirm('Would you like to play a game?');">Ask Question</button>  let answer = window.confirm("Do you like cats?"); if (answer) {   // User clicked OK } else {   // User clicked Cancel }

See the Pen confirm("Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: “Are you sure?”-style messages to see if the user really wants to complete the action they’ve initiated.

How it works: You can provide a custom message, and the popup will give the user the choice of “OK” or “Cancel,” returning a value you can then check to see what was chosen.

The Alternative: This is a very intrusive way to prompt the user. As Aza Raskin puts it:

…maybe you don’t want to use a warning at all.

There are any number of ways to ask a user to confirm something. Probably a clear UI with a <button>Confirm</button> wired up to do what you need it to do.
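As a sketch of that alternative, a Promise-based stand-in for confirm() built from your own elements might look like this (the element IDs are made up for the example):

function customConfirm(message) {
  const dialog = document.querySelector("#confirm-dialog");
  document.querySelector("#confirm-message").textContent = message;
  dialog.hidden = false;
  return new Promise(resolve => {
    document.querySelector("#confirm-yes").onclick = () => { dialog.hidden = true; resolve(true); };
    document.querySelector("#confirm-no").onclick = () => { dialog.hidden = true; resolve(false); };
  });
}

customConfirm("Delete this file?").then(confirmed => {
  if (confirmed) {
    // ...do the destructive thing
  }
});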

window.prompt();

window.prompt("What’s your name?");   let answer = window.prompt("What is your favorite color?"); // answer is what the user typed in, if anything

See the Pen prompt("Example?", "Default Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: Prompting the user for an input. You provide a string (probably formatted like a question) and the user sees a popup with that string, an input they can type into, and “OK” and “Cancel” buttons.

How it works: If the user clicks OK, you’ll get what they entered into the input. If they enter nothing and click OK, you’ll get an empty string. If they choose Cancel, the return value will be null.
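Those three outcomes are worth handling separately:

let answer = window.prompt("What is your favorite color?");

if (answer === null) {
  // User clicked Cancel
} else if (answer === "") {
  // User clicked OK without typing anything
} else {
  // answer holds whatever the user typed
}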

The Alternative: Like all of the other native JavaScript alerts, this doesn’t allow you to style or position the alert box. It’s probably better to use a <form> to get information from the user. That way you can provide more context and purposeful design.

window.onbeforeunload();

window.addEventListener("beforeunload", () => {   // Standard requires the default to be cancelled.   event.preventDefault();   // Chrome requires returnValue to be set (via MDN)   event.returnValue = ''; });

See the Pen Example of beforeunload event by Chris Coyier (@chriscoyier) on CodePen.

What it’s for: Warn the user before they leave the page. That sounds like it could be very obnoxious, but it isn’t often used obnoxiously. It’s used on sites where you can be doing work and need to explicitly save it. If the user hasn’t saved their work and is about to navigate away, you can use this to warn them. If they *have* saved their work, you should remove the handler.

How it works: If you’ve attached the beforeunload event to the window (and done the extra things as shown in the snippet above), users will see a popup asking them to confirm if they would like to “Leave” or “Cancel” when attempting to leave the page. Leaving the site may be because the user clicked a link, but it could also be the result of clicking the browser’s refresh or back buttons. You cannot customize the message.
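Here is a sketch of that save/unsave dance: attach the handler when there are unsaved changes and detach it once the work is saved (markUnsaved and markSaved are hypothetical hooks you would call from your own save logic):

function warnBeforeLeaving(event) {
  event.preventDefault();
  event.returnValue = ""; // required by Chrome
}

// Call this when the document becomes "dirty"
function markUnsaved() {
  window.addEventListener("beforeunload", warnBeforeLeaving);
}

// Call this after a successful save
function markSaved() {
  window.removeEventListener("beforeunload", warnBeforeLeaving);
}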

MDN warns that some browsers require the page to be interacted with for it to work at all:

To combat unwanted pop-ups, some browsers don’t display prompts created in beforeunload event handlers unless the page has been interacted with. Moreover, some don’t display them at all.

The Alternative: Nothing that comes to mind. If this is a matter of a user losing work or not, you kinda have to use this. And if they choose to stay, you should be clear about what they should do to make sure it’s safe to leave.

Accessibility

Native JavaScript alerts used to be frowned upon in the accessibility world, but it seems that screen readers have since become smarter in how they deal with them. According to Penn State Accessibility:

The use of an alert box was once discouraged, but they are actually accessible in modern screen readers.

It’s important to take accessibility into account when making your own modals, but there are some great resources like this post by Ire Aderinokun to point you in the right direction.

General alternatives

There are a number of alternatives to native JavaScript popups such as writing your own, using modal window libraries, and using alert libraries. Keep in mind that nothing we’ve covered can fully block JavaScript execution and user interaction, but some can come close by greying out the background and forcing the user to interact with the modal before moving forward.

You may want to look at HTML’s native <dialog> element. Chris recently took a hands-on look at it. It’s compelling, but apparently suffers from some significant accessibility issues. I’m not entirely sure if building your own would end up better or worse, since handling modals is an extremely non-trivial interactive element to dabble in. Some UI libraries, like Bootstrap, offer modals, but the accessibility is still largely in your hands. You might want to peek at projects like a11y-dialog.

Wrapping up

Using built-in APIs of the web platform can seem like you’re doing the right thing — instead of shipping buckets of JavaScript to replicate things, you’re using what we already have built-in. But there are serious limitations, UX concerns, and performance considerations at play here, none of which land particularly in favor of using the native JavaScript popups. It’s important to know what they are and how they can be used, but you probably won’t need them a heck of a lot in production web sites.


It’s All In the Head: Managing the Document Head of a React Powered Site With React Helmet

The document head might not be the most glamorous part of a website, but what goes into it is arguably just as important to the success of your website as its user interface. This is, after all, where you tell search engines about your website and integrate it with third-party applications like Facebook and Twitter, not to mention the assets, ranging from analytics libraries to stylesheets, that you load and initialize there.

A React application lives in the DOM node it was mounted to, and with this in mind, it is not at all obvious how to go about keeping the contents of the document head synchronized with your routes. One way might be to use the componentDidMount lifecycle method, like so:

componentDidMount() {
  document.title = "Whatever you want it to be";
}

However, you are not just going to want to change the title of the document; you are also going to want to modify a variety of meta and other tags. It will not be long before you conclude that managing the contents of the document head in this manner gets tedious quickly and is prone to error, not to mention that the code you end up with will be anything but semantic. There clearly has to be a better way to keep the document head up to date with your React application. And as you might suspect given the subject matter of this tutorial, there is: a simple and easy-to-use component called React Helmet, which was developed and is maintained by the National Football League(!).
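To see just how tedious, here is a sketch of extending that componentDidMount approach to a couple of meta tags (assuming the tags already exist in the document head):

componentDidMount() {
  document.title = "Whatever you want it to be";
  document
    .querySelector('meta[name="description"]')
    .setAttribute("content", "A description for this route");
  document
    .querySelector('meta[name="keywords"]')
    .setAttribute("content", "some, keywords");
  // ...and so on, for every tag, on every route change
}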

In this tutorial, we are going to explore a number of common use cases for React Helmet that range from setting the document title to adding a CSS class to the document body. Wait, the document body? Was this tutorial not supposed to be about how to work with the document head? Well, I have got good news for you: React Helmet also lets you work with the attributes of the <html> and <body> tags; and it goes without saying that we have to look into how to do that, too!


One important caveat of this tutorial is that I am going to ask you to install Gatsby — a static site generator built on top of React — instead of Create React App. That’s because Gatsby supports server side rendering (SSR) out of the box, and if we truly want to leverage the full power of React Helmet, we will have to use SSR!

Why, you might ask yourself, is SSR important enough to justify the introduction of an entire framework in a tutorial that is about managing the document head of a React application? The answer lies in the fact that search engine and social media crawlers do a very poor job of crawling content that is generated through asynchronous JavaScript. That means, in the absence of SSR, it will not matter that the document head content is up to date with the React application, since Google will not know about it. Fortunately, as you will find out, getting started with Gatsby is no more complicated than getting started with Create React App. I feel quite confident in saying that if this is the first time you have encountered Gatsby, it will not be your last!

Getting started with Gatsby and React Helmet

As is often the case with tutorials like this, the first thing we will do is to install the dependencies that we will be working with.

Let us start by installing the Gatsby command line interface:

npm i -g gatsby-cli

While Gatsby’s starter library contains a plethora of projects that provide tons of built-in features, we are going to restrict ourselves to the most basic of these starter projects, namely the Gatsby Hello World project.

Run the following from your Terminal:

gatsby new my-hello-world-starter https://github.com/gatsbyjs/gatsby-starter-hello-world

my-hello-world-starter is the name of your project, so if you want to change it to something else, do so by all means!

Once you have installed the starter project, navigate into its root directory by running cd [name of your project]/ from the Terminal, and once there, run gatsby develop. Your site is now running at http://localhost:8000, and if you open and edit src/pages/index.js, you will notice that your site is updated instantaneously: Gatsby takes care of all our hot-reloading needs without us even having to think of — and much less touch — a webpack configuration file. Just like Create React App does! While I would recommend all JavaScript developers learn how to set up and configure a project with webpack for a granular understanding of how something works, it sure is nice to have all that webpack boilerplate abstracted away so that we can focus our energy on learning about React Helmet and Gatsby!

Next up, we are going to install React Helmet:

npm i --save react-helmet

After that, we need to install Gatsby Plugin React Helmet to enable server rendering of data added with React Helmet:

npm i --save gatsby-plugin-react-helmet

When you want to use a plugin with Gatsby, you always need to add it to the plugins array in the gatsby-config.js file, which is located at the root of the project directory. The Hello World starter project does not ship with any plugins, so we need to make this array ourselves, like so:

module.exports = {
  plugins: [`gatsby-plugin-react-helmet`]
}

Great! All of our dependencies are now in place, which means we can move on to the business end of things.

Our first foray with React Helmet

The first question that we need to answer is where React Helmet ought to live in the application. Since we are going to use React Helmet on all of our pages, it makes sense to nest it in a component together with the page header and footer components since they will also be used on every page of our website. This component will wrap the content on all of our pages. This type of component is commonly referred to as a “layout” component in React parlance.

In the src directory, create a new directory called components in which you create a file called layout.js. Once you have done this, copy and paste the code below into this file.

import React from "react" import Helmet from "react-helmet"  export default ({ children }) => (   <>     <Helmet>       <title>Cool</title>     </Helmet>     <div>       <header>         <h1></h1>         <nav>           <ul>           </ul>         </nav>         </header>       {children}       <footer>{`$  {new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer>     </div>   </> )

Let’s break down that code.

First off, if you are new to React, you might be asking yourself what is up with the empty tags that wrap the React Helmet component and the header and footer elements. The answer is that React will go bananas and throw an error if you try to return multiple elements from a component, and for a long time, there was no choice but to nest elements in a parent element — commonly a div — which led to a distinctly unpleasant element inspector experience littered with divs that serve no purpose whatsoever. The empty tags, which are a shorthand way for declaring the Fragment component, were introduced to React as a solution to this problem. They let us return multiple elements from a component without adding unnecessary DOM bloat.
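In other words, these two components render exactly the same thing (a tiny sketch):

import React from "react"

// Shorthand Fragment syntax
const Short = () => (
  <>
    <h1>Title</h1>
    <p>Body</p>
  </>
)

// The equivalent explicit form
const Long = () => (
  <React.Fragment>
    <h1>Title</h1>
    <p>Body</p>
  </React.Fragment>
)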

That was quite a detour, but if you are like me, you do not mind a healthy dose of code-related trivia. In any case, let us move on to the <Helmet> section of the code. As you are probably able to deduce from a cursory glance, we are setting the title of the document here, and we are doing it in exactly the same way we would in a plain HTML document; quite an improvement over the clunky recipe I typed up in the introduction to this tutorial! However, the title is hard coded, and we would like to be able to set it dynamically. Before we take a look at how to do that, we are going to put our fancy Layout component to use.

Head over to src/pages/ and open index.js. Replace the existing code with this:

import React from "react" import Layout from "../components/layout"  export default () =>    <Layout>     <div>I live in a layout component, and life is pretty good here!</div>   </Layout>

That imports the Layout component to the application and provides the markup for it.

Making things dynamic

Hard-coding things in React does not make much sense, because one of the major selling points of React is that it makes it easy to create reusable components that are customized by passing props to them. We would like to be able to use props to set the title of the document, of course, but what exactly do we want the title to look like? Normally, the document title starts with the name of the website, followed by a separator, and ends with the name of the page you are on, like Website Name | Page Name or something similar. You are probably thinking we could use template literals for this, and right you are!

Let us say that we are creating a website for a company called Cars4All. In the code below, you will see that the Layout component now accepts a prop called pageTitle, and that the document title, which is now rendered with a template literal, uses it as a placeholder value. Setting the title of the document does not get any more difficult than that!

import React from "react" import Helmet from "react-helmet"  export default ({ pageTitle, children }) => (   <>     <Helmet>       <title>{`Cars4All | $  {pageTitle}`}</title>     </Helmet>     <div>       <header>         <h1>Cars4All</h1>         <nav>           <ul>           </ul>         </nav>         </header>       {children}       <footer>{`$  {new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer>     </div>   </> )

Let us update index.js accordingly by setting the pageTitle to “Home”:

import React from "react" import Layout from "../components/layout"  export default () =>    <Layout pageTitle="Home">     <div>I live in a layout component, and life is pretty good here!</div>   </Layout>

If you open http://localhost:8000 in the browser, you will see that the document title is now Cars4All | Home. Victory! However, as stated in the introduction, we will want to do more in the document head than set the title. For instance, we will probably want to include charset, description, keywords, author and viewport meta tags.

How would we go about doing that? The answer is exactly the same way we set the title of the document:

import React from "react" import Helmet from "react-helmet"  export default ({ pageMeta, children }) => (   <>     <Helmet>       <title>{`Cars4All | $  {pageMeta.title}`}</title>              {/* The charset, viewport and author meta tags will always have the same value, so we hard code them! */}       <meta charset="UTF-8" />       <meta name="viewport" content="width=device-width, initial-scale=1.0" />       <meta name="author" content="Bob Trustly" />        {/* The rest we set dynamically with props */}       <meta name="description" content={pageMeta.description} />              {/* We pass an array of keywords, and then we use the Array.join method to convert them to a string where each keyword is separated by a comma */}       <meta name="keywords" content={pageMeta.keywords.join(',')} />     </Helmet>     <div>       <header>         <h1>Cars4All</h1>         <nav>           <ul>           </ul>         </nav>         </header>       {children}       <footer>{`$  {new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer>     </div>   </> )

As you may have noticed, the Layout component no longer accepts a pageTitle prop, but a pageMeta one instead, which is an object that encapsulates all the meta data on a page. You do not have to bundle all the page data like this, but I am very averse to props bloat. If there is data with a common denominator, I will always encapsulate it like this. Regardless, let us update index.js with the relevant data:

import React from "react" import Layout from "../components/layout"  export default () =>    <Layout     pageMeta={{       title: "Home",       keywords: ["cars", "cheap", "deal"],       description: "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!"     }}   >     <div>I live in a layout component, and life is pretty good here!</div>   </Layout>

If you open http://localhost:8000 again, fire up DevTools and dive into the document head, you will see that all of the meta tags we added are there. Regardless of whether you want to add more meta tags, a canonical URL, or integrate your site with Facebook using the Open Graph Protocol, this is how you go about it. One thing that I feel is worth pointing out: if you need to add a script to the document head (maybe because you want to enhance the SEO of your website by including some structured data), then you have to render the script as a string within curly braces, like so:

<script type="application/ld+json">{` {   "@context": "http://schema.org",   "@type": "LocalBusiness",   "address": {   "@type": "PostalAddress",   "addressLocality": "Imbrium",   "addressRegion": "OH",   "postalCode":"11340",   "streetAddress": "987 Happy Avenue"   },   "description": "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!",   "name": "Cars4All",   "telephone": "555",   "openingHours": "Mo,Tu,We,Th,Fr 09:00-17:00",   "geo": {   "@type": "GeoCoordinates",   "latitude": "40.75",   "longitude": "73.98"   }, 			   "sameAs" : ["http://www.facebook.com/your-profile",   "http://www.twitter.com/your-profile",   "http://plus.google.com/your-profile"] } `}</script>

For a complete reference of everything that you can put in the document head, check out Josh Buchea’s great overview.

The escape hatch

For whatever reason, you might have to overwrite a value that you have already set with React Helmet — what do you do then? The clever people behind React Helmet have thought of this particular use case and provided us with an escape hatch: values set in components that are further down the component tree always take precedence over values set in components that find themselves higher up in the component tree. By taking advantage of this, we can overwrite existing values.

Say we have a fictitious component that looks like this:

import React from "react" import Helmet from "react-helmet"  export default () => (   <>     <Helmet>       <title>The Titliest Title of Them All</title>     </Helmet>     <h2>I'm a component that serves no real purpose besides mucking about with the document title.</h2>   </> )

And then we want to include this component in the index.js page, like so:

import React from "react" import Layout from "../components/layout" import Fictitious from "../components/fictitious"  export default () =>    <Layout     pageMeta={{       title: "Home",       keywords: ["cars", "cheap", "deal"],       description: "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!"     }}   >     <div>I live in a layout component, and life is pretty good here!</div>     <Fictitious />   </Layout>

Because the Fictitious component hangs out in the underworld of our component tree, it is able to hijack the document title and change it from “Home” to “The Titliest Title of Them All.” While I think it is a good thing that this escape hatch exists, I would caution against using it unless there really is no other way. If other developers pick up your code and have no knowledge of your Fictitious component and what it does, then they will probably suspect that the code is haunted, and we do not want to spook our fellow developers! After all, fighter jets do come with ejection seats, but that is not to say fighter pilots should use them just because they can.

Venturing outside of the document head

As mentioned earlier, we can also use React Helmet to change HTML and body attributes. For example, it’s always a good idea to declare the language of your website, which you do with the HTML lang attribute. That’s set with React Helmet like this:

<Helmet>
  {/* Setting the language of your page does not get more difficult than this! */}
  <html lang="en" />

  {/* Other React Helmet-y stuff... */}
</Helmet>

Now let us really tap into the power of React Helmet by letting the pageMeta prop of the Layout component accept a custom CSS class that is added to the document body. Thus far, our React Helmet work has been limited to one page, so we can really spice things up by creating another page for the Cars4All site and pass a custom CSS class with the Layout component’s pageMeta prop.

First, we need to modify our Layout component. Note that since our Cars4All website will now consist of more than one page, we need to make it possible for site visitors to navigate between these pages: Gatsby’s Link component to the rescue!

Using the Link component is no more difficult than setting its to prop to the name of the file that makes up the page you want to link to. So if we want to create a page for the cars sold by Cars4All and we name the page file cars.js, linking to it is no more difficult than typing out <Link to="/cars/">Our Cars</Link>. When you are on the Our Cars page, it should be possible to navigate back to the index.js page, which we call Home. That means we need to add <Link to="/">Home</Link> to our navigation as well.

In the new Layout component code below, you can see that we are importing the Link component from Gatsby and that the previously empty unordered list in the header element is now populated with the links for our pages. The only thing left to do in the Layout component is add the following snippet:

<body className={pageMeta.customCssClass ? pageMeta.customCssClass : ''}/>

…to the <Helmet> code, which adds a CSS class to the document body if one has been passed with the pageMeta prop. Oh, and given that we are going to pass a CSS class, we do, of course, have to create one. Let’s head back to the src directory and create a new directory called css in which we create a file called main.css. Last, but not least, we have to import it into the Layout component, because otherwise our website will not know that it exists. Then add the following CSS to the file:

.slick {
  background-color: yellow;
  color: limegreen;
  font-family: "Comic Sans MS", cursive, sans-serif;
}

Now replace the code in src/components/layout.js with the new Layout code that we just discussed:

import React from "react" import Helmet from "react-helmet" import { Link } from "gatsby" import "../css/main.css"  export default ({ pageMeta, children }) => (   <>     <Helmet>       {/* Setting the language of your page does not get more difficult than this! */}       <html lang="en" />             {/* Add the customCssClass from our pageMeta prop to the document body */}            <body className={pageMeta.customCssClass ? pageMeta.customCssClass : ''}/>              <title>{`Cars4All | $  {pageMeta.title}`}</title>              {/* The charset, viewport and author meta tags will always have the same value, so we hard code them! */}       <meta charset="UTF-8" />       <meta name="viewport" content="width=device-width, initial-scale=1.0" />       <meta name="author" content="Bob Trustly" />        {/* The rest we set dynamically with props */}       <meta name="description" content={pageMeta.description} />              {/* We pass an array of keywords, and then we use the Array.join method to convert them to a string where each keyword is separated by a comma */}       <meta name="keywords" content={pageMeta.keywords.join(',')} />     </Helmet>     <div>       <header>         <h1>Cars4All</h1>         <nav>           <ul>             <li><Link to="/">Home</Link></li>             <li><Link to="/cars/">Our Cars</Link></li>           </ul>         </nav>         </header>       {children}       <footer>{`$  {new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer>     </div>   </> )

We are only going to add a custom CSS class to the document body in the cars.js page, so there is no need to make any modifications to the index.js page. In the src/pages/ directory, create a file called cars.js and add the code below to it.

import React from "react" import Layout from "../components/layout"  export default () =>    <Layout     pageMeta={{       title: "Our Cars",       keywords: <a href="">"toyota", "suv", "volvo"],       description: "We sell Toyotas, gas guzzlers and Volvos. If we don't have the car you would like, let us know and we will order it for you!!!",       customCssClass: "slick"     }}   >     <h2>Our Cars</h2>     <div>A car</div>     <div>Another car</div>     <div>Yet another car</div>     <div>Cars ad infinitum</div>   </Layout>

If you head on over to http://localhost:8000, you will see that you can now navigate between the pages. Moreover, when you land on the cars.js page, you will notice that something looks slightly off… Hmm, no wonder I call myself a web developer and not a web designer! Let’s open DevTools, toggle the document head and navigate back to the index.js page. The content is updated when changing routes!

The icing on the cake

If you inspect the source of your pages, you might feel a tad bit cheated. I promised a SSR React website, but none of our React Helmet goodness can be found in the source.

What was the point of my foisting Gatsby on you, you might ask? Well, patience, young padawan! Run gatsby build in the Terminal from the root of the site, followed by gatsby serve.

Gatsby will tell you that the site is now running on http://localhost:9000. Dash over there and inspect the source of your pages again. Ta-da, it’s all there! You now have a website that has all the advantages of a React SPA without giving up on SEO or integrations with third-party applications and whatnot. Gatsby is amazing, and it is my sincere hope that you will continue to explore what Gatsby has to offer.

On that note, happy coding!


Using the Platform

Tim Kadlec:

So much care and planning has gone into creating the web platform, to ensure that even as new features are added, they’re added in a way that doesn’t break the web for anyone using an older device or browser. Can you say the same for any framework out there? I don’t mean that to be perceived as throwing shade (as the kids say). Building the actual web platform requires a deeper level of commitment to these sorts of things out of necessity.

The platform (meaning using standard features built into browsers) might not have everything you need (it often won’t) and using those features will bring long-term resiliency to what you build in a way that a framework may not. The web evolves and very likely won’t break things. Frameworks evolve and very likely will break things.

Sorta evokes the story of MooTools and Smooshgate.



Learn to Make Your Site Inclusive, by Design

Accessibility is our job. We hear it all the time. But the truth is that it often takes a back seat to competing priorities, deadlines, and decisions from above. How can we solve that?

That’s where An Event Apart comes in. Making sites inclusive by design is just one of the many topics covered over three full days of sessions designed to inspire you and level up your skills while learning from 17 of today’s most talented front-end professionals.

Whether you’re on the East Coast, West Coast or somewhere in between, An Event Apart is conveniently located near you with conferences happening in San Francisco, Washington D.C., Seattle, Boston, Minneapolis and Orlando. In fact, there’s one happening in Denver right now!

And at An Event Apart, you don’t just learn from the best, you interact with them — at lunch, between sessions, and at the famous first-night Happy Hour party. Web design is more challenging than ever. Attend An Event Apart to be ready for anything the industry throws at you.

CSS-Tricks readers save $100 off any two or three days with code AEACP.

Register Today


Are There Random Numbers in CSS?

CSS allows you to create dynamic layouts and interfaces on the web, but as a language, it is static: once a value is set, it cannot be changed. The idea of randomness is off the table. Generating random numbers at runtime is the territory of JavaScript, not so much CSS. Or is it? If we factor in a little user interaction, we actually can generate some degree of randomness in CSS. Let’s take a look!

Randomization from other languages

There are ways to get some “dynamic randomization” using CSS variables as Robin Rendle explains in an article on CSS-Tricks. But these solutions are not 100% CSS, as they require JavaScript to update the CSS variable with the new random value.

We can use preprocessors such as Sass or Less to generate random values, but once the CSS code is compiled and exported, the values are fixed and the randomness is lost, as Jake Albaugh has demonstrated.

Why do I care about random values in CSS?

In the past, I’ve developed simple CSS-only apps such as a trivia game, a Simon game, and a magic trick. But I wanted to do something a little bit more complicated. I’ll leave a discussion about the validity, utility, or practicality of creating these CSS-only snippets for a later time.

Based on the premise that some board games can be modeled as finite state machines (FSM), I figured they could be represented with HTML and CSS. So I started developing a game of Snakes and Ladders (aka Chutes and Ladders). It is a simple game: the goal is to advance a pawn from the beginning to the end of the board while avoiding the snakes and trying to go up the ladders.

The project seemed feasible, but there was something that I was missing: rolling dice!

The roll of dice (along with the flip of a coin) is universally recognized as a randomizer. You roll the dice or flip the coin, and you get an unknown value each time.

Simulating a random dice roll

I was going to superimpose layers with labels, and use CSS animations to “rotate” and exchange which layer was on top. Something like this:

Simulation of how the layers animate on a browser

The code to mimic this randomization is not excessively complicated and can be achieved with an animation and different animation delays:

/* The highest z-index is the number of sides on the dice */
@keyframes changeOrder {
  from { z-index: 6; }
  to { z-index: 1; }
}

/* All the labels overlap by using absolute positioning */
label {
  animation: changeOrder 3s infinite linear;
  background: #ddd;
  cursor: pointer;
  display: block;
  left: 1rem;
  padding: 1rem;
  position: absolute;
  top: 1rem;
  user-select: none;
}

/* Negative delay so all parts of the animation are in motion */
label:nth-of-type(1) { animation-delay: -0.0s; }
label:nth-of-type(2) { animation-delay: -0.5s; }
label:nth-of-type(3) { animation-delay: -1.0s; }
label:nth-of-type(4) { animation-delay: -1.5s; }
label:nth-of-type(5) { animation-delay: -2.0s; }
label:nth-of-type(6) { animation-delay: -2.5s; }
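For context, the markup behind this is essentially a stack of identically positioned labels, one per die face. Here is a hypothetical sketch (the actual demo also wires the labels to inputs for the game logic):

<!-- Hypothetical markup: six overlapping labels, one per die face.
     The CSS above cycles their z-index so a different label sits
     on top every half second. -->
<div class="dice">
  <label>1</label>
  <label>2</label>
  <label>3</label>
  <label>4</label>
  <label>5</label>
  <label>6</label>
</div>

With six faces and a 3s animation, each label’s delay steps by -0.5s (3s / 6 faces), so at any given instant exactly one label is on top of the stack.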

The animation has been slowed down to allow easier interaction (but still fast enough to see the roadblock explained below). The pseudo-randomness is clearer, too.

See the Pen Demo of pseudo-randomly generated number with CSS by Alvaro Montoro (@alvaromontoro) on CodePen.

But then I hit a roadblock: I was getting random numbers, but sometimes, even when I was clicking on my “dice,” it was not returning any value.

I tried increasing the animation duration, and that seemed to help a bit, but I was still getting some unexpected values.

That’s when I did what most developers do when they find a roadblock they cannot resolve just by searching online: I asked other developers for help in the form of a StackOverflow question.

Luckily for me, the always resourceful Temani Afif came up with an explanation and a solution.

To simplify a little, the problem was that the browser only triggers the click/press event when the element that is active on mouse down is the same element that is active on mouse up.

Because of the rotating animation, the top label on mouse down was not the top label on mouse up, unless I did it fast or slow enough for the animation to circle around. That’s why increasing the animation times hid these issues.

The solution was to apply a position of “static” to break the stacking context, and use a pseudo-element like ::before or ::after with a higher z-index to occupy its place. This way, the active label would always be on top when the mouse went up.

/* The active label will be static and moved out of the window */
label:active {
  margin-left: 200%;
  position: static;
}

/* A pseudo-element of the label occupies all the space with a higher z-index */
label:active::before {
  content: "";
  position: absolute;
  top: 0;
  right: 0;
  left: 0;
  bottom: 0;
  z-index: 10;
}

Here is the demo with the solution in place and a faster animation time:

See the Pen Demo of pseudo-randomly generated number with CSS by Alvaro Montoro (@alvaromontoro) on CodePen.

After making this change, the one thing left was to create a small interface to draw a fake dice to click, and the CSS Snakes and Ladders was completed.

This technique has some obvious inconveniences:

  • It requires user input: a label must be clicked to trigger the “random number generation.”
  • It doesn’t scale well: it works great with small sets of values, but it is a pain for large ranges.
  • It’s not really random, but pseudo-random: a computer could easily predict which value will be generated at any given moment.

But on the other hand, it is 100% CSS (no need for preprocessors or other external helpers) and, for a human user, it can look 100% random.

And talking about hands… This method can be used not only for random numbers but for random anything. In this case, we used it to “randomly” pick the computer choice in a Rock-Paper-Scissors game:

See the Pen CSS Rock-Paper-Scissors by Alvaro Montoro (@alvaromontoro) on CodePen.


The Current State of Styling Selects in 2019

Best I could tell from the last time I compiled the most wished-for features of CSS, styling form controls was a major ask. Top 5, I’d say. And of the native form elements that people want to style, Greg Whitworth has some data that the <select> element is more requested than any other element — more than double the next element — and it’s the one developers most often customize in some way.

Developers clearly want to style select dropdowns.

You actually can, a little. Perhaps more than you realize.

The best crack at this out there comes from Scott Jehl over on the Filament Group blog. I’ll embed a copy here so it’s easy to see:

See the Pen select-css by Scott/Filament Group by Chris Coyier (@chriscoyier) on CodePen.

Notably, this is an entirely cross-browser solution. It’s not something limited to only the most progressive desktop browsers. There are some visual differences across browsers and platforms, but overall it’s pretty consistent and gives you a baseline from which to further customize it.
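The core idea, greatly simplified, is to suppress the native appearance and then draw your own arrow. Here is a rough sketch of that technique under my own assumptions (not Scott’s exact code; the arrow image is a hypothetical file):

select {
  /* Remove the native styling */
  -webkit-appearance: none;
  -moz-appearance: none;
  appearance: none;
  border: 1px solid #aaa;
  border-radius: 0.25em;
  /* Leave room on the right for the custom arrow */
  padding: 0.5em 2.5em 0.5em 0.75em;
  font-size: 1rem;
  /* Draw the arrow as a background image */
  background: #fff url("arrow.svg") no-repeat right 0.75em center;
}

/* IE and old Edge render their own arrow; hide it */
select::-ms-expand {
  display: none;
}

The real thing handles far more edge cases than this sketch, which is why it’s worth grabbing the genuine article.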

That’s just the “outside”

Open the select. Hmm, it looks and behaves like you did nothing to it at all.

Styling a <select> doesn’t do anything to the opened dropdown of items. (Screenshot from macOS Chrome)

Some browsers do let you style the inside, but it’s very limited. Any time I’ve gone down this road, I’ve had a bad time getting things cross-browser compliant.

Firefox letting me set the background of the dropdown and the color of a hovered option.

Greg’s data shows that only 14% (third place) of developers found styling the outside to be the most painful part of select elements. I’m gonna steal his chart because it’s absolutely fascinating:

Frustration | % | Count
Not being able to create a good user experience for searching within the list | 27.43% | 186
Not being able to style the <option> element to the extent that you needed to | 17.85% | 121
Not being able to style the default state (dropdown arrow, etc.) | 14.01% | 95
Not being able to style the pop-up window on desktop (e.g. the border, drop shadows, etc.) | 11.36% | 77
Insertion of content beyond simple text in the <select> control or its <option>s | 11.21% | 76
Insertion of arbitrary HTML content in an <option> element | 7.82% | 53
Not being able to create distinctive unselected/placeholder style and behavior | 3.39% | 23
Being able to generate new options from a large dataset while the popup is open | 3.10% | 21
Not being able to style the currently selected <option>(s) to the extent you needed to | 1.77% | 12
Not being able to style the pop-up window on mobile | 1.03% | 7
Being able to have the options automatically repeat on scroll (i.e., if you have a list of options 1 – 100, as you reach 100 rather than having the user scroll back to the top, have 1 show up below 100) | 1.03% | 7

Boiled down, the most painful parts of styling selects are:

  • Search
  • Styling the open dropdown, including the individual options, including more than just text
  • Updating the element without closing it
  • Styling for cases where “nothing” is selected and when an item is selected

I’m surprised multi-select didn’t make the cut. Maybe it’s not on the table for <select> since it wouldn’t be backwards-compatible?

Browser evolution

Edge recently announced they are improving the look of form controls, but no word just yet on standards or how to customize them.

Select styles in Edge/Chromium before (left) and after (right)

It seems like there is good momentum, though, and this is definitely a space to watch if you want to follow along with all this progress (I know I will!).


A Business Case for Dropping Internet Explorer

The distance between Internet Explorer (IE) 11 and every other major browser is an increasingly gaping chasm. Adding support for a technologically obsolete browser adds an inordinate amount of time and frustration to development. Testing becomes onerous. Bug-fixing looms large. Developers have wanted to abandon IE for years, but is it now financially prudent to do so?

First off, we’re talking about a dead browser

Development of IE came to an end in 2015. Microsoft Edge was released as its replacement, with Microsoft announcing that “the latest features and platform updates will only be available in Microsoft Edge”.

Edge was a massive improvement over IE in every respect. Even so, Edge was itself so far behind in implementing web standards that Microsoft recently revealed that they were rebuilding Edge from the ground up using the same technology that powers Google Chrome.

Yet here we are, discussing whether to support Edge’s ancient relative. Internet Explorer is so bad that a Principal Program Manager at the company published a piece entitled The perils of using Internet Explorer as your default browser on the official Microsoft blog.

Newspaper headlines from 2015
Publications have spelled the fall of IE since 2015.

Browsers are moving faster than ever before. Consider everything that has happened since 2015. CSS Grid. Custom properties. IE11 will never implement any new features. It’s a browser frozen in time; the web has moved on.

It blocks opportunities and encourages inefficiency

The landscape of browsers has also changed dramatically since Microsoft deprecated IE in 2015. Google developer advocate Sam Thorogood has compiled a list of all the features that are supported by every browser other than IE. Once the new Chromium version of Edge is released, this list will further increase. Taken together, it’s a gargantuan feature set, comprising new HTML elements, new CSS properties and new JavaScript features. Many modern JavaScript features can be made compatible with legacy browsers through the use of polyfills and transpilation. Any CSS feature added to the web over the last four years, however, will fail to work in IE altogether.

Let’s dig a little deeper into the features we have today and how they are affected by IE11. Perhaps most notable of all, after decades of hacking layouts on the web, we finally have CSS grid, which massively simplifies responsive layout. Together with CSS custom properties, object-fit, display: contents and intrinsic sizing, they’re all examples of useful CSS features that are likely to leave a website looking broken if they’re not supported. We’ve had some major additions to CSS over the last five years. It’s the cumulative weight of so many things that undermines IE as much as one killer feature.
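To make that concrete, here is the kind of layout Grid gives us in a few lines, as a small sketch (the class name is made up). IE11 only understands an older, prefixed grid syntax and supports none of this:

/* A responsive card layout with no framework and no media queries.
   IE11 does not support auto-fill, minmax() or gap. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  gap: 1rem;
}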

While many additions to the web over the last five years have been related to layout and styling, we’ve also had huge steps forwards in functionality, such as Progressive Web Apps. Not every modern API is unusable for websites that need to stay backwards compatible. Most can be wrapped in an if statement.

if ('serviceWorker' in navigator) {
  // do some stuff with a service worker
} else {
  // ???
}

You will, however, be delivering a very different experience to IE users. Increasingly, support for IE will limit the choice of tools that are available as library and frameworks utilize modern features.

Take this announcement from Evan You about the release of Vue 3, for example:

The new codebase currently targets evergreen browsers only and assumes baseline native ES2015 support.

The Vue 3 codebase makes use of proxies — a JavaScript feature that cannot be transpiled. MobX is another popular framework that also relies on proxies. Both projects will continue to maintain backwards-compatible versions, but they’ll lack the performance improvements and API niceties gained from dropping IE. Pika, a great new approach to package management, requires support for JavaScript modules, which are not supported in IE. Then there is shadow DOM — a standardized part of the modern web platform that is unlikely to degrade gracefully.
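To see why proxies cannot be transpiled, consider this small sketch (hypothetical code, not Vue’s or MobX’s). The interception happens at runtime inside the engine, and there is no ES5 construct that can emulate it:

// A Proxy traps property writes, so a framework could, for example,
// schedule a re-render whenever state changes.
const state = new Proxy({ count: 0 }, {
  set(target, key, value) {
    target[key] = value;
    console.log(`${String(key)} is now ${value}`); // react to the change
    return true; // signal that the assignment succeeded
  }
});

state.count++; // logs "count is now 1" without any declared getter/setter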

Supporting it takes tremendous effort

When assessing how much extra work is required to provide backwards compatibility for a deprecated browser like IE11, the long list of unimplemented features is only part of the problem. Browsers are incredibly complex pieces of software and, despite web standards, they are inconsistent. IE has long been the most bug-ridden browser and the one most at odds with web standards. Flexbox (a technology that developers have been using since 2013), for example, is listed on caniuse.com as having only partial support in IE due to the “large amount of bugs present.”

IE also offers by far the worst debugging experience — with only a primitive version of DevTools. This makes fixing bugs in IE undoubtedly the most frustrating part of being a developer, and it can be massively time-consuming — taking time away from organizations trying to ship features.

There’s a difference between support — making sure something is functional and looks good enough — versus optimization, where you aim to provide the best experience possible. This does, however, create a potentially confusing grey area. There could be differences of opinion on what constitutes good enough for IE. This comment about IE9 from Dave Rupert is still relevant:

The line for what is considered “broken” is fuzzy. How visually broken does it have to be in order to be functionally broken? I look for cheap fixes, but this is compounded by the fact the offshore QA team doesn’t abide in that nuance, a defect is a defect, which gets logged and assigned to my inbox and pollutes the backlog…Whether it’s polyfills, rogue if-statements, phantom styles, or QA kickbacks; there are costs and technical debt associated with rendering this site on an ever-dwindling sliver of browsers.

Taking the approach of supporting IE functionally, even if it’s not to the nth degree, still confines you to polyfilling, transpiling, prefixing and testing on top of everything else.

It’s already been abandoned by many top websites

Website logos

Popular websites that have officially dropped support for IE include YouTube, GitHub, Meetup, Slack, Zendesk, Trello, Atlassian, Discord, Spotify, Behance, Wix, Huddle, WhatsApp, Google Earth and Yahoo. Even some of Microsoft’s own products, like Teams, have severely reduced support for IE.

WhatsApp’s unsupported browser screen

Twitter displays a banner informing IE users that they will not receive the best experience and redirects them to a much older version of the Twitter website. Monzo, Apple Music and Stripe, disruptive companies pushing the best in web design, break horribly in IE and forgo a warning banner altogether.

Stripe website viewed in Internet Explorer
Stripe offers no support or warning.

Why the new Chromium-powered Edge browser matters

IE usage has been on a slower downward trend following an initial dramatic fall. There’s one primary reason the browser continues to hang on: ancient business applications that don’t work in anything else. Plenty of large companies still use applications that rely on APIs that were never standardized and are now obsolete. Thankfully, the new Edge looks set to solve this issue. In a recent post, the Microsoft Edge Team explained how these companies will finally be able to abandon IE:

The team designed Internet Explorer mode with a goal of 100% compatibility with sites that work today in IE11. Internet Explorer mode appears visually like it’s just a part of the next Microsoft Edge…By leveraging the Enterprise mode site list, IT professionals can enable users of the next Microsoft Edge to simply navigate to IE11-dependent sites and they will just work.

After using the beta version for several months, I can say it’s a genuinely great browser. Dare I say, better than Google Chrome? Microsoft are already pushing it hard. Edge is the default browser for Windows 10. Hundreds of millions of devices still run earlier versions of the operating system, on which Edge has not been available. The new Chromium-powered version will bring support to both Windows 7 and 8. For users stuck on old devices with old operating systems, there is no excuse for using IE anymore. Windows 7, still one of the world’s most popular operating systems, is itself due for end-of-life in January 2020, which should also help drive adoption of Edge when individuals and businesses upgrade to Windows 10.

In other words, it’s the perfect time to drop support.

Performance costs

All current browsers support ECMAScript 2015 (the 2015 edition of JavaScript, also known as ES6) — and have done so for quite some time. Transpiling JavaScript down to an older (and slower) version is still common across the industry, but at this point in time is needed only for Internet Explorer. This process, allowing developers to write modern syntax that still works in IE, negatively impacts performance. Philip Walton, an engineer at Google, had this to say on the subject:

Larger files take longer to download, but they also take longer to parse and evaluate. When comparing the two versions from my site, the parse/eval times were also consistently about twice as long for the legacy version. […] The cost of shipping lots of unneeded JavaScript to low-end mobile browsers can be significant! We (on the Chrome team) have seen numerous occurrences of polyfill bloat adding seconds to the total startup time of websites on low-end mobile devices.

It’s possible to take a differential serving approach to get around this issue, but it does add a small amount of complexity to build tooling. I’m not sure it’s worth bothering when looking at the entire picture of what it already takes to support IE.
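For reference, the simplest form of differential serving is the module/nomodule pattern, sketched here with made-up bundle names:

<!-- Modern browsers load the untranspiled ES2015+ bundle... -->
<script type="module" src="/js/app.modern.js"></script>

<!-- ...while IE11 doesn't understand type="module", skips the file
     above, ignores the nomodule attribute and runs the transpiled
     bundle instead. -->
<script nomodule src="/js/app.legacy.js"></script>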

Yet another example: IE requires a massive amount of polyfills if you’re going to utilize modern APIs. This normally involves sending additional, unnecessary code to other browsers in the process. An alternative approach, polyfill.io, costs an additional, blocking HTTP request — even for modern browsers that have no need for polyfills. Both of these approaches are bad for performance.

As for CSS, modern features like CSS grid decrease the need for bulky frameworks like Bootstrap. That’s lots of extra bytes we’re unable to shave off if we have to support IE. Other modern CSS properties can replace what’s traditionally done with JavaScript in a way that’s less fragile and more performant. It would be a boon for both performance and cost to take advantage of them.

Let’s talk money

One (overly simplistic) calculation would be to compare the cost of developer time spent fixing IE bugs, plus the productivity lost working around IE issues, against the revenue from IE users. Unless you’re a large company generating significant revenue from IE, it’s an easy decision. For big corporations, the stakes are much higher. Websites at the scale of Amazon, for example, may generate tens of millions of dollars from IE users, even if they represent less than 1% of total traffic.

I’d argue that any site at such scale would benefit more by dropping support, thanks to reduced load times and bounce rates, both of which matter even more to revenue. For large companies, the question isn’t whether it’s worth spending a bit of extra development time to assure backwards compatibility. The question is whether you risk degrading the experience for the vast majority of users by compromising performance and the opportunities offered by modern features. By providing no incentive for developers to care about new browser features, they’re being held back from innovating and building the best product they can.

It’s a massively valuable asset to have developers who are so curious and inquisitive that they explore and keep up with new technology. By supporting IE, you’re effectively disengaging developers from what’s new. It’s dispiriting to attempt to keep up with what’s new only to learn about features we can’t use. But this isn’t about putting developer experience before user experience. When you improve developer experience, developers are enabled to increase their productivity and ship features — features that users want.

Web development is hard

It was reported earlier this year that the car rental company Hertz was suing Accenture for tens of millions of dollars. Accenture is a Fortune Global 500 company worth billions of dollars. Yet Hertz alleged that, despite an eye-watering price tag, they “never delivered a functional site or mobile app.”

According to The Register:

Among the most mind-boggling allegations in Hertz’s filed complaint is that Accenture didn’t incorporate a responsive design… Despite having missed the deadline by five months, with no completed elements and weighed down by buggy code, Accenture told Hertz it would cost an additional $10m – on top of the $32m it had already been paid – to finish the project.

The Accenture/Hertz affair is an example of stunning ineptitude but it was also a glaring reminder of the fact that web development is hard. Yet, most companies are failing to take advantage of things that make it easier. Microsoft, Google, Mozilla and Apple are investing massive amounts of money into developing new browser features for a reason. Improvements and innovations that have come to browsers in recent years have expanded what is possible to deliver on the web platform while making developers’ lives easier.

Move fast and ship things

The development industry loves terms — like agile and disruptive — that imply light-footed innovation. Yet rather than focusing on shipping features and creating a great experience for the vast bulk of users, we’re catering to a single outdated legacy browser. All the companies I’ve worked for have constantly talked about technical debt. The weight of legacy code is accurately perceived as something that slows down developers. By failing to take advantage of what modern browsers have to offer, the code we write today is legacy code the moment it is written. By writing for the modern web, you don’t only increase productivity today but also create code that’s easier to maintain in the future. From a long-term perspective, it’s the right decision.

Recruitment and retention

Developer happiness won’t be viewed as important to the bottom line by some business stakeholders. However, recruiting good engineers is notoriously difficult. Average tenure is low compared to other industries. Nothing can harm developer morale more than a day of IE debugging. In a survey of 76,118 developers conducted by Mozilla, “Having to support specific browsers (e.g. IE11)” was ranked as the most frustrating thing in web development. “Avoiding or removing a feature that doesn’t work across browsers” came third, while testing across different browsers reached fourth place. By minimising these frustrations, ending support for IE can help with engineer recruitment and retention.

IE users can still access your website

We live in a multi-device world. Some users will be lucky enough to have a computer provided by their employer, a personal laptop and a tablet. Smartphones are ubiquitous. If an IE user runs into problems using your site, they can complete the transaction on another device. Or they could open a different browser, as Microsoft Edge comes preinstalled on Windows 10.

The reality of cross-browser testing

If you have a thorough and rigorous cross-browser testing process that always gets followed, congratulations! This is rare in my experience. Plenty of companies only test in Chrome. Making cross-browser testing less onerous makes it more likely that developers and stakeholders will actually do it. Eliminating all bugs in the browsers that are actually popular is far more worthwhile, monetarily, than catering to IE.

When do you plan to drop IE support?

Inevitably, your own analytics will be the determining factor in whether dropping IE support is sensible for you. Browser usage varies massively around the world — from almost 10% in South Korea to well below 1% in many other places. Even if you deem today as being too soon for your particular site, be sure to reassess your analytics after the new Microsoft Edge lands.


Animated Position of Focus Ring

Maurice Mahan created FocusOverlay, a “library for creating overlays on focused elements.” That description is a little confusing as you don’t need a library to create focus styles. What the library actually does is animate the focus rings as focus moves from one element to another. It’s based on the same idea as Flying Focus.

I’m not strong enough in my accessibility knowledge to give a definitive answer if this is a great idea or not, but my mind goes like this:

  • It’s a neat effect.
  • I can imagine it being an accessibility win since, while the page will scroll to make sure the next focused element is visible, it doesn’t otherwise help you see where that focus has gone. Movement that directs attention toward the next focused element may help make it more clear.
  • I can imagine it being harmful to accessibility in that it is motion that isn’t usually there and could be surprising.

On that last point, you could conditionally load it depending on a user’s motion preference.

The library is on npm, but is also available as direct linkage thanks to UNPKG. Let’s look at using the URLs to the resources directly to illustrate the concept of conditional loading:

<link
  rel="stylesheet"
  href="//unpkg.com/focus-overlay@latest/dist/focusoverlay.css"
  media="(prefers-reduced-motion: no-preference)"
/>

<script>
  const mq = window.matchMedia("(prefers-reduced-motion: no-preference)");

  if (mq.matches) {
    let script = document.createElement("script");
    script.src = "//unpkg.com/focus-overlay@latest/dist/focusoverlay.js";
    document.head.appendChild(script);
  }
</script>

The JavaScript is also a consideration: 11.5 KB (4.2 KB compressed), with another 453 B (290 B compressed) of CSS. You’ve always got to factor that in, as performance and accessibility are related concepts.

Performance isn’t just script size either. Looking through the code, it looks like the focus ring is created by appending a <div> to the <body> that has a super high z-index value so it can be seen and pointer-events: none so it doesn’t interfere. Then it is absolutely positioned with top and left values and sized with width and height. It looks like new positional information is calculated and then applied to this div, and CSS handles the movement. Last I understood, those aren’t particularly performant CSS properties to animate, so I would think a future feature here would be to use the FLIP animation technique to take advantage of only animating transforms.
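For the curious, a FLIP version might look something like this rough sketch (hypothetical code, not FocusOverlay’s): measure the First and Last positions, Invert the difference, then Play the animation using only transform.

// Move the ring with compositor-friendly transforms instead of
// animating top/left/width/height directly.
function moveRing(ring, target) {
  const first = ring.getBoundingClientRect();  // F: where the ring is now
  const last = target.getBoundingClientRect(); // L: where it needs to go

  // Jump the ring to its final position instantly
  ring.style.top = `${last.top + window.scrollY}px`;
  ring.style.left = `${last.left + window.scrollX}px`;
  ring.style.width = `${last.width}px`;
  ring.style.height = `${last.height}px`;

  // I: compute the transform that visually undoes the jump
  const dx = first.left - last.left;
  const dy = first.top - last.top;
  const sx = first.width / last.width;
  const sy = first.height / last.height;

  // P: play from the inverted state back to none, animating only transform
  ring.style.transformOrigin = "top left";
  ring.animate(
    [
      { transform: `translate(${dx}px, ${dy}px) scale(${sx}, ${sy})` },
      { transform: "none" }
    ],
    { duration: 200, easing: "ease-out" }
  );
}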


Bidirectional Horizontal Rules in CSS

Say you have a <blockquote> and the design calls for a thick border along the left side. Well, you might not necessarily mean left side, but actually mean on the side of the start of the text.

That’s exactly what CSS logical properties are meant to address, and Hussein Al Hammad has a nice article laying out some use cases, including the blockquote thing I mentioned above.

By using logical properties, you don’t have to mess around with manually writing selectors including [dir="rtl"] or needing to be aware of writing modes and such. The box model stuff (borders, padding, margin…) will adjust where it needs to be.

Hussein’s blockquote is a good baby-step example for understanding all this:

blockquote {
  /* Rather than... */
  border-left: 4px solid #aaa;
  padding-left: 1.75rem;

  /* You'd do... */
  border-inline-start: 4px solid #aaa;
  padding-inline-start: 1.75rem;
}

See the Pen Logical properties demo: blockquote by Hussein Al Hammad (@hus_hmd) on CodePen.

Support is pretty good.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome | Opera | Firefox | IE | Edge | Safari
69 | 62* | 41 | No | 76 | 12.1

Mobile / Tablet

iOS Safari | Opera Mobile | Opera Mini | Android | Android Chrome | Android Firefox
12.2-12.4 | 46* | No | 76 | 76 | 68

One thing that threw me off in the article is the term “Horizontal Rules.” First I imagined the <hr> element. Then I imagined wanting to reverse the direction of the design with logical properties. Usually an <hr> is just a line so horizontal direction doesn’t matter, but let’s say it’s like a shorter line along the edge where new lines start after wrapping.

We could draw the shorter line with backgrounds that cover different parts of the box model, then use logical properties where the padding applies. This is a pretty unique edge case, but it’s fun to fiddle with:

See the Pen <hr>s that are direction aware kinda by Chris Coyier (@chriscoyier) on CodePen.
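Here is one way to sketch the idea in CSS (my own simplified take, not the exact demo code): paint the line only over the content box, then let a logical padding decide which side it hugs.

/* A shorter "rule" that hugs the start edge in both LTR and RTL.
   The background is clipped to the content box, and the logical
   padding pushes that box toward the inline-start side. */
hr {
  border: 0;
  height: 4px;
  background-color: #aaa;
  background-clip: content-box; /* only paint where the content box is */
  padding-inline-end: 60%;      /* shrink the painted area from the end side */
}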

It would be even easier if we had logical gradients.
