
Getting the Most Out of Variable Fonts on Google Fonts

I have spent the past several years working (alongside a bunch of super talented people) on a font family called Recursive Sans & Mono, and it just launched officially on Google Fonts!

Wanna try it out super fast? Here’s the embed code to use the full Recursive variable font family from Google Fonts (but you will get a lot more flexibility & performance if you read further!)

<link href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap" rel="stylesheet">

Recursive is made for code, websites, apps, and more.
Recursive Mono has both Linear and Casual styles for different “voices” in code, along with cursive italics if you want them — plus a wider weight range for monospaced display typography.

Recursive Sans is proportional, but unlike most proportional fonts, letters maintain the same width across styles for more flexibility in UI interactions and layout.

I started Recursive as a thesis project for a type design master’s program at KABK TypeMedia, and when I launched my type foundry, Arrow Type, I was subsequently commissioned by Google Fonts to finish and release Recursive as an open-source, OFL font.

You can see Recursive and learn more about what it can do at recursive.design.

Recursive is made to be a flexible type family for both websites and code, where its main purpose is to give developers and designers some fun & useful type to play with, combining fresh aesthetics with the latest in font tech.

First, a necessary definition: variable fonts are font files that fit a range of styles inside one file, usually in a way that allows the font user to select a style from a fluid range of styles. These stylistic ranges are called variable axes, and can cover parameters like font weight, font width, optical size, and font slant, or more creative things. In the case of Recursive, you can control the “Monospacedness” (from Mono to Sans) and “Casualness” (between a normal, linear style and a brushy, casual style). Each type family may have one or more of its own axes and, like many features of type, variable axes are another design consideration for font designers.

You may have seen that Google Fonts has started adding variable fonts to its vast collection. You may have read about some of the awesome things variable fonts can do. But, you may not realize that many of the variable fonts coming to Google Fonts (including Recursive) have a lot more stylistic range than you can get from the default Google Fonts front end.

Because Google Fonts has a huge range of users — many of them new to web development — it is understandable that they’re keeping things simple by only showing the “weight” axis for variable fonts. But, for fonts like Recursive, this simplification actually leaves out a bunch of options. On the Recursive page, Google Fonts shows visitors eight styles, plus one axis. However, Recursive actually has 64 preset styles (also called named instances), and a total of five variable axes you can adjust (which open up a slew of potential custom styles).

Recursive can be divided into what I think of as four “subfamilies.” The part shown by Google Fonts is the simplest, proportional (sans-serif) version. The four Recursive subfamilies each have a range of weights, plus Italics, and can be categorized as:

  • Sans Linear: A proportional, “normal”-looking sans-serif font. This is what gets shown on the Google Fonts website.
  • Sans Casual: A proportional “brush casual” font
  • Mono Linear: A monospace “normal” font
  • Mono Casual: A monospace “brush casual” font

This is probably better to visualize than to describe in words. Here are two tables (one for Sans, the other for Mono) showing the 64 named instances:

But again, the main Google Fonts interface only provides access to eight of those styles, plus the Weight axis:

Recursive has 64 preset styles — and many more when using custom axis settings — but Google Fonts only shows eight of the preset styles, and just the Weight axis of the five available variable axes.

Not many variable fonts today have more than a Weight axis, so this is an understandable UX choice in some sense. Still, I hope they add a little more flexibility in the future. As a font designer & type fan, seeing the current weight-only approach feels more like an artificial flattening than true simplification — sort of like if Google Maps were to “simplify” maps by excluding every road that wasn’t a highway.

Luckily, you can still access the full potential of variable fonts hosted by Google Fonts: meet the Google Fonts CSS API, version 2. Let’s take a look at how you can use this to get more out of Recursive.

But first, it is helpful to know a few things about how variable fonts work.

How variable fonts work, and why it matters

If you’ve ever worked with photos on the web then you know that you should never serve a 9,000-pixel JPEG file when a smaller version will do. Usually, you can shrink a photo down using compression to save bandwidth when users download it.

There are similar considerations for font files. You can often reduce the size of a font dramatically by subsetting the characters included in it (a bit like cropping pixels to just leave the area you need). You can further compress the file by converting it into a WOFF2 file (which is a bit like running a raster image file through an optimizer tool like ImageOptim). Vendors that host fonts, like Google Fonts, will often do these things for you automatically.

Now, think of a video file. If you only need to show the first 10 seconds of a 60-second video, you could trim the last 50 seconds to have a much smaller video file.

Variable fonts are a bit like video files: they have one or more ranges of information (variable axes), and those can often either be trimmed down or “pinned” to a certain location, which helps to reduce file size. 

Of course, variable fonts are nothing like video files. Fonts record each letter shape as vectors (similar to how SVGs store shape information). Variable fonts have multiple “source locations” which are like keyframes in an animation. To go between styles, the control points that make up letters are mathematically interpolated between their different source locations (also called deltas). A font may have many sets of deltas (at least one per variable axis, but sometimes more). To trim a variable font, then, you must trim out unneeded deltas.

As a specific example, the Casual axis in Recursive takes letterforms from “Linear” to “Casual” by interpolating vector control points between two extremes: basically, a normal drawing and a brushy drawing. The ampersand glyph animation below shows the mechanics in action: control points draw rounded corners at one extreme and shift to squared corners on the other end.

Generally, each added axis doubles the number of drawings that must exist to make a variable font work. Sometimes the number is more or less – Recursive’s Weight axis requires 3 locations (tripling the number of drawings), while its Cursive axis doesn’t require extra locations at all, but actually just activates different alternate glyphs that already exist at each location. But, the general math is: if you opt into fewer axes of a variable font, you will usually get a smaller file.

When using the Google Fonts API, you are actually opting into each axis. This way, instead of starting with a big file and whittling it down, you get to pick and choose the parts you want.

Variable axis tags

If you’re going to use the Google Fonts API, you first need to know about font axis abbreviations so you can use them yourself.

Variable font axes have abbreviations in the form of four-letter “tags.” These are lowercase for industry-standard axes and uppercase for axes invented by individual type designers (also called “custom” or “private” axes). 

There are currently five standard axes a font can include: 

  • wght – Weight, to control lightness and boldness
  • wdth – Width, to control overall letter width
  • opsz – Optical Size, to control adjustments to design for better readability at various sizes
  • ital – Italic, generally to switch between separate upright/italic designs
  • slnt – Slant, generally to control upright-to-slanted designs with intermediate values available

Custom axes can be almost anything. Recursive includes three of them — Monospace (MONO), Casual (CASL), and Cursive (CRSV)  — plus two standard axes, wght and slnt.

Google Fonts API basics

When you configure a font embed from the Google Fonts interface, it gives you a bit of HTML or CSS which includes a URL, and this ultimately calls in a CSS document that includes one or more @font-face rules. This includes things like font names as well as links to font files on Google servers.
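If you open that CSS URL directly, you’ll see rules along these lines. This is a simplified sketch, not the literal response; the real CSS contains one @font-face block per character subset, with actual Google-hosted file URLs and unicode-range values:

/* latin (simplified sketch; real URLs and ranges come from Google's servers) */
@font-face {
  font-family: 'Recursive';
  font-style: normal;
  font-weight: 300 1000; /* a range like this appears if variable weight was requested */
  font-display: swap;
  src: url(https://fonts.gstatic.com/s/recursive/...) format('woff2');
  unicode-range: U+0000-00FF, U+0131, U+0152-0153;
}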

This URL is actually a way of calling the Google Fonts API, and has a lot more power than you might realize. It has a few basic parts: 

  1. The main URL, specifying the API (https://fonts.googleapis.com/css2)
  2. Details about the fonts you are requesting in one or more family parameters
  3. A font-display property setting in a display parameter

As an example, let’s say we want the regular weight of Recursive (in the Sans Linear subfamily). Here’s the URL we would use with our CSS @import:

@import url('https://fonts.googleapis.com/css2?family=Recursive&display=swap');

Or we can link it up in the <head> of our HTML:

<link href="https://fonts.googleapis.com/css2?family=Recursive&display=swap" rel="stylesheet">

Once that’s in place, we can start applying the font in CSS:

body {
  font-family: 'Recursive', sans-serif;
}

There is a default value for each axis:

  • MONO 0 (Sans/proportional)
  • CASL 0 (Linear/normal)
  • wght 400 (Regular)
  • slnt 0 (upright)
  • CRSV 0 (Roman/non-cursive lowercase)
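In other words, a bare family=Recursive request serves that single default instance. Conceptually, it’s the same as loading the full family and pinning every axis to its default, as in this illustrative CSS:

/* Conceptually what a bare `family=Recursive` request gives you */
body {
  font-family: 'Recursive', sans-serif;
  font-weight: 400;
  font-variation-settings: 'MONO' 0, 'CASL' 0, 'slnt' 0, 'CRSV' 0;
}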

Choose your adventure: styles or axes

The Google Fonts API gives you two ways to request portions of variable fonts:

  1. Listing axes and the specific non-default values you want from them
  2. Listing axes and the ranges you want from them

Getting specific font styles

Font styles are requested by adding parameters to the Google Fonts URL. To keep the defaults on all axes but get a Casual style, you could make the query Recursive:CASL@1 (this will serve Recursive Sans Casual Regular). To make that Recursive Mono Casual Regular, specify two axes before the @ and then their respective values (but remember, custom axes have uppercase tags):

https://fonts.googleapis.com/css2?family=Recursive:CASL,MONO@1,1&display=swap

To request both Regular and Bold, you would simply update the family call to Recursive:wght@400;700, adding the wght axis and specific values on it:

https://fonts.googleapis.com/css2?family=Recursive:wght@400;700&display=swap
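Once loaded, those two styles map onto the standard font-weight values you’d expect in CSS:

body {
  font-family: 'Recursive', sans-serif;
  font-weight: 400; /* the Regular style */
}

strong {
  font-weight: 700; /* the Bold style */
}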

A very helpful thing about Google Fonts is that you can request a bunch of individual styles from the API, and wherever possible, it will actually serve variable fonts that cover all of those requested styles, rather than separate font files for each style. This is true even when you are requesting specific locations, rather than variable axis ranges — if they can serve a smaller font file for your API request, they probably will.

As variable fonts can be trimmed more flexibly and efficiently in the future, the files served for given API requests will likely get smarter over time. So, for production sites, it may be best to request exactly the styles you need.

Where it gets interesting, however, is that you can also request variable axes. That allows you to retain a lot of design flexibility without changing your font requests every time you want to use a new style.

Getting a full variable font with the Google Fonts API

The Google Fonts API seeks to make fonts smaller by having users opt into only the styles and axes they want. But, to get the full benefits of variable fonts (more design flexibility in fewer files), you should use one or more axes. So, instead of requesting single styles with Recursive:wght@400;700, you can instead request that full range with Recursive:wght@400..700 (changing from the ; to .. to indicate a range), or even extending to the full Recursive weight range with Recursive:wght@300..1000 (which adds very little file size, but a whole lot of extra design oomph).

You can add additional axes by listing them alphabetically (with lowercase standard axes first, then uppercase custom axes) before the @, then specifying their values or ranges after that in the same order. For instance, to add the MONO axis and the wght axis, you could use Recursive:wght,MONO@300..1000,0..1 as the font query.

Or, to get the full variable font, you could use the following URL:

https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap

Of course, you still need to put that into an HTML link, like this:

<link href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap" rel="stylesheet">

Customizing it further to balance flexibility and file size

While it can be tempting to use every single axis of a variable font, it’s worth remembering that each additional axis adds to the overall file size. So, if you really don’t expect to use an axis, it makes sense to leave it off. You can always add it later.

Let’s say you want Recursive’s Mono Casual styles in a range of weights. You could use Recursive:wght,CASL,MONO@300..1000,1,1 like this:

<link href="https://fonts.googleapis.com/css2?family=Recursive:wght,CASL,MONO@300..1000,1,1&display=swap" rel="stylesheet">

You can, of course, add multiple font families to an API call with additional family parameters. Just be sure that the fonts are alphabetized by family name.

<link href="https://fonts.googleapis.com/css2?family=Inter:slnt,wght@-10..0,100..900&family=Recursive:wght,CASL,MONO@300..1000,1,1&display=swap" rel="stylesheet">

Using variable fonts

The standard axes can all be controlled with existing CSS properties. For instance, if you have a variable font with a weight range, you can specify a specific weight with font-weight: 425;. A specific Slant can be requested with font-style: oblique 9deg;. All axes can be controlled with font-variation-settings. So, if you want a Mono Casual very-heavy style of Recursive (assuming you have called the full family as shown above), you could use the following CSS:

body {
  font-weight: 950;
  font-variation-settings: 'MONO' 1, 'CASL' 1;
}

Something good to know: font-variation-settings is much nicer to use along with CSS custom properties.

Another useful thing to know is that, while you should be able to activate slant with font-style: italic; or font-style: oblique Xdeg;, browser support for this is inconsistent (at least at the time of this writing), so it is useful to use font-variation-settings for the Slant axis, as well.
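Here’s one way to combine those two tips. If you set font-variation-settings once on a universal selector and feed it custom properties, each element resolves its own axis values, so you can override a single axis without restating the whole list. (The selectors and custom property names below are my own illustration, assuming you’ve loaded the full axis ranges as shown earlier.)

:root {
  --mono: 0;
  --casl: 0;
  --slnt: 0;
  --crsv: 0;
}

* {
  font-variation-settings:
    'MONO' var(--mono),
    'CASL' var(--casl),
    'slnt' var(--slnt),
    'CRSV' var(--crsv);
}

code {
  --mono: 1; /* switch code to the Mono subfamily */
}

em {
  --slnt: -15; /* slant, without resetting the other axes */
  --crsv: 1;   /* and use cursive letterforms */
}

Because font-variation-settings declarations don’t merge (a later declaration replaces the whole list), the custom property indirection is what keeps the em override from accidentally resetting MONO and CASL back to their defaults.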

You can read more specifics about designing with variable fonts at VariableFonts.io and in the excellent collection of CSS-Tricks articles on variable fonts.

Nerdy notes on the performance of variable fonts

If you were to use all 64 preset styles of Recursive as separate WOFF2 files (with their full, non-subset character set), it would be a total of about 6.4 MB. By contrast, you could have that much stylistic range (and everything in between) at just 537 KB. Of course, that is a slightly absurd comparison — you would almost never actually use 64 styles on a single web page, especially not with their full character sets (and if you do, you should use subsets and unicode-range).

A better comparison is Recursive with one axis range versus styles within that axis range. In my testing, a Recursive WOFF2 file that’s subset to the “Google Fonts Latin Basic” character set (including only characters to cover English and western European languages), including the full 300–1000 Weight range (and all other axes “pinned” to their default values) is 60 KB. Meanwhile, a single style with the same subset is 25 KB. So, if you use just three weights of Recursive, you can save about 15 KB by using the variable font instead of individual files.

The full variable font as a subset WOFF2 clocks in at 281 KB, which is kind of a lot for a font, but not so much if you compare it to the weight of a big JPEG image. So, assuming that individual styles are about 25 KB each, if you plan to use more than 11 styles, you would be better off using the variable font.

This kind of math is mostly an academic exercise for two big reasons:

  1. Variable fonts aren’t just about file size. The much bigger advantage is that they allow you to just design, picking the exact font weights (or other styles) that you want. Is a font looking a little too light? Bump up the font-weight a bit, say from 400 to 425!
  2. More importantly (as explained earlier), if you request variable font styles or axes from Google Fonts, they take care of the heavy lifting for you, sending whatever fonts they deem the most performant and useful based on your API request and the browsers visitors access your site from.

So, you don’t really need to go downloading fonts that the Google Fonts API returns to compare their file sizes. Still, it’s worth understanding the general tradeoffs so you can best decide when to opt into the variable axes and when to limit yourself to a couple of styles.

What’s next?

Fire up CodePen and give the API a try! For CodePen, you will probably want to use the CSS @import syntax, like this in the CSS panel:

@import url('https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap');

It is apparently better to use the HTML link syntax to avoid blocking parallel downloads of other resources. In CodePen, you’d crack open the Pen settings, select HTML, then drop the <link> in the HTML head settings.

Or, hey, you can just fork my CodePen and experiment there:

Take an API configuration shortcut

If you want to skip the complexity of figuring out exact API calls and are looking to opt into variable axes of Recursive with semi-advanced API calls, I have put together a simple configuration tool on the Recursive minisite (click the “Get Recursive” button). This allows you to quickly select pinned styles or variable ranges that you want to use, and even gives estimates for the resulting file size. But, this only exposes some of the API’s functionality, and you can get more specific if you want. It’s my attempt to get people using the most stylistic range in the smallest files, taking into account the current limitations of variable font instancing.

Use Recursive for code

Also, Recursive is actually designed first as a font to use for code. You can use it on CodePen via your account settings. Better yet, you can download and use the latest Recursive release from GitHub and set it up in any code editor.

Explore more fonts!

The Google Fonts API doc helpfully includes a (partial) list of variable fonts along with details on their available axis ranges. Some of my favorites with axes beyond just Weight are Crimson Pro (ital, wght), Work Sans (ital, wght), Encode Sans (wdth, wght), and Inter (slnt, wght). You can also filter Google Fonts to show only variable fonts, though most of these results have only a Weight axis (still cool and useful, but don’t need custom URL configuration).

Some more amazing variable fonts are coming to Google Fonts. Some that I am especially looking forward to are:

  • Fraunces: “A display, ‘Old Style’ soft-serif typeface inspired by the mannerisms of early 20th century typefaces such as Windsor, Souvenir, and the Cooper Series”
  • Roboto Flex: Like Roboto, but with extensive ranges of Weight, Width, and Optical Size
  • Crispy: A creative, angular, super-flexible variable display font
  • Science Gothic: A squarish sans “based closely on Bank Gothic, a typeface from the early 1930s—but a lowercase, design axes, and language coverage have been added”

And yes, you can absolutely download and self-host these fonts if you want to use them on projects today. But stay tuned to Google Fonts for more awesomely-flexible typefaces to come!

Of course, the world of type is much bigger than open-source fonts. There are a bunch of incredible type foundries working on exciting, boundary-pushing fonts, and many of them are also exploring new & exciting territory in variable fonts. Many tend to take other approaches to licensing, but for the right project, a good typeface can be an extremely good value (I’m obviously biased, but for a simple argument, just look at how much typography strengthens brands like Apple, Stripe, Google, IBM, Figma, Slack, and so many more). If you want to feast your eyes on more possibilities and you don’t already know these names, definitely check out DJR, OHno, Grilli, XYZ, Dinamo, Typotheque, Underware, Bold Monday, and the many very-fun WIP projects on Future Fonts. (I’ve left out a bunch of other amazing foundries, but each of these has done stuff I particularly love, and this isn’t a directory of type foundries.)

Finally, some shameless plugs for myself: if you’d like to support me and my work beyond Recursive, please consider checking out my WIP versatile sans-serif Name Sans, signing up for my (very) infrequent newsletter, and giving me a follow on Instagram.



The Fastest Google Fonts

When you use font-display: swap;, which Google Fonts does when you use the default &display=swap part of the URL, you’re already saying, “I’m cool with FOUT,” which is another way of saying that web text is displayed right away, and when the web font is ready, it “swaps” in.

There is already an async nature to what you are doing, so you might as well extend that async-ness to the rest of the font loading. Harry Roberts:

If you’re going to use font-display for your Google Fonts then it makes sense to asynchronously load the whole request chain.

Harry’s recommended snippet:

<link rel="preconnect"
      href="https://fonts.gstatic.com"
      crossorigin />

<link rel="preload"
      as="style"
      href="$CSS&display=swap" />

<link rel="stylesheet"
      href="$CSS&display=swap"
      media="print"
      onload="this.media='all'" />

$CSS is the main part of the URL that Google Fonts gives you.
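For example, with the full Recursive request from the previous post dropped in as $CSS, the snippet becomes:

<link rel="preconnect"
      href="https://fonts.gstatic.com"
      crossorigin />

<link rel="preload"
      as="style"
      href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap" />

<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Recursive:slnt,wght,CASL,CRSV,MONO@-15..0,300..1000,0..1,0..1,0..1&display=swap"
      media="print"
      onload="this.media='all'" />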

Looks like a ~20% render time savings, with no change in how it looks/feels when loading. Other than that, it’s just faster.



How to Redirect a Search Form to a Site-Scoped Google Search

This is just a tiny little trick that might be helpful on a site where you don’t have the time or desire to build out a really good on-site search solution. Google.com itself can perform searches scoped to one particular site. The trick is getting people there using that special syntax without them even knowing it.

Make a search form:

<form action="https://google.com/search" method="GET" target="_blank">
  <label>
    Search CSS-Tricks on Google:
    <input type="search" name="q">
  </label>
  <input type="submit" value="Go">
</form>

When that form is submitted, we’ll intercept it and change the value to include the special syntax:

var form = document.querySelector("form");

form.addEventListener("submit", function (e) {
  e.preventDefault();
  var search = form.querySelector("input[type=search]");
  search.value = "site:css-tricks.com " + search.value;
  form.submit();
});
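So if a visitor searches for “flexbox”, they end up on a Google results page for a URL along these lines (the browser handles the encoding):

https://google.com/search?q=site%3Acss-tricks.com+flexbox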

That’s all.


Creating an Editable Site with Google Sheets and Eleventy

Remember Tabletop.js? We just covered it a little bit ago in this same exact context: building editable websites. It’s a tool that turns a Google Sheet into an API, that you as a developer can hit for data when building a website. In that last article, we used that API on the client side, meaning JavaScript needed to run on every single page view, hit that URL for the data, and build the page. That might be OK in some circumstances, but let’s do it one better. Let’s hit the API during the build step so that the content is built into the HTML directly. This will be far faster and more resilient.

The situation

As a developer, you might have had to work with clients who keep bugging you with unending revisions on content, sometimes even after months of building the site. That can be frustrating as it keeps pulling you back, preventing you from doing more productive work.

We’re going to give them the keys to updating content themselves using a tool they are probably already familiar with: Google Sheets.

A new tool

In the last article, we introduced the concept of using Google Sheets with Tabletop.js. Now let’s introduce a new tool to this party: Eleventy.

We’ll be using Eleventy (a static site generator) because we want the site to be rendered as a pure static site without having to ship all of the inner workings of the site in client-side JavaScript. We’ll be pulling the content from the API at build time and having Eleventy create a minified index.html that we’ll push to the server for the production website. By being static, this allows the page to load faster and is better for security reasons.

The spreadsheet

We’ll be using a demo I built, with its repo and Google Sheet to demonstrate how to replicate something similar in your own projects. First, we’ll need a Google Sheet which will be our data store.

Open a new spreadsheet and enter your own values in the columns just like mine. The first cell of each column is the reference that’ll be used later in our JavaScript, and the second cell is the actual content that gets displayed.

In the first column, “header” is the reference name and “Please edit me!” is the actual content.

Next up, we’ll publish the data to the web by clicking on File → Publish to the web in the menu bar.

A link will be provided, but it’s technically useless to us, so we can ignore it. The important thing is that the spreadsheet (and its data) is now publicly accessible so we can fetch it for our app.

Take note that we’ll need the unique ID of the sheet from its URL as we go on.

Node is required to continue, so be sure that’s installed. If you want to cut through the process of installing all of the dependencies for this work, you can fork or download my repo and run:

npm install

Run this command next — I’ll explain why it’s important in a bit:

npm run seed

Then to run it locally:

npm run dev

Alright, let’s go into src/site/_data/prod/sheet.js. This is where we’re going to pull in data from the Google Sheet, then turn it into an object we can easily use, and finally convert the JavaScript object back to JSON format. The JSON is stored locally for development so we don’t need to hit the API every time.

Here’s the code we want in there. Again, be sure to change the variable sheetID to the unique ID of your own sheet.

// Dependencies and configuration for this data file. The demo repo defines
// these at the top of sheet.js; the exact seed helper path and sheet URL
// format here are assumptions — adjust them to match your own project.
const axios = require("axios");
const seed = require("./seed"); // a small helper that writes the JSON to disk

const sheetID = "your-sheet-id-here"; // the unique ID from your sheet's URL
const googleSheetUrl = `https://spreadsheets.google.com/feeds/list/${sheetID}/1/public/values?alt=json`;

module.exports = () => {
  return new Promise((resolve, reject) => {
    console.log(`Requesting content from ${googleSheetUrl}`);
    axios
      .get(googleSheetUrl)
      .then(response => {
        // massage the data from the Google Sheets API into
        // a shape that will be more convenient for us in our SSG.
        var data = {
          "content": []
        };
        response.data.feed.entry.forEach(item => {
          data.content.push({
            "header": item.gsx$header.$t,
            "header2": item.gsx$header2.$t,
            "body": item.gsx$body.$t,
            "body2": item.gsx$body2.$t,
            "body3": item.gsx$body3.$t,
            "body4": item.gsx$body4.$t,
            "body5": item.gsx$body5.$t,
            "body6": item.gsx$body6.$t,
            "body7": item.gsx$body7.$t,
            "body8": item.gsx$body8.$t,
            "body9": item.gsx$body9.$t,
            "body10": item.gsx$body10.$t,
            "body11": item.gsx$body11.$t,
            "body12": item.gsx$body12.$t,
            "body13": item.gsx$body13.$t,
            "body14": item.gsx$body14.$t,
            "body15": item.gsx$body15.$t,
            "body16": item.gsx$body16.$t,
            "body17": item.gsx$body17.$t
          });
        });
        // stash the data locally for developing without
        // needing to hit the API each time.
        seed(JSON.stringify(data), `${__dirname}/../dev/sheet.json`);
        // resolve the promise and return the data
        resolve(data);
      })
      // uh-oh. Handle any errors we might encounter
      .catch(error => {
        console.log("Error :", error);
        reject(error);
      });
  });
};

In module.exports, there’s a promise that resolves our data or throws errors when necessary. You’ll notice that I’m using axios to fetch the data from the spreadsheet. I like how it handles error status codes by rejecting the promise automatically, unlike something like Fetch, which requires monitoring error codes manually.

I created a data object in there with a content array in it. Feel free to change the structure of the object, depending on what the spreadsheet looks like.

We’re using the forEach() method to loop through each spreadsheet column, pairing each one with the corresponding name we want to allocate to it, and pushing all of these into the data object as content.

Remember that seed command from earlier? We’re using seed to transform what’s in the data object to JSON by way of JSON.stringify, which is then sent to src/site/_data/dev/sheet.json.
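The seed helper itself isn’t shown in this walkthrough. As a rough idea of what it does, here’s a minimal sketch (the file name and error handling are my own assumptions, not the demo repo’s actual code):

// seed.js: minimal sketch of a seed helper that writes a JSON string to a file path
const fs = require('fs');

module.exports = (data, filePath) => {
  fs.writeFile(filePath, data, err => {
    if (err) console.error('Error writing seed file:', err);
  });
};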

Yes! We now have data in a format we can use with any templating engine, like Nunjucks, to manipulate it. But, we’re focusing on content in this project, so we’ll be using the index.md template format to communicate the data stored in the project.

For example, here’s how it looks to pull item.header through a for loop statement:

<div class="listing">
{%- for item in sheet.content -%}
  <h1>{{ item.header }}</h1>
{%- endfor -%}
</div>

If you’re using Nunjucks, or any other templating engine, you’ll have to pull the data accordingly.

Finally, let’s build this out:

npm run build

Note that you’ll want a dist folder in the project where the build process can send the compiled assets.

But that’s not all! If we were to edit the Google Sheet, we won’t see anything update on our site. That’s where Zapier comes in. We can “zap” Google Sheets and Netlify together so that an update to the Google Sheet triggers a deployment from Netlify.

Assuming you have a Zapier account up and running, we can create the zap by granting permissions for Google and Netlify to talk to one another, then adding triggers.

The recipe we’re looking for? We’re connecting Google Sheets to Netlify so that when a “new or updated sheet row” takes place, Netlify starts a deploy. It’s truly a set-it-and-forget-it sort of deal.

Yay, there we go! We have a performant static site that takes its data from Google Sheets and deploys automatically when updates are made to the sheet.


Build a Node.js Tool to Record and Compare Google Lighthouse Reports

In this tutorial, I’ll show you step by step how to create a simple tool in Node.js to run Google Lighthouse audits via the command line, save the reports they generate in JSON format and then compare them so web performance can be monitored as the website grows and develops.

I’m hopeful this can serve as a good introduction for any developer interested in learning about how to work with Google Lighthouse programmatically.

But first, for the uninitiated…

What is Google Lighthouse?

Google Lighthouse is one of the best automated tools available on a web developer’s utility belt. It allows you to quickly audit a website in a number of key areas which together can form a measure of its overall quality. These are:

  • Performance
  • Accessibility
  • Best Practices
  • SEO
  • Progressive Web App

Once the audit is complete, a report is then generated on what your website does well… and not so well, with the latter intending to serve as an indicator for what your next steps should be to improve the page.

Here’s what a full report looks like. 

Along with other general diagnostics and web performance metrics, a really useful feature of the report is that each of the key areas is aggregated into a color-coded score from 0 to 100.

Not only does this allow developers to quickly gauge the quality of a website without further analysis, but it also allows non-technical folk such as stakeholders or clients to understand as well.

For example, this means it’s much easier to share the win with Heather from marketing after spending time improving website accessibility, as she’s more able to appreciate the effort after seeing the Lighthouse accessibility score go up 50 points into the green.

But equally, Simon the project manager may not understand what Speed Index or First Contentful Paint means, but when he sees the Lighthouse report showing website performance score knee deep in the red, he knows you still have work to do.

If you’re in Chrome or the latest version of Edge, you can run a Lighthouse audit for yourself right now using DevTools: open DevTools, switch to the Lighthouse panel (called Audits in older versions), pick the categories you want, and hit “Generate report.”

You can also run a Lighthouse audit online via PageSpeed Insights or through popular performance tools, such as WebPageTest.

However, today, we’re only interested in Lighthouse as a Node module, as this allows us to use the tool programmatically to audit, record and compare web performance metrics.

Let’s find out how.

Setup

First off, if you don’t already have it, you’re going to need Node.js. There are a million different ways to install it. I use the Homebrew package manager, but you can also download an installer straight from the Node.js website if you prefer. This tutorial was written with Node.js v10.17.0 in mind, but will very likely work just fine on most versions released in the last few years.

You’re also going to need Chrome installed, as that’s how we’ll be running the Lighthouse audits.

Next, create a new directory for the project and then cd into it in the console. Then run npm init to begin to create a package.json file. At this point, I’d recommend just bashing the Enter key over and over to skip as much of this as possible until the file is created.
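If you’d like that as copy-paste commands, it looks something like this (the directory name is just an example, and the -y flag is the shortcut that accepts every default for you):

mkdir lighthouse-compare && cd lighthouse-compare
npm init -y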

Now, let’s create a new file in the project directory. I called mine lh.js, but feel free to call it whatever you want. This will contain all of the JavaScript for the tool. Open it in your text editor of choice, and for now, write a console.log statement.

console.log('Hello world');

Then in the console, make sure your CWD (current working directory) is your project directory and run node lh.js, substituting my file name for whatever you’ve used.

You should see:

$ node lh.js
Hello world

If not, then check your Node installation is working and you’re definitely in the correct project directory.

Now that’s out of the way, we can move on to developing the tool itself.

Opening Chrome with Node.js

Let’s install our project’s first dependency: Lighthouse itself.

npm install lighthouse --save-dev

This creates a node_modules directory that contains all of the package’s files. If you’re using Git, the only thing you’ll want to do with this is add it to your .gitignore file.
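If so, that entry is a one-liner in .gitignore (assuming the file sits at the project root):

node_modules/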

In lh.js, you’ll next want to delete the test console.log() and import the Lighthouse module so you can use it in your code. Like so:

const lighthouse = require('lighthouse');

Below it, you’ll also need to import a module called chrome-launcher, which is one of Lighthouse’s dependencies and allows Node to launch Chrome by itself so the audit can be run.

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

Now that we have access to these two modules, let’s create a simple script which just opens Chrome, runs a Lighthouse audit, and then prints the report to the console.

Create a new function that accepts a URL as a parameter. Because we’ll be running this using Node.js, we’re able to safely use ES6 syntax as we don’t have to worry about those pesky Internet Explorer users.

const launchChrome = (url) => {

}

Within the function, the first thing we need to do is open Chrome using the chrome-launcher module we imported and send it to whatever argument is passed through the url parameter. 

We can do this using its launch() method and its startingUrl option.

const launchChrome = url => {
  chromeLauncher.launch({
    startingUrl: url
  });
};

Calling the function below and passing a URL of your choice results in Chrome being opened at the URL when the Node script is run.

launchChrome('https://www.lukeharrison.dev');

The launch function actually returns a promise, which allows us to access an object containing a few useful methods and properties.

For example, using the code below, we can open Chrome, print the object to the console, and then close Chrome three seconds later using its kill() method.

const launchChrome = url => {
  chromeLauncher
    .launch({
      startingUrl: url
    })
    .then(chrome => {
      console.log(chrome);
      setTimeout(() => chrome.kill(), 3000);
    });
};

launchChrome("https://www.lukeharrison.dev");

Now that we’ve got Chrome figured out, let’s move on to Lighthouse.

Running Lighthouse programmatically

First off, let’s rename our launchChrome() function to something more reflective of its final functionality: launchChromeAndRunLighthouse(). With the hard part out of the way, we can now use the Lighthouse module we imported earlier in the tutorial.

In the Chrome launcher’s then function, which only executes once the browser is open, we’ll pass Lighthouse the function’s url argument and trigger an audit of this website.

const launchChromeAndRunLighthouse = url => {
  chromeLauncher
    .launch({
      startingUrl: url
    })
    .then(chrome => {
      const opts = {
        port: chrome.port
      };
      lighthouse(url, opts);
    });
};

launchChromeAndRunLighthouse("https://www.lukeharrison.dev");

To link the lighthouse instance to our Chrome browser window, we have to pass its port along with the URL.

If you were to run this script now, you will hit an error in the console:

(node:47714) UnhandledPromiseRejectionWarning: Error: You probably have multiple tabs open to the same origin.

To fix this, we just need to remove the startingUrl option from Chrome Launcher and let Lighthouse handle URL navigation from here on out.

const launchChromeAndRunLighthouse = url => {
  chromeLauncher.launch().then(chrome => {
    const opts = {
      port: chrome.port
    };
    lighthouse(url, opts);
  });
};

If you were to execute this code, you’ll notice that something definitely seems to be happening. We just aren’t getting any feedback in the console to confirm the Lighthouse audit has definitely run, nor is the Chrome instance closing by itself like before.

Thankfully, the lighthouse() function returns a promise which lets us access the audit results.

Let’s kill Chrome and then print those results to the terminal in JSON format via the report property of the results object.

const launchChromeAndRunLighthouse = url => {
  chromeLauncher.launch().then(chrome => {
    const opts = {
      port: chrome.port
    };
    lighthouse(url, opts).then(results => {
      chrome.kill();
      console.log(results.report);
    });
  });
};

While the console isn’t the best way to display these results, if you were to copy them to your clipboard and visit the Lighthouse Report Viewer, pasting here will show the report in all of its glory.

At this point, it’s important to tidy up the code a little to make the launchChromeAndRunLighthouse() function return the report once it’s finished executing. This allows us to process the report later without resulting in a messy pyramid of JavaScript.

const lighthouse = require("lighthouse");
const chromeLauncher = require("chrome-launcher");

const launchChromeAndRunLighthouse = url => {
  return chromeLauncher.launch().then(chrome => {
    const opts = {
      port: chrome.port
    };
    return lighthouse(url, opts).then(results => {
      return chrome.kill().then(() => results.report);
    });
  });
};

launchChromeAndRunLighthouse("https://www.lukeharrison.dev").then(results => {
  console.log(results);
});

One thing you may have noticed is that our tool is only able to audit a single website at the moment. Let’s change this so you can pass the URL as an argument via the command line.

To take the pain out of working with command-line arguments, we’ll handle them with a package called yargs.

npm install --save-dev yargs

Then import it at the top of your script along with Chrome Launcher and Lighthouse. We only need its argv function here.

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const argv = require('yargs').argv;

This means if you were to pass a command line argument in the terminal like so:

node lh.js --url https://www.google.co.uk

…you can access the argument in the script like so:

const url = argv.url // https://www.google.co.uk

Let’s edit our script to pass the command line URL argument to the function’s url parameter. It’s important to add a little safety net via the if statement and error message in case no argument is passed.

if (argv.url) {
  launchChromeAndRunLighthouse(argv.url).then(results => {
    console.log(results);
  });
} else {
  throw "You haven't passed a URL to Lighthouse";
}

Tada! We have a tool that launches Chrome and runs a Lighthouse audit programmatically before printing the report to the terminal in JSON format.

Saving Lighthouse reports

Having the report printed to the console isn’t very useful, as you can’t easily read its contents, nor is it saved for future use. In this section of the tutorial, we’ll change this behavior so each report is saved into its own JSON file.

To stop reports from different websites getting mixed up, we’ll organize them like so:

  • lukeharrison.dev
    • 2020-01-31T18:18:12.648Z.json
    • 2020-01-31T19:10:24.110Z.json
  • cnn.com
    • 2020-01-14T22:15:10.396Z.json
  • lh.js

We’ll name the reports with a timestamp indicating the date/time the report was generated. This will mean no two report file names will ever be the same, and it’ll help us easily distinguish between reports.

There is one issue with Windows that requires our attention: the colon (:) is an illegal character for file names. To mitigate this issue, we’ll replace any colons with underscores (_), so a typical report filename will look like:

  • 2020-01-31T18_18_12.648Z.json

Creating the directory

First, we need to manipulate the command line URL argument so we can use it for the directory name.

This involves more than just removing the www, as it needs to account for audits run on web pages which don’t sit at the root (eg: www.foo.com/bar), as the slashes are invalid characters for directory names. 

For these URLs, we’ll replace the invalid characters with underscores again. That way, if you run an audit on https://www.foo.com/bar, the resulting directory name containing the report would be foo.com_bar.

To make dealing with URLs easier, we’ll use a native Node.js module called url. This can be imported like any other package, without having to add it to the package.json and pull it in via npm.

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const argv = require('yargs').argv;
const url = require('url');

Next, let’s use it to instantiate a new URL object.

if (argv.url) {
  const urlObj = new URL(argv.url);

  launchChromeAndRunLighthouse(argv.url).then(results => {
    console.log(results);
  });
}

If you were to print urlObj to the console, you would see lots of useful URL data we can use.

$ node lh.js --url https://www.foo.com/bar
URL {
  href: 'https://www.foo.com/bar',
  origin: 'https://www.foo.com',
  protocol: 'https:',
  username: '',
  password: '',
  host: 'www.foo.com',
  hostname: 'www.foo.com',
  port: '',
  pathname: '/bar',
  search: '',
  searchParams: URLSearchParams {},
  hash: ''
}

Create a new variable called dirName, and use the string replace() method on the host property of our URL to get rid of the www (the host property already leaves out the https protocol):

const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace('www.','');

We’ve used let here, which unlike const can be reassigned, as we’ll need to update the reference if the URL has a pathname, replacing its slashes with underscores. This can be done with a regular expression pattern, and looks like this:

const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace("www.", "");
if (urlObj.pathname !== "/") {
  dirName = dirName + urlObj.pathname.replace(/\//g, "_");
}

Now we can create the directory itself. This can be done through the use of another native Node.js module called fs (short for “file system”).

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const argv = require('yargs').argv;
const url = require('url');
const fs = require('fs');

We can use its mkdir() method to create a directory, but we first have to use its existsSync() method to check if the directory already exists, as Node.js would otherwise throw an error:

const urlObj = new URL(argv.url);
let dirName = urlObj.host.replace("www.", "");
if (urlObj.pathname !== "/") {
  dirName = dirName + urlObj.pathname.replace(/\//g, "_");
}
if (!fs.existsSync(dirName)) {
  fs.mkdirSync(dirName);
}

Testing the script at this point should result in a new directory being created. Passing https://www.bbc.co.uk/news as the URL argument would result in a directory named bbc.co.uk_news.

Saving the report

In the then function for launchChromeAndRunLighthouse(), we want to replace the existing console.log with logic to write the report to disk. This can be done using the fs module’s writeFile() method.

launchChromeAndRunLighthouse(argv.url).then(results => {
  fs.writeFile("report.json", results, err => {
    if (err) throw err;
  });
});

The first parameter represents the file name, the second is the content of the file and the third is a callback containing an error object should something go wrong during the write process. This would create a new file called report.json containing the returning Lighthouse report JSON object.

We still need to send it to the correct directory, with a timestamp as its file name. The former is simple — we pass the dirName variable we created earlier, like so:

launchChromeAndRunLighthouse(argv.url).then(results => {
  fs.writeFile(`${dirName}/report.json`, results, err => {
    if (err) throw err;
  });
});

The latter though requires us to somehow retrieve a timestamp of when the report was generated. Thankfully, the report itself captures this as a data point, and is stored as the fetchTime property. 

We just need to remember to swap any colons (:) for underscores (_) so it plays nice with the Windows file system.

launchChromeAndRunLighthouse(argv.url).then(results => {
  fs.writeFile(
    `${dirName}/${results["fetchTime"].replace(/:/g, "_")}.json`,
    results,
    err => {
      if (err) throw err;
    }
  );
});

If you were to run this now, rather than a timestamped .json filename, you would likely see an error similar to:

UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'replace' of undefined

This is happening because Lighthouse is currently returning the report in JSON format, rather than an object consumable by JavaScript.

Thankfully, instead of parsing the JSON ourselves, we can just ask Lighthouse to return the report as a regular JavaScript object instead.

This requires editing the below line from:

return chrome.kill().then(() => results.report);

…to:

return chrome.kill().then(() => results.lhr);

Now, if you rerun the script, the file will be named correctly. However, when opened, its only content will unfortunately be…

[object Object]

This is because we’ve now got the opposite problem as before. We’re trying to render a JavaScript object without stringifying it into a JSON object first.

The solution is simple. To avoid having to waste resources on parsing or stringifying this huge object, we can return both types from Lighthouse:

return lighthouse(url, opts).then(results => {
  return chrome.kill().then(() => {
    return {
      js: results.lhr,
      json: results.report
    };
  });
});

Then we can modify the writeFile instance to this:

fs.writeFile(
  `${dirName}/${results.js["fetchTime"].replace(/:/g, "_")}.json`,
  results.json,
  err => {
    if (err) throw err;
  }
);

Sorted! On completion of the Lighthouse audit, our tool should now save the report to a file with a unique timestamped filename in a directory named after the website URL.

This means reports are now much more efficiently organized and won’t override each other no matter how many reports are saved.

Comparing Lighthouse reports

During everyday development, when I’m focused on improving performance, the ability to very quickly compare reports directly in the console and see if I’m headed in the right direction could be extremely useful. With this in mind, the requirements of this compare functionality ought to be:

  1. If a previous report already exists for the same website when a Lighthouse audit is complete, automatically perform a comparison against it and show any changes to key performance metrics.
  2. I should also be able to compare key performance metrics from any two reports, from any two websites, without having to generate a new Lighthouse report which I may not need.

What parts of a report should be compared? These are the numerical key performance metrics collected as part of any Lighthouse report. They provide insight into the objective and perceived performance of a website.

In addition, Lighthouse also collects other metrics that aren’t listed in this part of the report but are still in an appropriate format to be included in the comparison. These are:

  • Time to first byte – Time To First Byte identifies the time at which your server sends a response.
  • Total blocking time – Sum of all time periods between FCP and Time to Interactive, when task length exceeded 50ms, expressed in milliseconds.
  • Estimated input latency – Estimated Input Latency is an estimate of how long your app takes to respond to user input, in milliseconds, during the busiest 5s window of page load. If your latency is higher than 50ms, users may perceive your app as laggy.

How should the metric comparison be output to the console? We’ll create a simple percentage-based comparison using the old and new metrics to see how they’ve changed from report to report.

To allow for quick scanning, we’ll also color-code individual metrics depending on if they’re faster, slower or unchanged.

We’ll aim for this output:

First Contentful Paint is 0.49% slower
First Meaningful Paint is 0.47% slower
Speed Index is 12.92% slower
Estimated Input Latency is the same
Total Blocking Time is 85.71% faster
Max Potential First Input Delay is 10.53% faster
Time to first byte is 19.89% slower
First CPU Idle is 0.47% slower
Time to Interactive is 0.02% slower
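The percentage math behind lines like these is simple; here’s a minimal sketch of the kind of helper that could produce them (the function name and rounding are my own, not this tutorial’s final code):

// Percentage change between two metric values, rounded to two decimals.
// Positive = the new value is larger (slower, for time-based metrics).
const calcPercentageDiff = (from, to) => {
  const per = ((to - from) / from) * 100;
  return Math.round(per * 100) / 100;
};

console.log(calcPercentageDiff(1000, 1200)); // 20   (20% slower)
console.log(calcPercentageDiff(1000, 800));  // -20  (20% faster)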

Compare the new report against the previous report

Let’s get started by creating a new function called compareReports() just below our launchChromeAndRunLighthouse() function, which will contain all the comparison logic. We’ll give it two parameters — from and to — to accept the two reports used for the comparison.

For now, as a placeholder, we’ll just print out some data from each report to the console to validate that it’s receiving them correctly.

const compareReports = (from, to) => {
  console.log(from["finalUrl"] + " " + from["fetchTime"]);
  console.log(to["finalUrl"] + " " + to["fetchTime"]);
};

As this comparison would begin after the creation of a new report, the logic to execute this function should sit in the then function for launchChromeAndRunLighthouse().

If, for example, you have 30 reports sitting in a directory, we need to determine which one is the most recent and set it as the previous report which the new one will be compared against. Thankfully, we already decided to use a timestamp as the filename for a report, so this gives us something to work with.

First off, we need to collect any existing reports. To make this process easy, we’ll install a new dependency called glob, which allows for pattern matching when searching for files. This is critical because we can’t predict how many reports will exist or what they’ll be called.

Install it like any other dependency:

npm install glob --save-dev

Then import it at the top of the file the same way as usual:

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const argv = require('yargs').argv;
const url = require('url');
const fs = require('fs');
const glob = require('glob');

We’ll use glob to collect all of the reports in the directory, which we already know the name of via the dirName variable. It’s important to set its sync option to true as we don’t want JavaScript execution to continue until we know how many other reports exist.

launchChromeAndRunLighthouse(argv.url).then(results => {
  const prevReports = glob(`${dirName}/*.json`, {
    sync: true
  });

  // et al

});

This process returns an array of paths. So if the report directory looked like this:

  • lukeharrison.dev
    • 2020-01-31T10_18_12.648Z.json
    • 2020-01-31T10_18_24.110Z.json

…then the resulting array would look like this:

[
  'lukeharrison.dev/2020-01-31T10_18_12.648Z.json',
  'lukeharrison.dev/2020-01-31T10_18_24.110Z.json'
]

Because we can only perform a comparison if a previous report exists, let’s use this array as a conditional for the comparison logic:

const prevReports = glob(`${dirName}/*.json`, {
  sync: true
});

if (prevReports.length) {
}

We have a list of report file paths and we need to compare their timestamped filenames to determine which one is the most recent.

This means we first need to collect a list of all the file names, trim any irrelevant data such as directory names, and take care to replace the underscores (_) back with colons (:) to turn them back into valid dates again. The easiest way to do this is using path, another Node.js native module.

const path = require('path');

Passing the path as an argument to its parse method, like so:

path.parse('lukeharrison.dev/2020-01-31T10_18_24.110Z.json');

Returns this useful object:

{
  root: '',
  dir: 'lukeharrison.dev',
  base: '2020-01-31T10_18_24.110Z.json',
  ext: '.json',
  name: '2020-01-31T10_18_24.110Z'
}

Therefore, to get a list of all the timestamp file names, we can do this:

if (prevReports.length) {
  dates = [];
  for (report in prevReports) {
    dates.push(
      new Date(path.parse(prevReports[report]).name.replace(/_/g, ":"))
    );
  }
}

Which again if our directory looked like:

  • lukeharrison.dev
    • 2020-01-31T10_18_12.648Z.json
    • 2020-01-31T10_18_24.110Z.json

Would result in:

[
  '2020-01-31T10:18:12.648Z',
  '2020-01-31T10:18:24.110Z'
]

A useful thing about dates is that they’re inherently comparable by default:

const alpha = new Date('2020-01-31');
const bravo = new Date('2020-02-15');

console.log(alpha > bravo); // false
console.log(bravo > alpha); // true

So by using a reduce function, we can reduce our array of dates down until only the most recent remains:

dates = [];
for (report in prevReports) {
  dates.push(new Date(path.parse(prevReports[report]).name.replace(/_/g, ":")));
}
const max = dates.reduce(function(a, b) {
  return Math.max(a, b);
});

If you were to print the contents of max to the console, it would throw up a UNIX timestamp, so now, we just have to add another line to convert our most recent date back into the correct ISO format:

const max = dates.reduce(function(a, b) {
  return Math.max(a, b);
});
const recentReport = new Date(max).toISOString();

Assuming these are the list of reports:

  • 2020-01-31T23_24_41.786Z.json
  • 2020-01-31T23_25_36.827Z.json
  • 2020-01-31T23_37_56.856Z.json
  • 2020-01-31T23_39_20.459Z.json
  • 2020-01-31T23_56_50.959Z.json

The value of recentReport would be 2020-01-31T23:56:50.959Z.

Now that we know the most recent report, we next need to extract its contents. Create a new variable called recentReportContents beneath the recentReport variable and assign it an empty function.

As we know this function will always need to execute, rather than manually calling it, it makes sense to turn it into an IIFE (immediately invoked function expression), which will run by itself when the JavaScript parser reaches it. This is signified by the extra parentheses:

const recentReportContents = (() => {

})();

In this function, we can return the contents of the most recent report using the readFileSync() method of the native fs module. Because this will be in JSON format, it’s important to parse it into a regular JavaScript object.

const recentReportContents = (() => {
  // readFileSync is synchronous, so it returns the file contents directly
  const output = fs.readFileSync(
    dirName + "/" + recentReport.replace(/:/g, "_") + ".json",
    "utf8"
  );
  return JSON.parse(output);
})();

And then, it’s a matter of calling the compareReports() function and passing both the current report and the most recent report as arguments.

compareReports(recentReportContents, results.js);

At the moment, this just prints out a few details to the console so we can check that the report data is coming through OK:

https://www.lukeharrison.dev/ 2020-02-01T00:25:06.918Z
https://www.lukeharrison.dev/ 2020-02-01T00:25:42.169Z

If you’re getting any errors at this point, try deleting any report.json files or reports without valid content from earlier in the tutorial.

Compare any two reports

The remaining key requirement was the ability to compare any two reports from any two websites. The easiest way to implement this would be to allow the user to pass the full report file paths as command line arguments which we’ll then send to the compareReports() function.

In the command line, this would look like:

node lh.js --from lukeharrison.dev/2020-02-01T00:25:06.918Z --to cnn.com/2019-12-16T15:12:07.169Z

Achieving this requires editing the conditional if statement which checks for the presence of a URL command line argument. We’ll add an additional check to see if the user has passed both a from and a to path; otherwise, we check for the URL as before. This way, passing two report paths won’t trigger a new Lighthouse audit.

if (argv.from && argv.to) {

} else if (argv.url) {
  // et al
}

Let’s extract the contents of these JSON files, parse them into JavaScript objects, and then pass them along to the compareReports() function. 

We’ve already parsed JSON when retrieving the most recent report, so we can extract this functionality into its own helper function and use it in both locations.

Using the recentReportContents() function as a base, create a new function called getContents() which accepts a file path as an argument. Make sure this is just a regular function, rather than an IIFE, as we don’t want it executing as soon as the JavaScript parser finds it.

const getContents = pathStr => {
  const output = fs.readFileSync(pathStr, "utf8");
  return JSON.parse(output);
};

const compareReports = (from, to) => {
  console.log(from["finalUrl"] + " " + from["fetchTime"]);
  console.log(to["finalUrl"] + " " + to["fetchTime"]);
};

Then update the recentReportContents variable to use this extracted helper function instead:

const recentReportContents = getContents(dirName + '/' + recentReport.replace(/:/g, '_') + '.json');

Back in our new conditional, we need to pass the contents of the comparison reports to the compareReports() function.

if (argv.from && argv.to) {
  compareReports(
    getContents(argv.from + ".json"),
    getContents(argv.to + ".json")
  );
}

Like before, this should print out some basic information about the reports in the console to let us know it’s all working fine.

node lh.js --from lukeharrison.dev/2020-01-31T23_24_41.786Z --to lukeharrison.dev/2020-02-01T11_16_25.221Z

Would lead to:

https://www.lukeharrison.dev/ 2020-01-31T23_24_41.786Z
https://www.lukeharrison.dev/ 2020-02-01T11_16_25.221Z

Comparison logic

This part of development involves building comparison logic to compare the two reports received by the compareReports() function. 

Within the object which Lighthouse returns, there’s a property called audits that contains another object listing performance metrics, opportunities, and information. There’s a lot of information here, much of which we aren’t interested in for the purposes of this tool.

Here’s the entry for First Contentful Paint, one of the nine performance metrics we wish to compare:

"first-contentful-paint": {   "id": "first-contentful-paint",   "title": "First Contentful Paint",   "description": "First Contentful Paint marks the time at which the first text or image is painted. [Learn more](https://web.dev/first-contentful-paint).",   "score": 1,   "scoreDisplayMode": "numeric",   "numericValue": 1081.661,   "displayValue": "1.1 s" }

Create an array listing the keys of these nine performance metrics. We can use this to filter the audit object:

const compareReports = (from, to) => {
  const metricFilter = [
    "first-contentful-paint",
    "first-meaningful-paint",
    "speed-index",
    "estimated-input-latency",
    "total-blocking-time",
    "max-potential-fid",
    "time-to-first-byte",
    "first-cpu-idle",
    "interactive"
  ];
};

Then we’ll loop through one of the reports’ audits objects, cross-referencing each key against our filter list. (It doesn’t matter which report’s audits object we loop through, as they both have the same structure.)

If it’s in there, then brilliant, we want to use it.

const metricFilter = [
  "first-contentful-paint",
  "first-meaningful-paint",
  "speed-index",
  "estimated-input-latency",
  "total-blocking-time",
  "max-potential-fid",
  "time-to-first-byte",
  "first-cpu-idle",
  "interactive"
];

for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    console.log(auditObj);
  }
}

This console.log() would print the below keys to the console:

first-contentful-paint
first-meaningful-paint
speed-index
estimated-input-latency
total-blocking-time
max-potential-fid
time-to-first-byte
first-cpu-idle
interactive

Which means we would use from['audits'][auditObj].numericValue and to['audits'][auditObj].numericValue respectively in this loop to access the metrics themselves.

If we were to print these to the console with the key, it would result in output like this:

first-contentful-paint 1081.661 890.774
first-meaningful-paint 1081.661 954.774
speed-index 15576.70313351777 1098.622294504341
estimated-input-latency 12.8 12.8
total-blocking-time 59 31.5
max-potential-fid 153 102
time-to-first-byte 16.859999999999985 16.096000000000004
first-cpu-idle 1704.8490000000002 1918.774
interactive 2266.2835 2374.3615

We have all the data we need now. We just need to calculate the percentage difference between these two values and then log it to the console using the color-coded format outlined earlier.

Do you know how to calculate the percentage change between two values? Me neither. Thankfully, everybody’s favorite monolith search engine came to the rescue.

The formula is:

((From - To) / From) x 100

So, let’s say we have a Speed Index of 5.7s for the first report (from), and then a value of 2.1s for the second (to). The calculation would be:

5.7 - 2.1 = 3.6
3.6 / 5.7 = 0.63157895
0.63157895 * 100 = 63.157895

Rounding to two decimal places would yield a decrease in the speed index of 63.16%.

Let’s put this into a helper function inside the compareReports() function, below the metricFilter array.

const calcPercentageDiff = (from, to) => {
  const per = ((to - from) / from) * 100;
  return Math.round(per * 100) / 100;
};

Back in our auditObj conditional, we can begin to put together the final report comparison output.

First off, use the helper function to generate the percentage difference for each metric.

for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    const percentageDiff = calcPercentageDiff(
      from["audits"][auditObj].numericValue,
      to["audits"][auditObj].numericValue
    );
  }
}

Next, we need to output values in this format to the console:

First Contentful Paint is 0.49% slower
First Meaningful Paint is 0.47% slower
Speed Index is 12.92% slower
Estimated Input Latency is the same
Total Blocking Time is 85.71% faster
Max Potential First Input Delay is 10.53% faster
Time to first byte is 19.89% slower
First CPU Idle is 0.47% slower
Time to Interactive is 0.02% slower

This requires adding color to the console output. In Node.js, this can be done by passing a color code as an argument to the console.log() function like so:

console.log('\x1b[36m', 'hello') // Would print 'hello' in cyan

You can get a full reference of color codes in this Stack Overflow question. We need green and red, so that’s \x1b[32m and \x1b[31m respectively. For metrics where the value remains unchanged, we’ll just use white: \x1b[37m.
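
One gotcha worth noting: these escape sequences change the color of everything printed after them, so it can help to keep them in named constants alongside the reset code (\x1b[0m). Here’s a tiny sketch of that idea (not part of the original tool):

const COLORS = {
  red: "\x1b[31m",
  green: "\x1b[32m",
  white: "\x1b[37m",
  reset: "\x1b[0m" // restores the default terminal color
};

// Print a green line, then reset so later output isn't tinted:
console.log(`${COLORS.green}Total Blocking Time is 85.71% faster${COLORS.reset}`);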

Depending on whether the percentage difference is a positive or negative number, the following things need to happen:

  • Log color needs to change (Green for negative, red for positive, white for unchanged)
  • Log text contents change.
    • ‘[Name] is X% slower’ for positive numbers
    • ‘[Name] is X% faster’ for negative numbers
    • ‘[Name] is unchanged’ for numbers with no percentage difference.
  • If the number is negative, we want to remove the minus/negative symbol, as otherwise, you’d have a sentence like ‘Speed Index is -92.95% faster’ which doesn’t make sense.

There are many ways this could be done. Here, we’ll use the Math.sign() function, which returns 1 if its argument is positive, 0 if well… 0, and -1 if the number is negative. That’ll do.

for (let auditObj in from["audits"]) {
  if (metricFilter.includes(auditObj)) {
    const percentageDiff = calcPercentageDiff(
      from["audits"][auditObj].numericValue,
      to["audits"][auditObj].numericValue
    );

    let logColor = "\x1b[37m";
    const log = (() => {
      if (Math.sign(percentageDiff) === 1) {
        logColor = "\x1b[31m";
        return `${percentageDiff + "%"} slower`;
      } else if (Math.sign(percentageDiff) === 0) {
        return "unchanged";
      } else {
        logColor = "\x1b[32m";
        // strip the minus sign so "faster" reads naturally
        return `${percentageDiff.toString().replace("-", "") + "%"} faster`;
      }
    })();
    console.log(logColor, `${from["audits"][auditObj].title} is ${log}`);
  }
}

So, there we have it.

You can create new Lighthouse reports, and if a previous one exists, a comparison is made.

And you can also compare any two reports from any two sites.

Complete source code

Here’s the completed source code for the tool, which you can also view in a Gist via the link below.

const lighthouse = require("lighthouse");
const chromeLauncher = require("chrome-launcher");
const argv = require("yargs").argv;
const url = require("url");
const fs = require("fs");
const glob = require("glob");
const path = require("path");

const launchChromeAndRunLighthouse = url => {
  return chromeLauncher.launch().then(chrome => {
    const opts = {
      port: chrome.port
    };
    return lighthouse(url, opts).then(results => {
      return chrome.kill().then(() => {
        return {
          js: results.lhr,
          json: results.report
        };
      });
    });
  });
};

const getContents = pathStr => {
  const output = fs.readFileSync(pathStr, "utf8");
  return JSON.parse(output);
};

const compareReports = (from, to) => {
  const metricFilter = [
    "first-contentful-paint",
    "first-meaningful-paint",
    "speed-index",
    "estimated-input-latency",
    "total-blocking-time",
    "max-potential-fid",
    "time-to-first-byte",
    "first-cpu-idle",
    "interactive"
  ];

  const calcPercentageDiff = (from, to) => {
    const per = ((to - from) / from) * 100;
    return Math.round(per * 100) / 100;
  };

  for (let auditObj in from["audits"]) {
    if (metricFilter.includes(auditObj)) {
      const percentageDiff = calcPercentageDiff(
        from["audits"][auditObj].numericValue,
        to["audits"][auditObj].numericValue
      );

      let logColor = "\x1b[37m";
      const log = (() => {
        if (Math.sign(percentageDiff) === 1) {
          logColor = "\x1b[31m";
          return `${percentageDiff.toString().replace("-", "") + "%"} slower`;
        } else if (Math.sign(percentageDiff) === 0) {
          return "unchanged";
        } else {
          logColor = "\x1b[32m";
          return `${percentageDiff.toString().replace("-", "") + "%"} faster`;
        }
      })();
      console.log(logColor, `${from["audits"][auditObj].title} is ${log}`);
    }
  }
};

if (argv.from && argv.to) {
  compareReports(
    getContents(argv.from + ".json"),
    getContents(argv.to + ".json")
  );
} else if (argv.url) {
  const urlObj = new URL(argv.url);
  let dirName = urlObj.host.replace("www.", "");
  if (urlObj.pathname !== "/") {
    dirName = dirName + urlObj.pathname.replace(/\//g, "_");
  }

  if (!fs.existsSync(dirName)) {
    fs.mkdirSync(dirName);
  }

  launchChromeAndRunLighthouse(argv.url).then(results => {
    const prevReports = glob(`${dirName}/*.json`, {
      sync: true
    });

    if (prevReports.length) {
      dates = [];
      for (report in prevReports) {
        dates.push(
          new Date(path.parse(prevReports[report]).name.replace(/_/g, ":"))
        );
      }
      const max = dates.reduce(function(a, b) {
        return Math.max(a, b);
      });
      const recentReport = new Date(max).toISOString();

      const recentReportContents = getContents(
        dirName + "/" + recentReport.replace(/:/g, "_") + ".json"
      );

      compareReports(recentReportContents, results.js);
    }

    fs.writeFile(
      `${dirName}/${results.js["fetchTime"].replace(/:/g, "_")}.json`,
      results.json,
      err => {
        if (err) throw err;
      }
    );
  });
} else {
  throw "You haven't passed a URL to Lighthouse";
}

View Gist

Next steps

With the completion of this basic Google Lighthouse tool, there’s plenty of ways to develop it further. For example:

  • Some kind of simple online dashboard that allows non-technical users to run Lighthouse audits and watch the metrics develop over time. Getting stakeholders behind web performance can be challenging, so something tangible they can interact with themselves could pique their interest.
  • Build support for performance budgets, so if a report is generated and performance metrics are slower than they should be, then the tool outputs useful advice on how to improve them (or calls you names).

Good luck!


Google Fonts + Variable Fonts

I see Google Fonts rolled out a new design (Tweet). Compared to the last big redesign, this feels much more iterative. I can barely tell the difference really, except it’s blue instead of red, and there’s one pretty rad checkbox: Show only variable fonts.

An option to only show variable fonts is a pretty bold feature for the main navigation up there. That’s a strong commitment to this feature. With Google Fonts having about 90% of the market share of hosted web fonts and serving trillions of requests, that’s going to spike interest and usage of variable fonts in a big way. Web designers and developers have been excited about variable fonts for a while, but I’d bet this is the year we start seeing it in the wild in a much bigger way.

Something about variable fonts seems to inspire micro-sites. See v-fonts.com and Axis-Praxis. Here comes another one: variablefonts.io! Like the others, it has interactive examples, but it’s also full of direct, up-to-date advice and links to resources.

Another thing that’s really great that Google Fonts has done somewhat recently is allow for the use of font-display. It’s got a good default (swap), and it’s easily changeable as a query param. Matt Hobbs has a recent article about what it is, how important it can be, and how to use it.

And while we’re talking Google Fonts, I ran across the browser extension Snapfont the other day. It’s a pay-what-you-want thing (I tossed them a fiver).

It just hard-replaces every font on the site with one you pick to get a quick taste of it. There are no options, so it’s not for fine-tuning any choices. The “Heading” button didn’t even work for me. But I like how simple and easy it was to get a taste for a site with a new font.


Creating an Editable Webpage With Google Spreadsheets and Tabletop.js

Please raise your hand if you’ve ever faced never-ending content revision requests from your clients. It’s not that the changes themselves are difficult, but wouldn’t it be less complicated if clients could just make the revisions themselves? That would save everyone valuable time, and  allow you to turn your attention to more important tasks.

In cases where the site is built on flat files (e.g. HTML, CSS and JavaScript) instead of a CMS (e.g. WordPress), you’ll need some other sort of solution to edit the content without directly editing those files.

Tabletop.js allows us to use Google Spreadsheets as a sort of data store, by taking the spreadsheet and making it easily accessible through JavaScript. It provides the data from a Google Spreadsheet in JSON format, which we can then use in an app, like pulling data from any other API. In this article, we’ll be adding data to a spreadsheet, then setting up Tabletop so that it can pull data from the data source to our HTML. Let’s get straight to the code!

This article is going to be based on a real-world site I built when I was initially trying to wrap my head around Tabletop. By the way, I always advise developers to build applications with any form of technology they’re trying to learn, even after watching or reading tutorials.

We’ll be using the demo I made, with its source code and spreadsheet. The first thing we’ll need is a Google account to access the spreadsheet.

Open a new spreadsheet and input your own values in the columns just like mine. The first cell in each column is the reference that’ll be used later in our JavaScript, and the second cell is the actual content for the website.

Next up, we’ll publish the data to the web by clicking on File → Publish to the web in the menu bar.

A link will be provided, but it’s technically useless to us, so we can ignore it. The important thing is that the spreadsheet (and its data) is now publicly accessible so we can fetch it for our app.

That said, there is a link we need. Clicking the big green “Share” button in the upper-right corner of the page will trigger a modal that provides a sharable link to the spreadsheet and lets us set permissions as well. Let’s grab that link and set the permissions so that anyone with the link can view the spreadsheet. That way, the data won’t inadvertently be edited by someone else.

Now is the time to initialize Tabletop in our project. Let’s link up to their hosted minified file. Alternatively, we could copy the raw minified code, drop it into our own script file and host it ourselves.

Here’s the document file linked up to Tabletop’s CDN and with code snagged from the documentation.

<script src='https://cdnjs.cloudflare.com/ajax/libs/tabletop.js/1.5.1/tabletop.min.js'></script>

<script type='text/javascript'>
  var publicSpreadsheetUrl = 'https://docs.google.com/spreadsheets/d/1sbyMINQHPsJctjAtMW0lCfLrcpMqoGMOJj6AN-sNQrc/pubhtml';

  function init() {
    Tabletop.init({
      key: publicSpreadsheetUrl,
      callback: showInfo,
      simpleSheet: true
    })
  }

  function showInfo(data, tabletop) {
    alert('Successfully processed!')
    console.log(data);
  }

  window.addEventListener('DOMContentLoaded', init)
</script>

Replace the publicSpreadsheetUrl variable in the code with the sharable link from the spreadsheet. See, told you we’d need that!

Now to the interesting part. Let’s give the HTML elements unique IDs and leave them empty. Then, inside the showInfo function, we’ll use the forEach() method to loop through the spreadsheet data, assigning each column’s value to the innerHTML of the element with the corresponding ID, which, in turn, loads the spreadsheet’s data into the HTML.

function showInfo(data, tabletop) {
  data.forEach(function(data) {
    header.innerHTML = data.header;
    header2.innerHTML = data.header2;
    body.innerHTML = data.body;
    body2.innerHTML = data.body2;
    body3.innerHTML = data.body3;
    body4.innerHTML = data.body4;
    body5.innerHTML = data.body5;
    body6.innerHTML = data.body6;
    body7.innerHTML = data.body7;
    body8.innerHTML = data.body8;
    body9.innerHTML = data.body9;
    body10.innerHTML = data.body10;
    body11.innerHTML = data.body11;
    body12.innerHTML = data.body12;
    body13.innerHTML = data.body13;
    body14.innerHTML = data.body14;
    body15.innerHTML = data.body15;
    body16.innerHTML = data.body16;
    body17.innerHTML = data.body17;
  });
}

window.addEventListener('DOMContentLoaded', init)

This is a section of HTML from my demo showing the empty tags. This is a good way to go, but it could be abstracted further by creating the HTML elements directly from JavaScript.

<!-- Start Section One: Keep track of your snippets -->
<section class="feature">
  <div class="intro-text">
    <h3 id="body"></h3>
    <p id="body2"></p>
  </div>
  <div class="track-snippets">
    <div class="snippet-left"><img src="img/image-computer.png" alt="computer" /></div>
    <div class="snippet-right">
      <div>
        <h4 id="body3"></h4>
        <p id="body4"></p>
      </div>
      <div>
        <h4 id="body5"></h4>
        <p id="body6"></p>
      </div>
      <div>
        <h4 id="body7"></h4>
        <p id="body8"></p>
      </div>
    </div>
  </div>
</section>

Hey, look at that! Now we can change the content on a website in real-time by editing the content in the spreadsheet.


How We Tagged Google Fonts and Created goofonts.com

GooFonts is a side project signed by a developer-wife and a designer-husband, both of them big fans of typography. We’ve been tagging Google Fonts and built a website that makes searching through and finding the right font easier.

GooFonts uses WordPress in the back end and NuxtJS (a Vue.js framework) on the front end. I’d love to tell you the story behind goofonts.com and share a few technical details regarding the technologies we’ve chosen and how we adapted and used them for this project.

Why we built GooFonts

At the moment of writing this article, there are 977 typefaces offered by Google Fonts. You can check the exact number at any moment using the Google Fonts Developer API. You can retrieve the dynamic list of all fonts, including a list of the available styles and scripts for each family.
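
For instance, here’s a minimal sketch of querying that API with fetch (YOUR_API_KEY is a placeholder for your own key):

fetch('https://www.googleapis.com/webfonts/v1/webfonts?key=YOUR_API_KEY')
  .then(response => response.json())
  .then(data => {
    console.log(data.items.length) // the current number of font families
    console.log(data.items[0].variants, data.items[0].subsets)
  })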

The Google Fonts website provides a beautiful interface where you can preview all fonts, sorting them by trending, popularity, date, or name. 

But what about the search functionality? 

You can include and exclude fonts by five categories: serif, sans-serif, display, handwriting, and monospace.

You can search within scripts, like Latin Extended, Cyrillic, or Devanagari (they are called subsets in Google Fonts), but you cannot search within multiple subsets at once.

You can search by four properties: thickness, slant, width, and “number of styles.” A style, also called a variant, refers both to the style (italic or regular) and the weight (100, 200, up to 900). Often, the body font requires three variants: regular, bold, and italic. The “number of styles” property sorts out fonts with many variants, but it does not allow you to select fonts that come in the “regular, bold, italic” combo.

There is also a custom search field where you can type your query. Unfortunately, the search is performed exclusively over the names of the fonts. Thus, the results often include font families that are only available from services other than Google Fonts.

Let’s take the “cartoon” query as an example. It results in “Cartoon Script” from an external foundry Linotype.

I can remember working on a project that demanded two highly stylized typefaces — one evoking the old Wild West, the other mimicking a screenplay. That was the moment when I decided to tag Google Fonts. 🙂

GooFonts in action

Let me show you how GooFonts works. The dark sidebar on the right is your  “search” area. You can type your keywords in the search field — this will perform an “AND” search. For example, you can look for fonts that are at once cartoon and slab. 

We handpicked a bunch of keywords — click any of them! If your project requires some specific subsets, check them in the subsets section. You can also check all the variants that you need for your font.

If you like a font, click its heart icon, and it will be stored in your browser’s localStorage. You can find your bookmarked fonts on the goofonts.com/bookmarks page, together with the code you might need to embed them.
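
A bookmarking feature like this needs surprisingly little code. Here’s a hypothetical sketch (not GooFonts’ actual source) of toggling a font slug in localStorage:

const KEY = 'bookmarkedFonts'

function toggleBookmark(fontSlug) {
  const bookmarks = JSON.parse(localStorage.getItem(KEY) || '[]')
  const index = bookmarks.indexOf(fontSlug)
  if (index === -1) {
    bookmarks.push(fontSlug) // first click: bookmark the font
  } else {
    bookmarks.splice(index, 1) // second click: remove it
  }
  localStorage.setItem(KEY, JSON.stringify(bookmarks))
}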

How we built it: the WordPress part

To start, we needed some kind of interface where we could preview and tag each font. We also needed a database to store those tags. 

I had some experience with WordPress. Moreover, WordPress comes with its REST API,  which opens multiple possibilities for dealing with the data on the front end. That choice was made quickly.

I went for the most straightforward possible initial setup. Each font is a post, and we use post tags for keywords. A custom post type could have worked as well, but since we are using WordPress only for the data, the default content type works perfectly well.

Clearly, we needed to add all the fonts programmatically. We also needed to be able to programmatically update the fonts, including adding new ones or adding new available variants and subsets.

The approach described below can be useful with any other data available via an external API. In a custom WordPress plugin, we register a menu page from which we can check for updates from the API. For simplicity, the page will display a title, a button to activate the update and a progress bar for some visual feedback.

/**
 * Register a custom menu page.
 */
function register_custom_menu_page() {
  add_menu_page(
    'Google Fonts to WordPress',
    'WP GooFonts',
    'manage_options',
    'wp-goofonts-menu',
    function() { ?>
      <h1>Google Fonts API</h1>
      <button type="button" id="wp-goofonts-button">Run</button>
      <p id="info"></p>
      <progress id="progress" max="100" value="0"></progress>
    <?php }
  );
}
add_action( 'admin_menu', 'register_custom_menu_page' );

Let’s start by writing the JavaScript part. While most examples of using Ajax with WordPress implement jQuery and the jQuery.ajax method, the same can be achieved without jQuery, using axios and a small helper, Qs.js, for data serialization.

We want to load our custom script in the footer, after loading axios and qs:

add_action( 'admin_enqueue_scripts', function() {
  wp_enqueue_script( 'axios', 'https://unpkg.com/axios/dist/axios.min.js' );
  wp_enqueue_script( 'qs', 'https://unpkg.com/qs/dist/qs.js' );
  wp_enqueue_script( 'wp-goofonts-admin-script', plugin_dir_url( __FILE__ ) . 'js/wp-goofonts.js', array( 'axios', 'qs' ), '1.0.0', true );
});

Here’s how the JavaScript might look:

const BUTTON = document.getElementById('wp-goofonts-button')
const INFO = document.getElementById('info')
const PROGRESS = document.getElementById('progress')

const updater = {
  totalCount: 0,
  totalChecked: 0,
  updated: [],
  init: async function() {
    try {
      const allFonts = await axios.get('https://www.googleapis.com/webfonts/v1/webfonts?key=API_KEY&sort=date')
      this.totalCount = allFonts.data.items.length
      INFO.textContent = `Fetched ${this.totalCount} fonts.`
      this.updatePost(allFonts.data.items, 0)
    } catch (e) {
      console.error(e)
    }
  },
  updatePost: async function(els, index) {
    if (index === this.totalCount) {
      return
    }
    const data = {
      action: 'goofonts_update_post',
      font: els[index],
    }
    try {
      const apiRequest = await axios.post(ajaxurl, Qs.stringify(data))
      this.totalChecked++
      PROGRESS.setAttribute('value', Math.round(100 * this.totalChecked / this.totalCount))
      this.updatePost(els, index + 1)
    } catch (e) {
      console.error(e)
    }
  }
}

BUTTON.addEventListener('click', () => {
  updater.init()
})

The init method makes a request to the Google Fonts API. Once the data from the API is available, we call the recursive asynchronous updatePost method that sends an individual font in the POST request to the WordPress server.

Now, it’s important to remember that WordPress implements Ajax in its specific way. First of all, each request must be sent to wp-admin/admin-ajax.php. This URL is available in the administration area as a global JavaScript variable ajaxurl.

Second, all WordPress Ajax requests must include an action argument in the data. The value of the action determines which hook tag will be used on the server-side.

In our case, the action value is goofonts_update_post. That means what happens on the server-side is determined by the wp_ajax_goofonts_update_post hook.

add_action( 'wp_ajax_goofonts_update_post', function() {
  if ( isset( $_POST['font'] ) ) {
    /* the post title is the name of the font */
    $title = wp_strip_all_tags( $_POST['font']['family'] );
    $variants = $_POST['font']['variants'];
    $subsets = $_POST['font']['subsets'];
    $category = $_POST['font']['category'];
    /* check if the post already exists */
    $object = get_page_by_title( $title, 'OBJECT', 'post' );
    if ( NULL === $object ) {
      /* create a new post and set category, variants and subsets as tags */
      goofonts_new_post( $title, $category, $variants, $subsets );
    } else {
      /* check if $variants or $subsets changed */
      goofonts_update_post( $object, $variants, $subsets );
    }
  }
});

function goofonts_new_post( $title, $category, $variants, $subsets ) {
  $post_id = wp_insert_post( array(
    'post_author' => 1,
    'post_name'   => sanitize_title( $title ),
    'post_title'  => $title,
    'post_type'   => 'post',
    'post_status' => 'draft',
    )
  );
  if ( $post_id > 0 ) {
    /* the easy part of tagging ;) append the font category, variants and subsets (these three come from the Google Fonts API) as tags */
    wp_set_object_terms( $post_id, $category, 'post_tag', true );
    wp_set_object_terms( $post_id, $variants, 'post_tag', true );
    wp_set_object_terms( $post_id, $subsets, 'post_tag', true );
  }
}

This way, in less than a minute, we end up with almost one thousand post drafts in the dashboard — all of them with a few tags already in place. And that’s the moment when the crucial, most time-consuming part of the project begins. We need to start manually adding tags for each font, one by one.
The default WordPress editor does not make much sense in this case. What we needed is a preview of the font. A link to the font’s page on fonts.google.com also comes in handy.

A custom meta box does the job very well. In most cases, you will use meta boxes for custom form elements to save some custom data related to the post. In fact, the content of a meta box can be practically any HTML.

function display_font_preview( $post ) {
  /* font name, for example Abril Fatface */
  $font = $post->post_title;
  /* font as in url, for example Abril+Fatface */
  $font_url_part = implode( '+', explode( ' ', $font ));
  ?>
  <div class="font-preview">
    <link href="<?php echo 'https://fonts.googleapis.com/css?family=' . $font_url_part . '&display=swap'; ?>" rel="stylesheet">
    <header>
      <h2><?php echo $font; ?></h2>
      <a href="<?php echo 'https://fonts.google.com/specimen/' . $font_url_part; ?>" target="_blank" rel="noopener">Specimen on Google Fonts</a>
    </header>
    <div contenteditable="true" style="font-family: <?php echo $font; ?>">
      <p>The quick brown fox jumps over a lazy dog.</p>
      <p style="text-transform: uppercase;">The quick brown fox jumps over a lazy dog.</p>
      <p>1 2 3 4 5 6 7 8 9 0</p>
      <p>& ! ; ? {}[]</p>
    </div>
  </div>
<?php }

add_action( 'add_meta_boxes', function() {
  add_meta_box(
    'font_preview', /* metabox id */
    'Font Preview', /* metabox title */
    'display_font_preview', /* content callback */
    'post' /* where to display */
  );
});

Tagging fonts is a long-term task with a lot of repetition. It also requires a big dose of consistency. That’s why we started by defining a set of tag “presets.” That could be, for example:

{
  /* ... */
  comic: {
    tags: 'comic, casual, informal, cartoon'
  },
  cursive: {
    tags: 'cursive, calligraphy, script, manuscript, signature'
  },
  /* ... */
}

Next, with some custom CSS and JavaScript, we “hacked” the WordPress editor and tags form by enriching it with the set of preset buttons.

How we built it: The front end part (using NuxtJS)

The goofonts.com interface was designed by Sylvain Guizard, a French graphic and web designer (who also happens to be my husband). We wanted something simple with a distinguished “search” area. Sylvain deliberately went for colors that are not too far from the Google Fonts identity. We were looking for a balance between building something unique and original while avoiding user confusion.

While I did not hesitate in choosing WordPress for the back end, I didn’t want to use it on the front end. We were aiming for an app-like experience, and I, personally, wanted to code in JavaScript, using Vue.js in particular.

I came across an example of a website using NuxtJS with WordPress and decided to give it a try. The choice was made immediately. NuxtJS is a very popular Vue.js framework, and I really enjoy its simplicity and flexibility. 
I’ve been playing around with different NuxtJS settings to end up with a 100% static website. The fully static solution felt the most performant; the overall experience seemed the most fluid. That also means that my WordPress site is only used during the build process. Thus, it can run on my localhost. This is not negligible, since it eliminates hosting costs and, most of all, lets me skip the security-related WordPress configuration and relieves me of the security-related stress. 😉

If you are familiar with NuxtJS, you probably know that the full static generation is not (yet) a part of NuxtJS. The prerendered pages try to fetch the data again when you are navigating.

That’s why we have to somehow “hack” the 100% static generation. In this case, we are saving the useful parts of the fetched data to a JSON file before each build process. This is possible, thanks to Nuxt hooks, in particular, its builder hooks.

Hooks are typically used in Nuxt modules:

/* modules/beforebuild.js */

const fs = require('fs')
const axios = require('axios')

const sourcePath = 'http://wpgoofonts.local/wp-json/wp/v2/'
const path = 'static/allfonts.json'

/* a regular function (not an arrow function), so Nuxt can bind `this` */
module.exports = function () {
  /* write data to the file, replacing the file if it already exists */
  const storeData = (data, path) => {
    try {
      fs.writeFileSync(path, JSON.stringify(data))
    } catch (err) {
      console.error(err)
    }
  }

  async function getData() {
    const fetchedTags = await axios.get(`${sourcePath}tags?per_page=500`)
      .catch(e => { console.log(e); return false })

    /* build an object of tag_id: tag_slug */
    const tags = fetchedTags.data.reduce((acc, cur) => {
      acc[cur.id] = cur.slug
      return acc
    }, {})

    /* we want to know the total number of pages */
    const mhead = await axios.head(`${sourcePath}posts?per_page=100`)
      .catch(e => { console.log(e); return false })
    const totalPages = mhead.headers['x-wp-totalpages']

    /* let's fetch all fonts */
    let fonts = []
    let i = 0
    while (i < totalPages) {
      i++
      const response = await axios.get(`${sourcePath}posts?per_page=100&page=${i}`)
      fonts.push.apply(fonts, response.data)
    }

    /* and reduce them to an object with entries like:
       {roboto: {name: Roboto, tags: ["clean", "contemporary", ...]}} */
    fonts = fonts.reduce((acc, el) => {
      acc[el.slug] = {
        name: el.title.rendered,
        tags: el.tags.map(i => tags[i]),
      }
      return acc
    }, {})

    /* save the fonts object to a .json file */
    storeData(fonts, path)
  }

  /* make sure this happens before each build */
  this.nuxt.hook('build:before', getData)
}
/* nuxt.config.js */

module.exports = {
  // ...
  buildModules: [
    ['~modules/beforebuild']
  ],
  // ...
}

As you can see, we only request a list of tags and a list of posts. That means we only use default WordPress REST API endpoints, and no configuration is required.

Final thoughts

Working on GooFonts was a long-term adventure. It is also the kind of project that needs to be actively maintained. We regularly check Google Fonts for new typefaces, subsets, or variants. We tag new items and update our database. Recently, I was genuinely excited to discover that Bebas Neue has joined the family. We also have our personal favs among the much lesser-known specimens.

As a trainer who gives regular workshops, I can observe real users playing with GooFonts. At this stage of the project, we want to get as much feedback as possible. We would love GooFonts to be a useful, handy, and intuitive tool for web designers. One of the to-do features is searching for a font by its name. We would also love to add the possibility to share bookmarked sets and create multiple “collections” of fonts.

As a developer, I truly enjoyed the multi-disciplinary aspect of this project. It was the first time I worked with the WordPress REST API, it was my first big project in Vue.js, and I learned so much about typography.  

Would we do anything differently if we could? Absolutely. It was a learning process. On the other hand, I don’t think we would change the main tools. The flexibility of both WordPress and Nuxt.js proved to be the right choice. Starting over today, I would definitely take time to explore GraphQL, and I will probably implement it in the future.

I hope that you find some of the discussed methods useful. As I said before, your feedback is very precious. If you have any questions or remarks, please let me know in the comments!


How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

This article is from my friend Ben who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things. In fact, I just popped over there to take a look and was notified of some little mistakes that slipped by, and I fixed them. Recommended!

In this article, we uncover how PageSpeed calculates its critical speed score.

It’s no secret that speed has become a crucial factor in increasing revenue and lowering abandonment rates. Now that Google uses page speed as a ranking factor, many organizations have become laser-focused on performance.

Last year, Google made two significant changes to their search indexing and ranking algorithms, and from them, we’re able to state two truths:

  • The speed of your site on mobile will affect your overall SEO ranking.
  • If your pages load slowly, it will reduce your ad quality score, and ads will cost more.

Google wrote:

Faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings.

To understand how these changes affect us from a performance perspective, we need to grasp the underlying technology. PageSpeed 5.0 is a complete overhaul of previous editions. It’s now being powered by Lighthouse and CrUX (Chrome User Experience Report).

This upgrade also brings a new scoring algorithm that makes it far more challenging to receive a high PageSpeed score.

What changed in PageSpeed 5.0?

Before 5.0, PageSpeed ran a series of heuristics against a given page. If the page had large, uncompressed images, PageSpeed would suggest image compression. Cache headers missing? Add them.

These heuristics were coupled with a set of guidelines that would likely result in better performance, if followed, but were merely superficial and didn’t actually analyze the load and render experience that real visitors face.

In PageSpeed 5.0, pages are loaded in a real Chrome browser that is controlled by Lighthouse. Lighthouse records metrics from the browser, applies a scoring model to them, and presents an overall performance score. Guidelines for improvement are suggested based on how specific metrics score.

Like PageSpeed, Lighthouse also has a performance score. In PageSpeed 5.0, the performance score is taken from Lighthouse directly. PageSpeed’s speed score is now the same as Lighthouse’s Performance score.

Calibre scores 97 on Google’s Pagespeed

Now that we know where the PageSpeed score comes from, let’s dive into how it’s calculated, and how we can make meaningful improvements.

What is Google Lighthouse?

Lighthouse is an open source project run by a dedicated team from Google Chrome. Over the past couple of years, it has become the go-to free performance analysis tool.

Lighthouse uses Chrome’s Remote Debugging Protocol to read network request information, measure JavaScript performance, observe accessibility standards and measure user-focused timing metrics like First Contentful Paint, Time to Interactive or Speed Index.

If you’re interested in a high-level overview of Lighthouse architecture, read this guide from the official repository.

How Lighthouse calculates the Performance Score

During performance tests, Lighthouse records many metrics focused on what a user sees and experiences. There are six metrics used to create the overall performance score. They are:

  • Time to Interactive (TTI)
  • Speed Index
  • First Contentful Paint (FCP)
  • First CPU Idle
  • First Meaningful Paint (FMP)
  • Estimated Input Latency

Lighthouse will apply a 0 – 100 scoring model to each of these metrics. This process works by obtaining mobile 75th and 95th percentiles from HTTP Archive, then applying a log normal function.
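
To make that a little more concrete, here’s an illustrative sketch of the idea (not Lighthouse’s exact source): fit a log-normal curve through two reference percentiles for a metric, then score a measured value by the share of the distribution it beats.

// Approximation of the Gauss error function (Abramowitz-Stegun 7.1.26)
const erf = x => {
  const sign = Math.sign(x);
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 + t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  return sign * (1 - poly * Math.exp(-x * x));
};

const logNormalScore = (value, p75, p95) => {
  const z75 = 0.6745; // standard normal quantiles for the 75th…
  const z95 = 1.6449; // …and 95th percentiles
  const sigma = (Math.log(p95) - Math.log(p75)) / (z95 - z75);
  const mu = Math.log(p75) - sigma * z75;
  // Complementary CDF: the share of the distribution slower than `value`
  const z = (Math.log(value) - mu) / (sigma * Math.SQRT2);
  return Math.round((100 * (1 - erf(z))) / 2);
};

// With made-up percentiles of 7.3s (75th) and 19s (95th),
// a page interactive in 2.1s scores about 72:
console.log(logNormalScore(2100, 7300, 19000));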

Following the algorithm and reference data used to calculate Time to Interactive, we can see that if a page managed to become "interactive" in 2.1 seconds, the Time to Interactive metric score would be 92/100.

Once each metric is scored, it’s assigned a weighting which is used as a modifier in calculating the overall performance score. The weightings are as follows:

Metric Weighting
Time to Interactive (TTI) 5
Speed Index 4
First Contentful Paint 3
First CPU Idle 2
First Meaningful Paint 1
Estimated Input Latency 0

These weightings refer to the impact of each metric with regard to mobile user experience.

In the future, this may also be enhanced by the inclusion of user-observed data from the Chrome User Experience Report dataset.

You may be wondering how the weighting of each metric affects the overall performance score. The Lighthouse team have created a useful Google Spreadsheet calculator explaining this process:

Picture of a spreadsheet that can be used to calculate performance scores

Using the example above, if we change (time to) interactive from 5 seconds to 17 seconds (the global average mobile TTI), our score drops to 56% (aka 56 out of 100).

Whereas, if we change First Contentful Paint to 17 seconds, we’d score 62%.
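
In code, the weighted composite boils down to a weighted average. Here’s a rough sketch of the idea (the individual metric scores below are made up):

const weights = {
  'interactive': 5,
  'speed-index': 4,
  'first-contentful-paint': 3,
  'first-cpu-idle': 2,
  'first-meaningful-paint': 1,
  'estimated-input-latency': 0
};

const overallScore = metricScores => {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [metric, weight] of Object.entries(weights)) {
    weightedSum += metricScores[metric] * weight;
    totalWeight += weight;
  }
  return Math.round(weightedSum / totalWeight);
};

// A page that is perfect everywhere except TTI still takes a big hit:
console.log(overallScore({
  'interactive': 50,
  'speed-index': 100,
  'first-contentful-paint': 100,
  'first-cpu-idle': 100,
  'first-meaningful-paint': 100,
  'estimated-input-latency': 100
})); // → 83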

TTI is the most impactful metric to your performance score. Therefore, to receive a high PageSpeed score, you will need a speedy TTI measurement.

Moving the needle on TTI

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Our Time to Interactive guide explains how TTI works in great detail, but if you’re looking for some quick no-research wins, we’d suggest:

Reducing the amount of JavaScript

Where possible, remove unused JavaScript code or focus on only delivering a script that will be run by the current page. That might mean removing old polyfills or replacing third-party libraries with smaller, more modern alternatives.

It’s important to remember that the cost of JavaScript is not only the time it takes to download. The browser needs to decompress, parse, compile and eventually execute it, which takes non-trivial time, especially on mobile devices.

Effective measures for reducing the amount of script on your pages include:

  • Reviewing and removing polyfills that are no longer required for your audience.
  • Understanding the cost of each third-party JavaScript library. Use webpack-bundle-analyzer or source-map-explorer to visualise how large each library is.
  • Modern JavaScript tooling (like webpack) can break up large JavaScript applications into a series of small bundles that are automatically loaded as a user navigates. This approach is known as code splitting and is extremely effective in improving TTI (see the sketch after this list).
  • Service workers that will cache the bytecode result of a parsed and compiled script. If you’re able to make use of this, visitors will pay a one-time performance cost for parse and compilation. After that, it’ll be mitigated by cache.
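
To illustrate the code splitting point above, here’s a minimal sketch using a dynamic import(), which bundlers like webpack split into a separate file that’s fetched on demand (chart-widget.js and renderChart are hypothetical names):

// The chart code is only downloaded, parsed and compiled when it's needed
document.querySelector('#show-chart').addEventListener('click', async () => {
  const { renderChart } = await import('./chart-widget.js');
  renderChart(document.querySelector('#chart'));
});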

Monitoring Time to Interactive

To successfully uncover significant differences in user experience, we suggest using a performance monitoring system (like Calibre!) that allows for testing a minimum of two devices; a fast desktop and a low-mid range mobile phone.

That way, you’ll have the data for both the best and worst cases of what your customers experience. It’s time to come to terms with the fact that your customers aren’t using the same powerful hardware as you.

In-depth manual profiling

To get the best results in profiling JavaScript performance, test pages using intentionally slow mobile devices. If you have an old phone in a desk drawer, this is a great second-life for it.

An excellent substitute for using a real device is to use Chrome DevTools hardware emulation mode. We’ve written an extensive performance profiling guide to help you get started with runtime performance.

What about the other metrics?

Speed Index, First Contentful Paint and First Meaningful Paint are all browser-paint-based metrics. They’re influenced by similar factors and can often be improved at the same time.

It’s objectively easier to improve these metrics as they are calculated by how quickly a page renders. Following the Lighthouse Performance audit rules closely will result in these metrics improving.

If you aren’t already preloading your fonts or optimizing for critical requests, that is an excellent place to start a performance journey. Our article titled “The Critical Request” explains in great detail how the browser fetches and renders critical resources used to render your pages.

Tracking your progress and making meaningful improvements

Google’s newly updated Search Console, Lighthouse, and PageSpeed Insights are a great way to get initial visibility into the performance of your pages, but they fall short for teams that need to continuously track and improve the performance of their pages.

Continuous performance monitoring is essential to ensuring speed improvements last, and teams get instantly notified when regressions happen. Manual testing introduces unexpected variability in results and makes testing from different regions as well as on various devices nearly impossible without a dedicated lab environment.

Speed has become a crucial factor for SEO rankings, especially now that nearly 50% of web traffic comes from mobile devices.

To avoid losing your search positioning, ensure you’re using an up-to-date performance suite to track key pages. (Pssst, we built Calibre to be your performance companion. It has Lighthouse built-in. Hundreds of teams from around the globe are using it every day.)


Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies

Šime posts regular content for web developers on webplatform.news.

In this week’s news, Wikipedia helps identify three slow click handlers, Google Earth comes to the web, SVG properties in CSS get more support, and what to do in the event of zombie cookies.

Tracking down slow event handlers with Event Timing

Event Timing is experimentally available in Chrome (as an Origin Trial) and Wikipedia is taking part in the trial. This API can be used to accurately determine the duration of event handlers with the goal of surfacing slow events.

We quickly identified 3 very frequent slow click handlers experienced frequently by real users on Wikipedia. […] Two of those issues are caused by expensive JavaScript calls causing style recalculation and layout.

(via Gilles Dubuc)
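
For the curious, surfacing slow events with the Event Timing API looks roughly like this (a hedged sketch: the API is experimental, so its surface may change):

const observer = new PerformanceObserver(list => {
  for (const entry of list.getEntries()) {
    // duration spans input delay, handler execution and time to next paint
    console.log(entry.name, Math.round(entry.duration) + 'ms');
  }
});
observer.observe({ type: 'event', durationThreshold: 104 }); // 104ms is the spec default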

Google Earth for Web beta available

The preview version of Google Earth for Web (powered by WebAssembly) is now available. You can try it out in Chromium-based browsers and Firefox — it runs single-threaded in browsers that don’t yet have (re-)enabled SharedArrayBuffer — but not in Safari because of its lack of full support for WebGL2.

(via Jordon Mears)

SVG geometry properties in CSS

Firefox Nightly has implemented SVG geometry properties (x, y, r, etc.) in CSS. This feature is already supported in Chrome and Safari and is expected to ship in Firefox 69 in September.

See the Pen "Animating SVG geometry properties with CSS" by Šime Vidas (@simevidas) on CodePen.

(via Jérémie Patonnier)

Browsers can keep session cookies alive

Chrome and Firefox allow users to restore the previous browser session on startup. With this option enabled, closing the browser will not delete the user’s session cookies, nor empty the sessionStorage of web pages.

Given this session resumption behavior, it’s more important than ever to ensure that your site behaves reasonably upon receipt of an outdated session cookie (e.g. redirect the user to the login page instead of showing an error).

(via Eric Lawrence)
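
In practice, that can be as simple as a middleware check. Here’s a hypothetical sketch in Express (isValidSession stands in for whatever session lookup your app uses):

const express = require('express');
const cookieParser = require('cookie-parser');

const app = express();
app.use(cookieParser());

app.use((req, res, next) => {
  const sessionId = req.cookies.sessionId;
  if (sessionId && !isValidSession(sessionId)) { // isValidSession: your own lookup
    // The restored cookie is stale: clear it and redirect instead of erroring
    res.clearCookie('sessionId');
    return res.redirect('/login');
  }
  next();
});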
