Tag: Google

How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

This article is from my friend Ben who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things. In fact, I just popped over there to take a look and was notified of some little mistakes that slipped by, and I fixed them. Recommended!

In this article, we uncover how PageSpeed calculates its critical speed score.

It’s no secret that speed has become a crucial factor in increasing revenue and lowering abandonment rates. Now that Google uses page speed as a ranking factor, many organizations have become laser-focused on performance.

Last year, Google made two significant changes to their search indexing and ranking algorithms, both centered on page speed. From this, we’re able to state two truths:

  • The speed of your site on mobile will affect your overall SEO ranking.
  • If your pages load slowly, it will reduce your ad quality score, and ads will cost more.

Google wrote:

Faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings.

To understand how these changes affect us from a performance perspective, we need to grasp the underlying technology. PageSpeed 5.0 is a complete overhaul of previous editions. It’s now being powered by Lighthouse and CrUX (Chrome User Experience Report).

This upgrade also brings a new scoring algorithm that makes it far more challenging to receive a high PageSpeed score.

What changed in PageSpeed 5.0?

Before 5.0, PageSpeed ran a series of heuristics against a given page. If the page had large, uncompressed images, PageSpeed would suggest image compression. Cache headers missing? Add them.

These heuristics were coupled with a set of guidelines that would likely result in better performance, if followed, but were merely superficial and didn’t actually analyze the load and render experience that real visitors face.

In PageSpeed 5.0, pages are loaded in a real Chrome browser that is controlled by Lighthouse. Lighthouse records metrics from the browser, applies a scoring model to them, and presents an overall performance score. Guidelines for improvement are suggested based on how specific metrics score.

Like PageSpeed, Lighthouse also has a performance score. In PageSpeed 5.0, the performance score is taken from Lighthouse directly. PageSpeed’s speed score is now the same as Lighthouse’s Performance score.

Calibre scores 97 on Google’s PageSpeed

Now that we know where the PageSpeed score comes from, let’s dive into how it’s calculated, and how we can make meaningful improvements.

What is Google Lighthouse?

Lighthouse is an open source project run by a dedicated team from Google Chrome. Over the past couple of years, it has become the go-to free performance analysis tool.

Lighthouse uses Chrome’s Remote Debugging Protocol to read network request information, measure JavaScript performance, observe accessibility standards and measure user-focused timing metrics like First Contentful Paint, Time to Interactive or Speed Index.

If you’re interested in a high-level overview of Lighthouse architecture, read this guide from the official repository.
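If you’d like to poke at these numbers yourself, Lighthouse can also be run programmatically from Node. Here’s a minimal sketch using the lighthouse and chrome-launcher npm packages (the URL is a placeholder):

// npm install lighthouse chrome-launcher
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

async function run() {
  // Launch a real, headless Chrome instance for Lighthouse to control.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { onlyCategories: ['performance'], port: chrome.port };

  const result = await lighthouse('https://example.com', options);

  // The 0–1 performance category score is what PageSpeed 5.0 reports (times 100).
  console.log(Math.round(result.lhr.categories.performance.score * 100));

  await chrome.kill();
}

run();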

How Lighthouse calculates the Performance Score

During performance tests, Lighthouse records many metrics focused on what a user sees and experiences. There are six metrics used to create the overall performance score. They are:

  • Time to Interactive (TTI)
  • Speed Index
  • First Contentful Paint (FCP)
  • First CPU Idle
  • First Meaningful Paint (FMP)
  • Estimated Input Latency

Lighthouse will apply a 0 – 100 scoring model to each of these metrics. This process works by obtaining mobile 75th and 95th percentiles from HTTP Archive, then applying a log-normal function.

Following the algorithm and reference data used to calculate Time to Interactive, we can see that if a page managed to become "interactive" in 2.1 seconds, the Time to Interactive metric score would be 92/100.
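To make that concrete, here’s a small sketch of that scoring curve in JavaScript. The control points below (a median of 7,300ms and a shape of 0.9 for TTI) are illustrative assumptions rather than Lighthouse’s exact reference values, but they reproduce the 92/100 figure above:

// Complementary error function (Abramowitz & Stegun approximation 7.1.26).
function erfc(x) {
  const z = Math.abs(x);
  const t = 1 / (1 + z / 2);
  const r = t * Math.exp(
    -z * z - 1.26551223 +
    t * (1.00002368 +
    t * (0.37409196 +
    t * (0.09678418 +
    t * (-0.18628806 +
    t * (0.27886807 +
    t * (-1.13520398 +
    t * (1.48851587 +
    t * (-0.82215223 +
    t * 0.17087277)))))))));
  return x >= 0 ? r : 2 - r;
}

// Map a raw metric value (ms) onto 0–100 using a log-normal curve defined by
// the value that should score 50 (median) and a spread factor (shape).
function logNormalScore(valueMs, medianMs, shape) {
  const standardized = (Math.log(valueMs) - Math.log(medianMs)) / (Math.SQRT2 * shape);
  return Math.round(100 * (erfc(standardized) / 2));
}

console.log(logNormalScore(2100, 7300, 0.9)); // → 92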

Once each metric is scored, it’s assigned a weighting which is used as a modifier in calculating the overall performance score. The weightings are as follows:

Metric                       Weighting
Time to Interactive (TTI)    5
Speed Index                  4
First Contentful Paint       3
First CPU Idle               2
First Meaningful Paint       1
Estimated Input Latency      0

These weightings reflect the impact each metric has on the mobile user experience.
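In other words, the overall performance score is a weighted average of the individual metric scores. Here’s a minimal sketch of that calculation (the individual metric scores are made-up inputs for illustration):

const metrics = [
  { name: 'Time to Interactive', score: 92, weight: 5 },
  { name: 'Speed Index', score: 88, weight: 4 },
  { name: 'First Contentful Paint', score: 95, weight: 3 },
  { name: 'First CPU Idle', score: 90, weight: 2 },
  { name: 'First Meaningful Paint', score: 94, weight: 1 },
  { name: 'Estimated Input Latency', score: 100, weight: 0 },
];

const totalWeight = metrics.reduce((sum, m) => sum + m.weight, 0);
const overall = metrics.reduce((sum, m) => sum + m.score * m.weight, 0) / totalWeight;

console.log(Math.round(overall)); // → 91 for these inputs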

In the future, this may also be enhanced by the inclusion of user-observed data from the Chrome User Experience Report dataset.

You may be wondering how the weighting of each metric affects the overall performance score. The Lighthouse team has created a useful Google Spreadsheet calculator explaining this process:

Picture of a spreadsheet that can be used to calculate performance scores

Using the example above, if we change Time to Interactive from 5 seconds to 17 seconds (the global average mobile TTI), our score drops to 56 out of 100.

Whereas if we change First Contentful Paint to 17 seconds instead, we’d score 62 out of 100.

TTI is the most impactful metric to your performance score. Therefore, to receive a high PageSpeed score, you will need a speedy TTI measurement.

Moving the needle on TTI

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Our Time to Interactive guide explains how TTI works in great detail, but if you’re looking for some quick, no-research wins, we’d suggest the following.

Reducing the amount of JavaScript

Where possible, remove unused JavaScript code or focus on only delivering a script that will be run by the current page. That might mean removing old polyfills or replacing third-party libraries with smaller, more modern alternatives.

It’s important to remember that the cost of JavaScript is not only the time it takes to download. The browser also needs to decompress, parse, compile and eventually execute it, which takes non-trivial time, especially on mobile devices.

Effective measures for reducing the amount of JavaScript on your pages include:

  • Reviewing and removing polyfills that are no longer required for your audience.
  • Understanding the cost of each third-party JavaScript library. Use webpack-bundle-analyzer or source-map-explorer to visualise how large each library is.
  • Modern JavaScript tooling (like webpack) can break up large JavaScript applications into a series of small bundles that are automatically loaded as a user navigates. This approach is known as code splitting and is extremely effective in improving TTI (see the sketch after this list).
  • Service workers that will cache the bytecode result of a parsed and compiled script. If you’re able to make use of this, visitors will pay a one-time performance cost for parsing and compilation. After that, it’ll be mitigated by the cache.
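Here’s a minimal sketch of code splitting with a dynamic import(), as supported by webpack and other modern bundlers. The module path and element IDs are hypothetical:

document.querySelector('#open-chart').addEventListener('click', async () => {
  // chart.js is only downloaded, parsed and compiled on first use,
  // keeping it out of the main bundle and off the critical path to TTI.
  const { renderChart } = await import('./chart.js');
  renderChart(document.querySelector('#chart'));
});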

Monitoring Time to Interactive

To successfully uncover significant differences in user experience, we suggest using a performance monitoring system (like Calibre!) that allows for testing a minimum of two devices: a fast desktop and a low-to-mid range mobile phone.

That way, you’ll have the data for both the best and worst cases of what your customers experience. It’s time to come to terms with the fact that your customers aren’t using the same powerful hardware as you.

In-depth manual profiling

To get the best results in profiling JavaScript performance, test pages using intentionally slow mobile devices. If you have an old phone in a desk drawer, this is a great second life for it.

An excellent substitute for using a real device is to use Chrome DevTools hardware emulation mode. We’ve written an extensive performance profiling guide to help you get started with runtime performance.

What about the other metrics?

Speed Index, First Contentful Paint and First Meaningful Paint are all browser-paint-based metrics. They’re influenced by similar factors and can often be improved at the same time.

It’s objectively easier to improve these metrics as they are calculated by how quickly a page renders. Following the Lighthouse Performance audit rules closely will result in these metrics improving.

If you aren’t already preloading your fonts or optimizing for critical requests, that is an excellent place to start a performance journey. Our article titled “The Critical Request” explains in great detail how the browser fetches and renders critical resources used to render your pages.

Tracking your progress and making meaningful improvements

Google’s newly updated Search Console, Lighthouse, and PageSpeed Insights are great ways to get initial visibility into the performance of your pages, but they fall short for teams that need to continuously track and improve performance.

Continuous performance monitoring is essential to ensuring speed improvements last and that teams are instantly notified when regressions happen. Manual testing introduces unexpected variability in results and makes testing from different regions, as well as on various devices, nearly impossible without a dedicated lab environment.

Speed has become a crucial factor for SEO rankings, especially now that nearly 50% of web traffic comes from mobile devices.

To avoid losing your search positioning, ensure you’re using an up-to-date performance suite to track key pages. (Pssst, we built Calibre to be your performance companion. It has Lighthouse built-in. Hundreds of teams from around the globe are using it every day.)


Weekly Platform News: Event Timing, Google Earth for Web, undead session cookies

Šime posts regular content for web developers on webplatform.news.

In this week’s news, Wikipedia helps identify three slow click handlers, Google Earth comes to the web, SVG properties in CSS get more support, and what to do in the event of zombie cookies.

Tracking down slow event handlers with Event Timing

Event Timing is experimentally available in Chrome (as an Origin Trial) and Wikipedia is taking part in the trial. This API can be used to accurately determine the duration of event handlers with the goal of surfacing slow events.
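Here’s a minimal sketch of what that observation can look like. The 100ms threshold is an arbitrary choice, and the API surface may change while it’s still an Origin Trial:

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // processingStart and processingEnd bracket the event handler's work.
    const handlerDuration = entry.processingEnd - entry.processingStart;
    if (handlerDuration > 100) {
      console.warn(`Slow ${entry.name} handler: ${Math.round(handlerDuration)}ms`);
    }
  }
});

observer.observe({ entryTypes: ['event'] });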

We quickly identified 3 very frequent slow click handlers experienced frequently by real users on Wikipedia. […] Two of those issues are caused by expensive JavaScript calls causing style recalculation and layout.

(via Gilles Dubuc)

Google Earth for Web beta available

The preview version of Google Earth for Web (powered by WebAssembly) is now available. You can try it out in Chromium-based browsers and Firefox — it runs single-threaded in browsers that don’t yet have (re-)enabled SharedArrayBuffer — but not in Safari because of its lack of full support for WebGL2.

(via Jordon Mears)

SVG geometry properties in CSS

Firefox Nightly has implemented SVG geometry properties (x, y, r, etc.) in CSS. This feature is already supported in Chrome and Safari and is expected to ship in Firefox 69 in September.

See the Pen “Animating SVG geometry properties with CSS” by Šime Vidas (@simevidas) on CodePen.

(via Jérémie Patonnier)

Browsers can keep session cookies alive

Chrome and Firefox allow users to restore the previous browser session on startup. With this option enabled, closing the browser will not delete the user’s session cookies, nor empty the sessionStorage of web pages.

Given this session resumption behavior, it’s more important than ever to ensure that your site behaves reasonably upon receipt of an outdated session cookie (e.g. redirect the user to the login page instead of showing an error).
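One way to handle that gracefully is to verify the restored session before rendering anything that depends on it. A minimal client-side sketch (the /api/session endpoint is hypothetical):

fetch('/api/session', { credentials: 'same-origin' }).then((response) => {
  if (response.status === 401) {
    // The restored cookie is stale; send the user to the login page
    // instead of showing an error.
    window.location.assign('/login?next=' + encodeURIComponent(location.pathname));
  }
});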

(via Eric Lawrence)


Google Fonts is Adding font-display

Google announced at I/O that their font service will now support the font-display property which resolves a number of web performance issues. If you’re hearing cries of joy, that’s probably Chris as he punches the air in celebration. He’s wanted this feature for some time and suggests that all @font-face blocks ought to consider using the property.

Zach Leatherman has the lowdown:

This is big news—it means developers now have more control over Google Fonts web font loading behavior. We can enforce instant rendering of fallback text (when using font-display: swap) rather than relying on the browser default behavior of invisible text for up to 3 seconds while the web font request is in-flight.

It’s also a bit of trailblazing, too. To my knowledge, this is the first web font host that’s shipping support for this very important font-display feature.

Yes, a big deal indeed! You may recall that font-display instructs the browser how (and kinda when) to load fonts. That makes it a possible weapon to fight FOUT and FOIT issues. Chris has mentioned how he likes the optional value for that exact reason.

@font-face {
  font-family: "Open Sans Regular";
  src: url("...");
  font-display: optional;
}
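On the Google Fonts side, the announced support is exposed as a display query parameter on the embed URL, for example:

https://fonts.googleapis.com/css?family=Open+Sans&display=swap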

Oh and this is a good time to remind everyone of Monica Dinculescu’s font-display demo where she explores all the various font-display values and how they work in practice. It’s super nifty and worth checking out.


How to Get a Progressive Web App into the Google Play Store

PWAs (Progressive Web Apps) have been with us for some time now. Yet, each time I try explaining it to clients, the same question pops up: “Will my users be able to install the app using app stores?” The answer has traditionally been no, but this changed with Chrome 72, which shipped a new feature called TWA (Trusted Web Activities).

Trusted Web Activities are a new way to integrate your web-app content such as your PWA with your Android app using a protocol based on Custom Tabs.

In this article, I will use Netguru’s existing PWA (Wordguru) and explain step-by-step what needs to be done to make the application available and ready to be installed straight from the Google Play app store.

Some of the things we cover here may sound silly to any Android developers out there, but this article is written from the perspective of a front-end developer, particularly one who has never used Android Studio or created an Android application. Also, please do note that a lot of what we’re covering here is still extremely experimental since it’s limited to Chrome 72.

Step 1: Set up a Trusted Web Activity

Setting up a TWA doesn’t require you to write any Java code, but you will need to have Android Studio. If you’ve developed iOS or Mac software before, this is a lot like Xcode in that it provides a nice development environment designed to streamline Android development. So, grab that and meet me back here.

Create a new TWA project in Android Studio

Did you get Android Studio? Well, I can’t actually hear or see you, so I’ll assume you did. Go ahead and crack it open, then click on “Start a new Android Studio project.” From there, let’s choose the “Add No Activity” option, which allows us to configure the project.

The configuration is fairly straightforward, but it’s always good to know what is what:

  • Name: The name of the application (but I bet you knew that).
  • Package name: An identifier for Android applications on the Play Store. It must be unique, so I suggest using the URL of the PWA in reverse order (e.g. com.netguru.wordguru).
  • Save location: Where the project will exist locally.
  • Language: This allows us to select a specific code language, but there’s no need for that since our app is already, you know, written. We can leave this at Java, which is the default selection.
  • Minimum API level: This is the version of the Android API we’re working with and is required by the support library (which we’ll cover next). Let’s use API 19.

There are a few checkboxes below these options. Those are irrelevant for us here, so leave them all unchecked, then move on to Finish.

Add TWA Support Library

A support library is required for TWAs. The good news is that we only need to modify two files to fill that requirement, and they both live in the same project directory: Gradle Scripts. Both are named build.gradle, but we can distinguish which is which by the description in parentheses.

There’s a Git package manager called JitPack that’s made specifically for Android apps. It’s pretty robust, but the bottom line is that it makes publishing our web app a breeze. It is a paid service, but I’d say it’s worth the cost if this is your first time getting something into the Google Play store.

Editor Note: This isn’t a sponsored plug for JitPack. It’s worth calling out because this post is assuming little-to-no familiarity with Android Apps or submitting apps to Google Play and it has less friction for managing an Android App repo that connects directly to the store. That said, it’s totally not a requirement.

Once you’re in JitPack, let’s connect our project to it. Open up that build.gradle (Project: Wordguru) file we just looked at and tell it to look at JitPack for the app repository:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
    ...
  }
}

OK, now let’s open up that other build.gradle file. This is where we can add any required dependencies for the project and we do indeed have one:

// build.gradle (Module: app)

dependencies {
  ...
  implementation 'com.github.GoogleChrome:custom-tabs-client:a0f7418972'
  ...
}

The TWA library uses Java 8 features, so we’re going to need to enable Java 8. To do that, we add compileOptions to the same file:

// build.gradle (Module: app)

android {
  ...
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
  ...
}

There are also variables called manifestPlaceholders that we’ll cover in the next section. For now, let’s add the following to define where the app is hosted, the default URL and the app name:

// build.gradle (Module: app)

android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      hostName: "wordguru.netguru.com",
      defaultUrl: "https://wordguru.netguru.com",
      launcherName: "Wordguru"
    ]
    ...
  }
  ...
}

Provide app details in the Android App Manifest

Every Android app has an Android App Manifest (AndroidManifest.xml) which provides essential details about the app, like the operating system it’s tied to, package information, device compatibility, and many other things that help Google Play display the app’s requirements.

The thing we’re really concerned with here is Activity (<activity>). This is what implements the user interface and is required for the “Activities” in “Trusted Web Activities.”

Funny enough, we selected the “Add No Activity” option when setting up our project in Android Studio and that’s because our manifest is empty and contains only the application tag.

Let’s start by opening up the manifest file. We’ll replace the existing package name with our own application ID and the label with the value from the manifestPlaceholders variables we defined in the previous section.

Then, we’re going to actually add the TWA activity by adding an <activity> tag inside the <application> tag.

<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.netguru.wordguru">

  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="${launcherName}"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">

    <activity
      android:name="android.support.customtabs.trusted.LauncherActivity"
      android:label="${launcherName}">

      <meta-data
        android:name="android.support.customtabs.trusted.DEFAULT_URL"
        android:value="${defaultUrl}" />

      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>

      <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW"/>
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE"/>
        <data
          android:scheme="https"
          android:host="${hostName}"/>
      </intent-filter>
    </activity>
  </application>
</manifest>

And that, my friends, is Step 1. Let’s move on to Step 2.

Step 2: Verify the relationship between the website and the app

TWAs require a connection between the Android application and the PWA. To do that, we use Digital Asset Links.

The connection must be set on both ends, where TWA is the application and PWA is the website.

To establish that connection we need to modify our manifestPlaceholders again. This time, we need to add an extra element called assetStatements that keeps the information about our PWA.

// build.gradle (Module: app)

android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      ...
      assetStatements: '[{ "relation": ["delegate_permission/common.handle_all_urls"], ' +
        '"target": {"namespace": "web", "site": "https://wordguru.netguru.com"}}]'
      ...
    ]
    ...
  }
  ...
}

Now, we need to add a new meta-data tag to our application tag. This will inform the Android application that we want to establish the connection with the application specified in the manifestPlaceholders.

<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="${packageId}">

  <application>
    ...
    <meta-data
      android:name="asset_statements"
      android:value="${assetStatements}" />
    ...
  </application>
</manifest>

That’s it! We just established the application-to-website relationship. Now let’s set up the website-to-application side.

To establish the connection in the opposite direction, we need to create a .json file that will be served from the website at the /.well-known/assetlinks.json path. The file can be created using a generator that’s built into Android Studio. See, I told you Android Studio helps streamline Android development!

We need three values to generate the file:

  • Hosting site domain: This is our PWA URL (e.g. https://wordguru.netguru.com/).
  • App package name: This is our TWA package name (e.g. com.netguru.wordguru).
  • App package fingerprint (SHA256): This is a unique cryptographic hash that is generated from the app’s signing keystore.

We already have the first and second values. We can get the last one using Android Studio.

First, we need to generate a signed APK. In Android Studio, go to: Build → Generate Signed Bundle or APK → APK.

Next, use the existing keystore, if you already have one. If you need one, go to “Create new…” first.

Then let’s fill out the form. Be sure to remember the credentials as those are what the application will be signed with and they confirm your ownership of the application.

This will create a keystore file that is required to generate the app package fingerprint (SHA256). This file is extremely important, as it works as proof that you are the owner of the application. If this file is lost, you will not be able to do any further updates to your application in the store.

Next up, let’s select the type of bundle. In this case, we’re choosing “release” because it gives us a production bundle. We also need to check the signature versions.

This will generate the APK that will be used later to create a release in the Google Play store. After creating our keystore, we can use it to generate the required app package fingerprint (SHA256).

Let’s head back to Android Studio, and go to Tools → App Links Assistant. This will open a sidebar that shows the steps that are required to create a relationship between the application and website. We want to go to Step 3, “Declare Website Association” and fill in required data: Site domain and Application ID. Then, select the keystore file generated in the previous step.

After filling out the form, press “Generate Digital Asset Links file”, which will generate our assetlinks.json file. If we open that up, it should look something like this:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.netguru.wordguru",
    "sha256_cert_fingerprints": ["8A:F4:....:29:28"]
  }
}]

This is the file we need to make available on our site at the /.well-known/assetlinks.json path. I will not describe how to make it available at that path, as that is too project-specific and outside the scope of this article.
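As a quick illustration, though, here’s a minimal sketch of serving the file from a Node/Express server (an assumption on my part; any hosting that can serve that exact path over HTTPS works just as well):

const path = require('path');
const express = require('express');

const app = express();

// Android's verifier requests this exact path on the site.
app.get('/.well-known/assetlinks.json', (req, res) => {
  res.sendFile(path.join(__dirname, 'assetlinks.json'));
});

app.listen(3000);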

We can test the relationship by clicking on the “Link and Verify” button. If all goes well, we get a confirmation with “Success!”

Yay! We’ve established a two-way relationship between our Android application and our PWA. It’s all downhill from here, so let’s drive it home.

Step 3: Get required assets

Google Play requires a few assets to make sure the app is presented nicely in the store. Specifically, here’s what we need:

  • App Icons: We need a variety of sizes, including 48×48, 72×72, 96×96, 144×144, 192×192… or we can use an adaptive icon.
  • High-res Icon: This is a 512×512 PNG image that is used throughout the store.
  • Feature Graphic: This is a 1024×500 JPG or 24-bit PNG (no alpha) banner that Google Play uses on the app details view.
  • Screenshots: Google Play will use these to show off different views of the app that users can check out prior to downloading it.

Having all those, we can proceed to the Google Play developer console and publish the application!

Step 4: Publish to Google Play!

Let’s go to the last step and finally push our app to the store.

Using the APK that we generated earlier (which is located in the AndroidStudioProjects directory), we need to go to the Google Play console to publish our application. I will not describe the process of publishing an application in the store as the wizard makes it pretty straightforward and we are provided step-by-step guidance throughout the process.

It may take a few hours for the application to be reviewed and approved, but when it is, it will finally appear in the store.

If you can’t find the APK, you can create a new one by going to Build → Generate signed bundle / APK → Build APK, passing our existing keystore file and filling in the alias and password that we used when we generated the keystore. After the APK is generated, a notice should appear and you can get to the file by clicking on the “Locate” link.

Congrats, your app is in Google Play!

That’s it! We just pushed our PWA to the Google Play store. The process is not as intuitive as we would like it to be, but still, with a bit of effort it is definitely doable, and believe me, it gives you a great feeling at the end when you see your app displayed in the wild.

It is worth pointing out that this feature is still very much in an early phase and I would consider it experimental for some time. I would not recommend going with a production release of your application for now, because this only works with Chrome 72 and above. Any version before that will be able to install the app, but the app itself will crash instantly, which is not the best user experience.

Also, the official release of custom-tabs-client does not support TWA yet. If you were wondering why we used a raw GitHub link instead of the official library release, well, that’s why.


Extending Google Analytics on CSS-Tricks with Custom Dimensions

The idea for this article sparked when Chris wrote this in Thank You (2018 Edition):

I almost wish our URLs had years in them because I still don’t have a way to scope analytic data to only show me data from content published this year. I can see the most popular stuff from the year, but that’s regardless of when it was published, and that’s dominated by the big guides we’ve had for years and keep updated.

I have been a long-time reader of CSS-Tricks, but have not yet had anything to contribute. Until now. Being a Google Analytics specialist by day, this was at last something I could contribute to CSS-Tricks. Let’s extend Google Analytics on CSS-Tricks!

Enter Google Analytics custom dimensions

Google Analytics gives you a lot of interesting insights about what visitors are doing on a website, just by adding the basic Google Analytics snippet to every page.

But Google Analytics is a one-size-fits-all tool.

In order to make it truly meaningful for a specific website like CSS-Tricks we can add additional meta information to our Google Analytics data.

The year an article was posted is an example of such meta data that Google Analytics does not have out of the box, but it’s something that is easily added to make the data much more useful. That’s where custom dimensions come in.

Create the custom dimension in Google Analytics

The first thing to do is create the new custom dimension. In the Google Analytics UI, click the gear icon, click Custom Definitions and then click Custom Dimensions.

Google Analytics admin interface

This shows a list of any existing custom dimensions. Click the red button to create a new custom dimension.

Custom dimensions overview

Let’s give the custom dimension a descriptive name. In this case, “year” seems quite appropriate since that’s what we want to measure.

The scope is important because it defines how the meta data should be applied to the existing data. In this case, the article year is related to each article the user is viewing, so we need to set it to the “hit” scope.

Another example would be meta data about the entire session, like if the user is logged in, that would be saved in a session-scoped custom dimension.

Alright, let’s save our dimension.

When the custom dimension is created, Google Analytics provides examples for how to implement it using JavaScript. We’re allowed up to 20 custom dimensions and each custom dimension is identified by an index. In this case, “year” is the first custom dimension, so it was created in Index 1 (see dimension1 in the JavaScript code below).

Custom dimension created at Index 1

If we had other custom dimensions defined, then those would live in another index. There is no way to change the index of a custom dimension, so take note of the one being used. A list of all indices can always be found in the overview:

That’s it, now it’s time to code!

Now we have to extract the article year in the code and add it to the payload so that it is sent to Google Analytics with the page view hit.

This is the code we need to execute, per the snippet we were provided when creating the custom dimension:

var dimensionValue = 'SOME_DIMENSION_VALUE';
ga('set', 'dimension1', dimensionValue);

Here is the tricky part. The ga() function is created when the Google Analytics snippet is loaded. In order to minimize the performance hit, it is placed at the bottom of the page on CSS-Tricks. This is what the basic Google Analytics snippet looks like:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-12345-1', 'auto');
ga('send', 'pageview');
</script>

We need to set the custom dimension value after the snippet is parsed and before the page view hit is sent to Google Analytics. Hence, we need to set it here:

// ...
ga('create', 'UA-12345-1', 'auto');
ga('set', 'dimension1', dimensionValue); // Set the custom dimension value
ga('send', 'pageview');

This code is placed outside the WordPress Loop, but the Loop is where we have access to meta information like the article year. Because of this, we need to store the article year in a JavaScript variable inside the Loop, then reference that variable in the Google Analytics snippet when we get to the bottom of the page.

Save the article year within the loop

In WordPress, a standard loop starts here:

<?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>

…and ends here:

<?php endwhile; else : ?>
	<p><?php esc_html_e( 'Sorry, no posts matched your criteria.' ); ?></p>
<?php endif; ?>

Somewhere between those lines, we extract the year and save it in a JavaScript variable:

<script>
	var articleYear = "<?php the_time('Y') ?>";
</script>

Reference the article year in the Google Analytics snippet

The Google Analytics snippet is placed on all pages of the website, but the year does not make sense for all pages (e.g. the homepage). Being the good JavaScript developers that we are, we will check if the variable has been defined in order to avoid any errors.

ga('create', 'UA-68528-29', 'auto');
if (typeof articleYear !== "undefined") {
	ga('set', 'dimension1', articleYear);
}
ga('send', 'pageview');

That’s it! The Google Analytics page view hit will now include the article year for all pages where it is defined.

Custom dimensions do not apply to historical data

One thing to know about custom dimensions — or any other modifications to your Google Analytics data — is that they only apply to new data being collected from the website. The custom dimensions described in this article were implemented in January 2019, which means that if we look at data from 2018, it will not have any data for the custom dimensions.

This is important to keep in mind for the rest of this article, when we begin to look into the data. The custom dimensions are added to all posts on CSS-Tricks, going all the way back to 2007, but we are only looking at page views that happened in 2019 — after the custom dimensions were implemented. For example, when we look at articles from 2011, we are not looking at page views in 2011 — we are looking at 2019 page views of posts published in 2011. This is important to keep in mind when we start to look at posts from previous years.

All set? OK, let’s take a look at the new data!

Viewing the data in Google Analytics

The easiest way to see the new data is to go to Behavior → Site Content → All Pages, which will show the most viewed pages:

All Pages report

In the dropdown above the table, select “year” as a secondary dimension.

Year as secondary dimension

That gives us a table like the one below, showing the year for all articles. Notice how the homepage, which is the second most viewed page, is removed from the table because it does not have a year associated with it.

We start to get a better understanding of the website. The most viewed page (by far) is the complete guide to Flexbox which was published back in 2013. Talk about evergreen content!

Table with year as secondary dimension

Secondary is good, primary is better

OK, so the above table adds some understanding of the most viewed pages, but let’s flip the dimensions so that year is the primary dimension. There is no standard report for viewing custom dimensions as the primary dimension, so we need to create a custom report.

Custom Reports overview

Give the Custom Report a good name. Finding the metrics (blue) and dimensions (green) is easiest by searching.

Create the Custom Report

Here is what the final Custom Report should look like, with some useful metrics and dimensions. Notice how we have selected Page below Year. This will become useful in a second.

The final Custom Report

Once we hit Save, we see the aggregate numbers for all article years. 2013 is still on top, but we now see that 2011 also had some great content, which was not in the top 10 lists we previously looked at. This suggests that no single article from 2011 stood out, but in total, 2011 had some great articles that still receive a lot of views in 2019.

Aggregated numbers for article years

The percentage next to the number of page views is the percentage of the total page views. Notice how 2018 “only” accounts for 8.11% of all page views and 2019 accounts for 6.24%. This is not surprising, but shows that CSS-Tricks is partly a huge success because of the vast amount of strong reference material posted over the years, which users keep referring to.

Let’s look into 2011.

Remember how we set up the Custom Report with the Page below the Year in dimensions? This means we can now click 2011 and drill-down into that year.

It looks like a lot of almanac pages were published in 2011, which in aggregate have a lot of page views. Notice the lower-right corner where it says “1-10 of 375.” This means that 375 articles from 2011 have been viewed on the site in 2019. That is impressive!

Back to the question: Great content from 2018

Before I forget: Let’s answer that initial question from Chris.

Let’s scope the analytics data to content published this year (2018). Here are the top 10 posts:

Top 10 posts published in 2018

Understanding the two-headed beast

In Thank You (2018 Edition), Chris also wrote:

For the last few years, I’ve been trying to think of CSS-Tricks as this two-headed beast. One head is that we’re trying to produce long-lasting referential content. We want to be a site that you come to or land on to find answers to front-end questions. The other head is that we want to be able to be read like a magazine. Subscribe, pop by once a week, snag the RSS feed… whatever you like, we hope CSS-Tricks is interesting to read as a hobbyist magazine or industry rag.

Let’s dig into that with another custom dimension: Post type. CSS-Tricks uses a number of custom post types like videos, almanac entries, and snippets in addition to the built-in post types, like posts or pages.

Let’s also extract that, like we did with the article year:

<script>
	var articleYear = "<?php the_time('Y') ?>";
	var articleType = "<?php echo get_post_type($post->ID); ?>";
</script>

We’ll save it into custom dimension Index 2, which is hit-scoped just like we did with year. Now we can build a new custom report like this:
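For completeness, the snippet at the bottom of the page sets the new dimension the same way we set the year earlier. A sketch following that same pattern:

ga('create', 'UA-68528-29', 'auto');
if (typeof articleYear !== "undefined") {
	ga('set', 'dimension1', articleYear);
}
if (typeof articleType !== "undefined") {
	ga('set', 'dimension2', articleType);
}
ga('send', 'pageview');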

Custom post types

Now we know that blog posts account for 55% of page views, while snippets and almanac (the long-lasting referential content) account for 44%.

Now, blog posts can also be referential content, so it is safe to say that at least half of the traffic on CSS-Tricks is coming because of the referential content.

From a one-man band to a 333-author content team

When CSS-Tricks started in 2007 it was just Chris. At the time of writing, 333 authors have contributed.

Let’s see how those authors have contributed to the page views on CSS-Tricks using — you probably guessed it — another custom dimension!

<script>
	var articleYear = "<?php the_time('Y') ?>";
	var articleAuthor = "<?php the_author() ?>";
	var articleType = "<?php echo get_post_type($post->ID); ?>";
</script>
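Assuming the author dimension was created at Index 3 (hit-scoped, like the others), the page view snippet grows in the same way:

if (typeof articleAuthor !== "undefined") {
	ga('set', 'dimension3', articleAuthor);
}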

Here are the top 10 most viewed authors in 2019.

Top 10 authors on CSS-Tricks

Let’s break this down even further by year with a secondary dimension and select 500 rows in the lower-right corner, so we get all 465 rows.

Top 10 authors and year

We can then export the data to Excel and make a pivot table of the data, counting authors per year.

Excel pivot table with count of authors per year

You like charts? We can make one with some beautiful v17 colors, showing the number of authors per year.

Authors per year

It is amazing to see the steady growth in authors contributing to CSS-Tricks per year. And given 2019 already has 33 different authors, it looks like 2019 could set a new record.

But are those new authors generating any page views?

Let’s make a new pivot chart where we compare Chris to all other authors.

Pivot table comparing page views

…and then chart that over time.

Share of page views by author per year

It definitely looks like CSS-Tricks is becoming a multi-author site. While Chris is still the #1 author, it is good to see that the constant flow of new high-quality content does not solely depend on him, which is a good trend for CSS-Tricks and makes it possible to cover a lot more topics going forward.

But what happened in 2011, you might ask? Let’s have a look. In a custom report, you can have five levels of dimensions. For now we will stick with four.

Custom report with four dimensions to drill into

Now we can click on the year 2011 and get the list of authors.

2011 authors

Hello Sara Cope! What awesome content did you write in 2011?

Sara Cope almanac pages

Looks like a lot of those almanac pages we saw earlier. Click that!

107 almanac pages by Sara Cope

Indeed, a lot of almanac pages! 107 to be exact. A lot of great content that still receives lots of page views in 2019 to boot.

Summary

Google Analytics is a powerful tool for understanding what users are doing on your website, and with a little work, meta data that is specific to your website can make it even more powerful. As seen in this article, adding a few simple pieces of meta data that are already accessible in WordPress can unlock a world of opportunities to analyze and add a whole new dimension of knowledge about the content and visitors of a site, like we did here on CSS-Tricks.


If you’re interested in another similar journey involving custom dimensions and making Google Analytics data way more useful, check out Chris Coyier and Philip Walton in Learning to Use Google Analytics More Effectively at CodePen.


Google Fonts and font-display

The font-display descriptor in @font-face blocks is really great. It goes a long way, all by itself, for improving the perceived performance of web font loading. Loading web fonts is tricky stuff and having a tool like this that works as well as it does is a big deal for the web.

It’s such a big deal that Google’s own PageSpeed Insights / Lighthouse will ding you for not using it. A cruel irony, as their own Google Fonts (easily the most-used repository of custom fonts on the web) doesn’t offer any way to use font-display.

Summarized by Daniel Dudas here:

Google Developers suggests using Lighthouse -> Lighthouse warns about not using font-display on loading fonts -> Web page uses Google Fonts the way it’s suggested on Google Fonts -> Google Fonts doesn’t support font-display -> Facepalm.

Essentially, we developers would love a way to get font-display in that @font-face block that Google serves up, like this:

@font-face {
  font-family: "Open Sans Regular";
  src: url("...");
  font-display: swap;
}

Or, some kind of alternative that is just as easy and just as effective.

Seems like query params is a possibility

When you use a Google Font, they give you a URL that coughs up a stylesheet and makes the font work. Like this:

https://fonts.googleapis.com/css?family=Roboto

They also support URL params for a variety of things, like weights:

https://fonts.googleapis.com/css?family=Open+Sans:400,700

And subsets:

http://fonts.googleapis.com/css?family=Creepster&text=TRICKS
https://fonts.googleapis.com/css?family=Open+Sans:400,700&subset=cyrillic

So, why not…

http://fonts.googleapis.com/css?family=Creepster&font-display=swap

The lead on the project says that caching is an issue with that (although it’s been refuted by some since they already support arbitrary text params).

Adding query params reduces x-site cache hits. If we end up with something for font-display plus a bunch of params for variable fonts that could present us with problems.

They say that again later in the thread, so it sounds unlikely that we’re going to get query params any time soon, but I’d love to be wrong.

Option: Download & Self-Host Them

All Google Fonts are open source, so we can snag a copy of them to use for whatever we want.

Once the font files are self-hosted and served, we’re essentially writing @font-face blocks to link them up ourselves and we’re free to include whatever font-display we want.

Option: Fetch & Alter

Robin Richtsfeld posted an idea to run an Ajax call from JavaScript for the font, then alter the response to include font-display and inject it.

const loadFont = (url) => {
  // the 'fetch' equivalent has caching issues
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);
  xhr.onreadystatechange = () => {
    if (xhr.readyState == 4 && xhr.status == 200) {
      let css = xhr.responseText;
      css = css.replace(/}/g, 'font-display: swap; }');

      const head = document.getElementsByTagName('head')[0];
      const style = document.createElement('style');
      style.appendChild(document.createTextNode(css));
      head.appendChild(style);
    }
  };
  xhr.send();
}

loadFont('https://fonts.googleapis.com/css?family=Rammetto+One');

Clever! Although, I’m not entirely sure how this fits into the world of font loading. Since we’re now handling loading this font in JavaScript, the loading performance is tied to when and how we’re loading the script that runs this. If we’re going to do that, maybe we ought to look into using the official webfontloader?

Option: Service Workers

Similar to above, we can fetch the font and alter it, but do it at the Service Worker level so we can cache it (perhaps more efficiently). Adam Lane wrote this:

self.addEventListener("fetch", event => {
  event.respondWith(handleRequest(event))
});

async function handleRequest(event) {
  const response = await fetch(event.request);
  if (event.request.url.indexOf("https://fonts.googleapis.com/css") === 0 && response.status < 400) {
    // Assuming you have a specific cache name setup
    const cache = await caches.open("google-fonts-stylesheets");
    const cacheResponse = await cache.match(event.request);
    if (cacheResponse) {
      return cacheResponse;
    }
    const css = await response.text();
    const patched = css.replace(/}/g, "font-display: swap; }");
    const newResponse = new Response(patched, {headers: response.headers});
    cache.put(event.request, newResponse.clone());
    return newResponse;
  }
  return response;
}
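For completeness, the page still needs to register the worker. A minimal sketch (the /sw.js path is an assumption):

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}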

Even Google agrees that using Service Workers to help Google Fonts is a good idea. Workbox, their library for abstracting service worker management, uses Google Fonts as the first demo on the homepage:

// Cache the Google Fonts stylesheets with a stale while revalidate strategy.
workbox.routing.registerRoute(
  /^https:\/\/fonts\.googleapis\.com/,
  workbox.strategies.staleWhileRevalidate({
    cacheName: 'google-fonts-stylesheets',
  }),
);

// Cache the Google Fonts webfont files with a cache first strategy for 1 year.
workbox.routing.registerRoute(
  /^https:\/\/fonts\.gstatic\.com/,
  workbox.strategies.cacheFirst({
    cacheName: 'google-fonts-webfonts',
    plugins: [
      new workbox.cacheableResponse.Plugin({
        statuses: [0, 200],
      }),
      new workbox.expiration.Plugin({
        maxAgeSeconds: 60 * 60 * 24 * 365,
      }),
    ],
  }),
);

Option: Cloudflare Workers

Pier-Luc Gendreau looked into using Cloudflare Workers to handle this, but then followed up with Supercharge Google Fonts with Cloudflare and Service Workers, apparently for even better perf.

It has a repo.

Option: Wait for @font-feature-values

One of the reasons Google might be dragging its heels on this (they’ve said the same) is that there is a new CSS @rule called @font-feature-values that is designed just for this situation. Here’s the spec:

This mechanism can be used to set a default display policy for an entire font-family, and enables developers to set a display policy for @font-face rules that are not directly under their control. For example, when a font is served by a third-party font foundry, the developer does not control the @font-face rules but is still able to set a default font-display policy for the provided font-family. The ability to set a default policy for an entire font-family is also useful to avoid the ransom note effect (i.e. mismatched font faces) because the display policy is then applied to the entire font family.

There doesn’t seem to be much movement at all on this (just a little), so it doesn’t seem wise to wait on it.


Google Labs Web Components

I think it’s kinda cool to see Google dropping repos of interesting web components. It demonstrates the possibilities of cool new web features and allows them to ship in a way that’s built entirely on web standards.

Here’s one: <two-up>

I wanted to give it a try, so I linked up their example two-up-min.js script in a Pen and used the element by itself to see how it works. They expose the component’s styling with custom properties, which I’d say is a darn nice use case for those.
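A minimal sketch of that kind of usage, assuming two-up-min.js has already registered the element (the image paths and the custom property name here are hypothetical):

// Build a <two-up> comparison and tweak one of its exposed styling hooks.
const twoUp = document.createElement('two-up');

const before = document.createElement('img');
before.src = 'before.jpg';
const after = document.createElement('img');
after.src = 'after.jpg';
twoUp.append(before, after);

// Styling is exposed via CSS custom properties (property name assumed).
twoUp.style.setProperty('--split-point', '25%');

document.body.append(twoUp);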

See the Pen <two-up> by Chris Coyier (@chriscoyier) on CodePen.
