Weekly Platform News: Impact of Third-Party Code, Passive Mixed Content, Countries with the Slowest Connections

In this week’s roundup, Lighthouse sheds light on third-party scripts, insecure resources will get blocked on secure sites, and many country connection speeds are still trying to catch up to others… literally.


Measure the impact of third-party code during page load

Lighthouse, Chrome’s built-in auditing tool, now shows a warning when the impact of third-party code on page load performance is too high. The pre-existing “Third-party usage” diagnostic audit will now fail if the total main-thread blocking time caused by third parties is larger than 250ms during page load.

Note: This feature was added in Lighthouse version 5.3.0, which is currently available in Chrome Canary.

(via Patrick Hulce)

Passive mixed content is coming to an end

Currently, browsers still allow web pages loaded over a secure connection (HTTPS) to load images, videos, and audio over an insecure connection. Such insecurely-loaded resources on securely-loaded pages are known as “passive mixed content,” and they represent a security and privacy risk.

An insecurely-loaded image can allow an attacker to communicate incorrect information to the user (e.g., a fabricated stock chart), mutate client-side state (e.g., set a cookie), or induce the user to take an unintended action (e.g., changing the label on a button).

Starting next February, Chrome will auto-upgrade all passive mixed content to https:, and resources that fail to load over https: will be blocked. According to data from Chrome Beta, auto-upgrade currently fails for about 30% of image loads.

(via Emily Stark)

Fast connections are still not common in many countries

Data from Chrome UX Report shows that there are still many countries and territories in the world where most people access the Internet over a 3G or slower connection. (This includes a number of small island nations that are not visible on this map.)

(via Paul Calvano)

More news…

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.



Images Are Not Static Content

We constantly hear about the importance of keeping websites lean and fast. A fast-loading website makes users more satisfied, and satisfied users spend more time and money on your website. However, website optimization is a complex task, as there is not one silver bullet to fix all of the issues causing poor performance.

We also hear that addressing the performance of images is low-hanging fruit if you want to improve your site’s user experience. However, anyone who has gotten their hands dirty trying to optimize images and cover major use cases and scenarios with responsive images knows that the complexity of this task escalates quickly. For most medium to large sites, image optimization is not a task suited to humans. This is why image content delivery networks (CDNs) exist.

An image CDN is, just like the name suggests, a content delivery network built especially for images. So why would we need a special CDN to serve images? Why not use a regular CDN to serve static files? The short answer is that images are not static files…

Most image CDNs treat an image as dynamic content by optimizing the image in different ways based on the context in which the image is consumed.

Explained a bit differently: if you’re using responsive images on your website, an image CDN will automatically generate the derivatives of the image according to the sizes specified in the markup, usually based on URL parameters. For example, the code below selects from three derivatives specified in the srcset attribute based on three breakpoints:

<img src="//i.foo.com/image.jpg" alt="cat"    srcset="//i.foo.com/image.jpg?width=320 320w, //i.foo.com/image.jpg?width=640 640w, //i.foo.com/image.jpg?width=1280 1280w"    sizes="(max-width: 480px) 100vw, (max-width: 900px) 33vw, 254px">

This way, the developer or designer doesn’t have to worry about creating all the image versions beforehand. That is very good news, because the number of derivative images can quickly multiply across breakpoints, image formats, and screen resolutions. And that is before we’ve started talking about art direction.

Dynamic Image Optimization on Autopilot

Now that we’ve seen how an image CDN can create different sizes of an image on the fly, let’s examine how this improves web performance.

Before we choose an image CDN for the examples later on, it is important to point out the difference between an image CDN and a digital asset management (DAM) tool. A DAM, such as Cloudinary, is mostly focused on the file management aspect and often lets you edit images and apply art direction, like filters. These DAMs usually need a general-purpose CDN in front of them, and there is little support for automating image optimization tasks.

On the opposite end of the scale is ImageEngine. ImageEngine is the most effective image CDN on the market thanks to its built-in device detection, which enables superior image optimization for mobile traffic. Since mobile devices account for more than 50% of the traffic in many countries, ImageEngine truly has an advantage over other CDNs. While most other image CDNs offer little or no automatic optimization, ImageEngine has a more advanced approach thanks to its focus on mobile traffic. Hence, ImageEngine is able to produce the best results with less implementation effort and maintenance.

How ImageEngine Improves Web Performance

With ImageEngine handling all image traffic, images are no longer static content. Images are now adapted and served exactly in the size, format, compression rate and resolution needed. Fine. But how do we measure the improvement?

These days, the “go to tool” for identifying performance issues and measuring performance is Google Lighthouse. Lighthouse is available as a standalone app and in your Chrome developer tools.

We’ll run a performance audit on an e-commerce demo page listing product images.

The page has a typical responsive grid layout with product images. The layout has a few breakpoints where the display size of the images changes because the number of items per row changes. Moreover, there is a mouseover feature displaying a different image of the product. The mouseover effect is handled by JavaScript, and the hidden image is always loaded in our example. So all in all, quite a few images and potential sizes.

Step One: Assess Current State

Running the Lighthouse audit on the demo page, we see a number of issues, summarized in a performance score of 98. The best score is 100, so 98 might not seem that bad. True, but pay closer attention to the metrics below the score. The performance score is calculated from a few metrics with varied weighting, and the images on our page have a direct and indirect impact on these metrics.

In the details of the report, we see a few opportunities related to images listed:

  • Properly size images. The images do not have the right pixel size. This is quite common on pages with a responsive or fluid layout.
  • Serve images in next-gen formats. For Chrome, this basically means converting images to WebP. WebP is usually a more efficient format than most others when it comes to byte size and decoding speed.
  • Efficiently encode images. There is more compression that can be applied to the images before impacting perceived visual quality.

The estimated savings (to the right in the report) are huge. This demonstrates why addressing images is considered a low hanging fruit for performance.

Step Two: Implement ImageEngine

If you haven’t signed up already, create a free ImageEngine trial account. Once you’ve completed signup, you can define the image origin (usually your website) and a domain from which you want to serve images. That domain may be something like images.mydomain.com. You point this domain name to ImageEngine with a CNAME record in your DNS, and you’re good to go.

The next step is changing the markup to make the most out of ImageEngine’s automatic features.

If our previous image tag looked like this:

<img class="pic-1" src="images/demo9/img-1.jpg">

Our new image tag will look like this when the ImageEngine domain name is serving the images:

<img class="pic-1" src="https://images.mydomain.com/images/demo9/img-1.jpg">

Because our grid layout is fluid with four breakpoints, we might also consider using the responsive images syntax:

<img
  class="pic-1"
  src="https://images.mydomain.com/images/demo9/img-1.jpg"
  sizes="(max-width: 576px) 93vw,
         (max-width: 768px) 238px,
         (max-width: 992px) 148px,
         253px">

Thanks to ImageEngine’s support for Client Hints, ImageEngine will now generate the exact pixel size needed. Client Hints are additional HTTP headers the browser can send to enable more accurate image resizing. Client Hints are currently only supported in Chrome.
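One detail worth knowing: browsers only send Client Hints after they have been requested, typically via an Accept-CH response header. As a rough illustration of that handshake (not ImageEngine’s setup; the header list, route, and logging are assumptions), a Node/Express-style sketch might look like this:

// Minimal sketch of the Client Hints handshake, assuming a Node/Express-style
// server. The header list, route, and logging are illustrative only; this is
// not ImageEngine's implementation.
const express = require("express");
const app = express();

app.use((req, res, next) => {
  // Ask the browser to include these hints on subsequent requests.
  res.set("Accept-CH", "DPR, Width, Viewport-Width");
  next();
});

app.get("/images/:name", (req, res) => {
  // An image service can read the hints and pick an appropriately sized derivative.
  const dpr = Number(req.get("DPR") || 1);
  const width = Number(req.get("Width") || 0);
  console.log(`Image ${req.params.name} requested at width ${width}px, DPR ${dpr}`);
  res.sendStatus(204); // placeholder; a real service would resize and stream the image
});

app.listen(3000);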

Step Three: Measure the Improvement

Running the Lighthouse audit again, we see that the score is now 100. More importantly, look at the improvements in the timings. “Time to Interactive,” for example, is 0.7 seconds shorter, meaning less waiting before the user can interact with the page. All because the images are properly optimized.

What does “optimized” really mean in this case? Why is the page faster and the user experience better with ImageEngine? Most of the positive impact is due to the reduction in the byte size of the images. The fewer bytes, the faster the images are transferred from the host (or ImageEngine’s edge servers) to the browser. Moreover, lighter images are usually faster to decode and render onto the user’s screen. This is very simplified, but let’s see how much ImageEngine reduces the image payload by using WebPageTest.org to compare our demo site with and without ImageEngine:

ImageEngine reduces the image payload to only 25% of the original size.

Bonus: Fix Caching

In the continuous hunt for improved performance, you may have seen this alert from Lighthouse.

Lighthouse thinks the images have too short a Time To Live (TTL), measured in seconds, in the browser cache. By default, ImageEngine passes on the cache directives given by the origin, but luckily this can be changed in ImageEngine’s management interface.

Next Step: Automate Image Optimization

We’ve seen how images can no longer be treated as static content if we want a high-performing website. Because images have such a high impact on website performance, they must be tailored according to the capabilities and context of the browser and user.

A purpose-built image CDN relieves humans of the responsibility of trying to accommodate all possible combinations of image formats, sizes, and compression levels. Managing image derivatives is not a task for humans, as it quickly grows unmanageable.

Tools like Lighthouse and WebPageTest.org document the positive impact that image CDNs like ImageEngine have on important performance metrics.


Recipes for Performance Testing Single Page Applications in WebPageTest

WebPageTest is an online tool and an Open Source project to help developers audit the performance of their websites. As a Web Performance Evangelist at Theodo, I use it every single day. I am constantly amazed at what it offers to the web development community at large and the web performance folks particularly — for free.

But things can get difficult pretty quickly when dealing with Single Page Applications — usually written with React, Vue, Svelte or any other front-end framework. How can you get through a log in page? How can you test the performance of your users’ flow, when most of it happens client-side and does not have a specific URL to point to?

Throughout this article, we are going to find out how to solve these problems (and many more), and you’ll be ready to test the performance of your Single Page Application with WebPageTest!

Note: This article requires an intermediate understanding of some of WebPageTest’s advanced features.

If you are curious about web performance and want a good introduction to WebPageTest, I would highly recommend the following resources:

The problem with testing Single Page Applications

Single Page Applications (SPAs) radically changed the way websites work. Instead of letting the back end (e.g. Django, Rails and Laravel) do most of the grunt work and delivering “ready-to-use” HTML to the browser, SPAs rely heavily on JavaScript to have the browser compute HTML. Such front-end frameworks include React, Vue, Angular or Svelte.

The simplicity of WebPageTest is part of its appeal to developers: head to http://webpagetest.org/easy, enter your URL, wait a little, and voilà! Your performance audit is ready.

If you are building an SPA and want to measure its performance, you could rely on end-to-end testing tools like Selenium, Cypress or Puppeteer. However, I have found that none of these has the amount of performance-related information and easy-to-use tooling that WebPageTest offers.

But testing SPAs with WebPageTest can be complex.

In many SPAs, most of the site is protected behind a log in form. I often use Netlify for hosting my sites (including my personal blog), and most of the time I spend in the application is on authenticated pages, like the dashboard listing all my websites. As the information on my dashboard is specific to me, any other user trying to access https://app.netlify.com/teams/phacks/sites is not going to see my dashboard, but will instead be redirected to either a login or 404 page.

The same goes for WebPageTest. If I enter my dashboard URL into http://webpagetest.org/easy, the audit will be performed against the login page.

Moreover, testing and monitoring the performance of dynamic interactions in SPAs cannot be achieved with simple WebPageTest audits.

Here’s an example. Nuage is a domain name registrar with fancy animations and a beautiful, dynamic interface. When you search for domain names to buy, an asynchronous call fetches the results of the request and the results are displayed as they are retrieved.

As you might have noticed in the video above, the URL of the page does not change as I type my search terms. As a consequence, it is not possible to test the performance of the search experience using a simple WebPageTest audit as we do not have a proper URL to point to the action of searching something — only to an empty search page.

Some other problems can arise from the SPA paradigm shift when using WebPageTest:

  • Clicking around to navigate a webpage is usually harder than merely heading to a new URL, but it is sometimes the only option in SPAs.
  • Authentication in SPAs is usually implemented using JSON Web Tokens instead of good ol’ cookies, which rules out the option of setting authentication cookies directly in WebPageTest (as described here).
  • Using React and Redux (or other application state management libraries) for your SPA can mean that forms are harder to fill out programmatically, since using .innerText() or .value() to set a field’s value may not forward it to the application store.
  • As API calls are often asynchronous and various loaders can be used to indicate a loading state, those can “trick” WebPageTest into believing the page has actually finished loading when it has, in fact, not. I have seen it happen with longer-than-usual API calls (5+ seconds).

As I have faced these problems on several projects, I have come up with a range of tips and techniques to counter them.

The many ways of selecting an element

Selecting DOM elements is a key part of doing all sorts of automated testing, be it for end-to-end testing with Selenium or Cypress or for performance testing with WebPageTest. Selecting DOM elements allows us to click on links and buttons, fill in forms and more generally interact with the application.

There are several ways of selecting a particular DOM element using native browser APIs, ranging from the straightforward document.getElementsByClassName to the thorny but really powerful XPath selectors. In this section, we will see three different possibilities, ordered by increasing complexity.

Get an element by id, className or tagName

If the element you want to select (say, an “Empty Cart” button) has a specific and unique id (e.g. #empty-cart), class name, or is the only button on the page, you can click on it using the getElementsBy methods:

const emptyCartButton = document.getElementById("empty-cart");
// or document.getElementsByClassName("empty-cart-button")[0]
// or document.getElementsByTagName("button")[0]
emptyCartButton.click();

If you have several buttons on the same page, you can filter the resulting list before interacting with the element:

const buttons = Array.from(document.getElementsByTagName("button"));
const emptyCartButton = buttons.filter(button =>
  button.innerText.includes("Empty Cart")
)[0];
emptyCartButton.click();

Use complex CSS selectors

Sometimes, the particular element you want to interact with does not have a usable unique property in its ID, class, or tag.

One way to circumvent this issue is to add that uniqueness manually, for testing purposes only. Adding #perf-test-empty-cart-button to a specific button is innocuous for your website markup and can dramatically simplify your testing setup.

However, this solution can sometimes be out of reach: you may not have access to the source code of the application, or may not be able to deploy new versions quickly. In those situations, it is useful to know about document.querySelector (and its variant document.querySelectorAll) and using complex CSS selectors.

Here are a few examples of what can be achieved with document.querySelector:

// Select the first input with the `name="username"` property
document.querySelector("input[name='username']");

// Select all number inputs
document.querySelectorAll("input[type='number']");

// Select the first h1 inside the <section>
document.querySelector("section h1");

// Select the first direct descendent of a <nav> which is of type <img>
document.querySelector("nav > img");

What’s interesting here is you have the full power of CSS selectors at hand. I encourage you to have a look at the always-useful MDN’s reference table of selectors!

Going nuclear: XPath selectors

XML Path Language (XPath), albeit really powerful, is harder to grasp and maintain than the CSS solutions above. I rarely have to use it, but it is definitively useful to know that it exists.

One such instance is when you want to select a node by its text value, and can’t resort to CSS selectors. Here’s a handy snippet to use in those cases:

// Returns the <span> that has the exact content 'Sep 16, 2015'
document.evaluate(
  "//span[text()='Sep 16, 2015']",
  document,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
).singleNodeValue;

I will not go into details on how to use it as it would have me wander away from the goal of this article. To be fair, I don’t even know what many of the parameters above even mean. However, I can definitely recommend the MDN documentation should you want to read on the topic.

Recipes for common use cases

In the following section, we will see how to test the performance in common use cases of Single Page Applications. I call these my testing recipes.

In order to illustrate those recipes, I will use the React Admin demo website as an example. React Admin is an open source project aimed at building admin applications and back offices.

It is a typical example of a SPA because it uses React (as the name suggests), calls remote APIs, has a login interface, many forms and relies on client-side routing. I encourage you to go take a quick look at the website (the demo account is demo/demo ) in order to have an idea of what we will be trying to achieve.

Authentication and forms

The authentication page of React Admin requires the user to input a username and a password:

The authentication screen of React Admin

Intuitively, one could take the following approach to filling in and submitting the form:

const [usernameInput, passwordInput] = document.getElementsByTagName("input");
usernameInput.value = "demo"; // innerText could also be used here
passwordInput.value = "demo";
document.getElementsByTagName("button")[0].click();

If you run these commands sequentially in a DevTools console on the login page, you will see that all fields are reset and the login request fails upon submitting by clicking the button. The problem comes from the fact that the new values we set with .value() (or .innerText()) are not kicked back to the Redux store, and thus not “processed” by the application.

What we need to do then is explicitly tell React that the value has changed so that it will update internal bookkeeping accordingly. This can be achieved using the Event interface.

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

Note: this solution is pretty hacky (even according to its own author), however it works well for our purposes here.

Our updated script becomes:

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

const [usernameInput, passwordInput] = document.getElementsByTagName("input");

updateInputValue(usernameInput, "demo");
updateInputValue(passwordInput, "demo");

document.getElementsByTagName("button")[0].click();

Hurrah! You can try it in your browser’s console. It works like a charm.

Translating this to an actual WebPageTest script (with scripting keywords, single line commands and tab-separated parameters) would look like this:

setEventName    Go to Login
navigate    https://marmelab.com/react-admin-demo/

setEventName    Login
exec    const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec    const [usernameInput, passwordInput] = document.getElementsByTagName("input")
exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")
execAndWait    document.getElementsByTagName("button")[0].click()

Note that clicking on the submit button leads us to a new page and triggers API calls, which means we need to use the execAndWait command.

You can see the full results of the test at this address. (Note: the results may have been archived by WebPageTest — you can, however, run the test again yourself!)

Here is a short video (captured by WebPageTest) in which you can see that we indeed passed the authentication step:

Navigating between pages

For traditional Server Rendered pages, navigating from one URL to the next in WebPageTest scripting is done via the navigate <url> command.

However, for SPAs, this does not reflect the experience of the user, as client-side routing means that the server has no role in navigation. Thus, hitting a URL directly would significantly slow down the measured performance (because of the time it takes for the JavaScript framework to be compiled, parsed and executed), a slowdown that the user does not experience when changing pages. As it is crucial to simulate the user flow the best we can, we need to handle the navigation on the client as well.

Fortunately, this is a lot simpler to do than filling out forms. We only need to select the link (or button) that will take us to the new page and .click() on it! Let’s continue with our previous example, although now we want to test the performance of the Reviews list and of a single Review page.

A user would typically click on the Reviews item on the left-hand navigation menu, then on any item in the list. Inspecting the elements in DevTools may lead us to a selection strategy as follows:

document.querySelector("a[href='#reviews']"); // select the Reviews link in the menu document.querySelector("table tr"); // select the first item in the Reviews list

As both clicks lead to page transition and API calls (to fetch the reviews), we need to use the execAndWait keyword for the script:

setEventName    Go to Login
navigate    https://marmelab.com/react-admin-demo/

setEventName    Login
exec    const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec    const [usernameInput, passwordInput] = document.getElementsByTagName("input")
exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")
execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews
execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review
execAndWait    document.querySelector("table tbody tr").click()

Here’s the video of the complete script running on WebPageTest:

The audit result from WebPageTest shows the performance metrics and waterfall graphs for each step of the script, allowing us to monitor the performance of each API call and interaction:

What about Internet Explorer 11 compatibility?

WebPageTest allows us to select which location, browser and network conditions the test will use. Internet Explorer 11 (IE11) is among the available browser options, and if you try the previous scripts on IE11, they will fail.

This is due to two reasons: IE11 does not support the ES6 syntax we used (arrow functions, let and const, array destructuring), and it does not support creating events with the Event() constructor.

The ES6 syntax problem can be overcome by translating our scripts to ES5 syntax (no arrow functions, no let and const, no array destructuring), which might look like this:

setEventName    Go to Login
navigate    https://marmelab.com/react-admin-demo/

setEventName    Login
exec    var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new Event("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec    var usernameInput = document.getElementsByTagName("input")[0]
exec    var passwordInput = document.getElementsByTagName("input")[1]
exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")
execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews
execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review
execAndWait    document.querySelector("table tbody tr").click()

In order to bypass the absence of CustomEvent support, we can turn to polyfills and add one manually at the top of the script. This polyfill is available on MDN:

(function() {
  if (typeof window.CustomEvent === "function") return false;
  function CustomEvent(event, params) {
    params = params || { bubbles: false, cancelable: false, detail: undefined };
    var evt = document.createEvent("CustomEvent");
    evt.initCustomEvent(
      event,
      params.bubbles,
      params.cancelable,
      params.detail
    );
    return evt;
  }
  CustomEvent.prototype = window.Event.prototype;
  window.CustomEvent = CustomEvent;
})();

We can then replace all mentions of Event by CustomEvent, set the polyfill to fit on a single line and we are good to go!

setEventName    Go to Login
navigate    https://marmelab.com/react-admin-demo/

exec    (function(){if(typeof window.CustomEvent==="function")return false;function CustomEvent(event,params){params=params||{bubbles:false,cancelable:false,detail:undefined};var evt=document.createEvent("CustomEvent");evt.initCustomEvent(event,params.bubbles,params.cancelable,params.detail);return evt}CustomEvent.prototype=window.Event.prototype;window.CustomEvent=CustomEvent})();

setEventName    Login
exec    var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new CustomEvent("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec    var usernameInput = document.getElementsByTagName("input")[0]
exec    var passwordInput = document.getElementsByTagName("input")[1]
exec    updateInputValue(usernameInput, "demo")
exec    updateInputValue(passwordInput, "demo")
execAndWait    document.getElementsByTagName("button")[0].click()

setEventName    Go to Reviews
execAndWait    document.querySelector("a[href='#/reviews']").click()

setEventName    Open a single Review
execAndWait    document.querySelector("table tbody tr").click()

Et voilà!

General tips and tricks for WebPageTest scripting

One last thing I want to do is provide a few tips and tricks that make writing WebPageTest scripts easier. Feel free to DM me on Twitter if you have any suggestions!

Security first!

Remember to tick both privacy checkboxes if your script includes sensitive data, like credentials!

WebPageTest security controls

Browse the docs

The WebPageTest Scripting docs are full of features that I didn’t cover in this article, ranging from DNS Overriding to iPhone Spoofing and even if/else conditionals.

When you plan on writing a new script, I recommend having a look at the available parameters first to see if any of them can make your scripting easier or more robust.

Long loading states

Sometimes, a remote API call (say, for fetching the reviews) will take a long time. A loading indicator, such as a spinner, can be used to tell the user to wait a bit as something is happening.

WebPageTest tries to detect when a page has finished loading by figuring out if things are changing on the screen. If your loading indicator lasts a long time, WebPageTest might mistake it for an integral part of your page design and cut the audit before the API call returns — thus truncating your measures.

A way to circumvent this issue is to tell WebPageTest to wait at least a certain duration before stopping the test. This is a parameter available under the Advanced tab:

WebPageTest minimum test duration

Keeping your script (and results) human-readable

  • Use blank lines and comments (//) generously because single-line JavaScript commands can sometimes be hard to grasp.
  • Keep a multi-line version somewhere as your reference, and single-line everything as you are about to test. This helps readability. Like, a lot.
  • Use setEventName to name your different “steps.” This makes for more readable tests, as it spells out the sequence of pages the audit goes through, and the names also appear in the WebPageTest results.

Iterating on your scripts

  • First, make sure that your script works in the browser. To do so, strip the WebPageTest keywords (the first word of every line of your script), then copy and paste each line in the browser console to verify that everything is working as expected at every step of the way.
  • Once you are ready to submit your test to WebPageTest, do it first with very light settings: only one run, a fast browser (cough — not IE11 — cough), no network throttling, no repeat view, a well-dimensioned instance (Dulles, VA, usually has good response times). This will help you detect and correct errors way faster.

Automating your scripts

Your test script is running smoothly, and you start getting performance reports of your Single Page App. As you ship new features, it is important to monitor its performance regularly to catch regressions early.

To address this problem, I am currently working on Falco, a soon-to-be-open-sourced WebPageTest test runner. Falco takes care of automating your audits, then presents the results in an easy-to-read interface while letting you read the full reports when you need them. You can follow me on Twitter to know when it goes open source, and to learn more about web performance and WebPageTest!



Blocking Third-Party Hands from the Cookie Jar

Third-party cookies are set on your computer from domains other than the one that you’re actually on right now. For example, if I log into css-tricks.com, I’ll get a cookie from css-tricks.com that handles my authentication. But css-tricks.com might also load an image from some other site. A common tactic in online advertising is to render a “tracking pixel” image (well named, right?) that is used to track advertising impressions. That request to another site for the image (say, ad.doubleclick.com) also can set a cookie.

Eric Lawrence explains the issue:

The tracking pixel’s cookie is called a third party cookie because it was set by a domain unrelated to the page itself.

If you later visit B.textslashplain.com, which also contains a tracking pixel from ad.doubleclick.net, the tracking pixel’s cookie set on your visit to A.example.com is sent to ad.doubleclick.net, and now that tracker knows that you’ve visited both sites. As you browse more and more sites that contain a tracking pixel from the same provider, that provider can build up a very complete profile of the sites you like to visit, and use that information to target ads to you, sell the data to a data aggregation company, etc.

But times are a changin’. Eric goes on to explain the browser landscape:

The default stuff is the big deal, because all browsers offer some way to block third-party cookies. But of course, nobody actually does it. Jeremy:

It’s hard to believe that we ever allowed third-party cookies and scripts in the first place. Between them, they’re responsible for the worst ills of the World Wide Web.

2019 is the year we apparently reached the breaking point.


Let’s Not Forget About Container Queries

Container queries are always on the top of the list of requested improvements to CSS. The general sentiment is that if we had container queries, we wouldn’t write as many global media queries based on page size. That’s because we’re actually trying to control a more scoped container, and the only reason we use media queries for that now is because it’s the best tool we have in CSS. I absolutely believe that.

There is another sentiment that goes around once in a while that goes something like: “you developers think you need container queries but you really don’t.” I am not a fan of that. It seems terribly obvious that we would do good things with them if they were available, not the least of which is writing cleaner, portable, understandable code. Everyone seems to agree that building UIs from components is the way to go these days which makes the need for container queries all the more obvious.

It’s wonderful that there are modern JavaScript ideas that help us use them today — but that doesn’t mean the technology needs to stay there. It makes way more sense in CSS.

Here’s my late 2019 thought dump on the subject:

  • Philip Walton’s “Responsive Components: a Solution to the Container Queries Problem” is a great look at using JavaScript’s ResizeObserver as one way to solve the issue today. It performs great and works anywhere. The demo site is the best one out there because it highlights the need for responsive components (although there are other documented use cases as well). Philip even says a pure CSS solution would be more ideal.
  • CSS nesting got a little round of enthusiasm about a year ago. The conversation makes it seem like nesting is plausible. I’m in favor of this as a long-time fan of sensible Sass nesting. It makes me wonder if the syntax for container queries could leverage the same sort of thing. Maybe nested queries are scoped to the parent selector? Or you prefix the media statement with an ampersand as the current spec does with descendant selectors?
  • Other proposed syntaxes generally involve some use of the colon, like .container:media(max-width: 400px) { }. I like that, too. Single-colon selectors (pseudo-selectors) philosophically mean “select the element under these conditions” — like :hover, :nth-child, etc. — so a media scope makes sense.
  • I don’t think syntax is the biggest enemy of this feature; it’s the performance of how it is implemented. Last I understood, it’s not even performance as much as it mucks with the entire rendering flow of how browsers work. That seems like a massive hurdle. I still don’t wanna forget about it. There is lots of innovation happening on the web and, just because it’s not clear how to implement it today, that doesn’t mean someone won’t figure out a practical path forward tomorrow.


Patterns for Practical CSS Custom Properties Use

I’ve been playing around with CSS Custom Properties to discover their power since browser support is finally at a place where we can use them in our production code. I’ve been using them in a number of different ways and I’d love for you to get as excited about them as I am. They are so useful and powerful!

I find that CSS variables usage tends to fall into categories. Of course, you’re free to use CSS variables however you like, but thinking of them in these different categories might help you understand the different ways in which they can be used.

  • Variables. The basics, such as setting a brand color to use wherever needed.
  • Default Values. For example, a default border-radius that can be overridden later.
  • Cascading Values. Using clues based on specificity, such as user preferences.
  • Scoped Rulesets. Intentional variations on individual elements, like links and buttons.
  • Mixins. Rulesets intended to bring their values to a new context.
  • Inline Properties. Values passed in from inline styles in our HTML.

The examples we’ll look at are simplified and condensed patterns from a CSS framework I created and maintain called Cutestrap.

A quick note on browser support

There are two common lines of questions I hear when Custom Properties come up. The first is about browser support. What browsers support them? Are there fallbacks we need to use where they aren’t supported?

Global browser support for the features we’re covering in this post sits at about 85% market share. Still, it’s worth cross-referencing caniuse with your user base to determine how much and where progressive enhancement makes sense for your project.
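If you want to branch behavior for trailing browsers, a quick runtime check is one option. Here is a small sketch; the no-custom-props class is a hypothetical hook for whatever static fallback styles you choose to ship, and a CSS-only alternative is an @supports rule.

// Detect Custom Property support before opting into variable-driven styling.
// The "no-custom-props" class is a hypothetical hook for fallback styles.
const supportsCustomProps =
  window.CSS && CSS.supports && CSS.supports("(--a: 0)");

if (!supportsCustomProps) {
  document.documentElement.classList.add("no-custom-props");
}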

The second question is always about how to use Custom Properties. So let’s dive into usage!

Pattern 1: Variables

The first thing we’ll tackle is setting a variable for a brand color as a Custom Property and using it on an SVG element. We’ll also use a fallback to cover users on trailing browsers.

html {
  --brand-color: hsl(230, 80%, 60%);
}

.logo {
  fill: pink; /* fallback */
  fill: var(--brand-color);
}

Here, we’ve declared a variable called --brand-color in our html ruleset. The variable is defined on an element that’s always present, so it will cascade to every element where it’s used. Long story short, we can use that variable in our .logo ruleset.

We declared a pink fallback value for trailing browsers. In the second fill declaration, we pass --brand-color into the var() function, which will return the value we set for that Custom Property.

That’s pretty much how the pattern goes: define the variable (--variable-name) and then use it on an element (var(--variable-name)).

See the Pen
Patterns for Practical Custom Properties: Example 1.0
by Tyler Childs (@tylerchilds)
on CodePen.

Pattern 2: Default values

The var() function we used in the first example can also provide default values in case the Custom Property it is trying to access is not set.

For example, say we give buttons a rounded border. We can create a variable — we’ll call it --roundness — but we won’t define it like we did before. Instead, we’ll assign a default value when putting the variable to use.

.button {
  /* --roundness: 2px; */
  border-radius: var(--roundness, 10px);
}

A use case for default values without defining the Custom Property is when your project is still in design but your feature is due today. This makes it a lot easier to update the value later if the design changes.

So, you give your button a nice default, meet your deadline and when --roundness is finally set as a global Custom Property, your button will get that update for free without needing to come back to it.

See the Pen
Patterns for Practical Custom Properties: Example 2.0
by Tyler Childs (@tylerchilds)
on CodePen.

You can edit the demo on CodePen and uncomment the code above to see what the button will look like when --roundness is set!

Pattern 3: Cascading values

Now that we’ve got the basics under our belt, let’s start building the future we owe ourselves. I really miss the personality that AIM and MySpace had by letting users express themselves with custom text and background colors on profiles.

Let’s bring that back and build a school message board where each student can set their own font, background color and text color for the messages they post.

User-based themes

What we’re basically doing is letting students create a custom theme. We’ll set the theme configurations inside data-attribute rulesets so that any descendants — a .message element in this case — that consume the themes will have access to those Custom Properties.

.message {
  background-color: var(--student-background, #fff);
  color: var(--student-color, #000);
  font-family: var(--student-font, "Times New Roman", serif);
  margin-bottom: 10px;
  padding: 10px;
}

[data-student-theme="rachel"] {
  --student-background: rgb(43, 25, 61);
  --student-color: rgb(252, 249, 249);
  --student-font: Arial, sans-serif;
}

[data-student-theme="jen"] {
  --student-background: #d55349;
  --student-color: #000;
  --student-font: Avenir, Helvetica, sans-serif;
}

[data-student-theme="tyler"] {
  --student-background: blue;
  --student-color: yellow;
  --student-font: "Comic Sans MS", "Comic Sans", cursive;
}

Here’s the markup:

<section>
  <div data-student-theme="chris">
    <p class="message">Chris: I've spoken at events and given workshops all over the world at conferences.</p>
  </div>
  <div data-student-theme="rachel">
    <p class="message">Rachel: I prefer email over other forms of communication.</p>
  </div>
  <div data-student-theme="jen">
    <p class="message">Jen: This is why I immediately set up my new team with Slack for real-time chat.</p>
  </div>
  <div data-student-theme="tyler">
    <p class="message">Tyler: I miss AIM and MySpace, but this message board is okay.</p>
  </div>
</section>

We have set all of our student themes using [data-student-theme] selectors for our student theme rulesets. The Custom Properties for background, color, and font will apply to our .message ruleset if they are set for that student because .message is a descendant of the div containing the data-attribute that, in turn, contains the Custom Property values to consume. Otherwise, the default values we provided will be used instead.

See the Pen
Patterns for Practical Custom Properties: Example 3.0
by Tyler Childs (@tylerchilds)
on CodePen.

Readable theme override

As fun and cool as it is for users to control custom styles, what users pick won’t always be accessible with considerations for contrast, color vision deficiency, or anyone that prefers their eyes to not bleed when reading. Remember the GeoCities days?

Let’s add a class that provides a cleaner look and feel and set it on the parent element (<section>) so it overrides any student theme when it’s present.

.readable-theme [data-student-theme] {
  --student-background: hsl(50, 50%, 90%);
  --student-color: hsl(200, 50%, 10%);
  --student-font: Verdana, Geneva, sans-serif;
}

<section class="readable-theme">
  ...
</section>

We’re utilizing the cascade to override the student themes by setting a higher specificity such that the background, color, and font will be in scope and will apply to every .message ruleset.

See the Pen
Patterns for Practical Custom Properties: Example 3.1
by Tyler Childs (@tylerchilds)
on CodePen.

Pattern 4: Scoped rulesets

Speaking of scope, we can scope Custom Properties and use them to streamline what is otherwise boilerplate CSS. For example, we can define variables for different link states.

a {
  --link: hsl(230, 60%, 50%);
  --link-visited: hsl(290, 60%, 50%);
  --link-hover: hsl(230, 80%, 60%);
  --link-active: hsl(350, 60%, 50%);
}

a:link {
  color: var(--link);
}

a:visited {
  color: var(--link-visited);
}

a:hover {
  color: var(--link-hover);
}

a:active {
  color: var(--link-active);
}
<a href="#">Link Example</a>

Now that we’ve written out the Custom Properties globally on the <a> element and used them on our link states, we don’t need to write them again. These are scoped to our <a> element’s ruleset so they are only set on anchor tags and their children. This allows us to not pollute the global namespace.

Example: Grayscale link

Going forward, we can control the links we just created by changing the Custom Properties for our different use cases. For example, let’s create a gray-colored link.

.grayscale {
  --link: LightSlateGrey;
  --link-visited: Silver;
  --link-hover: DimGray;
  --link-active: LightSteelBlue;
}
<a href="#" class="grayscale">Link Example</a>

We’ve declared a .grayscale ruleset that contains the colors for our different link states. Since the selector for this ruleset has a greater specificity than the default, these variable values are used and then applied to the pseudo-class rulesets for our link states instead of what was defined on the <a> element.

See the Pen
Patterns for Practical Custom Properties: Example 4.0
by Tyler Childs (@tylerchilds)
on CodePen.

Example: Custom links

If setting four Custom Properties feels like too much work, what if we set a single hue instead? That could make things a lot easier to manage.

.custom-link {
  --hue: 30;
  --link: hsl(var(--hue), 60%, 50%);
  --link-visited: hsl(calc(var(--hue) + 60), 60%, 50%);
  --link-hover: hsl(var(--hue), 80%, 60%);
  --link-active: hsl(calc(var(--hue) + 120), 60%, 50%);
}

.danger {
  --hue: 350;
}

<a href="#" class="custom-link">Link Example</a>
<a href="#" class="custom-link danger">Link Example</a>

See the Pen
Patterns for Practical Custom Properties: Example 4.1
by Tyler Childs (@tylerchilds)
on CodePen.

By introducing a variable for a hue value and applying it to our HSL color values in the other variables, we merely have to change that one value to update all four link states.

Calculations are powerful in combination with Custom Properties since they let your styles be more expressive with less effort. Check out this technique by Josh Bader where he uses a similar approach to enforce accessible color contrasts on buttons.

Pattern 5: Mixins

A mixin, in regards to Custom Properties, is a function declared as a Custom Property value. The arguments for the mixin are other Custom Properties that will recalculate the mixin when they’re changed which, in turn, will update styles.

The custom link example we just looked at is actually a mixin. We can set the value for --hue and then each of the four link states will recalculate accordingly.

Example: Baseline grid foundation

Let’s learn more about mixins by creating a baseline grid to help with vertical rhythm. This way, our content has a pleasant cadence by utilizing consistent spacing.

.baseline,
.baseline * {
  --rhythm: 2rem;
  --line-height: var(--sub-rhythm, var(--rhythm));
  --line-height-ratio: 1.4;
  --font-size: calc(var(--line-height) / var(--line-height-ratio));
}

.baseline {
  font-size: var(--font-size);
  line-height: var(--line-height);
}

We’ve applied the ruleset for our baseline grid to a .baseline class and any of its descendants.

  • --rhythm: This is the foundation of our baseline. Updating it will impact all the other properties.
  • --line-height: This is set to --rhythm by default, since --sub-rhythm is not set here.
  • --sub-rhythm: This allows us to override the --line-height — and subsequently, the --font-size — while maintaining the overall baseline grid.
  • --line-height-ratio: This helps enforce a nice amount of spacing between lines of text.
  • --font-size: This is calculated by dividing our --line-height by our --line-height-ratio.

We also set our font-size and line-height in our .baseline ruleset to use the --font-size and --line-height from our baseline grid. In short, whenever the rhythm changes, the line height and font size change accordingly while maintaining a legible experience.

OK, let’s put the baseline to use.

Let’s create a tiny webpage. We’ll use our --rhythm Custom Property for all of the spacing between elements.

.baseline h2,
.baseline p,
.baseline ul {
  padding: 0 var(--rhythm);
  margin: 0 0 var(--rhythm);
}

.baseline p {
  --line-height-ratio: 1.2;
}

.baseline h2 {
  --sub-rhythm: calc(3 * var(--rhythm));
  --line-height-ratio: 1;
}

.baseline p,
.baseline h2 {
  font-size: var(--font-size);
  line-height: var(--line-height);
}

.baseline ul {
  margin-left: var(--rhythm);
}

<section class="baseline">
  <h2>A Tiny Webpage</h2>
  <p>This is the tiniest webpage. It has three noteworthy features:</p>
  <ul>
    <li>Tiny</li>
    <li>Exemplary</li>
    <li>Identifies as Hufflepuff</li>
  </ul>
</section>

We’re essentially using two mixins here: --line-height and --font-size. We need to set the properties font-size and line-height to their Custom Property counterparts in order to set the heading and paragraph. The mixins have been recalculated in those rulesets, but they need to be set before the updated styling will be applied to them.

See the Pen
Patterns for Practical Custom Properties: Example 5.0
by Tyler Childs (@tylerchilds)
on CodePen.

Something to keep in mind: You probably do not want to use the Custom Property values in the ruleset itself when applying mixins using a wildcard selector. It gives those styles a higher specificity than any other inheritance that comes along with the cascade, making them hard to override without using !important.

Pattern 6: Inline properties

We can also declare Custom Properties inline. Let’s build a lightweight grid system to demonstrate.

.grid {
  --columns: auto-fit;

  display: grid;
  gap: 10px;
  grid-template-columns: repeat(var(--columns), minmax(0, 1fr));
}

<div class="grid">
  <img src="https://www.fillmurray.com/900/600" alt="Bill Murray" />
  <img src="https://www.placecage.com/900/600" alt="Nic Cage" />
  <img src="https://www.placecage.com/g/900/600" alt="Nic Cage gray" />
  <img src="https://www.fillmurray.com/g/900/600" alt="Bill Murray gray" />
  <img src="https://www.placecage.com/c/900/600" alt="Nic Cage crazy" />
  <img src="https://www.placecage.com/gif/900/600" alt="Nic Cage gif" />
</div>

By default, the grid has equally sized columns that will automatically lay themselves into a single row.

See the Pen
Patterns for Practical Custom Properties: Example 6.0
by Tyler Childs (@tylerchilds)
on CodePen.

To control the number of columns, we can set our --columns Custom Property inline on our grid element.

<div class="grid" style="--columns: 3;">   ... </div>

See the Pen
Patterns for Practical Custom Properties: Example 6.1
by Tyler Childs (@tylerchilds)
on CodePen.
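Inline Custom Properties also pair nicely with JavaScript. If the column count comes from application state rather than the markup, the same --columns value can be set from a script. Here is a small sketch, where the ".grid" selector and the value of 3 are just for illustration:

// Set the inline Custom Property from JavaScript instead of a style attribute.
// The ".grid" selector and the value "3" are illustrative.
const grid = document.querySelector(".grid");
grid.style.setProperty("--columns", "3");

// Reading it back:
console.log(getComputedStyle(grid).getPropertyValue("--columns")); // logs the current value, e.g. "3"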


We just looked at six different use cases for Custom Properties — at least ones that I commonly use. Even if you were already aware of and have been using Custom Properties, hopefully seeing them used these ways gives you a better idea of when and where to use them effectively.

Are there different types of patterns you use with Custom Properties? Share them in the comments and link up some demos — I’d love to see them!

If you’re new to Custom Properties and are looking to level up, try playing around with the examples we covered here, but add media queries to the mix. You’ll see how adaptive these can be and how many interesting opportunities open up when you have the power to change values on the fly.

Plus, there are a ton of other great resources right here on CSS-Tricks to up your Custom Properties game in the Custom Properties Guide.

See the Pen
Thank you for Reading!
by Tyler Childs (@tylerchilds)
on CodePen.


A Snippet to See all SVGs in a Sprite

I think of an SVG sprite as this:

<svg display="none">   <symbol id="icon-one"> ... <symbol>   <symbol id="icon-two"> ... <symbol>   <symbol id="icon-three"> ... <symbol> </svg>

I was long a fan of that approach for icon systems (<use>-ing them as needed), but I favor including the SVGs directly as needed these days. Still, sprites are fine, and fairly popular.

What if you have a sprite, and you wanna see what’s in it?

Here’s a tiny bit of JavaScript that will loop through all the symbols it finds and inject an SVG that uses each one…

const sprite = document.querySelector("#sprite");
const symbols = sprite.querySelectorAll("symbol");

symbols.forEach(symbol => {
  document.body.insertAdjacentHTML("beforeend", `
    <svg width="50" height="50">
      <use xlink:href="#${symbol.id}" />
    </svg>
  `)
});

See the Pen
Visually turn a sprite into individual SVGs
by Chris Coyier (@chriscoyier)
on CodePen.

That’s all.


Wufoo Cracks the Code for Forms So You Don’t Have To

There was a lot of buzz about forms last week when Jason Grigsby pointed to a missing pattern attribute on Chipotle’s order form that could have been used to help push through millions of dollars in orders. Adrian Roselli followed that up with the common mistake of forgetting for and id attributes on form inputs and the potential cost of it.

Forms are hard. And that’s without thinking about more complex features, like building conditional logic into questions, getting into validation, triggering emails on submission, handling inputs on different devices, storing submissions, or integrating with other services, among many, many other things. Forms aren’t just hard, they are downright complicated.

That’s why I’m glad there’s a company like Wufoo that has all of that sorted out. There’s been many a time where I convince myself I can build a form myself only to abandon the idea for an embedded Wufoo form instead.

Why Wufoo? First off, it’s been around forever. They focus on forms and forms alone, so I’m confident they know exactly what they’re doing. I get all the semantic markup I want based on their tried and tested product and adding it to my (or any other) site is as easy as dropping in a snippet.

Plus, Wufoo continues to innovate! They’re releasing new features all the time. Just this past month, they shipped a new Zapier integration that opens up a ton of possibilities, like sending submissions to a Google spreadsheet, firing off submission notifications in Slack, creating Trello cards from submissions, and more. And again, that’s on top of an already stacked featured set that offers everything from multi-page forms and showing and hiding fields conditionally to collecting payments and allowing file uploads over a secure encrypted connection.

You can see where we use Wufoo here on CSS-Tricks to power the contact form. What’s cool about that simple form is we can direct email notifications to specific inboxes based on the contact selection. It even integrates with Mailchimp, so we can offer an option to sign up for our newsletter directly in the contact form.

We also decided to use Wufoo to improve the way we accept guest posts (and you should definitely submit an idea). We used to lean on plain ol’ email and the contact form, but using Wufoo has allowed us to level up so we can collect more details about a post submission upfront and tailor the form based on the type of submission it is.

I’d say Wufoo is great for any type of form. It handles anything you throw at it, easily integrates into any site, and helps prevent costly mistakes that are apparently worth gobs of cash for some companies.

The post Wufoo Cracks the Code for Forms So You Don’t Have To appeared first on CSS-Tricks.


Clipping, Clipping, and More Clipping!

There are so many things you can do with clipping paths. I’ve been exploring them for quite some time and have come up with different techniques and use cases for them — and I want to share my findings with you! I hope this will spark new ideas for fun things you can do with the CSS clip-path property. Hopefully, you’ll either put them to use on your projects or play around and have fun with them.

Before we dive in, it’s worth mentioning that this is my third post here on CSS-Tricks about clip paths. You might want to check out the previous two for a little background.

This article is full of new ideas!

Idea 1: The Double Clip

One neat trick is to use clipping paths to cut content many times. It might sound obvious, but I haven’t seen many people using this concept.

For example, let’s look at an expanding menu:

See the Pen “The more menu” by Mikael Ainalem (@ainalem) on CodePen.

Clipping can only be applied to a DOM node once. A node cannot have several active instances of the same CSS rule, which means one clip-path per element. Yet there is no upper limit to how many clipped nodes you can combine. We can, for example, place a clipped <div> inside another clipped <div>, and so on. Across an element’s ancestry of DOM nodes, we can apply as many separate cuts as we want.

That’s exactly what I did in the demo above. I let a clipped node fill out another clipped node. The parent acts as a boundary, which the child fills up while zooming in. This creates an unusual effect where a rounded menu appears. Think of it as an advanced method of overflow: hidden.
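Here is a minimal sketch of the idea, with class names and shapes I made up for illustration rather than pulled from the demo. The assumed markup is a .boundary element wrapping a .content element; each carries its own clip-path, so the child can zoom while the parent keeps cropping it:

/* Assumed markup: <div class="boundary"><div class="content">…</div></div> */

.boundary {
  /* first cut: the parent acts as the visible window */
  clip-path: circle(40% at 50% 50%);
}

.content {
  /* second cut: the child carries its own, independent clip */
  clip-path: polygon(0 0, 100% 0, 100% 80%, 0 100%);
  transition: transform 0.4s ease;
}

.content.is-open {
  /* the child zooms freely; the parent's clip still bounds what is visible */
  transform: scale(3);
}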

You can, of course, argue that SVGs are better suited for this purpose. Compared to clipping paths, SVG is capable of doing a lot more. Among other things, SVG provides smooth scaling. If clipping fully supported Bézier curves, the conversation would be different. This is not the case at the time of writing. Regardless, clipping paths are very convenient. One node, one CSS rule and you’re good to go. As far as the demo above is concerned, clipping paths do a good job and thus are a viable option.

I put together a short video that explains the inner workings of the menu:

Idea 2: Zooming Clip Paths

Another (less obvious) trick is to use clipping paths for zooming. We can actually use CSS transitions to animate clipping paths!

The transition system is quite astonishing in how it’s built. In my opinion, its addition to the web is one of the biggest leaps that the web has taken in recent years. It supports transitions between a whole range of different values. Clipping paths are among the accepted values we can animate. Animation, in general, means interpolation between two extremes. For clipping, this translates to an interpolation between two complete, different paths. Here’s where the web’s refined animation system shows its strength. It doesn’t only work with single values — it also works when animating sets of values.

When animating clipping paths specifically, each coordinate gets interpolated separately. This is important. It makes clipping path animations look coherent and smooth.
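As a rough illustration (these values are mine, not the demo’s), a transition between two polygon() values works as long as both paths list the same number of points; each coordinate pair then interpolates on its own:

.reveal {
  /* start: a polygon so small it is effectively invisible */
  clip-path: polygon(49% 49%, 51% 49%, 51% 51%, 49% 51%);
  transition: clip-path 0.8s ease-in-out;
}

.reveal.is-zoomed {
  /* end: the same four points pushed outside the viewport, so no cut is visible */
  clip-path: polygon(-50% -50%, 150% -50%, 150% 150%, -50% 150%);
}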

Let’s look at the demo. Click on an image to restart the effect:

See the Pen “Brand cut zoom” by Mikael Ainalem (@ainalem) on CodePen.

I’m using clip-path transitions in this demo to zoom from a clipping path that covers a tiny region to one that is huge. The smallest version of the clipping path is much smaller than the resolution — in other words, it’s invisible to the eye when applied. The other extreme is slightly bigger than the viewport. At this zoom level, no cuts are visible since all clipping takes place outside the visible area. Animating between these two different clipping paths creates an interesting effect. The clipped shape seems to reveal the content behind it as it zooms in.

As you may have noticed, the demo uses different shapes. In this case, I’m using logos of popular shoe brands. This gives you an idea of what the effect would look like when used in a more realistic scenario.

Again, here’s a video that walks through the technical stuff in fine detail:

Idea 3: A Clipped Overlay

Another idea is to use clipping paths to create highlight effects. For example, let’s say we want to use clipping paths to create an active state in a menu.

See the Pen “Skewed stretchy menu” by Mikael Ainalem (@ainalem) on CodePen.

The demo above is not currently supported by Safari.

The clipped path above stretches between the different menu options as it animates. We’re also using an interesting shape to make the UI stand out a bit.

The demo uses an altered copy of the same content where the duplicate copy sits on top of the existing content. It’s placed in the exact same position as the menu and serves as the active state. In essence, it appears like any other regular active state for a menu. The difference is that it’s created with clipping paths rather than fancy CSS styles on HTML elements.

Using clipping enables creating some unusual effects here. The skewed shape is one thing, but we also get the stretchy effect as well. The menu comes with two separate cuts — one on the left and one on the right — which makes it possible to animate the cuts with different timing using transition delays. The result is a stretchy animation with very little effort. As the default easing is non-linear, the delay causes a slight rubber band effect.

The second trick here is to apply the delays depending on direction. If the active state needs to move to the right, then the right side needs to start animating first, and vice versa. I get the directional awareness by using a dash of JavaScript to apply the correct class accordingly on clicks.
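A stripped-down sketch of that idea follows; the selectors and values are my own guesses at the structure, not code from the pen. Two nested elements each own one of the cuts via inset(), and a direction class decides which side gets the delay:

/* Assumed markup: <div class="highlight cut-left"><div class="cut-right">…</div></div> */

.cut-left  { clip-path: inset(0 0 0 20%); transition: clip-path 0.3s; }
.cut-right { clip-path: inset(0 60% 0 0); transition: clip-path 0.3s; }

/* Moving right: the right edge leads, the left edge lags, producing the stretch */
.to-right .cut-left  { transition-delay: 0.1s; }
.to-right .cut-right { transition-delay: 0s; }

/* Moving left: the delays swap; a bit of JavaScript toggles the class on click */
.to-left .cut-left   { transition-delay: 0s; }
.to-left .cut-right  { transition-delay: 0.1s; }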

Idea 4: Slices of the Pie

How often do you see a circular expanding menu on the web? Preposterous, right!? Well, clipping paths not only make it possible but fairly trivial as well.

See the Pen “The circular menu” by Mikael Ainalem (@ainalem) on CodePen.

We normally see menus that contain links ordered in a single line or even in dropdowns, like the first trick we looked at. What we’re doing here instead is placing those links inside arcs rather than rectangles. Using rectangles would be the conventional way, of course. The idea here is to explore a more mobile-friendly interaction with two specific UX principles in mind:

  • A clear target that is comfortable to tap with a thumb
  • Change takes place close to the focal point — the place where your visual focus is at the moment

The demo is not specifically about clipping paths. I just happen to use clipping paths to create the pen. Again, like the expandable menu demo earlier, it’s a question of convenience. With clipping and a border radius of 50%, I get the arcs I need in no time.
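Roughly, a single slice can be cut like this (my own numbers, not the pen’s). The 50% border radius turns the box into a circle, and the polygon keeps only a wedge of it, so the curved outer edge comes for free:

.slice {
  width: 200px;
  height: 200px;
  /* the radius rounds the box into a circle... */
  border-radius: 50%;
  /* ...and the clip keeps a wedge from the center out; the arc is the circle's edge */
  clip-path: polygon(50% 50%, 100% 0%, 100% 50%);
}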

Idea 5: The Toggle

Toggles never cease to amaze web developers like us. It seems like someone introduces a new interpretation of a toggle every week. Well, here’s mine:

See the Pen “Inverted toggle” by Mikael Ainalem (@ainalem) on CodePen.

The demo is a remake of a Dribbble shot by Oleg Frolov. It combines three of the techniques we covered earlier in this article. Those are:

  • The double clip
  • Zooming clip paths
  • A clipped overlay

All these on/off switches seem to have one thing in common. They consist of an oval background and a circle, resembling real mechanical switches. The way this toggle works is by scaling up a circular clipping path inside a rounded container. The container cuts the content by overflow: hidden, i.e. double clipping.

Another key part of the demo is having two alternating versions in the markup: the original and its yin-yang inverted, mirrored copy. Using two versions instead of one is, at the risk of being repetitive, a matter of convenience. With two, we only need to create a transition for the first version. Then, we can reuse most of it for the second. At the end of the transition, the toggle switches over to the opposite version. As the inverted version is identical to the previous end state, it’s impossible to spot the shift. The good thing about this technique is reusing parts of the animation. The drawback is the jank we get when the animation is interrupted, which happens when the user taps the toggle before the animation has completed.
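The core move can be sketched like this (simplified, with made-up values): a circle() clip on a fill layer scales up inside a pill-shaped container, and the container’s overflow: hidden provides the second cut:

.toggle {
  width: 120px;
  height: 60px;
  border-radius: 30px;
  overflow: hidden; /* outer cut: the pill-shaped boundary */
}

.toggle .fill {
  height: 100%;
  /* inner cut: a small circle where the knob sits */
  clip-path: circle(25px at 30px 30px);
  transition: clip-path 0.5s ease;
}

.toggle.is-on .fill {
  /* the circle grows until it covers the whole pill */
  clip-path: circle(150px at 30px 30px);
}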

Let’s again have a look under the hood:

Closing words

You might think: Exploration is one thing, but what about production? Can I use clipping paths on a site I’m currently working on? Is it ready for prime time?

Well, that question doesn’t have a straightforward answer. There are, among other things, two problems to take a closer look at:

1. Browser support
2. Performance

At the time of writing there is, according to caniuse, about 93% browser support. I’d say we’re on the verge of mass adoption. Note that this number takes the WebKit prefix into account.

There’s also always the IE argument but it’s really no argument to me. I can’t see that it’s worthwhile to go the extra mile for IE. Should you create workarounds for an insecure browser? Your users are better off with a modern browser. There are, of course, a few rare cases where legacy is a must. But you probably won’t consider modern CSS at all in those cases.

How about performance then? Well, performance gets tricky as things mount up, but there’s nothing I’d say would prevent us from using clipping paths today. It’s always measured performance that counts. It’s probable that clipping, on average, causes a bigger performance hit than other CSS rules. But remember that the practices we’ve covered here are recommendations, not law. Treat them as such. Make a habit out of measuring performance.

Go on, cut your web pages in pieces. See what happens!

The post Clipping, Clipping, and More Clipping! appeared first on CSS-Tricks.
