
Reliably Send an HTTP Request as a User Leaves a Page

On several occasions, I’ve needed to send off an HTTP request with some data to log when a user does something like navigate to a different page or submit a form. Consider this contrived example of sending some information to an external service when a link is clicked:

<a href="/some-other-page" id="link">Go to Page</a>  <script> document.getElementById('link').addEventListener('click', (e) => {   fetch("/log", {     method: "POST",     headers: {       "Content-Type": "application/json"     },      body: JSON.stringify({       some: "data"     })   }); }); </script>

There’s nothing terribly complicated going on here. The link is permitted to behave as it normally would (I’m not using e.preventDefault()), but before that behavior occurs, a POST request is triggered on click. There’s no need to wait for any sort of response. I just want it to be sent to whatever service I’m hitting.

On first glance, you might expect the dispatch of that request to be synchronous, after which we’d continue navigating away from the page while some other server successfully handles that request. But as it turns out, that’s not what always happens.

Browsers don’t guarantee to preserve open HTTP requests

When something occurs to terminate a page in the browser, there’s no guarantee that an in-process HTTP request will be successful (see more about the “terminated” and other states of a page’s lifecycle). The reliability of those requests may depend on several things — network connection, application performance, and even the configuration of the external service itself.

As a result, sending data at those moments can be anything but reliable, which presents a potentially significant problem if you’re relying on those logs to make data-sensitive business decisions.

To help illustrate this unreliability, I set up a small Express application with a page using the code included above. When the link is clicked, the browser navigates to /other, but before that happens, a POST request is fired off.

While everything happens, I have the browser’s Network tab open, and I’m using a “Slow 3G” connection speed. Once the page loads and I’ve cleared the log out, things look pretty quiet:

Viewing HTTP request in the network tab

But as soon as the link is clicked, things go awry. When navigation occurs, the request is cancelled.

Viewing HTTP request fail in the network tab

And that leaves us with little confidence that the external service was actually able to process the request. Just to verify this behavior, it also occurs when we navigate programmatically with window.location:

document.getElementById('link').addEventListener('click', (e) => {
+ e.preventDefault();

  // Request is queued, but cancelled as soon as navigation occurs.
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      some: 'data'
    }),
  });

+ window.location = e.target.href;
});

Regardless of how or when navigation occurs and the active page is terminated, those unfinished requests are at risk for being abandoned.

But why are they cancelled?

The root of the issue is that, by default, script-initiated HTTP requests (whether made via fetch() or XMLHttpRequest) are asynchronous and non-blocking. As soon as the request is queued, the actual work of the request is handed off to a browser-level API behind the scenes.

As it relates to performance, this is good — you don’t want requests hogging the main thread. But it also means there’s a risk of them being deserted when a page enters into that “terminated” state, leaving no guarantee that any of that behind-the-scenes work reaches completion. Here’s how Google summarizes that specific lifecycle state:

A page is in the terminated state once it has started being unloaded and cleared from memory by the browser. No new tasks can start in this state, and in-progress tasks may be killed if they run too long.

In short, the browser is designed with the assumption that when a page is dismissed, there’s no need to continue to process any background processes queued by it.

So, what are our options?

Perhaps the most obvious approach to avoid this problem is, as much as possible, to delay the user action until the request returns a response. In the past, this was done the wrong way, by using the synchronous flag supported within XMLHttpRequest. But using it completely blocks the main thread, causing a host of performance issues — I’ve written about some of this in the past — so the idea shouldn’t even be entertained. In fact, it’s on its way out of the platform (Chrome 80+ has already removed synchronous XHR during page dismissal).
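For context, the old synchronous pattern looked roughly like the snippet below. It's shown purely to illustrate what not to do; the endpoint and payload are made up.

// Don't do this: passing false as the third argument makes the request
// synchronous, blocking the main thread until the server responds.
const xhr = new XMLHttpRequest();
xhr.open("POST", "/log", false); // false = synchronous
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(JSON.stringify({ some: "data" }));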

Instead, if you’re going to take this type of approach, it’s better to wait for a Promise to resolve as a response is returned. After it’s back, you can safely perform the behavior. Using our snippet from earlier, that might look something like this:

document.getElementById('link').addEventListener('click', async (e) => {
  e.preventDefault();

  // Wait for response to come back...
  await fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      some: 'data'
    }),
  });

  // ...and THEN navigate away.
  window.location = e.target.href;
});

That gets the job done, but there are some non-trivial drawbacks.

First, it compromises the user’s experience by delaying the desired behavior from occurring. Collecting analytics data certainly benefits the business (and hopefully future users), but it’s less than ideal to make your present users pay the cost to realize those benefits. Not to mention, as an external dependency, any latency or other performance issues within the service itself will be surfaced to the user. If a timeout in your analytics service keeps a customer from completing a high-value action, everyone loses.

Second, this approach isn’t as reliable as it initially sounds, since some termination behaviors can’t be programmatically delayed. For example, e.preventDefault() is useless in delaying someone from closing a browser tab. So, at best, it’ll cover collecting data for some user actions, but not enough to be able to trust it comprehensively.

Instructing the browser to preserve outstanding requests

Thankfully, there are options to preserve outstanding HTTP requests that are built into the vast majority of browsers, and that don’t require user experience to be compromised.

Using Fetch’s keepalive flag

If the keepalive flag is set to true when using fetch(), the corresponding request will remain open, even if the page that initiated that request is terminated. Using our initial example, that’d make for an implementation that looks like this:

<a href="/some-other-page" id="link">Go to Page</a>  <script>   document.getElementById('link').addEventListener('click', (e) => {     fetch("/log", {       method: "POST",       headers: {         "Content-Type": "application/json"       },        body: JSON.stringify({         some: "data"       }),        keepalive: true     });   }); </script>

When that link is clicked and page navigation occurs, no request cancellation occurs:

Viewing HTTP request succeed in the network tab

Instead, we’re left with an (unknown) status, simply because the active page never waited around to receive any sort of response.

A one-liner like this is an easy fix, especially when it’s part of a commonly used browser API. But if you’re looking for a more focused option with a simpler interface, there’s another way with virtually the same browser support.

Using Navigator.sendBeacon()

The Navigator.sendBeacon() function is specifically intended for sending one-way requests (beacons). A basic implementation looks like this, sending a POST with stringified JSON and a “text/plain” Content-Type:

navigator.sendBeacon('/log', JSON.stringify({
  some: "data"
}));

But this API doesn’t permit you to send custom headers. So, in order for us to send our data as “application/json”, we’ll need to make a small tweak and use a Blob:

<a href="/some-other-page" id="link">Go to Page</a>  <script>   document.getElementById('link').addEventListener('click', (e) => {     const blob = new Blob([JSON.stringify({ some: "data" })], { type: 'application/json; charset=UTF-8' });     navigator.sendBeacon('/log', blob));   }); </script>

In the end, we get the same result — a request that’s allowed to complete even after page navigation. But there’s something more going on that may give it an edge over fetch(): beacons are sent with a low priority.

To demonstrate, here’s what’s shown in the Network tab when both fetch() with keepalive and sendBeacon() are used at the same time:

Viewing HTTP request in the network tab

By default, fetch() gets a “High” priority, while the beacon (noted as the “ping” type above) has the “Lowest” priority. For requests that aren’t critical to the functionality of the page, this is a good thing. Taken straight from the Beacon specification:

This specification defines an interface that […] minimizes resource contention with other time-critical operations, while ensuring that such requests are still processed and delivered to destination.

Put another way, sendBeacon() ensures its requests stay out of the way of those that really matter for your application and your user’s experience.

An honorable mention for the ping attribute

It’s worth mentioning that a growing number of browsers support the ping attribute. When attached to links, it’ll fire off a small POST request:

<a href="http://localhost:3000/other" ping="http://localhost:3000/log">   Go to Other Page </a>

And that request’s headers will contain the page on which the link was clicked (ping-from), as well as the href value of that link (ping-to):

headers: {
  'ping-from': 'http://localhost:3000/',
  'ping-to': 'http://localhost:3000/other',
  'content-type': 'text/ping',
  // ...other headers
},

It’s technically similar to sending a beacon, but has a few notable limitations:

  1. It’s strictly limited for use on links, which makes it a non-starter if you need to track data associated with other interactions, like button clicks or form submissions.
  2. Browser support is good, but not great. At the time of this writing, Firefox specifically doesn’t have it enabled by default.
  3. You’re unable to send any custom data along with the request. As mentioned, the most you’ll get is a couple of ping-* headers, along with whatever other headers are along for the ride.

All things considered, ping is a good tool if you’re fine with sending simple requests and don’t want to write any custom JavaScript. But if you need to send anything of more substance, it might not be the best thing to reach for.

So, which one should I reach for?

There are definitely tradeoffs to using either fetch with keepalive or sendBeacon() to send your last-second requests. To help discern which is the most appropriate for different circumstances, here are some things to consider:

You might go with fetch() + keepalive if:

  • You need to easily pass custom headers with the request.
  • You want to make a GET request to a service, rather than a POST.
  • You’re supporting older browsers (like IE) and already have a fetch polyfill being loaded.

But sendBeacon() might be a better choice if:

  • You’re making simple service requests that don’t need much customization.
  • You prefer the cleaner, more elegant API.
  • You want to guarantee that your requests don’t compete with other high-priority requests being sent in the application.
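To make the tradeoff concrete, here’s a rough sketch of the two approaches side by side. The endpoint, query string, and payload are made up for illustration:

// fetch() + keepalive: full control over method, headers, and body.
fetch("/log?event=page-exit", {
  method: "GET", // a GET works here; sendBeacon() always sends a POST
  headers: { "X-Custom-Header": "some-value" },
  keepalive: true
});

// sendBeacon(): a simpler, low-priority call, but it's always a POST
// and custom headers aren't an option.
navigator.sendBeacon("/log", JSON.stringify({ event: "page-exit" }));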

Avoid repeating my mistakes

There’s a reason I chose to do a deep dive into the nature of how browsers handle in-process requests as a page is terminated. A while back, my team saw a sudden change in the frequency of a particular type of analytics log after we began firing the request just as a form was being submitted. The change was abrupt and significant — a ~30% drop from what we had been seeing historically.

Digging into the reasons this problem arose, as well as the tools that are available to avoid it again, saved the day. So, if anything, I’m hoping that understanding the nuances of these challenges helps someone avoid some of the pain we ran into. Happy logging!


Reliably Send an HTTP Request as a User Leaves a Page originally published on CSS-Tricks. You should get the newsletter.


The Single Page App Morality Play

Baldur Bjarnason brings some baby bear porridge to the discussion of Single Page App (SPA) vs. Multi Page App (MPA).

Single-Page-Apps can be fantastic. Most teams will mess them up because most teams operate in dysfunctional organisations. Multi-Page-Apps can also be fantastic, both in highly functional organisations that can apply them when and where they are appropriate and in dysfunctional ones, as they enforce a limit to project scope.

Both approaches can be good and bad. Baldur makes the point that management plays the biggest role in ensuring either outcome.

Another truth: there are an awful lot of projects out there that aren’t all-in on either approach, but are a mixture. I feel that. I also feel like there is a strong desire to declare a winner or have a simple checklist to decide which approach to go for, and that just ain’t gonna happen because the landscape is too shifty and there are just too many factors. Technology: it’s complicated.

A recent episode of Web Rush went into all this as well:



The post The Single Page App Morality Play appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


What is Your Page Title on a Google Search Engine Results Page?

Whatever Google wants it to be. I always thought it was exactly what your <title> element was. Perhaps in lieu of that, what the first <h1> on the page is. But recently I noticed some pages on this site that were showing a title on SERPs that was a string that appeared nowhere at all in the source of the page.

When I first noticed it, I tweeted my basic findings…

The thread has more info than what unfurled here.

This is a known thing. Apparently, they’ve been doing this for a long time (~10 years), but it’s the first time I’ve noticed it. And it’s undergone a recent change:

The article is pretty clear about things:

[…] we think our new system is producing titles that work better for documents overall, to describe what they are about, regardless of the particular query.

Also, while we’ve gone beyond HTML text to create titles for over a decade, our new system is making even more use of such text. In particular, we are making use of text that humans can visually see when they arrive at a web page. We consider the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.

Other text contained in the page might be considered, as might be text within links that point at pages.

The change is in response to people having sucky <title> text. Like it’s too long, too jacked up with SEO garbage (irony!), or just plain non-descriptive.

I’m not entirely sure how much I care just yet.

Part of me thinks, well, google.com isn’t the web. As important as it is, it’s a proprietary product by a private company and they can do whatever they want within the bounds of the law. In this case, it’s clear the intention is to help: to provide titles that are more clear than what the original page has.

Part of me thinks, well, that sucks that, as site owners, we have no control. If Google wanted to change the SERP title for every result pointing to this website to “CSS-Tricks is a stupid website, never visit it,” they could and that’s that.

Part of me connects this kind of work to AMP. AMP was basically saying, “Y’all are absolutely horrible at building performant mobile websites, so we’re going to build a strict set of rules such that you can’t screw it up anymore, and dangling a carrot of better SERP placement if you buy into the rules.” This way of creating page titles is basically saying, “Y’all are absolutely horrible at providing good titles, so we’re going to title your pages for you so you can’t screw it up anymore and we can improve our SERPs.”

Except with AMP, you had to put in the development hours to make it happen. It was opt-in, even if the carrot was un-ignorable by content companies. This doesn’t carry the risk of burning development hours, but it’s also not something we get to opt into.


The post What is Your Page Title on a Google Search Engine Results Page? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


How do you make a layout with pictures down one side of a page matched up with paragraphs on the other side?

I got this exact question in an email the other day, and I thought it would make a nice blog post because of how wonderfully satisfying this is to do in CSS these days. Plus, we can sprinkle in some polish as we go.

HTML-wise, I’m thinking image, text, image, text, etc.

<img src="..." alt="..." height="" width="" /> <p>Text text text...</p>  <img src="..." alt="..." height="" width="" /> <p>Text text text...</p>  <img src="..." alt="..." height="" width="" /> <p>Text text text...</p>

If that was our entire body in an HTML document, the answer to the question in the blog post title is literally two lines of CSS:

body {
  display: grid;
  grid-template-columns: min-content 1fr;
}

It’s going to look something like this…

Not pretty but we got the job done very quickly.

So cool. Thanks, CSS. But let’s clean it up. Let’s make sure there is a gap, set the default type, and rein in the layout.

body {
  display: grid;
  padding: 2rem;
  grid-template-columns: 300px 1fr;
  gap: 1rem;
  align-items: center;
  max-width: 800px;
  margin: 0 auto;
  font: 500 100%/1.5 system-ui;
}

img {
  max-width: 100%;
  height: auto;
}

I mean… ship it, right? Close, but maybe we can just add a quick mobile style.

@media (max-width: 650px) {
  body {
    display: block;
    font-size: 80%;
  }
  p {
    position: relative;
    margin: -3rem 0 2rem 1rem;
    padding: 1rem;
    background: rgba(255, 255, 255, 0.8);
  }
}

OK, NOW ship it!


The post How do you make a layout with pictures down one side of a page matched up with paragraphs on the other side? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


What Google’s New Page Experience Update Means for Images on Your Website

It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. By factoring these into its search engine rankings, Google hopes to provide users not only with great, relevant content but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for Page Experience in May 2020 and announced in November 2020 that it would begin rolling out in May 2021. However, Google has since delayed this to a gradual rollout starting in mid-June 2021. This was done in order to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: core web vitals.

Core web vitals (CWV) were introduced to help web developers deal with the challenges of trying to optimize for search engine rankings using testable metrics – something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): The largest element above the fold when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time it takes between when a user first interacts with the page and when the main thread is free for the browser to process the event. This can be clicking/tapping a button or link, or interacting with any other dynamic element. Delays when interacting with a page can obviously be frustrating to users, which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): This calculates the visual stability of a page when it first loads. The algorithm takes the size of the elements and the distance they move relative to the viewport into account. Pages that load with high instability can cause miscues by users, also leading to frustrating situations.

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.

The reason this is important is that images can play a significant role in your website’s CWV scores. For example, the LCP element is more often than not an above-the-fold image or, at the very least, will have to compete with an image to be loaded first. Images that aren’t correctly used can also negatively impact CLS. Slow-loading images can also impact FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings. Which brings us to the latest update – Google now plans to incorporate page experience as a ranking factor, starting with an early rollout in mid-June 2021.

So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged, core web vitals. Google has established guidelines and recommended rankings that you can find here. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role for eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in consumer usage of its Lighthouse and PageSpeed Insights tools, showing that website owners are catching up on the importance of optimizing their pages. This means that standards will only become higher and higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (i.e. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) have different formats for optimally rendering images. However, most images are still used in .JPG/.PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images and hard-coding responsive syntax, have inherent issues with scalability and with the ability to easily update or adjust to changes, and they bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts
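For a sense of what tackling even a few of those items by hand involves, here’s a rough sketch of manually written responsive image markup. The file names, breakpoints, and sizes are hypothetical:

<picture>
  <!-- Serve a next-gen format where the browser supports it -->
  <source
    type="image/webp"
    srcset="hero-480.webp 480w, hero-960.webp 960w, hero-1920.webp 1920w"
    sizes="(max-width: 600px) 100vw, 50vw">
  <!-- JPG fallback, with explicit dimensions to help avoid layout shifts -->
  <img
    src="hero-960.jpg"
    srcset="hero-480.jpg 480w, hero-960.jpg 960w, hero-1920.jpg 1920w"
    sizes="(max-width: 600px) 100vw, 50vw"
    width="1920" height="1080"
    alt="Hero image"
    loading="lazy">
</picture>

Multiply that by every image on the site, and by every design or breakpoint change, and the appeal of automating it becomes obvious.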

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. Particularly, I want to focus on ImageEngine which has consistently outperformed other CDNs on page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). ImageEngine uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and optimize your image assets accordingly. Based on these criteria, it can optimize images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page, but consists of a large number of high-quality images with some interactive elements, such as buttons and links as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If you have a website with enough traffic, you can get the FID by generating a Page Experience Report in the Google Console.

However, for lab results, from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI and TBT.

Oh, yes, and I’ll also be focusing on the results of a mobile audit for a number of reasons:

  1. Google themselves are prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and image assets is often most challenging for mobile devices/general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. As you can see, because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It will take at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial. Just follow the simple three-step signup process where you will need to:

  1. provide the website you want to optimize,
  2. provide the web location where your images are stored, and then
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:

From:

<img src="mywebsite.com/images/header-image.jpg" />

To:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg" />

That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.

Here’s the results for my ImageEngine test page once it’s set up:

As you can see, the results are nearly flawless. All metrics improved, scoring in the green – even Speed Index and LCP, which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:

Conclusion

To some extent, it’s more of the same from Google, which has consistently been moving in a mobile direction and employing a growing list of metrics to hone in on the best possible “page experience” for its search engine users. Along with reaffirming their move in this direction, Google stated that they will continue to test and revise their list of signals.

Other metrics that can be surfaced in their tools, such as TTFB, TTI, FCP, and TBT, or possibly entirely new metrics, may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, as well as future-proof your website against future updates.


The post What Google’s New Page Experience Update Means for Images on Your Website appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


Still Hoping for Better Native Page Transitions

It sure would be nice to be able to animate the transition between pages if we want to on the web, at least without resorting to hacks or full-blown architecture choices just to achieve it. Some kind of API that would run stuff (it could integrate with WAAPI?) before the page is unloaded, and then some buddy API that would do the same on the way in.

We do have an onbeforeunload API, but I’m not sure what kind of baggage that has. That, or otherwise, is all possible now, but what I want are purpose-built APIs that help us do it cleanly (understandable functions) and with both performance (working as quickly as clicking links normally does) and accessibility (like focus handling) in mind.

If you’re building a single-page app anyway, you get the freedom to animate between views because the page never reloads. The danger here is that you might pick a single-page app just for this ability, which is what I mean by having to buy into a site architecture just to achieve this. That feels like an unfortunate trade-off, as single-page apps bring a ton of overhead, like tooling and accessibility concerns, that you wouldn’t have otherwise needed.

Without a single page app, you could use something like Turbo and animate.css like this. Or, Adam’s new transition.style, a clip-path() based homage to Daniel Edan’s masterpiece. Maybe if that approach was paired with instant.page it would be as fast as any other internal link click.

There are other players trying to figure this out, like smoothState.js and Swup. The trick being: intercept the action to move to the next page, run the animation first, then load the next page, and animate the new page in. Technically, it slows things down a bit, but you can do it pretty efficiently and the movement adds enough distraction that the perceived performance might even be better.
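That interception pattern boils down to something like the sketch below. It’s a simplified illustration, not how any of those libraries actually implement it, and the selector, class name, and timing are made up:

document.querySelectorAll('a[href^="/"]').forEach((link) => {
  link.addEventListener('click', (e) => {
    e.preventDefault();

    // Play an exit animation on the current page...
    document.body.classList.add('page-exit');

    // ...then navigate once the animation finishes.
    document.body.addEventListener('animationend', () => {
      window.location = link.href;
    }, { once: true });
  });
});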

Ideally, we wouldn’t have to animate the entire page but we could have total control to make more interesting transitions. Heck, I was doing that a decade ago with a page for a musician where clicking around the site just moved things around so that the audio would keep playing (and it was fun).

This would be a great place for the web platform to step in. I remember Jake pushed for this years ago, but I’m not sure if that went anywhere. Then we got portals which are… ok? Those are like if you load an iframe on the page and then animate it to take over the whole page (and update the URL). Not much animation nuance possible there, but you could certainly swipe some pages around or fade them in and out (hey here’s another one: Highway), like jQuery Mobile did back in ancient times.


The post Still Hoping for Better Native Page Transitions appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


How We Improved the Accessibility of Our Single Page App Menu

I recently started working on a Progressive Web App (PWA) for a client with my team. We’re using React with client-side routing via React Router, and one of the first elements that we made was the main menu. Menus are a key component of any site or app. That’s really how folks get around, so making it accessible was a super high priority for the team.

But in the process, we learned that making an accessible main menu in a PWA isn’t as obvious as it might sound. I thought I’d share some of those lessons with you and how we overcame them.

As far as requirements go, we wanted a menu that users could navigate not only with a mouse, but with a keyboard as well, the acceptance criteria being that a user should be able to tab through the top-level menu items, as well as the sub-menu items that would otherwise only be visible when a mouse user hovers over a top-level menu item. And, of course, we wanted a focus ring to follow the elements that have focus.

The first thing we had to do was update the existing CSS that was set up to reveal a sub-menu when a top-level menu item is hovered. We were previously using the visibility property, changing between visible and hidden on the parent container’s hovered state. This works fine for mouse users, but for keyboard users, focus doesn’t automatically move to an element that is set to visibility: hidden (the same applies for elements that are given display: none). So we removed the visibility property, and instead used a very large negative position value:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px; /* Kicked off the page instead of hidden with visibility */
}

.menu-item:hover .sub-menu {
  left: 0;
}

This works perfectly fine for mouse users. But for keyboard users, the sub menu still wasn’t visible even though focus was within that sub menu! In order to make the sub-menu visible when an element within it has focus, we needed to make use of :focus and :focus-within on the parent container:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px;
}

.menu-item:hover .sub-menu,
.menu-item:focus .sub-menu,
.menu-item:focus-within .sub-menu {
  left: 0;
}

This updated code allows the sub-menus to appear as each of the links within that menu gets focus. As soon as focus moves to the next sub-menu, the first one hides and the second becomes visible. Perfect! We considered this task complete, so a pull request was created and it was merged into the main branch.

But then we used the menu ourselves the next day in staging to create another page and ran into a problem. Upon selecting a menu item—regardless of whether it’s a click or a tab—the menu itself wouldn’t hide. Mouse users would have to click off to the side in some white space to clear the focus, and keyboard users were completely stuck! They couldn’t hit the esc key to clear focus, nor any other key combination. Instead, keyboard users would have to press the tab key enough times to move the focus through the menu and onto another element that didn’t cause a large drop down to obscure their view.

The reason the menu would stay visible is because the selected menu item retained focus. Client-side routing in a Single Page Application (SPA) means that only a part of the page will update; there isn’t a full page reload.

There was another issue we noticed: it was difficult for a keyboard user to use our “Jump to Content” link. Web users typically expect that pressing the tab key once will highlight a “Jump to Content” link, but our menu implementation broke that. We had to come up with a pattern to effectively replicate the “focus clearing” that browsers would otherwise give us for free on a full page reload.

The first option we tried was the easiest: Add an onClick prop to React Router’s Link component, calling document.activeElement.blur() when a link in the menu is selected:

const Menu = () => {
  const clearFocus = () => {
    document.activeElement.blur();
  }

  return (
    <ul className="menu">
      <li className="menu-item">
        <Link to="/" onClick={clearFocus}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={clearFocus}>Products</Link>
        <ul className="sub-menu">
          <li>
            <Link to="/products/tops" onClick={clearFocus}>Tops</Link>
          </li>
          <li>
            <Link to="/products/bottoms" onClick={clearFocus}>Bottoms</Link>
          </li>
          <li>
            <Link to="/products/accessories" onClick={clearFocus}>Accessories</Link>
          </li>
        </ul>
      </li>
    </ul>
  );
}

This approach worked well for “closing” the menu after an item is clicked. However, if a keyboard user pressed the tab key after selecting one of the menu links, then the next link would become focused. As mentioned earlier, pressing the tab key after a navigation event would ideally focus on the “Jump to Content” link first.

At this point, we knew we were going to have to programmatically force focus to another element, preferably one that’s high up in the DOM. That way, when a user starts tabbing after a navigation event, they’ll arrive at or near the top of the page, similar to a full page reload, making it much easier to access the jump link.

We initially tried to force focus on the <body> element itself, but this didn’t work as the body isn’t something the user can interact with. There wasn’t a way for it to receive focus.

The next idea was to force focus on the logo in the header, as this itself is just a link back to the home page and can receive focus. However, in this particular case, the logo was below the “Jump To Content” link in the DOM, which means that a user would have to shift + tab to get to it. No good.

We finally decided that we had to render an interact-able element, for example, an anchor element, in the DOM, at a point that’s above the “Jump to Content” link. This new anchor element would be styled so that it’s invisible and users are unable to focus on it using “normal” web interactions (i.e. it’s taken out of the normal tab flow). When a user selects a menu item, focus would be programmatically forced to this new anchor element, which means that pressing tab again would focus directly on the “Jump to Content” link. It also meant that the sub-menu would immediately hide itself once a menu item is selected.

const App = () => {
  const focusResetRef = React.useRef();

  const handleResetFocus = () => {
    focusResetRef.current.focus();
  };

  return (
    <Fragment>
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      <a href="#main" className="jump-to-content-a11y-styles">Jump To Content</a>
      <Menu onSelectMenuItem={handleResetFocus} />
      ...
    </Fragment>
  )
}

Some notes of this new “Focus Reset” anchor element:

  • href is set to javascript:void(0) so that if a user manages to interact with the element, nothing actually happens. For example, if a user presses the return key immediately after selecting a menu item, that will trigger the interaction. In that instance, we don’t want the page to do anything, or the URL to change.
  • tabIndex is set to -1 so that a user can’t “normally” move focus to this element. It also means that the first time a user presses the tab key upon loading a page, this element won’t be focused, but the “Jump To Content” link instead.
  • style simply moves the element out of the viewport. Setting position: fixed ensures it’s taken out of the document flow, so there isn’t any vertical space allocated to the element.
  • aria-hidden tells screen readers that this element isn’t important, so don’t announce it to users.

But we figured we could improve this even further! Let’s imagine we have a mega menu, and the menu doesn’t hide automatically when a mouse user clicks a link. That’s going to cause frustration. A user will have to precisely move their mouse to a section of the page that doesn’t contain the menu in order to clear the :hover state, and therefore allow the menu to close.

What we need is to “force clear” the hover state. We can do that with the help of React and a clearHover class:

// Menu.jsx
const Menu = (props) => {
  const { onSelectMenuItem } = props;
  const [clearHover, setClearHover] = React.useState(false);

  const closeMenu = () => {
    onSelectMenuItem();
    setClearHover(true);
  }

  React.useEffect(() => {
    let timeout;
    if (clearHover) {
      timeout = setTimeout(() => {
        setClearHover(false);
      }, 0); // Adjust this timeout to suit the application's needs
    }
    return () => clearTimeout(timeout);
  }, [clearHover]);

  return (
    <ul className={`menu ${clearHover ? "clearHover" : ""}`}>
      <li className="menu-item">
        <Link to="/" onClick={closeMenu}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={closeMenu}>Products</Link>
        <ul className="sub-menu">
          {/* Sub Menu Items */}
        </ul>
      </li>
    </ul>
  );
}

This updated code hides the menu immediately when a menu item is clicked. It also hides immediately when a keyboard user selects a menu item. Pressing the tab key after selecting a navigation link moves the focus to the “Jump to Content” link.

At this point, our team had updated the menu component to a point where we were super happy. Both keyboard and mouse users get a consistent experience, and that experience follows what a browser does by default for a full page reload.

Our actual implementation is slightly different than the example here so we could use the pattern on other projects. We put it into a React Context, with the Provider set to wrap the Header component, and the Focus Reset element being automatically added just before the Provider’s children. That way, the element is placed before the “Jump to Content” link in the DOM hierarchy. It also allows us to access the focus reset function with a simple hook, instead of having to prop drill it.
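That context setup isn’t shown above, but a stripped-down sketch of the idea might look something like this (the component and hook names here are hypothetical, not from any library):

const FocusResetContext = React.createContext(() => {});

const FocusResetProvider = ({ children }) => {
  const focusResetRef = React.useRef();

  const resetFocus = React.useCallback(() => {
    focusResetRef.current.focus();
  }, []);

  return (
    <FocusResetContext.Provider value={resetFocus}>
      {/* Rendered before the children, so it sits above the "Jump to Content" link */}
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      {children}
    </FocusResetContext.Provider>
  );
};

// Any component (like the menu) can grab the reset function without prop drilling.
const useFocusReset = () => React.useContext(FocusResetContext);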

We have created a Code Sandbox that allows you to play with the three different solutions we covered here. You’ll definitely see the pain points of the earlier implementation, and then see how much better the end result feels!

We would love to hear feedback on this implementation! We think it’s going to work well, but it hasn’t been released into the wild yet, so we don’t have definitive data or user feedback. We’re certainly not a11y experts, just doing our best with what we do know, and are very open and willing to learn more on the topic.


The post How We Improved the Accessibility of Our Single Page App Menu appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


Fading in a Page on Load with CSS & JavaScript

Louis Lazaris demonstrates a very simple way of doing this.

  1. Hide the body (with JavaScript) right away with a CSS class that declares opacity: 0
  2. Wait for all the JavaScript to execute
  3. Unhide the body by transitioning it back to opacity: 1

Like this:
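The embedded demo doesn’t carry over here, but the gist of the technique is something like this minimal sketch (the class names are made up, and it’s not Louis’s exact code):

<style>
  .fade-in-page { opacity: 0; }
  .fade-in-page.visible { opacity: 1; transition: opacity 0.5s ease-in; }
</style>

<script>
  // 1. Hide the body right away with a CSS class (no-JS visitors still see content).
  document.body.classList.add('fade-in-page');
</script>

<!-- ...the rest of the page and its scripts run (step 2)... -->

<script>
  // 3. Reveal the body as the very last line of script that runs.
  document.body.classList.add('visible');
</script>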

Louis demonstrates a callback method, as well as mentioning you could wait for window.load or a DOM Ready event. I suppose you could also just have the line that sets the className to visible as the very last line of script that runs like I did above.

Louis knows it’s not particularly en vogue:

I know nowadays we’re obsessed in this industry with gaining every millisecond in page performance. But in a couple of projects that I recently overhauled, I added a subtle and clean loading mechanism that I think makes the experience nicer, even if it does ultimately slightly delay the time that the user is able to start interacting with my page.

I think of stuff like font-display: swap; which is dedicated to rendering your text as absolutely fast as possible, FOUT be damned, rather than chiller options.



The post Fading in a Page on Load with CSS & JavaScript appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


This page is a truly naked, brutalist html quine.

Here’s a fun page coming from secretGeek.net. You don’t normally think “fun” with brutalist minimalism but the CSS trickery that makes it work on this page is certainly that.

The HTML is literally displayed on the page as tags. So, in a sense, the HTML is both the page markup and the content. The design is so minimal (or “naked”) that its code leaks through! Very cool.

The page explains the trick, but I’ll paraphrase it here:

  • Everything is a block-level element via { display:block; }
  • …except for anchors, code, emphasis and strong, which remain inline with a,code,em,strong {display:inline}
  • Use ::before and ::after to display the HTML tags as content (e.g. p::before { content: '<p>'})
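Pulled together, the core of the trick is only a handful of rules. Here’s a rough sketch rather than the page’s exact stylesheet:

/* Everything becomes a block-level element... */
* { display: block; }

/* ...except the elements that should stay inline. */
a, code, em, strong { display: inline; }

/* Print the tags themselves around each element's content. */
p::before  { content: '<p>'; }
p::after   { content: '</p>'; }
h1::before { content: '<h1>'; }
h1::after  { content: '</h1>'; }
/* ...and so on for each element used on the page. */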

The page ends with a nice snippet culled from Josh Li’s “58 bytes of css to look great nearly everywhere”:

html {
  max-width: 70ch;
  padding: 2ch;
  margin: auto;
  color: #333;
  font-size: 1.2em;
}



The post This page is a truly naked, brutalist html quine. appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


Optimizing CSS for faster page loads

A straightforward post with some perf data from Tomas Pustelnik. It’s a good reminder that CSS is a crucial part of thinking about web performance, and for a huge reason:

Any time [the browser] encounters any external resource (CSS, JS, images, etc.) it will assign it a download priority and initiate its download. Priorities are important because some resources are critical to render a page (eg. main stylesheet and JS files) while others may be less important (like images or stylesheets for other media types).

In the case of CSS, this priority is usually high because stylesheets are necessary to create CSSOM (CSS Object Model). To render a webpage browser has to construct both DOM and CSSOM.

That’s why CSS is often referred to as a “blocking” resource. That’s desirable to some degree: we wouldn’t want to see flash-of-unstyled-websites. But we get real perf gains when we make CSS smaller because it’s quicker to download, parse, and apply.
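One easy way to see that priority difference in action is the media attribute: a stylesheet that doesn’t match the current media type is still downloaded, but at a low priority and without blocking rendering. A quick sketch with made-up file names:

<!-- Critical: blocks rendering, fetched at high priority -->
<link rel="stylesheet" href="/css/main.css">

<!-- Not needed to render the screen view: low priority, non-blocking -->
<link rel="stylesheet" href="/css/print.css" media="print">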

Aside from the techniques in the post, I’m sure advocates of atomic/all-utility CSS would love it pointed out that the stylesheets from those approaches are generally way smaller, and thus more performant. CSS-in-JS approaches will sometimes bundle styles into scripts so, to be fair, you get a little perf gain at the top by not loading the CSS there, but a perf loss from increasing the JavaScript bundle size in the process. (I haven’t seen a study with a fair comparison though, so I don’t know if it’s a wash or what.)



The post Optimizing CSS for faster page loads appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
