What Google’s New Page Experience Update Means for Images on Your Website

It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. By factoring these into its search engine rankings, Google hopes to provide users not only with great, relevant content but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for Page Experience in May 2020 and announced in November 2020 that it would begin rolling out in May 2021. However, Google has since pushed this back to a gradual rollout starting in mid-June 2021. This was done in order to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: core web vitals.

Core web vitals (CWV) were introduced to help web developers deal with the challenges of trying to optimize for search engine rankings using testable metrics – something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): Measures how long it takes for the largest element above the fold to render when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time between when a user first interacts with the page and when the main thread is free for the browser to process the event. The interaction can be clicking/tapping a button or link, or using any other dynamic element. Delays when interacting with a page are obviously frustrating to users, which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): Measures the visual stability of a page as it loads. The algorithm takes the size of the elements and the distance they move relative to the viewport into account. Pages that load with high instability can cause users to misclick, also leading to frustrating situations.
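If you’re curious how these numbers come about, browsers expose the raw data behind LCP and CLS through the PerformanceObserver API. Here’s a minimal sketch (these entry types are Chromium-only at the time of writing, and a library like Google’s web-vitals wraps the edge cases for you):

// Log LCP candidates as the page loads (the final one is the LCP)
new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  const lastEntry = entries[entries.length - 1];
  console.log('LCP:', lastEntry.startTime, lastEntry.element);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// Accumulate CLS from individual layout-shift entries
let clsScore = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Shifts right after user input are expected and don't count
    if (!entry.hadRecentInput) clsScore += entry.value;
  }
  console.log('CLS so far:', clsScore);
}).observe({ type: 'layout-shift', buffered: true });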

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.

This is important because images can play a significant role in how your website scores on these CWVs. For example, the LCP element is more often than not an above-the-fold image or, at the very least, it will have to compete with an image to be loaded first. Images that aren’t correctly used can also negatively impact CLS. Slow-loading images can also impact FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings. Which brings us to the latest update – Google now plans to incorporate page experience as a ranking factor, starting with an early rollout in mid-June 2021.

So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged, core web vitals. Google has established guidelines and recommended rankings that you can find here. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and the Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role in eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in consumer usage of its Lighthouse and PageSpeed Insights tools, showing that website owners are catching on to the importance of optimizing their pages. This means that standards will only become higher and higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (i.e. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) have different formats for optimally rendering images. However, most images are still used in .JPG/.PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.
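To illustrate that last point, responsive image markup that plays nicely with these metrics tends to look something like this (file names and dimensions here are hypothetical):

<!-- Explicit width/height let the browser reserve space before the
     image loads (good for CLS); srcset/sizes let it pick a smaller
     file on smaller screens (good for LCP and payload size) -->
<img
  src="hero-800.jpg"
  srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  width="1600"
  height="900"
  alt="Featured product photo">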

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images and hard-coding responsive syntax, have inherent issues with scalability and maintainability, and they bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. In particular, I want to focus on ImageEngine, which has consistently outperformed other CDNs in the page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). ImageEngine uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and optimize your image assets accordingly. Based on these criteria, it can optimize images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page, but it consists of a large number of high-quality images with some interactive elements, such as buttons and links, as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If you have a website with enough traffic, you can get the FID by generating a Page Experience Report in Google Search Console.

However, for lab results from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI and TBT.
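For reference, here’s a sketch of how a real-user monitoring script collects FID in the field, using the same PerformanceObserver API as above:

// FID is the gap between the first interaction and the moment
// the browser's main thread could begin processing it
new PerformanceObserver((entryList) => {
  const [entry] = entryList.getEntries();
  if (entry) {
    console.log('FID:', entry.processingStart - entry.startTime, 'ms');
  }
}).observe({ type: 'first-input', buffered: true });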

Oh, yes, and I’ll also be focusing on the results of a mobile audit for a number of reasons:

  1. Google themselves are prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and image assets is often most challenging for mobile devices/general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. Because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It takes at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial. Just follow the simple 3-step signup process where you will need to:

  1. provide the website you want to optimize,
  2. provide the web location where your images are stored, and then
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:

From:

<img src="mywebsite.com/images/header-image.jpg"/>

To:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg"/>

That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.

Here are the results for my ImageEngine test page once it’s set up:

As you can see, the results are nearly flawless. All metrics improved, scoring in the green – even Speed Index and LCP, which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:

Conclusion

To some extent, it’s more of the same from Google, which has consistently been moving in a mobile direction and employing a growing list of metrics to hone in on the best possible “page experience” for its search engine users. Along with reaffirming its move in this direction, Google stated that it will continue to test and revise its list of signals.

Other metrics that can be surfaced in their tools, such as TTFB, TTI, FCP, TBT, or possibly entirely new metrics may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, as well as future-proof your website against upcoming updates.



Still Hoping for Better Native Page Transitions

It sure would be nice to be able to animate the transition between pages if we want to on the web, at least without resorting to hacks or full-blown architecture choices just to achieve it. Some kind of API that would run stuff (it could integrate with WAAPI?) before the page is unloaded, and then some buddy API that would do the same on the way in.

We do have an onbeforeunload API, but I’m not sure what kind of baggage that has. That, or otherwise, is all possible now, but what I want are purpose-built APIs that help us do it cleanly (understandable functions) and with both performance (working as quickly as clicking links normally does) and accessibility (like focus handling) in mind.

If you’re building a single-page app anyway, you get the freedom to animate between views because the page never reloads. The danger here is that you might pick a single-page app just for this ability, which is what I mean by having to buy into a site architecture just to achieve this. That feels like an unfortunate trade-off, as single-page apps bring a ton of overhead, like tooling and accessibility concerns, that you wouldn’t have otherwise needed.

Without a single-page app, you could use something like Turbo and animate.css like this. Or, Adam’s new transition.style, a clip-path() based homage to Daniel Eden’s masterpiece. Maybe if that approach was paired with instant.page it would be as fast as any other internal link click.

There are other players trying to figure this out, like smoothState.js and Swup. The trick being: intercept the action to move to the next page, run the animation first, then load the next page, and animate the new page in. Technically, it slows things down a bit, but you can do it pretty efficiently and the movement adds enough distraction that the perceived performance might even be better.
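The basic shape of that trick looks something like this (a minimal sketch; the page-exit class and its CSS animation are my own assumptions, not any particular library’s API):

document.addEventListener('click', (event) => {
  const link = event.target.closest('a');
  // Only intercept same-origin navigations
  if (!link || link.origin !== location.origin) return;
  event.preventDefault();
  document.body.classList.add('page-exit'); // kicks off the exit animation
  document.body.addEventListener('animationend', () => {
    location.href = link.href; // then actually load the next page
  }, { once: true });
});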

Ideally, we wouldn’t have to animate the entire page but we could have total control to make more interesting transitions. Heck, I was doing that a decade ago with a page for a musician where clicking around the site just moved things around so that the audio would keep playing (and it was fun).

This would be a great place for the web platform to step in. I remember Jake pushed for this years ago, but I’m not sure if that went anywhere. Then we got portals which are… ok? Those are like if you load an iframe on the page and then animate it to take over the whole page (and update the URL). Not much animation nuance possible there, but you could certainly swipe some pages around or fade them in and out (hey here’s another one: Highway), like jQuery Mobile did back in ancient times.



How We Improved the Accessibility of Our Single Page App Menu

I recently started working on a Progressive Web App (PWA) for a client with my team. We’re using React with client-side routing via React Router, and one of the first elements that we made was the main menu. Menus are a key component of any site or app. That’s really how folks get around, so making it accessible was a super high priority for the team.

But in the process, we learned that making an accessible main menu in a PWA isn’t as obvious as it might sound. I thought I’d share some of those lessons with you and how we overcame them.

As far as requirements go, we wanted a menu that users could not only navigate using a mouse, but using a keyboard as well, the acceptance criteria being that a user should be able to tab through the top-level menu items, and the sub-menu items that would otherwise only be visible if a user with a mouse hovered over a top-level menu item. And, of course, we wanted a focus ring to follow the elements that have focus.

The first thing we had to do was update the existing CSS that was set up to reveal a sub-menu when a top-level menu item is hovered. We were previously using the visibility property, changing between visible and hidden on the parent container’s hovered state. This works fine for mouse users, but for keyboard users, focus doesn’t automatically move to an element that is set to visibility: hidden (the same applies for elements that are given display: none). So we removed the visibility property, and instead used a very large negative position value:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px; /* Kicking it off the page instead of hiding visibility */
}

.menu-item:hover .sub-menu {
  left: 0;
}

This works perfectly fine for mouse users. But for keyboard users, the sub menu still wasn’t visible even though focus was within that sub menu! In order to make the sub-menu visible when an element within it has focus, we needed to make use of :focus and :focus-within on the parent container:

.menu-item {
  position: relative;
}

.sub-menu {
  position: absolute;
  left: -100000px;
}

.menu-item:hover .sub-menu,
.menu-item:focus .sub-menu,
.menu-item:focus-within .sub-menu {
  left: 0;
}

This updated code allows the sub-menus to appear as each of the links within that menu gets focus. As soon as focus moves to the next sub-menu, the first one hides, and the second becomes visible. Perfect! We considered this task complete, so a pull request was created and it was merged into the main branch.

But then we used the menu ourselves the next day in staging to create another page and ran into a problem. Upon selecting a menu item—regardless of whether it’s a click or a tab—the menu itself wouldn’t hide. Mouse users would have to click off to the side in some white space to clear the focus, and keyboard users were completely stuck! They couldn’t hit the esc key to clear focus, nor any other key combination. Instead, keyboard users would have to press the tab key enough times to move the focus through the menu and onto another element that didn’t cause a large drop down to obscure their view.

The reason the menu would stay visible is because the selected menu item retained focus. Client-side routing in a Single Page Application (SPA) means that only a part of the page will update; there isn’t a full page reload.

There was another issue we noticed: it was difficult for a keyboard user to use our “Jump to Content” link. Web users typically expect that pressing the tab key once will highlight a “Jump to Content” link, but our menu implementation broke that. We had to come up with a pattern to effectively replicate the “focus clearing” that browsers would otherwise give us for free on a full page reload.

The first option we tried was the easiest: Add an onClick prop to React Router’s Link component, calling document.activeElement.blur() when a link in the menu is selected:

const Menu = () => {
  const clearFocus = () => {
    document.activeElement.blur();
  }

  return (
    <ul className="menu">
      <li className="menu-item">
        <Link to="/" onClick={clearFocus}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={clearFocus}>Products</Link>
        <ul className="sub-menu">
          <li>
            <Link to="/products/tops" onClick={clearFocus}>Tops</Link>
          </li>
          <li>
            <Link to="/products/bottoms" onClick={clearFocus}>Bottoms</Link>
          </li>
          <li>
            <Link to="/products/accessories" onClick={clearFocus}>Accessories</Link>
          </li>
        </ul>
      </li>
    </ul>
  );
}

This approach worked well for “closing” the menu after an item is clicked. However, if a keyboard user pressed the tab key after selecting one of the menu links, then the next link would become focused. As mentioned earlier, pressing the tab key after a navigation event would ideally focus on the “Jump to Content” link first.

At this point, we knew we were going to have to programmatically force focus to another element, preferably one that’s high up in the DOM. That way, when a user starts tabbing after a navigation event, they’ll arrive at or near the top of the page, similar to a full page reload, making it much easier to access the jump link.

We initially tried to force focus on the <body> element itself, but this didn’t work as the body isn’t something the user can interact with. There wasn’t a way for it to receive focus.

The next idea was to force focus on the logo in the header, as this itself is just a link back to the home page and can receive focus. However, in this particular case, the logo was below the “Jump To Content” link in the DOM, which means that a user would have to shift + tab to get to it. No good.

We finally decided that we had to render an interact-able element, for example, an anchor element, in the DOM, at a point that’s above the “Jump to Content” link. This new anchor element would be styled so that it’s invisible and that users are unable to focus on it using “normal” web interactions (i.e. it’s taken out of the normal tab flow). When a user selects a menu item, focus would be programmatically forced to this new anchor element, which means that pressing tab again would focus directly on the “Jump to Content” link. It also meant that the sub-menu would immediately hide itself once a menu item is selected.

const App = () => {
  const focusResetRef = React.useRef();

  const handleResetFocus = () => {
    focusResetRef.current.focus();
  };

  return (
    <Fragment>
      <a
        ref={focusResetRef}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      <a href="#main" className="jump-to-content-a11y-styles">Jump To Content</a>
      <Menu onSelectMenuItem={handleResetFocus} />
      ...
    </Fragment>
  )
}

Some notes on this new “Focus Reset” anchor element:

  • href is set to javascript:void(0) so that if a user manages to interact with the element, nothing actually happens. For example, if a user presses the return key immediately after selecting a menu item, that will trigger the interaction. In that instance, we don’t want the page to do anything, or the URL to change.
  • tabIndex is set to -1 so that a user can’t “normally” move focus to this element. It also means that the first time a user presses the tab key upon loading a page, this element won’t be focused, but the “Jump To Content” link instead.
  • style simply moves the element out of the viewport. Setting position: fixed ensures it’s taken out of the document flow, so there isn’t any vertical space allocated to the element.
  • aria-hidden tells screen readers that this element isn’t important, so they don’t announce it to users.

But we figured we could improve this even further! Let’s imagine we have a mega menu, and the menu doesn’t hide automatically when a mouse user clicks a link. That’s going to cause frustration. A user will have to precisely move their mouse to a section of the page that doesn’t contain the menu in order to clear the :hover state, and therefore allow the menu to close.

What we need is to “force clear” the hover state. We can do that with the help of React and a clearHover class:

// Menu.jsx
const Menu = (props) => {
  const { onSelectMenuItem } = props;
  const [clearHover, setClearHover] = React.useState(false);

  const closeMenu = () => {
    onSelectMenuItem();
    setClearHover(true);
  }

  React.useEffect(() => {
    let timeout;
    if (clearHover) {
      timeout = setTimeout(() => {
        setClearHover(false);
      }, 0); // Adjust this timeout to suit the application's needs
    }
    return () => clearTimeout(timeout);
  }, [clearHover]);

  return (
    <ul className={`menu ${clearHover ? "clearHover" : ""}`}>
      <li className="menu-item">
        <Link to="/" onClick={closeMenu}>Home</Link>
      </li>
      <li className="menu-item">
        <Link to="/products" onClick={closeMenu}>Products</Link>
        <ul className="sub-menu">
          {/* Sub Menu Items */}
        </ul>
      </li>
    </ul>
  );
}

This updated code hides the menu immediately when a menu item is clicked. It also hides immediately when a keyboard user selects a menu item. Pressing the tab key after selecting a navigation link moves the focus to the “Jump to Content” link.

At this point, our team had updated the menu component to a point where we were super happy. Both keyboard and mouse users get a consistent experience, and that experience follows what a browser does by default for a full page reload.

Our actual implementation is slightly different than the example here so we could use the pattern on other projects. We put it into a React Context, with the Provider set to wrap the Header component, and the Focus Reset element being automatically added just before the Provider’s children. That way, the element is placed before the “Jump to Content” link in the DOM hierarchy. It also allows us to access the focus reset function with a simple hook, instead of having to prop drill it.
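A sketch of that shape, with hypothetical names rather than our exact code:

const FocusResetContext = React.createContext(() => {});

const FocusResetProvider = ({ children }) => {
  const ref = React.useRef(null);
  const resetFocus = React.useCallback(() => {
    if (ref.current) ref.current.focus();
  }, []);

  return (
    <FocusResetContext.Provider value={resetFocus}>
      {/* Rendered before children, so it lands above the jump link in the DOM */}
      <a
        ref={ref}
        href="javascript:void(0)"
        tabIndex="-1"
        style={{ position: "fixed", top: "-10000px" }}
        aria-hidden
      >Focus Reset</a>
      {children}
    </FocusResetContext.Provider>
  );
};

// Any component below the Provider can grab the reset function with a hook
const useFocusReset = () => React.useContext(FocusResetContext);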

We have created a Code Sandbox that allows you to play with the three different solutions we covered here. You’ll definitely see the pain points of the earlier implementation, and then see how much better the end result feels!

We would love to hear feedback on this implementation! We think it’s going to work well, but it hasn’t been released into the wild yet, so we don’t have definitive data or user feedback. We’re certainly not a11y experts, just doing our best with what we do know, and are very open and willing to learn more on the topic.



Fading in a Page on Load with CSS & JavaScript

Louis Lazaris demonstrates a very simple way of doing this.

  1. Hide the body (with JavaScript) right away with a CSS class that declares opacity: 0
  2. Wait for all the JavaScript to execute
  3. Unhide the body by transitioning it back to opacity: 1

Like this:
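Roughly (a sketch with my own class names; Louis’s exact code may differ):

<style>
  .fade-out { opacity: 0; }
  .visible { opacity: 1; transition: opacity 0.5s ease-in; }
</style>

<body>
  <script>
    // 1. Hide the body right away. Doing it from JavaScript means
    // visitors without JavaScript never get a hidden page.
    document.body.className = 'fade-out';
  </script>

  <!-- ... page content and all other scripts ... -->

  <script>
    // 2 & 3. Everything above has executed by now, so unhide the body
    // as the very last line of script that runs.
    document.body.className = 'visible';
  </script>
</body>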

Louis demonstrates a callback method, as well as mentioning you could wait for window.load or a DOM Ready event. I suppose you could also just have the line that sets the className to visible as the very last line of script that runs like I did above.

Louis knows it’s not particularly en vogue:

I know nowadays we’re obsessed in this industry with gaining every millisecond in page performance. But in a couple of projects that I recently overhauled, I added a subtle and clean loading mechanism that I think makes the experience nicer, even if it does ultimately slightly delay the time that the user is able to start interacting with my page.

I think of stuff like font-display: swap; which is dedicated to rendering your text as absolutely fast as possible, FOUT be damned, rather than chiller options.




This page is a truly naked, brutalist html quine.

Here’s a fun page coming from secretGeek.net. You don’t normally think “fun” with brutalist minimalism but the CSS trickery that makes it work on this page is certainly that.

The HTML is literally displayed on the page as tags. So, in a sense, the HTML is both the page markup and the content. The design is so minimal (or “naked”) that its code leaks through! Very cool.

The page explains the trick, but I’ll paraphrase it here:

  • Everything is a block-level element via { display:block; }
  • …except for anchors, code, emphasis and strong, which remain inline with a,code,em,strong {display:inline}
  • Use ::before and ::after to display the HTML tags as content (e.g. p::before { content: '<p>'})
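Pieced together, the heart of the stylesheet looks roughly like this (a sketch of the idea, not the page’s exact CSS):

* { display: block; }
a, code, em, strong { display: inline; }

p::before { content: '<p>'; }
p::after { content: '</p>'; }
h2::before { content: '<h2>'; }
h2::after { content: '</h2>'; }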

The page ends with a nice snippet culled from Josh Li’s “58 bytes of css to look great nearly everywhere”:

html {
  max-width: 70ch;
  padding: 2ch;
  margin: auto;
  color: #333;
  font-size: 1.2em;
}




Optimizing CSS for faster page loads

A straightforward post with some perf data from Tomas Pustelnik. It’s a good reminder that CSS is a crucial part of thinking web performance, and for a huge reason:

Any time [the browser] encounters any external resource (CSS, JS, images, etc.) it will assign it a download priority and initiate its download. Priorities are important because some resources are critical to render a page (eg. main stylesheet and JS files) while others may be less important (like images or stylesheets for other media types).

In the case of CSS, this priority is usually high because stylesheets are necessary to create CSSOM (CSS Object Model). To render a webpage browser has to construct both DOM and CSSOM.

That’s why CSS is often referred to as a “blocking” resource. That’s desirable to some degree: we wouldn’t want to see flash-of-unstyled-websites. But we get real perf gains when we make CSS smaller because it’s quicker to download, parse, and apply.
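To make that concrete, here’s one widely used pattern (my example, not one from Tomas’s post): inline the critical rules so first paint doesn’t wait on a request, and let the full stylesheet load without blocking:

<!-- Critical, above-the-fold rules inlined -->
<style>
  /* critical rules here */
</style>

<!-- media="print" downloads without blocking rendering;
     onload flips the stylesheet live once it arrives -->
<link rel="stylesheet" href="/styles.css" media="print" onload="this.media='all'">
<noscript><link rel="stylesheet" href="/styles.css"></noscript>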

Aside from the techniques in the post, I’m sure advocates of atomic/all-utility CSS would love it pointed out that the stylesheets from those approaches are generally way smaller, and thus more performant. CSS-in-JS approaches will sometimes bundle styles into scripts so, to be fair, you get a little perf gain at the top by not loading the CSS there, but a perf loss from increasing the JavaScript bundle size in the process. (I haven’t seen a study with a fair comparison though, so I don’t know if it’s a wash or what.)




Menu Reveal By Page Rotate Animation

There are many different approaches to menus on websites. Some menus are persistent, always in view and display all the options. Other menus are hidden by design and need to be opened to view the options. And there are even additional approaches on how hidden menus reveal their menu items. Some fly out and overlap the content, some push the content away, and others will do some sort of full-screen deal.

Whatever the approach, they all have their pros and cons and the right one depends on the situation where it’s being used. Frankly, I tend to like fly-out menus in general. Not for all cases, of course. But when I’m looking for a menu that is stingy on real estate and easy to access, they’re hard to beat.

What I don’t like about them is how often they conflict with the content of the page. A fly-out menu, at best, obscures the content and, at worst, removes it completely from the UI.

I tried taking another approach. It has the persistence and availability of a fixed position as well as the space-saving attributes of a hidden menu that flies out, only without removing the user from the current content of the page.

Here’s how I made it.

The toggle

We’re building a menu that has two states — open and closed — and it toggles between the two. This is where the Checkbox Hack comes into play. It’s perfect because a checkbox has two common interactive states — checked and unchecked (there’s also the indeterminate) — that can be used to trigger those states.

The checkbox is hidden and placed under the menu icon with CSS, so the user never sees it even though they interact with it. Checking the box (or, ahem, the menu icon) reveals the menu. Unchecking it hides it. Simple as that. We don’t even need JavaScript to do the lifting!

Of course, the Checkbox Hack isn’t the only way to do this, and if you want to toggle a class to open and close the menu with JavaScript, that’s absolutely fine.

It’s important the checkbox precedes the main content in the source code, because the :checked selector we’re going to ultimately write to make this work needs to use a sibling selector. If that’ll cause layout concerns for you, use Grid or Flexbox for your layouts as they are source order independent, like how I used its advantage for counting in CSS.

The checkbox’s default style (added by the browser) is stripped out using the appearance CSS property before adding its pseudo-element with the menu icon, so that the user doesn’t see the square of the checkbox.

First, the basic markup:

<input type="checkbox">

<div id="menu">
  <!-- menu options -->
</div>
<div id="page">
  <!-- main content -->
</div>

…and the baseline CSS for the Checkbox Hack and menu icon:

/* Hide checkbox and reset styles */
input[type="checkbox"] {
  appearance: initial; /* removes the square box */
  border: 0; margin: 0; outline: none; /* removes default margin, border and outline */
  width: 30px; height: 30px; /* sets the menu icon dimensions */
  z-index: 1; /* makes sure it stacks on top */
}

/* Menu icon */
input::after {
  content: "\55"; /* the menu icon glyph */
  display: block;
  font: 25pt/30px "georgia";
  text-indent: 10px;
  width: 100%; height: 100%;
}

/* Page content container */
#page {
  background: url("earbuds.jpg") #ebebeb center/cover;
  width: 100%; height: 100%;
}

I threw in the styles for the #page content as well, which is going to be a full size background image.

The transition

Two things happen when the menu control is clicked. First, the menu icon changes to an × mark, symbolizing that it can be clicked to close the menu. So, we select the ::after pseudo element of checkbox input when the input is in a :checked state:

input:checked::after {
  content: "\d7"; /* changes to × mark */
  color: #ebebeb;
}

Second, the main content (our “earbuds” image) transforms, revealing the menu underneath. It moves to the right, rotates and scales down, and its left side corners get angular. This is to give the appearance of the content getting pushed back, like a door that swings open. 

input:checked ~ #page {
  clip-path: polygon(0 8%, 100% 0, 100% 100%, 0 92%);
  transform: translateX(40%) rotateY(10deg) scale(0.8);
  transform-origin: right center;
  transition: all .3s linear;
}

I used clip-path to change the corners of the image.

Since we’re applying a transition on the transformations, we need an initial clip-path value on the #page so there’s something to transition from. We’ll also drop a transition on #page while we’re at it because that will allow it to close as smoothly as it opens.

#page {
  background: url("earbuds.jpg") #ebebeb center/cover;
  clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%);
  transition: all .3s linear;
  width: 100%; height: 100%;
}

We’re basically done with the core design and code. When the checkbox is unchecked (by clicking the × mark) the transformation on the earbud image will automatically be undone and it’ll be brought back to the front and centre. 

A sprinkle of JavaScript

Even though we have what we’re looking for, there’s still one more thing that would give this a nice boost in the UX department: close the menu when clicking (or tapping) the #page element. That way, the user doesn’t need to look for or even use the × mark to get back to the content.

Since this is merely an additional way to hide the menu, we can use JavaScript. And if JavaScript is disabled for some reason? No big deal. It’s just an enhancement that doesn’t prevent the menu from working without it.

document.querySelector("#page").addEventListener('click', (e, checkbox = document.querySelector('input')) => {
  if (checkbox.checked) {
    checkbox.checked = false;
    e.stopPropagation();
  }
});

What this three-liner does is add a click event handler over the #page element that un-checks the checkbox if the checkbox is in a :checked state, which closes the menu.

We’ve been looking at a demo made for a vertical/portrait design, but it works just as well at larger landscape screen sizes, depending on the content we’re working with.


This is just one approach or take on the typical fly-out menu. Animation opens up lots of possibilities and there are probably dozens of other ideas you might have in mind. In fact, I’d love to hear (or better yet, see) them, so please share!



Memorize Scroll Position Across Page Loads

Hakim El Hattab tweeted a really nice little UX enhancement for a static site that includes a scrollable sidebar of navigation.

The trick is to throw the scroll position into localStorage right before the page is exited, and when loaded, grab that value and scroll to it. I’ll retype it from the tweet…

let sidebar = document.querySelector(".sidebar");

let top = localStorage.getItem("sidebar-scroll");
if (top !== null) {
  sidebar.scrollTop = parseInt(top, 10);
}

window.addEventListener("beforeunload", () => {
  localStorage.setItem("sidebar-scroll", sidebar.scrollTop);
});

What is surprising is that you don’t get a flash-of-wrong-scroll-position. I wonder why? Maybe it has to do with fancy paint holding stuff browsers are doing now? Not sure.


My Flywheel Landing Page

Flywheel is my WordPress hosting partner here. I use Local every day for my WordPress local development environment and use their hosting for all my WordPress sites as part of my whole flow, so I’m glad they aren’t just a sponsor but a product I use and like.

Last November some of their crew came out and shot some photos and video with me, which was cool and fun and made me feel famous lolll.

They ultimately built a landing page from it, which is super well done!

Just look at me being all famous.

Check out some photos that Kimberly Bailey took while the gang was here.

And the video!

Anyway feel free to check out that landing page and if your company is headed to make a WordPress hosting decision soon, might as well use my referral link 😉


How to Get All Custom Properties on a Page in JavaScript

We can use JavaScript to get the value of a CSS custom property. Robin wrote up a detailed explanation about this in Get a CSS Custom Property Value with JavaScript. To review, let’s say we’ve declared a single custom property on the HTML element:

html {
  --color-accent: #00eb9b;
}

In JavaScript, we can access the value with getComputedStyle and getPropertyValue:

const colorAccent = getComputedStyle(document.documentElement)
  .getPropertyValue('--color-accent'); // #00eb9b

Perfect. Now we have access to our accent color in JavaScript. You know what’s cool? If we change that color in CSS, it updates in JavaScript as well! Handy.

What happens, though, when it’s not just one property we need access to in JavaScript, but a whole bunch of them?

html {
  --color-accent: #00eb9b;
  --color-accent-secondary: #9db4ff;
  --color-accent-tertiary: #f2c0ea;
  --color-text: #292929;
  --color-divider: #d7d7d7;
}

We end up with JavaScript that looks like this:

const colorAccent = getComputedStyle(document.documentElement).getPropertyValue('--color-accent'); // #00eb9b
const colorAccentSecondary = getComputedStyle(document.documentElement).getPropertyValue('--color-accent-secondary'); // #9db4ff
const colorAccentTertiary = getComputedStyle(document.documentElement).getPropertyValue('--color-accent-tertiary'); // #f2c0ea
const colorText = getComputedStyle(document.documentElement).getPropertyValue('--color-text'); // #292929
const colorDivider = getComputedStyle(document.documentElement).getPropertyValue('--color-divider'); // #d7d7d7

We’re repeating ourselves a lot. We could shorten each one of these lines by abstracting the common tasks to a function.

const getCSSProp = (element, propName) => getComputedStyle(element).getPropertyValue(propName);
const colorAccent = getCSSProp(document.documentElement, '--color-accent'); // #00eb9b
// repeat for each custom property...

That helps reduce code repetition, but we still have a less-than-ideal situation. Every time we add a custom property in CSS, we have to write another line of JavaScript to access it. This can and does work fine if we only have a few custom properties. I’ve used this setup on production projects before. But, it’s also possible to automate this.

Let’s walk through the process of automating it by making a working thing.

What are we making?

We’ll make a color palette, which is a common feature in pattern libraries. We’ll generate a grid of color swatches from our CSS custom properties. 

Here’s the complete demo that we’ll build step-by-step.

A preview of our CSS custom property-driven color palette. Showing six cards, one for each color, including the custom property name and hex value in each card.
Here’s what we’re aiming for.

Let’s set the stage. We’ll use an unordered list to display our palette. Each swatch is a <li> element that we’ll render with JavaScript. 

<ul class="colors"></ul>

The CSS for the grid layout isn’t pertinent to the technique in this post, so we won’t look at it in detail. It’s available in the CodePen demo.

Now that we have our HTML and CSS in place, we’ll focus on the JavaScript. Here’s an outline of what we’ll do with our code:

  1. Get all stylesheets on a page, both external and internal
  2. Discard any stylesheets hosted on third-party domains
  3. Get all rules for the remaining stylesheets
  4. Discard any rules that aren’t basic style rules
  5. Get the name and value of all CSS properties
  6. Discard non-custom CSS properties
  7. Build HTML to display the color swatches

Let’s get to it.

Step 1: Get all stylesheets on a page

The first thing we need to do is get all external and internal stylesheets on the current page. Stylesheets are available as members of the global document.

document.styleSheets

That returns an array-like object. We want to use array methods, so we’ll convert it to an array. Let’s also put this in a function that we’ll use throughout this post.

const getCSSCustomPropIndex = () => [...document.styleSheets];

When we invoke getCSSCustomPropIndex, we see an array of CSSStyleSheet objects, one for each external and internal stylesheet on the current page.

The output of getCSSCustomPropIndex, an array of CSSStyleSheet objects

Step 2: Discard third-party stylesheets

If our script is running on https://example.com any stylesheet we want to inspect must also be on https://example.com. This is a security feature. From the MDN docs for CSSStyleSheet:

In some browsers, if a stylesheet is loaded from a different domain, accessing cssRules results in SecurityError.

That means that if the current page links to a stylesheet hosted on https://some-cdn.com, we can’t get custom properties — or any styles — from it. The approach we’re taking here only works for stylesheets hosted on the current domain.

CSSStyleSheet objects have an href property. Its value is the full URL to the stylesheet, like https://example.com/styles.css. Internal stylesheets have an href property, but the value will be null.

Let’s write a function that discards third-party stylesheets. We’ll do that by comparing the stylesheet’s href value to the current location.origin.

const isSameDomain = (styleSheet) => {
  if (!styleSheet.href) {
    return true;
  }

  return styleSheet.href.indexOf(window.location.origin) === 0;
};

Now we use isSameDomain as a filter on document.styleSheets.

const getCSSCustomPropIndex = () => [...document.styleSheets]
  .filter(isSameDomain);

With the third-party stylesheets discarded, we can inspect the contents of those remaining.

Step 3: Get all rules for the remaining stylesheets

Our goal for getCSSCustomPropIndex is to produce an array of arrays. To get there, we’ll use a combination of array methods to loop through, find values we want, and combine them. Let’s take a first step in that direction by producing an array containing every style rule.

const getCSSCustomPropIndex = () => [...document.styleSheets]
  .filter(isSameDomain)
  .reduce((finalArr, sheet) => finalArr.concat(...sheet.cssRules), []);

We use reduce and concat because we want to produce a flat array where every first-level element is what we’re interested in. In this snippet, we iterate over individual CSSStyleSheet objects. For each one of them, we need its cssRules. From the MDN docs:

The read-only CSSStyleSheet property cssRules returns a live CSSRuleList which provides a real-time, up-to-date list of every CSS rule which comprises the stylesheet. Each item in the list is a CSSRule defining a single rule.

Each CSS rule is the selector, braces, and property declarations. We use the spread operator ...sheet.cssRules to take every rule out of the cssRules object and place it in finalArr. When we log the output of getCSSCustomPropIndex, we get a single-level array of CSSRule objects.

Example output of getCSSCustomPropIndex producing an array of CSSRule objects

This gives us all the CSS rules for all the stylesheets. We want to discard some of those, so let’s move on.

Step 4: Discard any rules that aren’t basic style rules

CSS rules come in different types. CSS specs define each of the types with a constant name and integer. The most common type of rule is the CSSStyleRule. Another type of rule is the CSSMediaRule. We use those to define media queries, like @media (min-width: 400px) {}. Other types include CSSSupportsRule, CSSFontFaceRule, and CSSKeyframesRule. See the Type constants section of the MDN docs for CSSRule for the full list.

We’re only interested in rules where we define custom properties and, for the purposes in this post, we’ll focus on CSSStyleRule. That does leave out the CSSMediaRule rule type where it’s valid to define custom properties. We could use an approach that’s similar to what we’re using to extract custom properties in this demo, but we’ll exclude this specific rule type to limit the scope of the demo.

To narrow our focus to style rules, we’ll write another array filter:

const isStyleRule = (rule) => rule.type === 1;

Every CSSRule has a type property that returns the integer for that type constant. We use isStyleRule to filter sheet.cssRules.

const getCSSCustomPropIndex = () => [...document.styleSheets]
  .filter(isSameDomain)
  .reduce((finalArr, sheet) => finalArr.concat(
    [...sheet.cssRules].filter(isStyleRule)
  ), []);

One thing to note is that we are wrapping ...sheet.cssRules in brackets so we can use the array method filter.

Our stylesheet only had CSSStyleRules so the demo results are the same as before. If our stylesheet had media queries or font-face declarations, isStyleRule would discard them.

Step 5: Get the name and value of all properties

Now that we have the rules we want, we can get the properties that make them up. CSSStyleRule objects have a style property that is a CSSStyleDeclaration object. It’s made up of standard CSS properties, like color, font-family, and border-radius, plus custom properties. Let’s add that to our getCSSCustomPropIndex function so that it looks at every rule, building an array of arrays along the way:

const getCSSCustomPropIndex = () => [...document.styleSheets]
  .filter(isSameDomain)
  .reduce((finalArr, sheet) => finalArr.concat(
    [...sheet.cssRules]
      .filter(isStyleRule)
      .reduce((propValArr, rule) => {
        const props = []; /* TODO: more work needed here */
        return [...propValArr, ...props];
      }, [])
  ), []);

If we invoke this now, we get an empty array. We have more work to do, but this lays the foundation. Because we want to end up with an array, we start with an empty array by using the accumulator, which is the second parameter of reduce. In the body of the reduce callback function, we have a placeholder variable, props, where we’ll gather the properties. The return statement combines the array from the previous iteration — the accumulator — with the current props array.

Right now, both are empty arrays. We need to use rule.style to populate props with an array for every property/value in the current rule:

const getCSSCustomPropIndex = () => [...document.styleSheets]
  .filter(isSameDomain)
  .reduce((finalArr, sheet) => finalArr.concat(
    [...sheet.cssRules]
      .filter(isStyleRule)
      .reduce((propValArr, rule) => {
        const props = [...rule.style].map((propName) => [
          propName.trim(),
          rule.style.getPropertyValue(propName).trim()
        ]);
        return [...propValArr, ...props];
      }, [])
  ), []);

rule.style is array-like, so we use the spread operator again to put each member of it into an array that we loop over with map. In the map callback, we return an array with two members. The first member is propName (which includes color, font-family, --color-accent, etc.). The second member is the value of each property. To get that, we use the getPropertyValue method of CSSStyleDeclaration. It takes a single parameter, the string name of the CSS property. 

We use trim on both the name and value to make sure we don’t include any leading or trailing whitespace that sometimes gets left behind.

Now when we invoke getCSSCustomPropIndex, we get an array of arrays. Every child array contains a CSS property name and a value.

Output of getCSSCustomPropIndex showing an array of arrays containing every property name and value

This is what we’re looking for! Well, almost. We’re getting every property in addition to custom properties. We need one more filter to remove those standard properties because all we want are the custom properties.

Step 6: Discard non-custom properties

To determine if a property is custom, we can look at the name. We know custom properties must start with two dashes (--). That’s unique in the CSS world, so we can use that to write a filter function:

([propName]) => propName.indexOf("--") === 0

Then we use it as a filter on the props array:

const getCSSCustomPropIndex = () =>
  [...document.styleSheets].filter(isSameDomain).reduce(
    (finalArr, sheet) =>
      finalArr.concat(
        [...sheet.cssRules].filter(isStyleRule).reduce((propValArr, rule) => {
          const props = [...rule.style]
            .map((propName) => [
              propName.trim(),
              rule.style.getPropertyValue(propName).trim()
            ])
            .filter(([propName]) => propName.indexOf("--") === 0);

          return [...propValArr, ...props];
        }, [])
      ),
    []
  );

In the function signature, we have ([propName]). There, we’re using array destructuring to access the first member of every child array in props. From there, we do an indexOf check on the name of the property. If -- is not at the beginning of the prop name, then we don’t include it in the props array.

When we log the result, we have the exact output we’re looking for: An array of arrays for every custom property and its value with no other properties.

The output of getCSSCustomPropIndex showing an array of arrays containing every custom property and its value

Looking more toward the future, creating the property/value map doesn’t have to require so much code. There’s an alternative in the CSS Typed Object Model Level 1 draft that uses CSSStyleRule.styleMap. The styleMap property is an array-like object of every property/value of a CSS rule. We don’t have it yet, but If we did, we could shorten our above code by removing the map:

// ...
const props = [...rule.styleMap.entries()].filter(/* same filter */);
// ...

At the time of this writing, Chrome and Edge have implementations of styleMap but no other major browsers do. Because styleMap is in a draft, there’s no guarantee that we’ll actually get it, and there’s no sense using it for this demo. Still, it’s fun to know it’s a future possibility!

We have the data structure we want. Now let’s use the data to display color swatches.

Step 7: Build HTML to display the color swatches

Getting the data into the exact shape we needed was the hard work. We need one more bit of JavaScript to render our beautiful color swatches. Instead of logging the output of getCSSCustomPropIndex, let’s store it in a variable.

const cssCustomPropIndex = getCSSCustomPropIndex();

Here’s the HTML we used to create our color swatch at the start of this post:

<ul class="colors"></ul>

We’ll use innerHTML to populate that list with a list item for each color:

document.querySelector(".colors").innerHTML = cssCustomPropIndex.reduce(   (str, [prop, val]) => `$ {str}<li class="color">     <b class="color__swatch" style="--color: $ {val}"></b>     <div class="color__details">       <input value="$ {prop}" readonly />       <input value="$ {val}" readonly />     </div>    </li>`,   "");

We use reduce to iterate over the custom prop index and build a single HTML-looking string for innerHTML. But reduce isn’t the only way to do this. We could use a map and join or forEach. Any method of building the string will work here. This is just my preferred way to do it.

I want to highlight a couple specific bits of code. In the reduce callback signature, we’re using array destructuring again with [prop, val], this time to access both members of each child array. We then use the prop and val variables in the body of the function.

To show the example of each color, we use a b element with an inline style:

<b class="color__swatch" style="--color: ${val}"></b>

That means we end up with HTML that looks like:

<b class="color__swatch" style="--color: #00eb9b"></b>

But how does that set a background color? In the full CSS, we use the custom property --color as the value of background-color for each .color__swatch. Because the inline style sets --color directly on the b element (and inline styles win the cascade), the var(--color) in our external stylesheet resolves to the value we set there.

.color__swatch {
  background-color: var(--color);
  /* other properties */
}

We now have an HTML display of color swatches representing our CSS custom properties!


This demo focuses on colors, but the technique isn’t limited to custom color props. There’s no reason we couldn’t expand this approach to generate other sections of a pattern library, like fonts, spacing, grid settings, etc. Anything that might be stored as a custom property can be displayed on a page automatically using this technique.
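For example, with a --font-* naming convention, the same index could feed a typography section. This sketch assumes that convention, plus a <ul class="fonts"> element to render into (both are hypothetical):

const fontProps = getCSSCustomPropIndex().filter(
  ([prop]) => prop.indexOf("--font-") === 0
);

document.querySelector(".fonts").innerHTML = fontProps.reduce(
  (str, [prop, val]) => `${str}<li style="font-family: ${val}">${prop}: ${val}</li>`,
  ""
);

Swap the prefix and the markup, and the same index can power any other section of a pattern library.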
