Nesting Components in Figma

For the past couple of weeks, I've been building our UI Kit at Gusto, where I work. It's a Figma document that contains all of our design patterns and components so that designers on our team can hop in, go shopping for a component they need, and then get back to the problem they're trying to solve.

There are a couple of things I've learned since I started. First, building a UI Kit is immensely delicate work and takes a really long time (although it happens to be very satisfying all the while). But, most importantly, embedding Figma components within other components is sort of magic.

Here’s why.

First, it’s important to note that I’ve tried to break down our components into the smallest, littlest chunks. So, for example, our Breadcrumbs, Tabs, and Progress Bar components are all separate from one another and I’ve dumped them all into a Symbols page.

Here’s an example of how I’ve started to build our form elements:

From what I can tell, this is how a lot of UI Kits are designed — there’s a welcome page that introduces what this document is and how to use it; there’s a symbols page that the design systems folks will maintain that has everything from buttons to forms inside it as symbols or components; and then there’s typically another page that has examples of these symbols that represent the final application.

Shopify's design system, Polaris, also does this with their Sketch file, and so do a lot of examples I've seen from other big design teams:

But anyway, going back to my design in Figma — notice below that a forward slash (/) is used in the names of the ProgressBar/Two and ProgressBar/Three components.

Well, that’s Figma’s naming convention for identifying Instances. What this means is that when a designer drags in the ProgressBar component from the UI Kit, they can switch between different options, like this:

That’s nifty! But once I broke up our UI into these tiny components, I started to wonder how I might combine these pieces together to make things even easier for our design team. I soon realized that in our app we have navigation items like breadcrumbs or progress bars but they always have a title associated with them. Once I figured that out, I started a series of new components called Header/Default, Header/Breadcrumbs, Header/ProgressBar, etc., which have all these components embedded within them.

So, now when a designer drags the Header component into their mockups, they can do the following:

We’re switching between the different Header instances there and that doesn’t look like much, yet. But! Since we’re nesting components within our Header component, designers can jump down into the subcomponents, like ProgressBar and update that, too:

How neat is that? And again, this doesn’t look particularly useful just yet but nesting components within larger components means that you can start to use them in clever ways.

Where this gets interesting is here: at Gusto, we have two different UIs for our two types of customers. We have admins that run payroll and employees that can access their accounts to see how much they've been paid. There are different navigation and options for both, so I created two components for them: Frame/Admin and Frame/Employee.

These two components have the sidebar and navigation items but are then placed into a separate component called Layout/Default where we’ve placed our Header component. But since these components are instances and nested together, we can begin to click-clack bits of the UI together to get the precise interface that we want.

Now, whenever designers need to switch between different UIs, they can use these nested components and instances to toggle between them super fast. I’ve only just started experimenting with this but the idea is that by using these nested components you give folks a way to toggle between the different variants inside them whilst also providing a nice API for larger layouts.

If you're using instances in Figma, Sketch, or another design tool — let me know! I'm constantly on the lookout for ways to improve things here, but I think this is certainly a good start.

Blue Beanie Day 2018

Another year!

I feel the same this year as I have in the past. Web standards, as an overall idea, have entirely taken hold and won the day. That's worth celebrating, as the web would be kind of a joke without them. So now, our job is to uphold them. We need to cry foul when we see a browser go rogue and ship an API outside the standards process. That version of competition is what could lead the web back to a dark place where we're creating browser-specific versions. That becomes painful, we stop doing it, and slowly, the web loses.

DevTools for Designers

This is such an interesting conversation thread that keeps popping up year after year. The idea is that there could (and perhaps should) be in-browser tooling that helps web designers do their job. This tooling already exists to some degree. Let’s check in on perspectives from a wide array of people and companies who have shared thoughts on this topic.

Ahmad Shadeed wrote for us last year about how DevTools can be useful to designers in a number of ways, like changing state, content, colors, variables, etc.:

Editing things visually like that will give [designers] more control over some design details, they can tweak things in the browser and show the result to the developer to be implemented.


In a post titled "A DevTools for Designers," A.J. Kandy wrote that just because you're a designer doesn't mean you don't know how to code — but you might not be production-level and might be faster elsewhere:

I can edit front-end markup; I’m just way faster at drawing rectangles and arranging them into user interfaces. I’m technical, but not a coder.

It sparked a lot of responses and thoughts back when we originally shared the post:

It’s one thing to augment the existing DevTools to be better for designers. Firefox has done great work in that area with stuff like their animation tooling, flexbox and grid inspectors. At the same time, it’s also nice to see entirely fresh takes on how we can approach it! For example, Google dropped VisBug, an extension with designers squarely in mind. The video is only 30 seconds:

There have been a lot of opinions about browser extensions that allow design editing over the years. Check out options like Stylebot (Chrome store link).


There is another visual design browser plugin called Visual Inspector:


Don’t forget this classic trick:


Oliver Williams wrote the following in “The ultimate web design tool: a browser”:

Browser dev tools were traditionally useful for debugging JavaScript and inspecting network requests. More recently, we’ve seen browsers add graphical interfaces for manipulating CSS. Most browsers offer a color picker and eyedropper tool for selecting colors. In Chrome, this tool will helpfully display a color-contrast ratio. Chrome also offers a GUI for adding or tweaking text-shadow and box-shadow.


Perhaps design tooling will lead us in this direction in a big way?

Vlad works with Webflow, so you can see where he’s coming from with that.


Jye SR chimed in with his post, “5 Ways DevTools Made My Life Easier”:

… it’s possible to use Chrome DevTools to investigate competitors, see what’s not working with add-ons, change your viewport, understand page load timings and edit the web; all of which can help digital marketers, product managers or anyone working with a website to do their job more efficiently. It’s a tool which I use every day and I hope that you will too!


Hard to look at all that and not see this is where tooling is headed.

Preventing Content Reflow From Lazy-Loaded Images

You know the concept of lazy loading images. It prevents the browser from loading images until those images are in (or nearly in) the browser’s viewport.

There are a plethora of JavaScript-based lazy loading solutions. GitHub has over 3,400 different lazy load repos, and those are just the ones with “lazy load” in a searchable string! Most of them rely on the same trick: Instead of putting an image’s URL in the src attribute, you put it in data-src — which is the same pattern for responsive images:

  • JavaScript watches the user scroll down the page
  • When the user encounters an image, JavaScript moves the data-src value into src where it belongs
  • The browser requests the image and it loads into view

The result is the browser loading fewer images up front so that the page loads faster. Additionally, if the user never scrolls far enough to see an image, that image is never loaded. That equals faster page loads and less data the user needs to spend.
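To make that swap concrete, here's a minimal sketch of the general pattern using IntersectionObserver rather than a scroll listener. This is just an illustration, not any particular library's implementation; the rootMargin value and the img[data-src] selector are assumptions.

// Watch every image that has a data-src but no real src yet.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;      // move data-src into src
    img.removeAttribute('data-src');
    obs.unobserve(img);             // each image only needs loading once
  });
}, { rootMargin: '200px' });        // start loading slightly before the viewport

document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));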

“This is amazing!” you may be thinking. And, you’re right… it is amazing!

That said, it does indeed introduce a noticeable problem: images without a src attribute (or with an empty or invalid one) have no height. This means that they're not the right size in the page layout until they're lazy-loaded.

When a user scrolls and images are lazy-loaded, those img elements go from a height of 0 pixels to whatever they need to be. This causes reflow, where the content below or around the image gets pushed to make room for the freshly loaded image. Reflow is a problem because it’s a user-blocking operation. It slows down the browser by forcing it to recalculate the layout of any elements that are affected by that image’s shape. The CSS scroll-behavior property may help here at some point, but its support needs to improve before it’s a viable option.

Lazy loading doesn’t guarantee that the image will fully load before it enters the viewport. The result is a perceived janky experience, even if it’s a big performance win.

There are other issues with lazy loading images that are worth mentioning but are outside the scope of this post. For example, if JavaScript fails to run at all, then no images will load on the page. That's a common concern for any JavaScript-based solution, but this article is only concerned with solving the problems introduced by reflow.

If we could force pre-loaded images to maintain their normal width and height (i.e. their aspect ratio), we could prevent reflow problems while still lazy loading them. This is something I recently had to solve building a progressive web app at DockYard where I work.

For future reference, there’s an HTML attribute called intrinsicsize that’s designed to preserve the aspect ratio, but right now, that’s just experimental in Chrome.

Here’s how we did it.

Maintaining aspect ratio

There are many ways to maintain aspect ratios. Chris once rounded up an exhaustive list of options, but here's what we're looking at for image-specific options.

The image itself

The image src provides a natural aspect ratio. Even when an image is resized responsively, its natural dimensions still apply. Here’s a pretty common bit of responsive image CSS:

img {
  max-width: 100%;
  height: auto;
}

That CSS is telling images not to exceed the width of the element that contains them, but to scale the height properly so that there’s no “stretching” or “squishing” as the image is resized. Even if the image has inline height and width attributes, this CSS will keep them behaving nicely on small viewports.

However, that “natural aspect ratio” behavior breaks down if there’s no src yet. Browsers don’t care about data-src and don’t do anything with it, so it’s not really a viable solution for lazy loading reflow; but it is important to help understand the “normal” way images are laid out once they’ve loaded.

A pseudo-element

Many developers — including myself — have been frustrated trying to use pseudo-elements (e.g. ::before and ::after) to add decorations to img elements. Browsers don’t render an image’s pseudo-elements because img is a replaced element, meaning its layout is controlled by an external resource.

However, there is an exception to that rule: If an image’s src attribute is invalid, browsers will render its pseudo-elements. So, if we store the src for an image in data-src and the src is empty, then we can use a pseudo-element to set an aspect ratio:

[data-src]::before {
  content: '';
  display: block;
  padding-top: 56.25%;
}

That'll set a 16:9 aspect ratio on ::before for any element with a data-src attribute (padding-top percentages are relative to the element's width, so 9 ÷ 16 = 56.25%). As soon as the data-src becomes the src, the browser stops rendering ::before and the image's natural aspect ratio takes over.

Here’s a demo:

See the Pen Image Aspect Ratio: ::before padding by James Steinbach (@jdsteinbach) on CodePen.

There are a couple drawbacks to this solution, however. First, it relies on CSS and HTML working together. Your stylesheet needs to have a declaration for each image aspect ratio you need to support. It would be much better if the template could insert an image without needing CSS edits.

Second, it doesn't work in Safari 12 and below, or Edge, at the time of writing. That's a pretty big swath of traffic to be sending poor layouts. To be fair, maintaining the aspect ratio is a bit of a progressive enhancement — there's nothing "broken" about the final rendered page. Still, it's much more ideal to solve the reflow problem and have images render as expected.

Data URI (Base64) PNGs

Another way we attempted to preserve the aspect ratio was to use an inline PNG data URI for the src. A tool like png-pixel.com helps with the lift of base64-encoding a PNG with any dimensions and color. This can go straight into the image's src attribute in the HTML:

<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAMAAAACCAQAAAA3fa6RAAAADklEQVR42mNkAANGCAUAACMAA2w/AMgAAAAASUVORK5CYII=" data-src="//picsum.photos/900/600" alt="Lazy loading test image" />

The inline PNG there has a 3:2 aspect ratio (the same aspect ratio as the final image). When src is replaced with the data-src value, the image will maintain its aspect ratio exactly like we want!

Here’s a demo:

See the Pen Image Aspect Ratio: inline base64 PNG by James Steinbach (@jdsteinbach) on CodePen.

And, yes, this approach also comes with some drawbacks. Although the browser support is much better, it's complicated to maintain. We need to generate a base64 string for each new image size, then make that object of strings available to whatever templating tool is being used. It's also not the most efficient way to represent this data.

I kept exploring and found a smaller way.

Combine SVG with base64

After exploring the inline PNG option, I wondered if SVG might be a smaller format for inline images and here’s what I found: An SVG with a viewBox declaration is a placeholder image with an easily editable native aspect ratio.

First, I tried base64-encoding an SVG. Here’s an example of what that looked like in my HTML:

<img src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0naHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmcnIHZpZXdCb3g9JzAgMCAzIDInPjwvc3ZnPg==" data-src="//picsum.photos/900/600" alt="Lazy loading test image">

On small, simple aspect ratios, this is roughly equivalent in size to the base64 PNGs. A 1:1 ratio would be 114 bytes with base64 PNG and 106 bytes with base64 SVG. A 2:3 ratio is 118 bytes with base64 PNG and 106 bytes with base64 SVG.

However, base64 SVG stays small even for larger, more complex ratios, which is a real winner in file size. A 16:9 ratio is 122 bytes in base64 PNG and 110 bytes in base64 SVG. A 923:742 ratio is 3,100 bytes in base64 PNG but only 114 bytes in base64 SVG! (That's not a common aspect ratio, but I needed to test with custom dimensions for my client's use case.)

Here’s a table to see those comparisons more clearly:

Aspect Ratio    base64 PNG     base64 SVG
1:1             114 bytes      106 bytes
2:3             118 bytes      106 bytes
16:9            122 bytes      110 bytes
923:742         3,100 bytes    114 bytes

The differences are negligible with simple ratios, but you can see how extremely well SVG scales as ratios become complex.

We’ve got much better browser support now. This technique is supported by all the big players, including Chrome, Firefox, Safari, Opera, IE11, and Edge, but also has great support in mobile browsers, including Safari iOS, Chrome for Android, and Samsung for Android (from 4.4 up).

Here’s a demo:

See the Pen Image Aspect Ratio: inline base64 SVG by James Steinbach (@jdsteinbach) on CodePen.

🏆 We have a winner!

Yes, we do, but stick with me as we improve this approach even more! I remembered Chris suggesting that we should not use base64 encoding with SVG inlined in CSS background-images and thought that advice might apply here, too.

In this case, instead of base64-encoding the SVGs, I used the “Optimized URL-encoded” technique from that post. Here’s the markup:

<img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 3 2'%3E%3C/svg%3E" data-src="//picsum.photos/900/600" alt="Lazy loading test image" />

This is just a tad smaller than base64 SVG. The 1:1 ratio is 106 bytes in base64 and 92 bytes URL-encoded. The 16:9 ratio is 110 bytes in base64 and 97 bytes URL-encoded.

If you're interested in more detail on data size by format and encoding, this demo compares the byte sizes of all of these techniques.

However, the real benefits that make the URL-encoded SVG a clear winner are that its format is human-readable, easily template-able, and infinitely customizable!

You don't need to create a CSS block or generate a base64 string to get a perfect placeholder for images whose dimensions aren't known ahead of time! For example, here's a little React component that uses this technique:

const placeholderSrc = (width, height) =>
  `data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}"%3E%3C/svg%3E`

const LazyImage = ({url, width, height, alt}) => {
  return (
    <img
      src={placeholderSrc(width, height)}
      data-src={url}
      alt={alt} />
  )
}

See the Pen React LazyLoad Image with Stable Aspect Ratio by James Steinbach (@jdsteinbach) on CodePen.

Or, if you prefer Vue:

See the Pen Vue LazyLoad Image with Stable Aspect Ratio by James Steinbach (@jdsteinbach) on CodePen.
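For reference, here's a rough sketch of how a Vue 2 version might look; the component and prop names are assumptions for illustration, not necessarily what the demo uses:

// Register a global component that renders the URL-encoded SVG placeholder
// until a lazy loader swaps data-src into src.
Vue.component('lazy-image', {
  props: ['url', 'width', 'height', 'alt'],
  computed: {
    placeholder() {
      return `data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${this.width} ${this.height}"%3E%3C/svg%3E`;
    }
  },
  template: `<img :src="placeholder" :data-src="url" :alt="alt">`
});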

I'm happy to report that browser support hasn't changed with this improvement — we've still got the same full support as base64 SVG!

Conclusion

We’ve explored several techniques to prevent content reflow by preserving the aspect ratio of a lazy-loaded image before the swap happens. The best technique I was able to find is inlined and optimized URL-encoded SVG with image dimensions defined in the viewBox attribute. That can be scripted with a function like this:

const placeholderSrc = (width, height) =>
  `data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}"%3E%3C/svg%3E`

There are several benefits to this technique:

  • Solid browser support across desktop and mobile
  • Smallest byte size
  • Human-readable format
  • Easily templated without run-time encoding calls
  • Infinitely extensible

What do you think of this approach? Have you used something similar or have a completely different way of handling reflow? Let me know!

What If?

Harry Roberts writes about working on a project with a rather nasty design flaw. The website was entirely dependent on images loading before rendering any of the content. He digs into why this is bad for accessibility and performance but goes further to describe how this can ripple into other problems:

While ever you build under the assumption that things will always work smoothly, you’re leaving yourself completely ill-equipped to handle the scenario that they don’t. Remember the fallacies; think about resilience.

Harry then suggests that we should always ask ourselves a key question when developing a website: what if this image doesn’t load? For example, if the user is on a low-end device, using a flakey network, using an obscure browser, looking at the site without a crucial API or feature available… you get the idea.

While we’re on this note, we asked what makes a good front-end developer a little while back and I think this is the best answer to that question after reading Harry’s post: a good front-end developer is constantly asking themselves, “What if?”

Front-End Developers Have to Manage the Loading Experience

Web performance is a huge complicated topic. There are metrics like total requests, page weight, time to glass, time to interactive, first input delay, etc. There are things to think about like asynchronous requests, render blocking, and priority downloading. We often talk about performance budgets and performance culture.

How that first document comes down from the server is a hot topic. That is where most back-end related performance talk enters the picture. It gives rise to architectures like the JAMstack, where gosh, at least we don’t have to worry about index.html being slow.

Images have a performance story all to themselves (formats! responsive images!). Fonts also (FOUT’n’friends!). CSS also (talk about render blocking!). Service workers can be involved at every level. And, of course, JavaScript is perhaps the most talked about villain of performance. All of this is balanced with perhaps the most important general performance concept: perceived performance. Front-end developers already have a ton of stuff we’re responsible for regarding performance. 80% is the generally quoted number and that sounds about right to me.

For a moment, let's assume we're going to build a site and we're not going to server-side render it. Instead, we're going to load an empty document and kick off data API calls as quickly as we can, then render the site with that data. Not a terribly rare scenario these days. As you might imagine, we now have another major concern: handling the loading experience.

I mused about this the other day. Here’s an example:

I’d say that loading experience is pretty janky, and I’m on about the best hardware and internet connection money can buy. It’s not a disaster and surely many, many thousands of people use this particular site successfully every day. That said, it doesn’t feel fast, smooth, or particularly nice like you’d think a high-budget website would in these Future Times.

Part of the reason is probably because that page isn’t server-side rendered. For whatever reason (we can’t really know from the outside), that’s not the way they went. Could be developer efficiency, security, a temporary state during a re-write… who knows! (It probably isn’t ignorance.)

What are we to do? Well, I think this is a somewhat new problem in front-end development. We’ve told the browser: “Hey, we got this. We’re gonna load things all out of order depending on when our APIs cough stuff up to us and our front-end framework decides it’s time to do so.” I can see the perspective here where this isn’t ideal and we’ve given up something that browsers are incredibly good at only to do it less well ourselves. But hey, like I’ve laid out a bit here, the world is complicated.

What is actually happening is that these front-end frameworks are aware of this issue and are doing things to help manage it. Back in April of this year, Dan Abramov introduced React Suspense. It seems like a tool for helping front-end devs like us manage the idea that we now need to deal with more loading state stuff than we ever have before:

At about 14 minutes, he gets into fetching data with placeholder components, caching and such. This issue isn’t isolated to React, of course, but keeping in that theme, here’s a conference talk by Andrew Clark that hit home with me even more quickly (but ultimately uses the same demo and such):

Just the idea of waiting to show spinners for a little bit can go a long way in de-jankifying loading.
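As a rough sketch of that idea, here's one way to delay a spinner so it never flashes for fast responses. The 200ms threshold, the spinner element, and the API URL are assumptions for illustration:

// Only reveal the spinner if loading takes longer than `delay`.
function loadWithQuietSpinner(promise, spinnerEl, delay = 200) {
  const timer = setTimeout(() => { spinnerEl.hidden = false; }, delay);
  return promise.finally(() => {
    clearTimeout(timer);   // a fast response never shows the spinner at all
    spinnerEl.hidden = true;
  });
}

// Usage: wrap a data fetch, then render when it resolves.
loadWithQuietSpinner(fetch('/api/data').then(r => r.json()), document.querySelector('.spinner'))
  .then(data => { /* render with data */ });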

Mikael Ainalem puts a point on this in a recent article, A Brief History of Flickering Spinners. He explains more clearly what I was trying to say:

One reason behind this development is the change we’ve seen in asynchronous programming. Asynchronous programming is a lot easier than it used to be. Most modern languages have good support for loading data on the fly. Modern JavaScript has incorporated Promises and with ES7 comes the async and await keywords. With the async/await keywords one can easily fetch data and process it when needed. This means that we need to think a step further about how we show users that data is loading.

Plus, he offers some solutions!

See the Pen Flickering spinners by Mikael Ainalem (@ainalem) on CodePen.

We’ve got to get better at this.

FUIF: Responsive Images by Design

Jon Sneyers:

One of the main motivations for FUIF is to have an image format that is responsive by design, which means it’s no longer necessary to produce many variants of the same image: low-quality placeholders, thumbnails, many downscaled versions for many display resolutions. A single file, truncated at different offsets, can do the same thing.

FUIF isn't anywhere near ready to use, but it's a fascinating idea. I love that the format stores the image data in such a way that you can request just the first few kilobytes of the file to essentially get a low-quality version, then request more as needed. See this little demo from Eric Portis that shows it off somewhat via a Service Worker and a progressive JPG.
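To make that truncation idea concrete, here's a rough sketch of fetching just the start of a file with an HTTP Range request; it assumes the server honors Range headers, and the 16 KB cutoff is arbitrary:

// Grab only the first chunk of a progressively-encoded image as a preview.
async function fetchPreview(url, bytes = 16 * 1024) {
  const response = await fetch(url, {
    headers: { Range: `bytes=0-${bytes - 1}` }
  });
  const blob = await response.blob();
  return URL.createObjectURL(blob); // usable as a low-quality <img> src
}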

If this idea ever does get legs and support in browsers, Cloudinary is super well suited to take advantage of that, as they serve the best image format for the current browser — and that is massive for image performance.
