You don’t need external assets in an HTML file

A fun exercise from Terence Eden. You can send an HTML file over the wire including anything a website might need without requesting any other files. CSS and JavaScript are easy, because there are <script> and <style> tags. Images and fonts (and pretty much whatever other kind of asset) aren’t too hard because Data URLs exist. See Terence’s post for an extra-tricky final version including .zip files.
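Here's a minimal sketch of the idea, with the CSS and JavaScript inlined and an image embedded as a Data URL. (The Data URL here is just a well-known 1×1 placeholder GIF; any image works the same way, and fonts can be inlined the same way inside an @font-face rule.)

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Self-contained</title>
  <style>
    /* CSS inlined via <style>, so no request for a stylesheet */
    body { font-family: sans-serif; }
  </style>
</head>
<body>
  <!-- Image inlined as a Data URL, so no request for an image file -->
  <img alt="placeholder" src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7">
  <script>
    // JavaScript inlined via <script>, so no request for a script file
    document.body.append(" One file, zero extra requests.");
  </script>
</body>
</html>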

Reminds me of a couple of other tricks…


WordPress Caching: All You Need To Know

Here’s Ashley Rich at Delicious Brains writing about all the layers of caching that are relevant to a WordPress site. I think we all know that caching is complicated, but jeez, it’s a journey to understand all the caches at work here. The point of caching is speed and reducing the burden on the worst bottlenecks and the slowest/busiest parts of a web stack.

Here’s my own understanding:

  • Files can be cached by the browser. This is the fastest possible cache as no network request happens at all. Assets like images, CSS, and JavaScript are often cached this way because they don’t change terribly frequently, but you have to make sure you’re telling browsers that it’s OK to do this and have a mechanism in place to break that cache if you need to (e.g. by changing file names; see the sketch after this list). You very rarely cache the HTML this way, as it changes the most, and file-name cache busting of HTML seems trickier than it’s worth.
  • Files can be cached at the CDN level. This is great because even though network traffic is happening, CDN servers are very fast and likely geographically closer to users than your origin server. If users get files from here, they never even trouble your origin server. You’ll need a way to break this cache as well, which again is probably through changing file names. You might cache HTML at this level even without changing file names if you have a mechanism to clear that cache globally when content changes.
  • The origin server might cache built HTML pages. On a WordPress site, the pages are built with PHP which probably triggers MySQL queries. If the server can save the result of the things that have already executed, that means it can serve a “static” file as a response, which it can do much faster than having to run the PHP and MySQL. That’ll work for logged out users, who all get the same response, but not for logged in users who have dynamic content on the page (like the WordPress admin bar).
  • The database has its own special caching. After a MySQL query is executed, the results can be saved in an Object Cache, meaning the same request can come from that cache instead of having to run the query again. You get that automatically to some degree, but ideally it gets wired up to a more persistent store, which you do not get automatically.
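To make that first bullet concrete, here’s a rough sketch of how those cache headers might be set. This is an Express-style example of my own, not from Ashley’s post:

// Sketch: long-lived caching for hashed asset file names, short-lived for HTML.
const express = require("express");
const app = express();

// Assets with hashed names (e.g. app.3f2a1c.js) can be cached for a year and
// marked immutable; changing the file name is the cache-busting mechanism.
app.use("/assets", express.static("assets", {
  maxAge: "1y",
  immutable: true,
}));

// HTML changes the most, so tell browsers to revalidate it on every visit.
app.get("/", (req, res) => {
  res.set("Cache-Control", "no-cache");
  res.send("<!DOCTYPE html><html><body>Hello</body></html>");
});

app.listen(3000);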

Phew. It gets a little easier with Jamstack since your pages are prebuilt and CDN-hosted already, and in the case of Netlify, you don’t even have to worry about cache busting.

But even as complex as this is, I don’t worry about it all that much. This WordPress site uses Flywheel for hosting, which deals with the database and server-level caching. I have Cloudflare in front of it, with special WordPress optimization, for the CDN caching, and I roll my own file-name cache busting (I wish this part was easier). I’d certainly trust SpinupWP to get it right too, given Ashley’s great writeup I’m linking to here.


Web Frameworks: Why You Don’t Always Need Them

Richard MacManus explaining Daniel Kehoe’s approach to building websites:

There are three key web technologies underpinning Kehoe’s approach:

  • ES6 Modules: JavaScript ES6 can support import modules, which are also supported by browsers.
  • Module CDNs: JavaScript modules can now be downloaded from third-party content delivery networks (CDNs).
  • Custom HTML elements: Developers can now create custom HTML tags, via Web Components.

No build process, and only features that are built into browsers, and yet that still buys you a pretty powerful setup. You can still use stuff off npm. You can still get templating. You can still build with components. You still get isolation where needed.
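Here’s a minimal sketch of what that can look like: an ES module imported straight from a CDN plus a custom element, no build step anywhere. The CDN URL and element name are my own illustration, not from Kehoe’s setup:

// app.js, loaded with <script type="module" src="app.js"></script>
// An npm package served as an ES module by a third-party CDN:
import confetti from "https://cdn.skypack.dev/canvas-confetti";

// A custom HTML element, usable in markup as <party-button></party-button>:
class PartyButton extends HTMLElement {
  connectedCallback() {
    this.innerHTML = "<button>Celebrate</button>";
    this.querySelector("button").addEventListener("click", () => confetti());
  }
}
customElements.define("party-button", PartyButton);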

I’d say today you’re:

  • Giving up some DX (hot module reloading, JSX, framework doodads)
  • Gaining some DX (can jump into project and just start working)
  • Giving up some performance (no tree shaking, loads of network requests)
  • Widening your hiring pool (more people know core technologies than specific tools)

But it’s not hard to imagine a tomorrow where we give up less and gain more, making the tools we use today less necessary. I’m quite sure we’ll always still find a way to jam more tools into what we’re doing. Hammer something something nail.


Building Custom Data Importers: What Engineers Need to Know

Importing data is a common pain point for engineering teams. Whether it’s importing CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for it is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It increases the time to value for customers, strains internal resources, and takes valuable development cycles away from developing key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within software and the customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort to creating less-than-ideal solutions for customers to successfully import their data.

Engineers typically create lengthy, technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue; it just shifts the burden of a great experience from the product onto the end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to allow users the ability to import spreadsheets. Companies trying to import CSV data can run into a variety of issues such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported because it’s improperly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order for it to be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.

Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: building a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping or column-matching (they are often used interchangeably) is an essential requirement for a data importer as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in a back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing time-to-value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.
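As a rough illustration of the idea (the field names and synonyms here are hypothetical, not Portal’s actual logic), column matching can be as simple as normalizing incoming headers against known synonyms:

// Map incoming spreadsheet headers onto the fields the back end expects.
const fieldSynonyms = {
  address: ["address", "mailing address", "street address"],
  email: ["email", "e-mail", "email address"],
};

function matchColumn(header) {
  const normalized = header.trim().toLowerCase();
  for (const [field, synonyms] of Object.entries(fieldSynonyms)) {
    if (synonyms.includes(normalized)) return field;
  }
  return null; // unmatched: ask the user to map this column manually
}

matchColumn("Mailing Address"); // "address"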

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.

Data validation

Data validation, which checks if the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and is specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. Spreadsheets with hundreds of rows containing validation errors are a nightmare for customers, who have to fix these issues themselves, or for your team, which will spend hours on end cleaning data. Automatic data validators streamline the healing of incoming data without the need for manual review.
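A bare-bones sketch of per-field validation (the rules are illustrative, not Portal’s):

// Check each row against expected formats, collecting errors per field
// instead of failing the whole import on the first problem.
const rules = {
  email: (v) => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v) || "Invalid email",
  name: (v) => v.trim().length > 0 || "Name is required",
};

function validateRow(row) {
  const errors = {};
  for (const [field, check] of Object.entries(rules)) {
    const result = check(row[field] ?? "");
    if (result !== true) errors[field] = result;
  }
  return errors; // e.g. { email: "Invalid email" }
}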

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.
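Here’s a deliberately naive sketch of that file-to-array step. A real importer needs a proper CSV parser (quoted fields, embedded commas, and so on), so treat this as shape only:

// Turn raw CSV text into an array of row objects keyed by header.
function parseCsv(text) {
  const [headerLine, ...lines] = text.trim().split(/\r?\n/);
  const headers = headerLine.split(",").map((h) => h.trim());
  return lines.map((line) => {
    const cells = line.split(",");
    return Object.fromEntries(headers.map((h, i) => [h, cells[i]?.trim()]));
  });
}

parseCsv("name,email\nAda,ada@example.com");
// [{ name: "Ada", email: "ada@example.com" }]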

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.
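Using the task-list example, a transformation step might look something like this sketch (the label-to-number mapping is my own illustration):

// Nudge values into the expected shape instead of rejecting the row.
const priorityLabels = { high: 1, medium: 2, low: 3 };

function transformRow(row) {
  return {
    ...row,
    // "High" becomes 1, "medium" becomes 2, etc.; other values pass through.
    priority: priorityLabels[String(row.priority).toLowerCase()] ?? row.priority,
  };
}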

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can be worked to auto-validate nearly any incoming customer data.

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that take on this task typically use custom or open source solutions, which may not adhere to their specific use cases. Building a comprehensive data importer also brings UX challenges: building a UI and maintaining application code to handle data parsing, normalization, and mapping. And that’s before considering how customer data requirements may change in future months, and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours cleaning up and formatting data, nor do they need to custom build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features, rather than work on solving spreadsheet imports.

Importing data is fraught with challenges, and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.



What ya need there is a bit of templating

I had a fella write in to me the other day. He had some HTML, CSS, and JavaScript, and it just wasn’t behaving like he thought it ought to. The HTML had some placeholders in it and the JavaScript had some data in it, and the assumption was that the data would fill the placeholders.

To those of us with some degree of web knowledge, we can look at this and see why it’s not working like he thought it would. But I think it’s also valuable to try to see things from that perspective and then look at solutions that are hopefully as simple as the original problem seems to be.

The HTML was something like this…

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Test</title>
  <link rel="stylesheet" href="test.css">
  <script src="data.js"></script>
</head>
<body>
  <section>
    <div>{company_name}</div>
  </section>
</body>
</html>

The JavaScript was like this…

var company_data = {
  "{company_name}": "SOME COMPANY",
};

There is nothing invalid going on here.

That’s all perfectly valid code. It is linked up right. It will run. It just doesn’t do anything besides render {company_name} to the screen. The expectation is that it would render SOME COMPANY to the screen instead, replacing the {company_name} placeholder with the data from the JavaScript file.

Let’s fix it with a one-liner.

In this exact scenario, to display the correct company name, we need to select that element in the DOM and replace its content with our data. We could do that by adding this one extra line to the JavaScript:

var company_data = {
  "{company_name}": "SOME COMPANY"
};

document.querySelector("div").innerHTML = company_data["{company_name}"];

That’s not particularly reusable or resilient, but hey, it’s also not over-thinking or over-tooling it.

The expectation was templating

I think we can see at this point that what he hoped would happen is that this sort of templating would happen automatically: provide an object with keys that match what’s in the HTML, and the content in that HTML is automatically swapped out. It just doesn’t work that way with raw web technologies.

No joke, there are hundreds of ways to go about handling this. Here’s a few off the top of my head:

  • Use a templating language like Handlebars or Mustache
  • Use a static site generator like Eleventy, which uses Liquid by default
  • Make an HTML <template> and write your own script to use it
  • Make a Web Component
  • Use a back-end language instead, or a language like Nunjucks to process ahead of time
  • Use a preprocessor like Pug

As a general preference, I’d say doing the templating server-side or during a build is ideal — why mess with the DOM if you don’t need to?

But just to ignore that advice for a second, here’s an example of doing it client-side with Handlebars, just so the fella from the original email has a working example of how that can work:
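Something along these lines (a sketch, not the exact embedded Pen; it assumes Handlebars is loaded from a CDN <script> tag):

// Given markup like:
//   <script id="template" type="text/x-handlebars-template">
//     <div>{{company_name}}</div>
//   </script>
//   <section></section>
var source = document.querySelector("#template").innerHTML;
var template = Handlebars.compile(source);

var company_data = {
  company_name: "SOME COMPANY",
};

document.querySelector("section").innerHTML = template(company_data);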



We need more inclusive web performance metrics

Scott Jehl argues that performance metrics such as First Contentful Paint and Largest Contentful Paint don’t really capture the full picture of everyone’s experience with websites:

These metrics are often touted as measures of usability or meaning, but they are not necessarily meaningful for everyone. In particular, users relying on assistive technology (such as a screenreader) may not perceive steps in the page loading process until after the DOM is complete, or even later depending on how JavaScript may block that process. Also, a page may not be usable to A.T. until it becomes fully interactive, since many applications often deliver accessible interactivity via external JavaScript.

Scott then jots down some thoughts on how we might do that. I think this is always so very useful to keep in mind: what we experience on our site, and what we measure too, might not be the full picture.


Everything You Need to Know About FLIP Animations in React

With a very recent Safari update, Web Animations API (WAAPI) is now supported without a flag in all modern browsers (except IE).  Here’s a handy Pen where you can check which features your browser supports. The WAAPI is a nice way to do animation (that needs to be done in JavaScript) because it’s native — meaning it requires no additional libraries to work. If you’re completely new to WAAPI, here’s a very good introduction by Dan Wilson.

One of the most efficient approaches to animation is FLIP. FLIP requires a bit of JavaScript to do its thing. 

Let’s take a look at the intersection of using the WAAPI, FLIP, and integrating all that into React. But we’ll start without React first, then get to that.

FLIP and WAAPI

FLIP animations are made much easier by the WAAPI!

Quick refresher on FLIP: The big idea is that you position the element where you want it to end up first. Next, apply transforms to move it to the starting position. Then unapply those transforms. 

Animating transforms is super efficient, thus FLIP is super efficient. Before the WAAPI, we had to directly manipulate the element’s styles to set transforms and wait for the next frame to unset/invert it:

// FLIP before the WAAPI
el.style.transform = `translateY(200px)`;

requestAnimationFrame(() => {
  el.style.transform = '';
});

A lot of libraries are built upon this approach.  However, there are several problems with this:

  • Everything feels like a huge hack.
  • It is extremely difficult to reverse the FLIP animation. While CSS transforms are reversed “for free” once a class is removed, this is not the case here. Starting a new FLIP while a previous one is running can cause glitches. Reversing requires parsing a transform matrix with getComputedStyle and using it to calculate the current dimensions before setting a new animation.
  • Advanced animations are close to impossible. For example, to prevent distorting a scaled parent’s children, we need access to the current scale value on each frame. This can only be done by parsing the transform matrix.
  • There’s lots of browser gotchas. For example, sometimes getting a FLIP animation to work flawlessly in Firefox requires calling requestAnimationFrame twice:
requestAnimationFrame(() => {
  requestAnimationFrame(() => {
    el.style.transform = '';
  });
});

We get none of these problems when the WAAPI is used. Reversing can be done painlessly with the reverse function. The counter-scaling of children is also possible. And when there is a bug, it is easy to pinpoint the exact culprit, since we’re only working with simple functions like animate and reverse rather than combing through things like the requestAnimationFrame approach.

Here’s the outline of the WAAPI version:

el.classList.toggle('someclass');
const keyframes = /* Calculate the size/position diff */;
el.animate(keyframes, 2000);
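And since animate() returns an Animation object, reversing later really is one call. A tiny sketch:

const animation = el.animate(keyframes, 2000);
// Later, e.g. when the user toggles back, play it backwards
// from wherever it currently is:
animation.reverse();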

FLIP and React

To understand how FLIP animations work in React, it is important to know how and, most importantly, why they work in plain JavaScript. Recall the anatomy of a FLIP animation:

Diagram. Cache current size and position, make a style change, get new size and position, calculate the difference, set transforms, and cancel transforms. Each item has a purple background, except the last one, indicating they happen before paint.

Everything that has a purple background needs to happen before the “paint” step of rendering. Otherwise, we would see a flash of new styles for a moment which is not good. Things get a little bit more complicated in React since all DOM updates are done for us.

The magic of FLIP animations is that an element is transformed before the browser has a chance to paint. So how do we know the “before paint” moment in React?

Meet the useLayoutEffect hook. If you ever wondered what it is for… this is it! Anything we pass in this callback happens synchronously after DOM updates but before paint. In other words, this is a great place to set up a FLIP!
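To make that concrete, here’s a rough sketch of a FLIP-style hook built on useLayoutEffect and the WAAPI. The hook name and details are mine, not from any of the demos below:

import { useLayoutEffect, useRef } from "react";

// Replays DOM position changes as a transform animation whenever deps change.
function useFlip(ref, deps) {
  const lastRect = useRef(null);

  useLayoutEffect(() => {
    const el = ref.current;
    if (!el) return;
    const newRect = el.getBoundingClientRect();
    if (lastRect.current) {
      // Invert: how far did the element move since the last render?
      const dx = lastRect.current.left - newRect.left;
      const dy = lastRect.current.top - newRect.top;
      if (dx || dy) {
        // Play: start from the old position and animate to the new one.
        el.animate(
          [
            { transform: `translate(${dx}px, ${dy}px)` },
            { transform: "translate(0, 0)" },
          ],
          { duration: 300, easing: "ease-in-out" }
        );
      }
    }
    lastRect.current = newRect; // cache for the next render
  }, deps);
}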

Let us do something the FLIP technique is very good for: animating the DOM position. There’s nothing CSS can do if we want to animate how an element moves from one DOM position to another. (Imagine completing a task in a to-do list and moving it to the list of “completed” tasks like when you click on items in the Pen below.)

Let’s look at the simplest example. Clicking on either of the two squares in the following Pen makes them swap positions. Without FLIP, it would happen instantly.

There’s a lot going on there. Notice how all work happens inside lifecycle hook callbacks: useEffect and useLayoutEffect. What makes it a little bit confusing is that the timeline of our FLIP animation is not obvious from code alone since it happens across two React renders. Here’s the anatomy of a React FLIP animation to show the different order of operations:

Diagram. Cache the size and position, retrieve previous size and position from cache, get new size and position, calculate the difference, and play the animation.

Although useEffect always runs after useLayoutEffect and after browser paint, it is important that we cache the element’s position and size after the first render. We won’t get a chance to do it on the second render because useLayoutEffect runs after all DOM updates. But the procedure is essentially the same as with vanilla FLIP animations.

Caveats

Like most things, there are some caveats to consider when working with FLIP in React.

Keep it under 100ms

A FLIP animation is all calculation. Calculation takes time, and before you can show that smooth 60fps transform, you need to do quite a bit of work. People won’t notice a delay if it is under 100ms, so make sure everything is below that. The Performance tab in DevTools is a good place to check.

Unnecessary renders

We can’t use useState for caching sizes, positions, and animation objects because every setState will cause an unnecessary render and slow down the app. It can even cause bugs in the worst of cases. Try using useRef instead and think of it as an object that can be mutated without rendering anything.

Layout thrashing

Avoid repeatedly triggering browser layout. In the context of FLIP animations, that means avoid looping through elements and reading their position with getBoundingClientRect, then immediately animating them with animate. Batch “reads” and “writes” whenever possible. This will allow for extremely smooth animations.
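A sketch of what batching looks like in practice (the elements and oldRects variables are assumed to exist in surrounding code):

// Read phase: measure everything up front, forcing layout only once.
const newRects = elements.map((el) => el.getBoundingClientRect());

// Write phase: start every animation without measuring in between.
elements.forEach((el, i) => {
  const dx = oldRects[i].left - newRects[i].left;
  const dy = oldRects[i].top - newRects[i].top;
  el.animate(
    [{ transform: `translate(${dx}px, ${dy}px)` }, { transform: "none" }],
    300
  );
});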

Animation canceling

Try randomly clicking on the squares in the earlier demo while they move, then again after they stop. You will see glitches. In real life, users will interact with elements while they move, so it’s worth making sure they are canceled, paused, and updated smoothly. 

However, not all animations can be reversed with reverse. Sometimes, we want them to stop and then move to a new position (like when randomly shuffling a list of elements). In this case, we need to:

  • obtain a size/position of a moving element
  • finish the current animation
  • calculate the new size and position differences
  • start a new animation

In React, this can be harder than it seems. I wasted a lot of time struggling with it. The current animation object must be cached. A good way to do it is to create a Map so we can get an animation by its ID. Then, we need to obtain the size and position of the moving element. There are two ways to do it:

  1. Use a function component: Simply loop through every animated element right in the body of the function and cache the current positions.
  2. Use a class component: Use the getSnapshotBeforeUpdate lifecycle method.

In fact, official React docs recommend using getSnapshotBeforeUpdate “because there may be delays between the “render” phase lifecycles (like render) and “commit” phase lifecycles (like getSnapshotBeforeUpdate and componentDidUpdate).” However, there is no hook counterpart of this method yet. I found that using the body of the function component is fine enough.

Don’t fight the browser

I’ve said it before, but avoid fighting the browser and try to make things happen the way the browser would do it. If we need to animate a simple size change, then consider whether CSS would suffice (e.g. transform: scale()). I’ve found that FLIP animations are best used where browsers really can’t help:

  • Animating DOM position change (as we did above)
  • Sharing layout animations

The second is a more complicated version of the first. There are two DOM elements that act and look as one changing its position (while another is unmounted/hidden). This trick enables some cool animations. For example, this animation is made with a library I built called react-easy-flip that uses this approach:

Libraries

There are quite a few libraries that make FLIP animations in React easier and abstract away the boilerplate. Ones that are actively maintained include react-flip-toolkit and mine, react-easy-flip.

If you do not mind something heavier but capable of more general animations, check out framer-motion. It also does cool shared layout animations! There is a video digging into that library.



“All these things are quite easy to do, they just need somebody to sit down and just go through the website”

I saw a video posted on Twitter from Channel 5 News in the UK (I have no idea what their credibility is; it’s an ocean away from me) with anchor Claudia Liza asking Glen Turner and Kristina Barrick questions about website accessibility.

Apparently, they often post videos with captions, but this particular video doesn’t (ironically). So, I’ve transcribed it here as I found them pretty well-spoken.

[Claudia Liza]: … you do have a visual impairment. How does that make it difficult for you to shop online?

[Glen Turner]: Well, I use various special features on my devices to shop online to make it easier. So, I enlarge the text, I’ll invert the colors to make the background dark so that I don’t have glare. I will zoom in on pictures, I will use speech to read things to me because it’s too difficult sometimes. But sometimes websites and apps aren’t designed in a way that is compatible with that. So sometimes the text will be poorly contrasted so you’ll have things like brown on black, or red on black, or yellow on white, something like that. Or the menu system won’t be very easy to navigate, or images won’t have descriptions for the visually impaired because images can have descriptions embedded that a speech reader will read back to them. So all these various factors make it difficult or impossible to shop on certain websites.

[Claudia Liza]: What do you need retailers to do? How do they need to change their technology on their websites and apps to make it easier?

[Glen Turner]: It’s quite easy to do a lot of these things, really. Check the colors on your website. Make sure you’ve got light against dark and there is a very clear, distinctive contrast. Make sure there are descriptions for the visually impaired. Make sure there are captions on videos for the hearing impaired. Make sure your menus are easy to navigate and make it easy to get around. All these things are quite easy to do, they just need somebody to sit down and just go through the website and check that it’s all right, and consult disabled people as well. Ideally, you’ve got disabled people in your organization you employ, but consult the wider disabled community as well. There is loads of us online, there is loads of us spread all over the country. There is 14 million of us you can talk to, so come and talk to us and say, “You know, is our website accessible for you? What can we do to improve it?” Then act on it when we give you our advice.

[Claudia Liza]: It makes sense, doesn’t it, Glen? It sounds so simple. But Kristina, it is a bit tricky for retailers. Why is that? What do other people with disabilities tell you?

[Kristina Barrick]: So, we hear about content on websites being confusing in the way it’s written. There’s lots of information online about how to make an accessible website. There’s a global minimum legal standard called WCAG and there’s lots of resources online. Scope has their own, which has loads of information on how to make your website accessible.

I think the problem really is generally lack of awareness. It doesn’t get spoken about a lot. I think that disabled consumers – there’s not a lot of places to complain. Sometimes they’ll go on a website and there isn’t even a way to contact that business to tell them that their website isn’t accessible. So what Scope is trying to do is raise the voices of disabled people. We have crowdsourced a lot of people’s feedback on where they experience inaccessible websites. We’re raising that profile and trying to get businesses to change.

[Claudia Liza]: So is it legal when retailers aren’t making their websites accessible?

[Kristina Barrick]: Yeah, so, under the Equality Act 2010, it’s not legal to create an inaccessible website, but what we’ve found is that government isn’t generally enforcing that as a law.

[Claudia Liza]: Glen, do you feel confident that one day you’ll be able to buy whatever you want online?

[Glen Turner]: I would certainly like to think that would be the case. As I say, you raise enough awareness and get the message out there and alert businesses to the fact that there is a huge consumer market among the disabled community, and we’ve got a 274 billion pound expenditure a year that we can give to them. Then if they are aware of that, then yeah, hopefully they will open their doors to us and let us spend our money with them.


Two Images and an API: Everything We Need for Recoloring Products

I recently found a solution to dynamically update the color of any product image. So with just one <img> of a product, we can colorize it in different ways to show different color options. We don’t even need any fancy SVG or CSS to get it done!

We’ll be using an image editor (e.g. Photoshop or Sketch) and the image transformation service imgix. (This isn’t a sponsored post and there is no affiliation here — it’s just a technique I want to share.)

See the Pen Dynamic Car color by Der Dooley (@ddools) on CodePen.

I work with a travel software company called CarTrawler on the engineering team, and I recently undertook a project to revamp our car image library that we use to display car rental search results. I wanted to take this opportunity to introduce dynamically colored cars.

We may sometimes load up to 200 different cars at the same time, so speed and performance are key requirements. We also have five different products across unique code bases, so avoiding over-engineering is vital to success.

I wanted to be able to dynamically change the color of each of these cars without needing additional front-end changes to the code.

Step 1: The Base Layer

I’m using car photos here, but this technique could be applied to any product. First we need a base layer. This is the default layer we would display without any color and it should look good on its own.

Step 2: The Paint Layer

Next we create a paint layer that is the same dimensions as the base layer, but only contains the areas where the colors should change dynamically.

A light color is key for the paint layer. Using white or a light shade of gray gives us a great advantage because we are ultimately “blending” this image with color. Anything darker or in a different hue would make it hard to mix this base color with other colors.

Step 3: Using the imgix API

This is where things get interesting. I’m going to leverage multiple parameters from the imgix API. Let’s apply black to our paint layer.

(Source URL)

We changed the color by applying a standard black hex value of #000000.

https://ddools.imgix.net/cars/paint.png?w=600&bri=-18&con=26&monochrome=000000

If you noticed the URL of the image above, you might be wondering: What the heck are all those parameters? The imgix API docs have a lot of great information, so no need to go into greater detail here. But I will explain the parameters I used.

  • w. The width I want the image to be
  • bri. Adjusts the brightness level
  • con. Adjusts the amount of contrast
  • monochrome. The dynamic hex color

Because we are going to stack our layers via imgix we will need to encode our paint layer. That means replacing some of the characters in the URL with encoded values — like we’d do if we were using inline SVG as a background image in CSS.

https%3A%2F%2Fddools.imgix.net%2Fcars%2Fpaint.png%3Fw%3D600%26bri%3D-18%26con%3D26%26monochrome%3D000000

Step 4: Stack the Layers

Now we are going to use imgix’s watermark parameter to stack the paint layer on top of our base layer.

https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=[PAINTLAYER]

Let’s look at the parameters being used:

  • w. This is the image width and it must be identical for both layers.
  • mark-align. This centers the paint layer on top of the base layer.
  • mark. This is where the encoded paint layer goes.

In the end, you will get a single URL that looks something like this:

https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=https%3A%2F%2Fddools.imgix.net%2Fcars%2Fpaint.png%3Fw%3D600%26bri%3D-18%26con%3D26%26monochrome%3D000000

That gives the car in black:

(Source URL)

Now that we have one URL, we can basically swap out the black hex value with any other colors we want. Let’s try blue!

(Source URL)

Or green!

(Source URL)

Why not red?

(Source URL)
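To make the swapping concrete, here’s a tiny helper you might write to build these URLs for any hex value. This is my own sketch; the parameter values come straight from the URLs above:

// Build a recolored car URL by swapping the monochrome hex value.
function carUrl(hex) {
  const paint =
    "https://ddools.imgix.net/cars/paint.png?w=600&bri=-18&con=26&monochrome=" +
    hex;
  return (
    "https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=" +
    encodeURIComponent(paint)
  );
}

carUrl("000000"); // black
carUrl("0000ff"); // blue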

That’s it! There are certainly other ways to accomplish the same thing, but this seems so straightforward that it’s worth sharing. There was no need to code a bunch of additional functionality. No complex libraries to manage or wrangle. All we need is a couple of images that an online tool will stack and blend for us. Seems like a pretty reasonable solution!


Don’t comma-separate :focus-within if you need deep browser support

I really like :focus-within. It’s a super useful selector that allows you to essentially select a parent element when any of its children are in focus.

Say you wanted to reveal some extra stuff when a <div> is hovered…

div:hover {
  .extra-stuff {
    /* reveal it */
  }
}

That’s not particularly keyboard-friendly. But if something in .extra-stuff is tab-able anyway (meaning it can be focused), that means you could write it like this to make it a bit more accessible:

div:hover,
div:focus-within {
  .extra-stuff {
    /* reveal it */
  }
}

That’s nice, but it causes a tricky problem.

Browsers ignore entire selectors if they don’t understand any part of them. So, if you’re dealing with a browser that doesn’t support :focus-within, it would ignore the CSS example above, meaning you’ve also lost the :hover state.

Instead:

div:hover {
  .extra-stuff {
    /* reveal it */
  }
}
div:focus-within {
  .extra-stuff {
    /* reveal it */
  }
}

That is safer. But it’s repetitive. If you have a preprocessor like Sass…

@mixin reveal {
  .extra-stuff {
    transform: translateY(0);
  }
}
div:hover {
  @include reveal;
}
div:focus-within {
  @include reveal;
}

See the Pen Mixing for :focus-within without comma-separating by Chris Coyier (@chriscoyier) on CodePen.

I’d say it’s a pretty good use-case for having native CSS mixins.
