More Control Over CSS Borders With background-image

You can make a typical CSS border dashed or dotted. For example:

.box {
  border: 1px dashed black;
  border: 3px dotted red;
}

You don’t have all that much control over how big or long the dashes or gaps are. And you certainly can’t give the dashes slants, fading, or animation! You can do those things with some trickery though.

Amit Sheen built this really neat Dashed Border Generator:

The trick is using multiple backgrounds. The background property takes comma-separated values, so by setting four backgrounds (one each along the top, right, bottom, and left) and sizing them to look like a border, you unlock all this control.

So like:

.box {
  background-image:
    repeating-linear-gradient(0deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(90deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(180deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px),
    repeating-linear-gradient(270deg, #333333, #333333 10px, transparent 10px, transparent 20px, #333333 20px);
  background-size: 3px 100%, 100% 3px, 3px 100%, 100% 3px;
  background-position: 0 0, 0 0, 100% 0, 0 100%;
  background-repeat: no-repeat;
}

I like gumdrops.



Nailing the Perfect Contrast Between Light Text and a Background Image

Have you ever come across a site where light text is sitting on a light background image? If you have, you’ll know how difficult that is to read. A popular way to avoid that is to use a transparent overlay. But this leads to an important question: Just how transparent should that overlay be? It’s not like we’re always dealing with the same font sizes, weights, and colors, and, of course, different images will result in different contrasts.

Trying to stamp out poor text contrast on background images is a lot like playing Whac-a-Mole. Instead of guessing, we can solve this problem with HTML <canvas> and a little bit of math.

Like this:

We could say “Problem solved!” and simply end this article here. But where’s the fun in that? What I want to show you is how this tool works so you have a new way to handle this all-too-common problem.

Here’s the plan

First, let’s get specific about our goals. We’ve said we want readable text on top of a background image, but what does “readable” even mean? For our purposes, we’ll use the WCAG definition of AA-level readability, which says text and background colors need enough contrast between them that one color is 4.5 times lighter than the other.

Let’s pick a text color, a background image, and an overlay color as a starting point. Given those inputs, we want to find the overlay opacity level that makes the text readable without hiding the image so much that it, too, is difficult to see. To complicate things a bit, we’ll use an image with both dark and light space and make sure the overlay takes that into account.

Our final result will be a value we can apply to the CSS opacity property of the overlay that gives us the right amount of transparency that makes the text 4.5 times lighter than the background.

Optimal overlay opacity: 0.521

To find the optimal overlay opacity we’ll go through four steps:

  1. We’ll put the image in an HTML <canvas>, which will let us read the colors of each pixel in the image.
  2. We’ll find the pixel in the image that has the least contrast with the text.
  3. Next, we’ll prepare a color-mixing formula we can use to test different opacity levels on top of that pixel’s color.
  4. Finally, we’ll adjust the opacity of our overlay until the text contrast hits the readability goal. And these won’t just be random guesses — we’ll use binary search techniques to make this process quick.

Let’s get started!

Step 1: Read image colors from the canvas

Canvas lets us “read” the colors contained in an image. To do that, we need to “draw” the image onto a <canvas> element and then use the canvas context (ctx) getImageData() method to produce a list of the image’s colors.

function getImagePixelColorsUsingCanvas(image, canvas) {
  // The canvas's context (often abbreviated as ctx) is an object
  // that contains a bunch of functions to control your canvas
  const ctx = canvas.getContext('2d');

  // The width can be anything, so I picked 500 because it's large
  // enough to catch details but small enough to keep the
  // calculations quick.
  canvas.width = 500;

  // Make sure the canvas matches proportions of our image
  canvas.height = (image.height / image.width) * canvas.width;

  // Grab the image and canvas measurements so we can use them in the next step
  const sourceImageCoordinates = [0, 0, image.width, image.height];
  const destinationCanvasCoordinates = [0, 0, canvas.width, canvas.height];

  // Canvas's drawImage() works by mapping our image's measurements onto
  // the canvas where we want to draw it
  ctx.drawImage(
    image,
    ...sourceImageCoordinates,
    ...destinationCanvasCoordinates
  );

  // Remember that getImageData only works for same-origin or
  // cross-origin-enabled images.
  // https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image
  const imagePixelColors = ctx.getImageData(...destinationCanvasCoordinates);
  return imagePixelColors;
}

The getImageData() method gives us a list of numbers representing the colors in each pixel. Each pixel is represented by four numbers: red, green, blue, and opacity (also called “alpha”). Knowing this, we can loop through the list of pixels and find whatever info we need. This will be useful in the next step.
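For example, here’s a minimal sketch (my own addition, not part of the article’s code) of how you could pull the color out of a single pixel at a given (x, y) coordinate in that list:

function getPixelColor(imagePixelColors, x, y) {
  // Each pixel takes up 4 consecutive slots in the data array: red, green, blue, alpha
  const i = (y * imagePixelColors.width + x) * 4;
  return {
    r: imagePixelColors.data[i],
    g: imagePixelColors.data[i + 1],
    b: imagePixelColors.data[i + 2],
    a: imagePixelColors.data[i + 3],
  };
}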

Image of a blue and purple rose on a light pink background. A section of the rose is magnified to reveal the RGBA values of a specific pixel.

Step 2: Find the pixel with the least contrast

Before we do this, we need to know how to calculate contrast. We’ll write a function called getContrast() that takes in two colors and spits out a number representing the level of contrast between the two. The higher the number, the better the contrast for legibility.

When I started researching colors for this project, I was expecting to find a simple formula. It turned out there were multiple steps.

To calculate the contrast between two colors, we need to know their luminance levels, which is essentially their brightness. (Stacie Arellano does a deep dive on luminance that’s worth checking out.)

Thanks to the W3C, we know the formula for calculating contrast using luminance:

const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);

Getting the luminance of a color means we have to convert the color from the regular 8-bit RGB value used on the web (where each color is 0-255) to what’s called linear RGB. The reason we need to do this is that brightness doesn’t increase evenly as colors change. We need to convert our colors into a format where the brightness does vary evenly with color changes. That allows us to properly calculate luminance. Again, the W3C is a help here:

const luminance = (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));

But wait, there’s more! In order to convert 8-bit RGB (0 to 255) to linear RGB, we need to go through what’s called standard RGB (also called sRGB), which is on a scale from 0 to 1.

So the process goes: 

8-bit RGB → standard RGB  → linear RGB → luminance

And once we have the luminance of both colors we want to compare, we can plug in the luminance values to get the contrast between their respective colors.

// getContrast is the only function we need to interact with directly.
// The rest of the functions are intermediate helper steps.
function getContrast(color1, color2) {
  const color1_luminance = getLuminance(color1);
  const color2_luminance = getLuminance(color2);
  const lighterColorLuminance = Math.max(color1_luminance, color2_luminance);
  const darkerColorLuminance = Math.min(color1_luminance, color2_luminance);
  const contrast = (lighterColorLuminance + 0.05) / (darkerColorLuminance + 0.05);
  return contrast;
}

function getLuminance({r, g, b}) {
  return (0.2126 * getLinearRGB(r) + 0.7152 * getLinearRGB(g) + 0.0722 * getLinearRGB(b));
}

function getLinearRGB(primaryColor_8bit) {
  // First convert from 8-bit RGB (0-255) to standard RGB (0-1)
  const primaryColor_sRGB = convert_8bit_RGB_to_standard_RGB(primaryColor_8bit);

  // Then convert from sRGB to linear RGB so we can use it to calculate luminance
  const primaryColor_RGB_linear = convert_standard_RGB_to_linear_RGB(primaryColor_sRGB);
  return primaryColor_RGB_linear;
}

function convert_8bit_RGB_to_standard_RGB(primaryColor_8bit) {
  return primaryColor_8bit / 255;
}

function convert_standard_RGB_to_linear_RGB(primaryColor_sRGB) {
  const primaryColor_linear = primaryColor_sRGB < 0.03928 ?
    primaryColor_sRGB / 12.92 :
    Math.pow((primaryColor_sRGB + 0.055) / 1.055, 2.4);
  return primaryColor_linear;
}

Now that we can calculate contrast, we’ll need to look at our image from the previous step and loop through each pixel, comparing the contrast between that pixel’s color and the foreground text color. As we loop through the image’s pixels, we’ll keep track of the worst (lowest) contrast so far, and when we reach the end of the loop, we’ll know the worst-contrast color in the image.

function getWorstContrastColorInImage(textColor, imagePixelColors) {
  let worstContrastColorInImage;
  let worstContrast = Infinity; // This guarantees we won't start too low

  for (let i = 0; i < imagePixelColors.data.length; i += 4) {
    let pixelColor = {
      r: imagePixelColors.data[i],
      g: imagePixelColors.data[i + 1],
      b: imagePixelColors.data[i + 2],
    };
    let contrast = getContrast(textColor, pixelColor);
    if (contrast < worstContrast) {
      worstContrast = contrast;
      worstContrastColorInImage = pixelColor;
    }
  }
  return worstContrastColorInImage;
}

Step 3: Prepare a color-mixing formula to test overlay opacity levels

Now that we know the worst-contrast color in our image, the next step is to establish how transparent the overlay should be and see how that changes the contrast with the text.

When I first implemented this, I used a separate canvas to mix colors and read the results. However, thanks to Ana Tudor’s article about transparency, I now know there’s a convenient formula to calculate the resulting color from mixing a base color with a transparent overlay.

For each color channel (red, green, and blue), we’d apply this formula to get the mixed color:

mixedColor = baseColor + (overlayColor - baseColor) * overlayOpacity

So, in code, that would look like this:

function mixColors(baseColor, overlayColor, overlayOpacity) {
  const mixedColor = {
    r: baseColor.r + (overlayColor.r - baseColor.r) * overlayOpacity,
    g: baseColor.g + (overlayColor.g - baseColor.g) * overlayOpacity,
    b: baseColor.b + (overlayColor.b - baseColor.b) * overlayOpacity,
  };
  return mixedColor;
}

Now that we’re able to mix colors, we can test the contrast when the overlay opacity value is applied.

function getTextContrastWithImagePlusOverlay({textColor, overlayColor, imagePixelColor, overlayOpacity}) {
  const colorOfImagePixelPlusOverlay = mixColors(imagePixelColor, overlayColor, overlayOpacity);
  const contrast = getContrast(textColor, colorOfImagePixelPlusOverlay);
  return contrast;
}

With that, we have all the tools we need to find the optimal overlay opacity!

Step 4: Find the overlay opacity that hits our contrast goal

We can test an overlay’s opacity and see how that affects the contrast between the text and image. We’re going to try a bunch of different opacity levels until we find the contrast that hits our mark where the text is 4.5 times lighter than the background. That may sound crazy, but don’t worry; we’re not going to guess randomly. We’ll use a binary search, which is a process that lets us quickly narrow down the possible set of answers until we get a precise result.

Here’s how a binary search works:

  • Guess in the middle.
  • If the guess is too high, we eliminate the top half of the answers. Too low? We eliminate the bottom half instead.
  • Guess in the middle of that new range.
  • Repeat this process until we get a value.

I just so happen to have a tool to show how this works:

In this case, we’re trying to guess an opacity value that’s between 0 and 1. So, we’ll guess in the middle, test whether the resulting contrast is too high or too low, eliminate half the options, and guess again. If we limit the binary search to eight guesses, we’ll get a precise answer in a snap.

Before we start searching, we’ll need a way to check if an overlay is even necessary in the first place. There’s no point optimizing an overlay we don’t even need!

function isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast) {
  const contrastWithoutOverlay = getContrast(textColor, worstContrastColorInImage);
  return contrastWithoutOverlay < desiredContrast;
}

Now we can use our binary search to look for the optimal overlay opacity:

function findOptimalOverlayOpacity(textColor, overlayColor, worstContrastColorInImage, desiredContrast) {
  // If the contrast is already fine, we don't need the overlay,
  // so we can skip the rest.
  const overlayIsNecessary = isOverlayNecessary(textColor, worstContrastColorInImage, desiredContrast);
  if (!overlayIsNecessary) {
    return 0;
  }

  const opacityGuessRange = {
    lowerBound: 0,
    midpoint: 0.5,
    upperBound: 1,
  };
  let numberOfGuesses = 0;
  const maxGuesses = 8;

  // If there's no solution, the opacity guesses will approach 1,
  // so we can hold onto this as an upper limit to check for the no-solution case.
  const opacityLimit = 0.99;

  // This loop repeatedly narrows down our guesses until we get a result
  while (numberOfGuesses < maxGuesses) {
    numberOfGuesses++;

    const currentGuess = opacityGuessRange.midpoint;
    const contrastOfGuess = getTextContrastWithImagePlusOverlay({
      textColor,
      overlayColor,
      imagePixelColor: worstContrastColorInImage,
      overlayOpacity: currentGuess,
    });

    const isGuessTooLow = contrastOfGuess < desiredContrast;
    const isGuessTooHigh = contrastOfGuess > desiredContrast;
    if (isGuessTooLow) {
      opacityGuessRange.lowerBound = currentGuess;
    }
    else if (isGuessTooHigh) {
      opacityGuessRange.upperBound = currentGuess;
    }

    const newMidpoint = ((opacityGuessRange.upperBound - opacityGuessRange.lowerBound) / 2) + opacityGuessRange.lowerBound;
    opacityGuessRange.midpoint = newMidpoint;
  }

  const optimalOpacity = opacityGuessRange.midpoint;
  const hasNoSolution = optimalOpacity > opacityLimit;

  if (hasNoSolution) {
    console.log('No solution'); // Handle the no-solution case however you'd like
    return opacityLimit;
  }
  return optimalOpacity;
}
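Putting it all together, here’s a rough sketch (my own wiring, with hypothetical element selectors) of how the functions above connect, assuming the image has already loaded and is same-origin:

const image = document.querySelector('#background-image');
const canvas = document.querySelector('#scratch-canvas');

const textColor = { r: 255, g: 255, b: 255 }; // white text
const overlayColor = { r: 0, g: 0, b: 0 };    // black overlay
const desiredContrast = 4.5;                  // WCAG AA for normal text

const imagePixelColors = getImagePixelColorsUsingCanvas(image, canvas);
const worstContrastColorInImage = getWorstContrastColorInImage(textColor, imagePixelColors);
const overlayOpacity = findOptimalOverlayOpacity(
  textColor,
  overlayColor,
  worstContrastColorInImage,
  desiredContrast
);

// Apply the result to the overlay element, for example:
// document.querySelector('.overlay').style.opacity = overlayOpacity;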

With our experiment complete, we now know exactly how transparent our overlay needs to be to keep our text readable without hiding the background image too much.

We did it!

Improvements and limitations

The methods we’ve covered only work if the text color and the overlay color have enough contrast to begin with. For example, if you were to choose a text color that’s the same as your overlay, there won’t be an optimal solution unless the image doesn’t need an overlay at all.

In addition, even if the contrast is mathematically acceptable, that doesn’t always guarantee it’ll look great. This is especially true for dark text with a light overlay and a busy background image. Various parts of the image may distract from the text, making it difficult to read even when the contrast is numerically fine. That’s why the popular recommendation is to use light text on a dark background.

We also haven’t taken where the pixels are located into account or how many there are of each color. One drawback of that is that a pixel in the corner could possibly exert too much influence on the result. The benefit, however, is that we don’t have to worry about how the image’s colors are distributed or where the text is because, as long as we’ve handled where the least amount of contrast is, we’re safe everywhere else.

I learned a few things along the way

There are some things I walked away with after this experiment, and I’d like to share them with you:

  • Getting specific about a goal really helps! We started with a vague goal of wanting readable text on an image, and we ended up with a specific contrast level we could strive for.
  • It’s so important to be clear about the terms. For example, standard RGB wasn’t what I expected. I learned that what I thought of as “regular” RGB (0 to 255) is formally called 8-bit RGB. Also, I thought the “L” in the equations I researched meant “lightness,” but it actually means “luminance,” which is not to be confused with “luminosity.” Clearing up terms helps how we code as well as how we discuss the end result.
  • Complex doesn’t mean unsolvable. Problems that sound hard can be broken into smaller, more manageable pieces.
  • When you walk the path, you spot the shortcuts. For the common case of white text on a black transparent overlay, you’ll never need an opacity over 0.54 to achieve WCAG AA-level readability. (A quick check of that number with the functions above follows this list.)
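As a sanity check (my own, not from the article), the worst case for white text is a pure white pixel underneath it. Mixing that pixel with a black overlay at 0.54 opacity and running the result through getContrast() lands just past the 4.5 target:

const whiteText = { r: 255, g: 255, b: 255 };
const whitePixel = { r: 255, g: 255, b: 255 }; // worst case behind white text
const blackOverlay = { r: 0, g: 0, b: 0 };

const mixed = mixColors(whitePixel, blackOverlay, 0.54);
console.log(getContrast(whiteText, mixed)); // ≈ 4.59, just above the 4.5 goal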

In summary…

You now have a way to make your text readable on a background image without sacrificing too much of the image. If you’ve gotten this far, I hope I’ve been able to give you a general idea of how it all works.

I originally started this project because I saw (and made) too many website banners where the text was tough to read against a background image or the background image was overly obscured by the overlay. I wanted to do something about it, and I wanted to give others a way to do the same. I wrote this article in hopes that you’d come away with a better understanding of readability on the web. I hope you’ve learned some neat canvas tricks too.

If you’ve done something interesting with readability or canvas, I’d love to hear about it in the comments!



What does 100% mean in CSS?

When using percentage values in CSS like this…

.element {
  margin-top: 40%;
}

…what does that % value mean here? What is it a percentage of? There’ve been so many times when I’ll be using percentages and something weird happens. I typically shrug, change the value to something else and move on with my day.

But Amelia Wattenberger says no! in this remarkable deep dive into how percentages work in CSS and all the peculiar things we need to know about them. And as is par for the course at this point, any post by Amelia has a ton of wonderful demos that perfectly describe how the CSS in any given example works. And this post is no different.




font-weight: 300 considered harmful

Tomáš Janoušek:

Many web pages these days set font-weight: 300 in their stylesheet. With DejaVu Sans as my preferred font, this results in very thin and light text that is hard to read, because for some reason the “DejaVu Sans ExtraLight” variant (weight 200) is being used for weights < 360 (in Chrome; in Firefox up to 399). Let’s investigate why this happens and what can be done about it.

Why are people setting font-weight: 300; at all? Well, Mac people, probably. On my macOS Catalina computer, look at the differences between some of the default and built-in fonts between 400 and 300.

400 weight on the top, 300 weight on the bottom

I wouldn’t blame a designer for using a 300 weight in some cases. In fact, 11 years ago I published a snippet called “Better Helvetica” that touted this.

body {
  font-family: "HelveticaNeue-Light", "Helvetica Neue Light", "Helvetica Neue", Helvetica, Arial, "Lucida Grande", sans-serif;
  font-weight: 300;
}

But for Tomáš, whose default font is DejaVu Sans (a default on many Linux and Android systems), the font is difficult to read when the type is that thin. Part of the issue is that if a fallback font doesn’t happen to have a 300, the spec says it can fall back all the way to 100 if needed. I believe the technical term for that is pretty gosh-darned thin.

I’ll try to avoid that myself (or use it when I’m loading web fonts I know have it), but check out Tomáš’ article for a fix on your computer if this bugs you on many sites. This actually reminds me of the different Levels of Fix.




Every Website is an Essay

Every website that’s made me oooo and aaahhh lately has been of a special kind; they’re written and designed like essays. There’s an argument, a playfulness in the way that they’re not so much selling me something as they are trying to convince me of the thing. They use words and type and color in a way that makes me sit up and listen.

And I think that framing our work in this way lets us web designers explore exciting new possibilities. Instead of throwing a big carousel on the page and being done with it, thinking about making a website like an essay encourages us to focus on the tough questions. We need an introduction, we need to provide evidence for our statements, we need a conclusion, etc. This way we don’t have to get so caught up in the same old patterns that we’ve tried again and again in our work.

And by treating web design like an essay, we can be weird with the design. We can establish a distinct voice and make it sound like an honest-to-goodness human being wrote it, too.

One example of a website-as-an-essay is the Analogue Pocket site which uses real paragraphs to market their fancy new device.

Another example is the new email app Hey in which the website is nothing but paragraphs — no screenshots, no fancy product information. It almost feels like a political manifesto hammered onto a giant wooden door.

Apple’s marketing sites are little essays, too. Take this one section from the iPad Pro all about the LiDAR Scanner. It’s not so much trying to sell you an iPad at this point so much as it is trying to argue the case for LiDAR. And as with all good essays it answers the who, what, why, when, and how.

Another example is Stripe’s recent beautiful redesign. What I love more than the outrageously gorgeous animated gradients is the argument that the website is making. What is Stripe? How can I trust them? How easy is it to get set up? Who, what, why, when, how.

To be my own devil’s advocate for a bit though, we’re all familiar with this line of reasoning: Why care about the writing so much when people don’t read? Folks skim through a website. They don’t persevere with the text, they don’t engage with the writing, and you only have half a millisecond to hit them with something flashy before they leave. They can’t handle complex words or sentences. They can’t grasp complex ideas. So keep those paragraphs short! Remove all text from the page!

The implication here is that users are dumb. They can’t focus and they don’t care. You have to shout at them. And I kinda sorta hate that.

Instead, I think the opposite is true. They’ve seen the same boring websites for years. Everyone is tired of lifeless, humorless copywriting. They’ve seen all the animations, witnessed all the cool fonts, and in the face of all that stuff, they yawn. They yawn because it supports a bad argument, or more precisely, a bad essay; one that doesn’t charm the reader, or give them a reason to care.

So what if we made our websites more like essays and less like billboards that dot the freeways? What would that look like?



You’ll Get It When You See It, A Creative Advertising Campaign by One Twenty Three West


JavaScript Fatigue

From Nicholas Zakas’ newsletter, on how he avoids JavaScript fatigue:

 I don’t try to learn about every new thing that comes out. There’s a limited number of hours in the day and a limited amount of energy you can devote to any topic, so I choose not to learn about anything that I don’t think will be immediately useful to me. That doesn’t mean I completely ignore new things as they are released, but rather, I try to fit them into an existing mental model so I can talk about them intelligently. For instance, I haven’t built an application in Deno, but I know that Deno is a JavaScript runtime for building server-side applications, just like Node.js. So I put Deno in the same bucket as Node.js and know that when I need to learn more, there’s a point of comparison.

I too have used the “buckets” analogy. Please allow me to continue and overuse this metaphor.

I rarely try to learn anything just for the sake of it. I learn what I need to learn as whatever I’m working on demands it. But I try to know enough so that when I read tech news, I can place what I’m reading into mental buckets. Then I can, metaphorically, reach into those buckets when those topics come up. Plus, I can keep an eye on how full those buckets are. If I find myself putting a lot of stuff into one bucket, I might plunge in there and see what’s going on as there is clearly something in the water.

You’ll need to subscribe to the newsletter to get to the content.




HTML for Subheadings and Headings

Let’s say you have a double heading situation going on. A little one on top of a big one. It comes up, I dunno, a billion times a day, I’d say. What HTML do you go for? Dare I say, it depends? But have you considered all the options? And how those options play out semantically and accessibility-y?

As we do around here sometimes, let’s take a stroll through the options.

The visual examples

Let’s assume these are not the <h1> on the page. Not that things would be dramatically different if one of them was, but just to set the stage. I find this tends to come up in subsections or cards the most.

Here’s an example a friend brought up in a conversation the other day:

Here’s one that I worked on also just the other day:

And here’s a classic card:

Option 1: The ol’ <h3> then <h2>

The smaller one is on the top, and the bigger one is below, and obviously <h3> is always smaller than an <h2> right?

<h3>Subheading</h3> <h2>Heading</h2>

This is probably pretty common thinking and my least favorite approach.

I’d rather see class names in use so that the styling is isolated to those classes.

<h3 class="card-subhead">Subheading</h3> <h2 class="card-head">Heading</h2>

Don’t make semantic choices, particularly those that affect accessibility, based on this purely visual treatment.

The bigger weirdness is using two heading elements at all. Using two headings for a single bit of content doesn’t feel right. The combination feels like: “Hey here’s a major new section, and then here’s another subheading because there will be more of them in this subsection so this one is just for this first subsection.” But then there are no other subsections and no other subheadings. Even if that isn’t weird, the order is weird with the subsection header coming first.

If you’re going to use two different heading elements, it seems to me the smaller heading is almost more likely to make more sense as the <h2>, which leads us to…

Option 2: Small ‘n’ mighty <h2> and <h3>

If we’ve got classes in place, and the subheading works contextually as the more dominant heading, then we can do this:

<h2 class="card-subheading">Subheading</h2> <h3 class="card-heading">Heading</h3>

Just because that <h2> is visually smaller doesn’t mean it still can’t be the dominant heading in the document outline. If you look at the example from CodePen above, the title “Context Switching” feels like it works better as a heading than the following sentence does. I think using the <h2> on that smaller header works fine there, and keeps the structure more “normal” (I suppose) with the <h2> coming first.

Still, using two headings for one section still feels weird.

Option 3: One header, one div

Perhaps only one of the two needs to be a heading? That feels generally more correct to me. I’ve definitely done this before:

<div class="card-subheading">Subheading</div> <h3 class="card-heading">Heading</h3>

That feels OK, except for the weirdness that the subhead might have relevant content and people could potentially miss that if they navigated to the head via screenreader and read from there forward. I guess you could swap the visual order…

<hgroup> <!-- hgroup is deprecated, just defiantly using it anyway -->
  <h3 class="card-heading">Heading</h3>
  <div class="card-subheading">Subheading</div>
</hgroup>

hgroup {
  display: flex;
  flex-direction: column;
}
hgroup .card-subheading {
  /* Visually, put on top, without affecting source order */
  order: -1;
}

But I think messing with visual order like that is generally a no-no, as in, awkward for sighted screenreader users. So don’t tell anybody I showed you that.

Option 4: Keep it all in one heading

Since we’re only showing a heading for one bit of content anyway, it seems right to only use a single header.

<h2>
  <strong>Subheading</strong>
  Heading
</h2>

Using the <strong> element in there gives us a hook in the CSS to do the same type of styling. For example…

h2 strong {
  display: block;
  font-size: 75%;
  opacity: 0.75;
}

The trick here is that the headings then need to work/read together as one. So, either they read together naturally, or you could use a colon : or something.

<h2>
  <strong>New Podcast:</strong>
  Struggling with Basic HTML
</h2>

ARIA Role

It turns out that there is an ARIA role dedicated to subtitles: doc-subtitle.

So like:

<h2 class="card-heading">Heading</h2> <div role="doc-subtitle">Subheading</div>

I like styles based on ARIA roles in general (because it demands their proper use), so the styling could be done directly with it:

[role="doc-subtitle"] { }

Testing from Steve and Léonie suggests that browsers generally treat it as a “heading without a level.” JAWS is the exception, which treats it like an <h2>. That seems OK… maybe? Steve even thinks putting the subheading first is OK.

The bad and the ugly

What’s happening here is the subheading is providing general context to the heading — kinda like labelling it:

<label for="card-heading-1">Subheading</label> <h2 id="card-heading-1" class="card-heading">Heading</h2>

But we’re not dealing in form elements here, so that’s not recommended. Another way to make it a single heading would be to use a pseudo-element to place the subhead, like:

<h2 class="card-heading" data-subheading="Subheading">Heading</h2>
.card-heading::before {
  content: attr(data-subheading);
  display: block;
  font-size: 75%;
  opacity: 0.75;
}

It used to be that screen readers ignored pseudo-content, but it’s gotten better, though still not perfect. That makes this only slightly more usable, but the text is still un-selectable and not on-page-findable, so I’d rather not go here.



Building Custom Data Importers: What Engineers Need to Know

Importing data is a common pain point for engineering teams. Whether it’s CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for it is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It reduces the time to value for customers, strains internal resources, and takes valuable development cycles away from developing key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within the software and customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort creating less-than-ideal solutions for customers to successfully import their data.

Engineers typically create lengthy, technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue; it shifts the burden of a great experience from the product onto the end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to allow users the ability to import spreadsheets. Companies trying to import CSV data can run into a variety of issues such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported because it’s not properly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order for it to be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.

Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping or column-matching (they are often used interchangeably) is an essential requirement for a data importer as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in a back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing time-to-value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.

Data validation

Data validation, which checks if the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and is specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. Spreadsheets with hundreds of rows containing validation errors are a nightmare for customers, as they’ll have to fix these issues themselves, or your team will have to spend hours on end cleaning the data. Automatic data validators streamline the healing of incoming data without the need for a manual review.

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.
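To make the validation and transformation ideas concrete, here’s a small, generic sketch in plain JavaScript (my own example, not Flatfile’s API; the field names and rules are hypothetical):

function processRow(row) {
  const errors = [];

  // Validation: check that the data matches the expected format
  const email = (row.email || '').trim().toLowerCase();
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    errors.push('email is not a valid address');
  }

  // Transformation: convert a label into the numeric value the backend expects
  const priorityMap = { low: 1, medium: 2, high: 3 };
  const priority = priorityMap[(row.priority || '').toLowerCase()];
  if (priority === undefined) {
    errors.push(`unknown priority "${row.priority}"`);
  }

  return errors.length
    ? { ok: false, errors }
    : { ok: true, data: { ...row, email, priority } };
}

console.log(processRow({ email: ' Person@Example.com ', priority: 'High' }));
// { ok: true, data: { email: 'person@example.com', priority: 3 } }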

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can be worked to auto-validate nearly any incoming customer data.

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that are taking on this task typically use custom or open source solutions, which may not adhere to specific use-cases. Building a comprehensive data importer also brings UX challenges when building a UI and maintaining application code to handle data parsing, normalization, and mapping. This is prior to considering how customer data requirements may change in future months and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours cleaning up and formatting data, nor do they need to custom build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features, rather than work on solving spreadsheet imports.

Importing data is wrought with challenges and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.



TypeScript, Minus TypeScript

Unless you’ve been hiding under a rock the last several years (and let’s face it, hiding under a rock sometimes feels like the right thing to do), you’ve probably heard of and likely used TypeScript. TypeScript is a syntactical superset of JavaScript that adds — as its name suggests — typing to the web’s favorite scripting language.

TypeScript is incredibly powerful, but is often difficult to read for beginners and carries the overhead of needing a compilation step before it can run in a browser due to the extra syntax that isn’t valid JavaScript. For many projects this isn’t a problem, but for others this might get in the way of getting work done. Fortunately the TypeScript team has enabled a way to type check vanilla JavaScript using JSDoc.

Setting up a new project

To get TypeScript up and running in a new project, you’ll need NodeJS and npm. Let’s start by creating a new project and running npm init. For the purposes of this article, we are going to be using VS Code as our code editor. Once everything is set up, we’ll need to install TypeScript:

npm i -D typescript

Once that install is done, we need to tell TypeScript what to do with our code, so let’s create a new file called tsconfig.json and add this:

{
  "compilerOptions": {
    "target": "esnext",
    "module": "esnext",
    "moduleResolution": "node",
    "lib": ["es2017", "dom"],
    "allowJs": true,
    "checkJs": true,
    "noEmit": true,
    "strict": false,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "esModuleInterop": true
  },
  "include": [ "script", "test" ],
  "exclude": [ "node_modules" ]
}

For our purposes, the important lines of this config file are the allowJs and checkJs options, which are both set to true. These tell TypeScript that we want it to evaluate our JavaScript code. We’ve also told TypeScript to check all files inside of a /script directory, so let’s create that and a new file in it called index.js.

A simple example

Inside our newly-created JavaScript file, let’s make a simple addition function that takes two parameters and adds them together:

function add(x, y) {
  return x + y;
}

Fairly simple, right? add(4, 2) will return 6, but because JavaScript is dynamically-typed you could also call add with a string and a number and get some potentially unexpected results:

add('4', 2); // returns '42'

That’s less than ideal. Fortunately, we can add some JSDoc annotations to our function to tell users how we expect it to work:

/**
 * Add two numbers together
 * @param {number} x
 * @param {number} y
 * @return {number}
 */
function add(x, y) {
  return x + y;
}

We’ve changed nothing about our code; we’ve simply added a comment to tell users how the function is meant to be used and what value should be expected to return. We’ve done this by utilizing JSDoc’s @param and @return annotations with types set in curly braces ({}).

Trying to run our incorrect snippet from before throws an error in VS Code:

TypeScript evaluates that a call to add is incorrect if one of the arguments is a string.

In the example above, TypeScript is reading our comment and checking it for us. In actual TypeScript, our function now is equivalent of writing:

/**
 * Add two numbers together
 */
function add(x: number, y: number): number {
  return x + y;
}

Just like we used the number type, we have access to dozens of built-in types with JSDoc, including string, object, Array as well as plenty of others, like HTMLElement, MutationRecord and more.

One added benefit of using JSDoc annotations over TypeScript’s proprietary syntax is that it provides developers an opportunity to provide additional metadata around arguments or type definitions by providing those inline (hopefully encouraging positive habits of self-documenting our code).
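For instance, here’s a small sketch of my own (not from the article) that leans on a DOM type, an array type, and those inline descriptions:

/**
 * Collect the trimmed values from a list of inputs
 * @param {HTMLInputElement[]} inputs - The inputs to read (the description lives right inline)
 * @return {string[]} The trimmed values, in the same order
 */
function readValues(inputs) {
  return inputs.map(input => input.value.trim());
}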

We can also tell TypeScript that instances of certain objects might have expectations. A WeakMap, for instance, is a built-in JavaScript object that creates a mapping between any object and any other piece of data. This second piece of data can be anything by default, but if we want our WeakMap instance to only take a string as the value, we can tell TypeScript what we want:

/** @type {WeakMap<object, string>} */
const metadata = new WeakMap();

const object = {};
const otherObject = {};

metadata.set(object, 42);
metadata.set(otherObject, 'Hello world');

This throws an error when we try to set our data to 42 because it is not a string.

Defining our own types

Just like TypeScript, JSDoc allows us to define  and work with our own types. Let’s create a new type called Person that has name, age and hobby properties. Here’s how that looks in TypeScript:

interface Person {
  name: string;
  age: number;
  hobby?: string;
}

In JSDoc, our type would be the following:

/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 */

We can use the @typedef tag to define our type’s name. Let’s define an interface called Person with required name (a string) and age (a number) properties, plus a third, optional property called hobby (a string). To define these properties, we use @property (or the shorthand @prop) inside our comment.

When we choose to apply the Person type to a new object using the @type comment, we get type checking and autocomplete when writing our code. Not only that, we’ll also be told when our object doesn’t adhere to the contract we’ve defined in our file:

Screenshot of an example of TypeScript throwing an error on our vanilla JavaScript object

Now, completing the object will clear the error:

Our object now adheres to the Person interface defined above

Sometimes, however, we don’t want a full-fledged object for a type. For example, we might want to provide a limited set of possible options. In this case, we can take advantage of something called a union type:

/**
 * @typedef {'cat'|'dog'|'fish'} Pet
 */

/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 * @property {Pet} [pet] - The person's pet
 */

In this example, we have defined a union type called Pet that can be any of the possible options of 'cat', 'dog' or 'fish'. Any other animals in our area are not allowed as pets, so if the caleb object below tried to adopt a 'kangaroo' into his household, we would get an error:

/** @type {Person} */
const caleb = {
  name: 'Caleb Williams',
  age: 33,
  hobby: 'Running',
  pet: 'kangaroo'
};
Screenshot of an example illustrating that kangaroo is not an allowed pet type

This same technique can be utilized to mix various types in a function:

/**
 * @typedef {'lizard'|'bird'|'spider'} ExoticPet
 */

/**
 * @typedef Person
 * @property {string} name - The person's name
 * @property {number} age - The person's age
 * @property {string} [hobby] - An optional hobby
 * @property {Pet|ExoticPet} [pet] - The person's pet
 */

Now our person type can have either a Pet or an ExoticPet.

Working with generics

There could be times when we don’t want hard and fast types, but a little more flexibility while still writing consistent, strongly-typed code. Enter generic types. The classic example of a generic function is the identity function, which takes an argument and returns it back to the user. In TypeScript, that looks like this:

function identity<T>(target: T): T {
  return target;
}

Here, we are defining a new generic type (T) and telling the computer and our users that the function will return a value that shares a type with whatever the argument target is. This way, we can still pass in a number or a string or an HTMLElement and have the assurance that the returned value is also of that same type.

The same thing is possible using the JSDoc notation using the @template annotation:

/**
 * @template T
 * @param {T} target
 * @return {T}
 */
function identity(target) {
  return target;
}
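As a quick illustration (my own), calling it hands back the type of whatever you pass in:

const n = identity(42);             // inferred as a number
const s = identity('hello');        // inferred as a string
const el = identity(document.body); // inferred as an HTMLElement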

Generics are a complex topic, but for more detailed documentation on how to utilize them in JSDoc, including examples, you can read the Google Closure Compiler page on the topic.

Type casting

While strong typing is often very helpful, you may find that TypeScript’s built-in expectations don’t quite work for your use case. In that sort of instance, we might need to cast an object to a new type. One instance of when this might be necessary is when working with event listeners.

In TypeScript, all event listeners take a function as a callback where the first argument is an object of type Event, which has a property, target, that is an EventTarget. This is the correct type per the DOM standard, but oftentimes the bit of information we want out of the event’s target doesn’t exist on EventTarget — such as the value property that exists on HTMLInputElement.prototype. That makes the following code invalid:

document.querySelector('input').addEventListener('input', event => {
  console.log(event.target.value);
});

TypeScript will complain that the property value doesn’t exist on EventTarget even though we, as developers, know fully well that an <input> does have a value.

A screenshot showing that value doesn’t exist on type EventTarget

In order for us to tell TypeScript that we know event.target will be an HTMLInputElement, we must cast the object’s type:

document.querySelector('input').addEventListener('input', event => {
  console.log(/** @type {HTMLInputElement} */ (event.target).value);
});

Wrapping event.target in parentheses sets it apart from the call to value. Adding the type before the parentheses tells TypeScript that we mean event.target is something different than what it ordinarily expects.

Screenshot of a valid example of type casting in VS Code.

And if a particular object is being problematic, we can always tell TypeScript an object is @type {any} to ignore error messages, although this is generally considered bad practice despite being useful in a pinch.
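To see why that’s risky, here’s a small sketch of my own building on the event example above; once a value is any, even obvious mistakes stop being flagged:

document.querySelector('input').addEventListener('input', event => {
  // Opting out of type checking entirely for this one value
  const target = /** @type {any} */ (event.target);
  console.log(target.valeu); // the typo sails right through, because `any` checks nothing
});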

Wrapping up

TypeScript is an incredibly powerful tool that many developers are using to streamline their workflow around consistent code standards. While most applications will utilize the built-in compiler, some projects might decide that the extra syntax that TypeScript provides gets in the way. Or perhaps they just feel more comfortable sticking to standards rather than being tied to an expanded syntax. In those cases, developers can still get the benefits of utilizing TypeScript’s type system even while writing vanilla JavaScript.

