Tag: Making

Making My Netlify Build Run Sass

Let’s say you wanted to build a site with Eleventy as the generator. Popular choice these days! Eleventy doesn’t have some particularly blessed way of preprocessing your CSS, if that’s something you want to do. There are a variety of ways to do it and perhaps that freedom is part of the spirit of Eleventy.

I’ve seen people set up Gulp for this, which is cool, I still use and like Gulp for some stuff. I’ve seen someone use templating to return preprocessed CSS, which seems weird, but hey, whatever works. I’ve even seen someone extend the Eleventy config itself to run the processing.

So far, the thing that has made the most sense to me is to use npm scripts to do the Sass processing. Do the CSS first, then the HTML, with npm-run-all. So, you’d set up something like this in your package.json:

  "scripts": {     "build": "npm-run-all build:css build:html",     "build:css": "node-sass src/site/_includes/css/main.scss > src/site/css/main.css",     "build:html": "eleventy",     "watch": "npm-run-all --parallel watch:css watch:html",     "watch:css": "node-sass --watch src/site/_includes/css/main.scss > src/site/css/main.css",     "watch:html": "eleventy --serve --port=8181",     "start": "npm run watch"   },

I think that’s fairly nice. Since Eleventy doesn’t have a blessed CSS processing route anyway, it feels OK to have it de-coupled from Eleventy processing.

But I see Netlify has come along nicely with their build plugins. As Sarah put it:

What the Build Plugin does is give you access to key points in time during that process, for instance, onPreBuild, onPostBuild, onSuccess, and so forth. You can execute some logic at those specific points in time.

There is something really intuitive and nice about that structure. A lot of build plugins are created by the community or by Netlify themselves. You just click them on via the UI or reference them in your config. But Sass isn’t a built-in plugin (as I write), which I would assume is because people are so opinionated about what/where/how their CSS is processed that it makes sense to just let them do it themselves. So let’s do that.

In our project, we’d create a directory for our plugins, and then a folder for this particular plugin we want to write:

project-root/
  src/
  whatever/
  plugins/
    sass/
      index.js
      manifest.yml

That index.js file is where we write our code, and we’ll specifically want to use the onPreBuild hook here, because we’d want our Sass to be done preprocessing before the build process runs Eleventy and Eleventy moves things around.

module.exports = {
  onPreBuild: async ({ utils: { run } }) => {
    await run.command(
      "node-sass src/site/_includes/css/main.scss src/site/css/main.css"
    );
  },
};

Here’s a looksie into all the relevant files together:

Now, if I netlify build from the command line, it will run the same build process that Netlify itself does, and it will hook into my plugin and run it!

One little thing I noticed is that I was trying to have my config be the (newer) netlify.yml format, but the plugins didn’t work, and I had to re-do the config as netlify.toml.

So we de-coupled ourselves from Eleventy with this particular processing, and coupled ourselves to Netlify. Just something to be aware of. I’m down with that as this way of configuring a build is so nice and I see so much potential in it.

I prefer the more explicit and broken up configuration of this style. Just look at how much cleaner the package.json gets:

Removed a bunch of lines from the scripts area of a package.json file, like the specific build:css and build:html commands

I still have this idea…

…of building a site that is a dog-fooded example of all the stuff you could/should do during a build process. I’ve started the site here, (and repo), but it’s not doing too much yet. I think it would be cool to wire up everything on that list (and more?) via Build Plugins.

If you wanna contribute, feel free to let me know. Maybe email me or open an issue to talk about what you’d want to do. You’re free to do a Pull Request too, but PRs without any prior communication are a little tricky sometimes as it’s harder to ensure our visions are aligned before you put in a bunch of effort.


I’m getting back to making videos

It’s probably one part coronavirus, one part new-fancy-video setup, and one part “hey this is good for CodePen too,” but I’ve been doing more videos lately. It’s nice to be back in the swing of that for a minute. There’s something fun about coming back to an old familiar workflow.

Where do the videos get published? I’m a publish-on-your-own site kinda guy, as I’m sure you know, so there is a whole Videos section of this site where every video we’ve ever published lives. There is also a YouTube channel, of course, which is probably the most practical way for most people to subscribe. We’re about halfway to Wes Bos-level, so let’s go people!

I had literally forgotten about it, but ages ago when I set this up, I created a special RSS feed for the videos so I could submit it as a video podcast on iTunes. That’s all still there and working! An interesting side note is that this enables offline viewing, as most podcatchers can cache subscriptions. Why build an app when you get the core ability for free, right?

I keep the original videos, of course. On individual video pages, I show a YouTube player that could be somewhat easily swapped out for another player if something crazy happened, like YouTube closes down or drastically changed their business model in some way that makes it problematic to show videos with their player. The originals are stored in an S3 bucket. If you’re an MVP Supporter, I give you the original high-quality download link right on the video pages.

If you’re curious about my workflow, I’m still using ScreenFlow. I don’t make nearly enough use of it, but it feels good in that it’s fairly easy to use, very reliable and fast, and I can always learn and do more with it. Shooting my screen is easy and a built-in feature of ScreenFlow of course. I also have a Rode Podcaster on a boom arm at my desk so the audio is passable. And I just went through a whole process to use a DSLR camera at my desk too, and I think the quality from that is great. It’s all a little funny because I have this whole sound recording booth as well, with a $1,000 audio setup in there, but I only use that for podcasting. The lighting sucks in there, making it no good for video.

It’s this new desk setup that has inspired me to do more video, and I suspect it will continue! One thing I could really use is a new high quality intro video. Just like a five-second thing with refreshed aesthetics. Anyone do that kind of work?


Making dark theme switcher with PostCSS.

You may have noticed a design trend that has been floating around web design since 2019: dark mode. Facebook, Apple, and Google have all introduced dark versions of their software.

Why a dark theme

Most of you probably think this is just a trend that will disappear after some years. Well, let me say that this is not like many other trends: dark UIs provide real advantages, and they are not just something related to the “designer mood”. Let’s see why a dark mode on your applications and websites is something useful.

Better for batteries

Pixels on a screen consume more energy to display light colors rather than dark ones. Consequently, devices’ batteries can save energy and improve their daily duration while using dark UI.

Better for dark environments

Most of us use our smartphones and laptops at home. Such environments are typically not very bright. A dark mode can make applications more comfortable to use indoors, without causing visual disturbances.

Better for people

Some people with — or without — visual conditions, like epilepsy, can have unfortunate reactions to being flashed by bright applications. Having a dark mode means being more accessible.

Preparing styles

A very simple theme switcher should offer at least 3 options:

  • Dark theme
  • Light theme
  • Automatic theme (should be on by default)

Wait, what’s the automatic theme? Well, modern operating systems allow users to change the global visual appearance by setting OS-wide options that enable dark or light mode. The automatic option makes sure to respect the OS preference if the user has not specified any theme.

To make this even more simple, we’ll use PostCSS and a simple but useful plugin called postcss-dark-theme-class.

yarn add postcss-dark-theme-class

This plugin will do 70% of the work. Once installed, add it to your PostCSS config and configure the selectors you want to use to activate each theme; the plugin will use them to generate the correct CSS:

module.exports = {
  plugins: [
    /* ...other plugins */

    require('postcss-dark-theme-class')({
      darkSelector: '[data-theme="dark"]',
      lightSelector: '[data-theme="light"]'
    })
  ]
}

Once the plugin is up and running, we can start defining our dark and light themes using the prefers-color-scheme media query. This special media query will handle the automatic part of our themes by applying the correct theme based on the user’s OS preferences:

:root {
  --accent-color: hsl(226deg 100% 50%);
  --global-background: hsl(0 0% 100%);
  --global-foreground: hsl(0 0% 0%);
}

@media (prefers-color-scheme: dark) {
  :root {
    --accent-color: hsl(226deg 100% 50%);
    --global-background: hsl(0 0% 0%);
    --global-foreground: hsl(0 0% 100%);
  }
}

If the user is using a dark version of their OS, the set of properties inside the media query will apply, overwriting the others; otherwise, the set of properties outside the media query is used. Since it’s pure CSS, this behaviour is on by default.

Browsers will now adapt the color scheme automatically based on the users’ OS preferences. Nicely done! 🚀 Now it’s time to make the theme switcher allow users to specify what theme to use, overriding the OS preference.

Making the theme switcher

As we said, our switcher should have three options. We can use a simple select element, or build a set of buttons:

<button aria-current="true" data-set-theme="auto">Auto</button>
<button aria-current="false" data-set-theme="dark">Dark</button>
<button aria-current="false" data-set-theme="light">Light</button>

We’ll build the switcher using vanilla JS, but you can do it with any framework you want, the concept is the same: we have to add the selectors we defined inside the PostCSS plugin to the root element, based on the clicked button.

const html = document.documentElement
const themeButtons = document.querySelectorAll('[data-set-theme]');

themeButtons.forEach((button) => {
  const theme = button.dataset.setTheme;

  button.addEventListener('click', () => {
    html.dataset.theme = theme;
  })
})

Each time we click on a theme button, the value set as data-set-theme is applied as the value of the data-theme attribute on the document root element.
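The markup above also sets aria-current on each button, but the click handler never updates it. Here’s a minimal sketch (not part of the original demo) of how the same handler could keep that attribute in sync so assistive technology knows which theme is currently active:

const html = document.documentElement
const themeButtons = document.querySelectorAll('[data-set-theme]');

themeButtons.forEach((button) => {
  button.addEventListener('click', () => {
    // Apply the chosen theme, exactly as before
    html.dataset.theme = button.dataset.setTheme;

    // Reflect the choice back onto the buttons for assistive technology
    themeButtons.forEach((other) => {
      other.setAttribute('aria-current', other === button ? 'true' : 'false');
    });
  })
})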

Check it live:

Where is the magic?

The magic is done by postcss-dark-theme-class — which adds our [data-theme] custom attribute to the :root selectors we wrote — during the CSS transpilation. Here’s what it generates from our code:

/* Our automatic and user specified light theme */
:root {
  --accent-color: hsl(226deg, 100%, 50%);
  --global-background: hsl(0, 0%, 100%);
  --global-foreground: hsl(0, 0%, 0%);
}

/* Our automatic dark theme */
@media (prefers-color-scheme: dark) {
  :root:not([data-theme="light"]) {
    --accent-color: hsl(226deg, 100%, 50%);
    --global-background: hsl(0, 0%, 0%);
    --global-foreground: hsl(0, 0%, 100%);
  }
}

/* Our dark theme specified by the user */
:root[data-theme="dark"] {
  --accent-color: hsl(226deg, 100%, 50%);
  --global-background: hsl(0, 0%, 0%);
  --global-foreground: hsl(0, 0%, 100%);
}

Bonus tip

You may notice that the --accent-color custom property defined inside themes doesn’t change. If you have colors that will not change based on the theme, you can remove them from the prefers-color-scheme at-rule.

In this way, they will not be duplicated and the one defined outside the media query will always apply.

:root {
  --accent-color: hsl(226deg 100% 50%);
  --global-background: hsl(0 0% 100%);
  --global-foreground: hsl(0 0% 0%);
}

@media (prefers-color-scheme: dark) {
  :root {
    --global-background: hsl(0 0% 0%);
    --global-foreground: hsl(0 0% 100%);
  }
}


Making Things Better: Redefining the Technical Possibilities of CSS

(This is a sponsored post.)

Robin recently lamented the common complaint that CSS is frustrating. There are misconceptions about what it is and what it does. There are debates about what kind of language it is. There are even different views on where it should be written.

Rachel Andrew has a new talk from An Event Apart DC 2019 available that walks us back; back to the roots of the issues we used to have with CSS and the “hacks” we used to overcome them. CSS has changed a lot over the past few years and, while those changes have felt complex and confusing at times, they are designed to solve what we have always wanted CSS to do.

The full hour it takes to watch the talk is well worth the time. Here are a few nuggets that stood out. First off, some de-bunking of common CSS complaints:

  • You never know how tall anything is on the web. Floats never solved this because they only bring things next to each other instead of knowing about the items around them. New layout methods, including CSS Grid and Flexbox, actually look at our elements and help them behave consistently.
  • Flexbox is weird and unintuitive. It’s not the layout method you might think it is. If we view it as a way to line things up next to each other, it’s going to feel weird and behave weirdly as well. But if we see it for what it is – a method that looks at differently sized elements and returns the most logical layout – it starts to make sense. It assigns space, rather than squishing things into a box.

Rachel continues by giving us a peek into the future of what CSS wants to do for us:

  • CSS wants to avoid data loss. New alignment keywords like safe and unsafe will give us extra control to define whether we want CSS to aggressively avoid content that’s unintentionally hidden or allow it to happen.
.container {
  display: flex;
  flex-direction: column;
  /* Please center as long as it doesn't result in overflow */
  align-items: safe center;
}
  • CSS wants to help us get real with overflow. The min-content and max-content keywords make it possible to create boxes that are wide enough for the content but not wider, and boxes that are as big as the content can be.
.container {
  width: min-content; /* Allow wrapping */
}
  • CSS wants to lay things out logically. The web is not left-to-right. Grid and Flexbox quietly introduced a way of thinking start-to-end that is direction agnostic. That has brought about a new specification for Logical Properties and Values.
  • CSS wants to make content more portable. CSS Regions let us flow content from one element into another. While it’s probably a flawed comparison, it’s sorta like the pipes in old school Mario Bros. games where jumping in one pipe at one location will plop your character out of another pipe in another location… but we get to define those sources ourselves and without angry plants trying to eat us along the way.

Anyway, these are merely scratching the surface of what Rachel covers in her talk. It’s a good reminder that An Event Apart has an entire library of valuable talks from amazing speakers and that attending an AEA event is an invaluable experience worth checking out. Rachel’s talk was from last year’s Washington D.C. event and, as it turns out, the very next event is taking place there this April 13-15. If you can’t make that one, there are several others throughout the year across the United States.

Oh, and of course we have a discount code for you! Use AEACP for $100 off any show.



Making Room for Variation

Say you have a design system and you’re having a moment where it doesn’t have what you need. You need to diverge and create something new. Yesenia Perez-Cruz categorizes these moments from essentially ooops to niiice:

There are three kinds of deviations that come up in a design system:

  • Unintentional divergence typically happens when designers can’t find the information they’re looking for. They may not know that a certain solution exists within a system, so they create their own style. Clear, easy-to-find documentation and usage guidelines can help your team avoid unintentional variation.
  • Intentional but unnecessary divergence usually results from designers not wanting to feel constrained by the system, or believing they have a better solution. Making sure your team knows how to push back on and contribute to the system can help mitigate this kind of variation.
  • Intentional, meaningful divergence is the goal of an expressive design system. In this case, the divergence is meaningful because it solves a very specific user problem that no existing pattern solves.

We want to enable intentional, meaningful variation.

This is an excerpt from her book Expressive Design Systems on A Book Apart, the same publishers as the incredible iconic book Practical SVG.

And while we’re linking up books about design systems, check out Andrew Couldwell’s Laying the Foundations.

System design is not a scary thing — this book aims to dispel that myth. It covers what design systems are, why they are important, and how to get stakeholder buy-in to create one. It introduces you to a simple model, and two very different approaches to creating a design system. What’s unique about this book is its focus on the importance of brand in design systems and creating documentation.



Two Lessons I Learned From Making React Components

Here’s a couple of lessons I’ve learned about how not to build React components. These are things I’ve come across over the past couple of months and thought they might be of interest to you if you’re working on a design system, especially one with a bunch of legacy technical decisions and a lot of tech debt under the hood.

Lesson 1: Avoid child components as much as you can

One thing about working on a big design system with lots of components is that the following pattern eventually starts to become problematic real quick:

<Card>
  <Card.Header>Title</Card.Header>
  <Card.Body><p>This is some content</p></Card.Body>
</Card>

The problematic parts are those child components, Card.Body and Card.Header. This example isn’t terrible because things are relatively simple — it’s when components get more complex that things can get bonkers. For example, each child component can have a whole series of complex props that interfere with the others.

One of my biggest pain points is with our Form components. Take this:

<Form>
  <Input />
  <Form.Actions>
    <Button>Submit</Button>
    <Button>Cancel</Button>
  </Form.Actions>
</Form>

I’m simplifying things considerably, of course, but every time an engineer wants to place two buttons next to each other, they’d import Form.Actions, even if there wasn’t a Form on the page. This meant that everything inside the Form component gets imported and that’s ultimately bad for performance. It just so happens to be bad system design implementation as well.

This also makes things extra difficult when documenting components because now you’ll have to ensure that each of these child components is documented too.

So instead of making Form.Actions a child component, we should’ve made it a brand new component, simply: FormActions (or perhaps something with a better name like ButtonGroup). That way, we don’t have to import Form all the time and we can keep layout-based components separate from the others.
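As a rough sketch (the names here are made up for illustration, not pulled from our actual system), that standalone component could be little more than a layout wrapper:

import React from "react";

// A hypothetical standalone ButtonGroup: purely layout, so it can be imported
// anywhere without dragging the whole Form component along with it.
export function ButtonGroup({ children }) {
  return <div className="button-group">{children}</div>;
}

// Usage, with or without a <Form> on the page:
// <ButtonGroup>
//   <Button>Submit</Button>
//   <Button>Cancel</Button>
// </ButtonGroup>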

I’ve learned my lesson. From here on out I’ll be avoiding child components altogether where I can.

Lesson 2: Make sure your props don’t conflict with one another

Mandy Michael wrote a great piece about how props can bump into one another and cause all sorts of confusing conflicts, like this TypeScript example:

interface Props {
  hideMedia?: boolean
  mediaIsEdgeToEdge?: boolean
  mediaFullHeight?: boolean
  videoInline?: boolean
}

Mandy writes:

The purpose of these props are to change the way the image or video is rendered within the card or if the media is rendered at all. The problem with defining them separately is that you end up with a number of flags which toggle component features, many of which are mutually exclusive. For example, you can’t have an image that fills the margins if it’s also hidden.

This was definitely a problem for a lot of the components we inherited in my team’s design systems. There were a bunch of components where boolean props would make a component behave in all sorts of odd and unexpected ways. We even had all sorts of bugs pop up in our Card component during development because the engineers wouldn’t know which props to turn on and turn off for any given effect!

Mandy offers the following solution:

type MediaMode = 'hidden' | 'edgeToEdge' | 'fullHeight'

interface Props {
  mediaMode: MediaMode
}

In short: if we combine all of these nascent options together then we have a much cleaner API that’s easily extendable and is less likely to cause confusion in the future.
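To see why that is nicer in practice, compare the call sites (the Card props here are just illustrative):

// Before: boolean props that can contradict each other
<Card hideMedia mediaFullHeight />

// After: one prop, one valid media state at a time
<Card mediaMode="fullHeight" />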


That’s it! I just wanted to make a quick note about those two lessons. Here’s my question for you: What have you learned when it comes to making components or working on design systems?


The Making of a “Special Series” on a WordPress Site

We just ran a fancy article series here on CSS-Tricks with a bunch of different articles all answering the same question. By fancy, I mean two things:

  • The articles had a specially-designed template just for them. (Example)
  • The series has a specially-designed landing page.

One of the reasons I enjoy working with WordPress is because of how easy this is to pull off. I’m sure any CMS has their own ways of doing this, and I don’t mean it to be a battle-of-CMSs thing here. I just want to illustrate how easy I found this to pull off in WordPress.

Any post can have a template

I made a PHP file in the theme directory called eoy-2019.php (eoy meaning “End of Year,” the spirit of this series). At the top of this file is some special code comments to make it read as a template:

<?php
/*
Template Name: EOY 2019
Template Post Type: post
*/

Now any post we publish can select this template from a dropdown:

And they’ll use our cool template now!

Special scripts and styles for that template

I actually didn’t need any scripting for this, but it’s the same concept as styles.

I put a conditional check in my header.php file that would load up a CSS file I created just for these posts.

if (is_page_template('art-direction/eoy-2019.php') || is_category('2019-end-of-year-thoughts')) {
  wp_enqueue_style('eoy-2019', get_template_directory_uri() . "/css/eoy-2019.css?version=7");
}

A special template for the whole group

I could have grouped the posts together in any number of ways:

  • I could have made a special Page template for these posts and designed that to link to these special posts manually or with a custom query/loop.
  • I could have had them share a tag, and then created a special tag archive page for them.
  • I could have used the same system we used for our Guides, which is a combination of CMB2 and the attached-posts plugin.

Instead, I gave all the posts the same category. That felt the most right for whatever reason. We don’t use categories a ton, but we use them for some major groups of posts, like Chronicle posts.

By using a category, I can use a special templating trick. Naming the file category-2019-end-of-year-thoughts.php, I can use the end of that file name to match the slug of the category name I used and it “just works,” thanks to the WordPress template hierarchy. I don’t need to write any code for that file to be used to display this category and I get the URL for free. Now I have a file I can use to build a special design just for these posts.

Future fallbacks

Let’s say it’s five years from now and I’ve kinda stopped caring about the super special styling and treatment behind these posts. I don’t know if that will happen, but it might. All these special files can just be deleted without hurting the content or URLs.

Again, thanks to the template hierarchy, the special post templates will simply fall back to the regular single.php template. The category homepage will just render from the regular archive.php template. The special styles won’t be applied, but the content will be just fine and the URLs will be unchanged. Nice.


Making a Better Custom Select Element

We just covered The Current State of Styling Selects in 2019, but we didn’t get nearly as far and fancy as Julie Grundy gets here. There is a decent chunk of JavaScript that powers it, so I’m still very much eyeballing browsers’ recent interest in giving us more powerful selects in (presumably) just HTML and CSS.

I tossed a fork on CodePen in case you just wanna see the final result.

This is also the first article in the 2019 edition of 24 Ways, the long-running and wonderful annual advent calendar for developers that is worth reading every single year.



A Handy Sass-Powered Tool for Making Balanced Color Palettes

For those who may not come from a design background, selecting a color palette is often based on personal preferences. Choosing colors might be done with an online color tool, sampling from an image, “borrowing” from favorite brands, or just sort of randomly picking from a color wheel until a palette “just feels right.”

Our goal is to better understand what makes a palette “feel right” by exploring key color attributes with Sass color functions. By the end, you will become more familiar with:

  • The value of graphing a palette’s luminance, lightness, and saturation to assist in building balanced palettes
  • The importance of building accessible contrast checking into your tools
  • Advanced Sass functions to extend for your own explorations, including a CodePen you can manipulate and fork

What you’ll ultimately find, however, is that color on the web is a battle of hardware versus human perception.

What makes color graphing useful

You may be familiar with ways of declaring colors in stylesheets, such as RGB and RGBA values, HSL and HSLA values, and HEX codes.

rgb(102,51,153)
rgba(102,51,153, 0.6)
hsl(270, 50%, 40%)
hsla(270, 50%, 40%, 0.6)
#663399

Those values give devices instructions on how to render color. Deeper attributes of a color can be exposed programmatically and leveraged to understand how a color relates to a broader palette.

The value of graphing color attributes is that we get a more complete picture of the relationship between colors. This reveals why a collection of colors may or may not feel right together. Graphing multiple color attributes helps hint at what adjustments can be made to create a more harmonious palette. We’ll look into examples of how to determine what to change in a later section.

Two useful measurements we can readily obtain using built-in Sass color functions are lightness and saturation.

  • Lightness refers to the mix of white or black with the color.
  • Saturation refers to the intensity of a color, with 100% saturation resulting in the purest color (no grey present).
$color: rebeccapurple;

@debug lightness($color); // 40%
@debug saturation($color); // 50%

However, luminance may arguably be the most useful color attribute. Luminance, as represented in our tool, is calculated using the WCAG formula which assumes an sRGB color space. Luminance is used in the contrast calculations, and as a grander concept, also aims to get closer to quantifying the human perception of relative brightness to assess color relationships. This means that a tighter luminance value range among a palette is likely to be perceived as more balanced to the human eye. But machines are fallible, and there are exceptions to this rule that you may encounter as you manipulate palette values. For more extensive information on luminance, and a unique color space called CIELAB that aims to even more accurately represent the human perception of color uniformity, see the links at the end of this article.

Additionally, color contrast is exceptionally important for accessibility, particularly in terms of legibility and distinguishing UI elements, which can be calculated programmatically. That’s important in that it means tooling can test for passing values. It also means algorithms can, for example, return an appropriate text color when passed in the background color. So our tool will incorporate contrast checking as an additional way to gauge how to adjust your palette.
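For reference, the WCAG math behind those luminance and contrast numbers is compact enough to sketch in a few lines of JavaScript. (The tool itself implements this with Sass functions; the snippet below is only an illustration of the formula.)

// Relative luminance per WCAG 2.x, from 0-255 sRGB channel values
const linearize = (channel) => {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
};

const luminance = ([r, g, b]) =>
  0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);

// Contrast ratio between two colors (lighter over darker)
const contrast = (colorA, colorB) => {
  const [light, dark] = [luminance(colorA), luminance(colorB)].sort((x, y) => y - x);
  return (light + 0.05) / (dark + 0.05);
};

console.log(contrast([102, 51, 153], [255, 255, 255])); // rebeccapurple on white ≈ 8.4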

The functions demonstrated in this project can be extracted for helping plan a contrast-safe design system palette, or baked into a Sass framework that allows defining a custom theme.

Sass as a palette building tool

Sass provides several traditional programming features that make it perfect for our needs, such as creating and iterating through arrays and manipulating values with custom functions. When coupled with an online IDE, like CodePen, that has real-time processing, we can essentially create a web app to solve specific problems such as building a color palette.

Here is a preview of the tool we’re going to be using:

See the Pen Sass Color Palette Grapher by Stephanie Eckles (@5t3ph) on CodePen.

Features of the Sass palette builder

  • It outputs an aspect ratio-controlled responsive graph for accurate plot point placement and value comparing.
  • It leverages the result of Sass color functions and math calculations to correctly plot points on a 0–100% scale.
  • It generates a gradient to provide a more traditional “swatch” view.
  • It uses built-in Sass functions to extract saturation and lightness values.
  • It creates luminance and contrast functions (forked from Material Web Components in addition to linking in required precomputed linear color channel values).
  • It returns appropriate text color for a given background, with a settings variable to change the ratio used.
  • It provides functions to uniformly scale saturation and lightness across a given palette.

Using the palette builder

To begin, you may wish to swap from among the provided example palettes to get a feel for how the graph values change for different types of color ranges. Simply copy a palette variable name and swap it for $default as the value of the $palette variable, which can be found under the comment SWAP THE PALETTE VARIABLE.

Next, try switching the $contrastThreshold variable value between the predefined ratios, especially if you are less familiar with ensuring contrast passes WCAG guidelines.

Then try to adjust the $palette-scale-lightness or $palette-scale-saturation values. Those feed into the palette function and uniformly scale those measurements across the palette (up to the individual color’s limit).

Finally, have a go at adding your own palette, or swap out some colors within the examples. The tool is a great way to explore Sass color functions to adjust particular attributes of a color, some of which are demonstrated in the $default palette.

Interpreting the graphs and creating balanced, accessible palettes

The graphing tool defaults to displaying luminance due to it being the most reliable indicator of a balanced palette, as we discussed earlier. Depending on your needs, saturation and lightness can be useful metrics on their own, but mostly they are signalers that can help point to what needs adjusting to bring a palette’s luminance more in alignment. An exception may be creating a lightness scale based on each value in your established palette. You can swap to the $stripeBlue example for that.

The $default palette is actually in need of adjustment to get closer to balanced luminance:

The $default palette’s luminance graph

A palette that shows well-balanced luminance is the sample from Stripe ($stripe):

The $stripe palette luminance graph

Here’s where the tool invites a mind shift. Instead of manipulating a color wheel, it leverages Sass functions to programmatically adjust color attributes.

Check the saturation graph to see if you have room to play with the intensity of the color. My recommended adjustment is to wrap your color value with the scale-color function and pass an adjusted $saturation value, e.g. scale-color(#41b880, $saturation: 60%). The advantage of scale-color is that it fluidly adjusts the value based on the given percent.

Lightness can help explain why two colors feel different by assigning a value to their brightness measured against mixing them with white or black. In the $default palette, the change-color function is used for purple to align its relative $lightness value with the computed lightness() of the value used for the red.

The scale-color function also allows bundling both an adjusted $saturation and $lightness value, which is often the most useful. Note that provided percents can be negative.

By making use of Sass functions and checking the saturation and lightness graphs, the $defaultBalancedLuminance palette achieves balanced luminance. This palette also uses the map-get function to copy values from the $default palette and apply further adjustments instead of overwriting them, which is handy for testing multiple variations such as perhaps a hue shift across a palette.

The $defaultBalancedLuminance luminance graph

Take a minute to explore other available color functions.

http://jackiebalzer.com/color offers an excellent web app to review effects of Sass and Compass color functions.

Contrast comes into play when considering how the palette colors will actually be used in a UI. The tool defaults to the AA contrast most appropriate for all text: 4.5. If you are building for a light UI, then consider that any color used on text should achieve appropriate contrast with white when adjusting against luminance, indicated by the center color of the plot point.

Tip: The graph is set up with a transparent background, so you can add a background rule on body if you are developing for a darker UI.

Further reading

Color is an expansive topic and this article only hits the aspects related to Sass functions. But to truly understand how to create harmonious color systems, I recommend the following resources:

  • Color Spaces – is a super impressive deep-dive with interactive models of various color spaces and how they are computed.
  • Understanding Colors and Luminance – A beginner-friendly overview from MDN on color and luminance and their relationship to accessibility.
  • Perceptually Uniform Color Spaces – More information on perceptually uniform color systems, with an intro to the tool HSLuv that converts values from the more familiar HSL color space to the luminance-tuned CIELUV color space.
  • Accessible Color Systems – A case study from Stripe about their experience building an accessible color system by creating custom tooling (which inspired this exploration and article).
  • A Nerd’s Guide to Color on the Web – This is a fantastic exploration of the mechanics of color on the web, available right here on CSS-Tricks.
  • Tanaguru Contrast Finder – An incredible tool to help if you are struggling to adjust colors to achieve accessible contrast.
  • ColorBox – A web app from Lyft that further explores color scales through graphing.
  • Designing Systematic Colors – Describes Mineral UI‘s exceptional effort to create color ramps to support consistent theming via a luminance-honed palette.
  • How we designed the new color palettes in Tableau 10 – Tableau exposed features of their custom tool that helped them create a refreshed palette based on CIELAB, including an approachable overview of that color space.


Making an Audio Waveform Visualizer with Vanilla JavaScript

As a UI designer, I’m constantly reminded of the value of knowing how to code. I pride myself on thinking of the developers on my team while designing user interfaces. But sometimes, I step on a technical landmine.

A few years ago, as the design director of wsj.com, I was helping to re-design the Wall Street Journal’s podcast directory. One of the designers on the project was working on the podcast player, and I came upon Megaphone’s embedded player.

I previously worked at SoundCloud and knew that these kinds of visualizations were useful for users who skip through audio. I wondered if we could achieve a similar look for the player on the Wall Street Journal’s site.

The answer from engineering: definitely not. Given timelines and restraints, it wasn’t a possibility for that project. We eventually shipped the redesigned pages with a much simpler podcast player.

But I was hooked on the problem. Over nights and weekends, I hacked away trying to achieve this effect. I learned a lot about how audio works on the web, and ultimately was able to achieve the look with less than 100 lines of JavaScript!

It turns out that this example is a perfect way to get acquainted with the Web Audio API, and how to visualize audio data using the Canvas API.

But first, a lesson in how digital audio works

In the real, analog world, sound is a wave. As sound travels from a source (like a speaker) to your ears, it compresses and decompresses air in a pattern that your ears and brain hear as music, or speech, or a dog’s bark, etc. etc.

An analog sound wave is a smooth, continuous function.

But in a computer’s world of electronic signals, sound isn’t a wave. To turn a smooth, continuous wave into data that it can store, computers do something called sampling. Sampling means measuring the sound waves hitting a microphone thousands of times every second, then storing those data points. When playing back audio, your computer reverses the process: it recreates the sound, one tiny split-second of audio at a time.

A digital sound file is made up of tiny slices of the original audio, roughly re-creating the smooth continuous wave.

The number of data points in a sound file depends on its sample rate. You might have seen this number before; the typical sample rate for mp3 files is 44.1 kHz. This means that, for every second of audio, there are 44,100 individual data points. For stereo files, there are 88,200 every second — 44,100 for the left channel, and 44,100 for the right. That means a 30-minute podcast has 158,760,000 individual data points describing the audio!

How can a web page read an mp3?

Over the past nine years, the W3C (the folks who help maintain web standards) have developed the Web Audio API to help web developers work with audio. The Web Audio API is a very deep topic; we’ll hardly crack the surface in this essay. But it all starts with something called the AudioContext.

Think of the AudioContext like a sandbox for working with audio. We can initialize it with a few lines of JavaScript:

// Set up audio context
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();
let currentBuffer = null;

The first line after the comment is necessary because Safari has implemented AudioContext as webkitAudioContext.

Next, we need to give our new audioContext the mp3 file we’d like to visualize. Let’s fetch it using… fetch()!

const visualizeAudio = url => {
  fetch(url)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
    .then(audioBuffer => visualize(audioBuffer));
};

This function takes a URL, fetches it, then transforms the Response object a few times.

  • First, it calls the arrayBuffer() method, which returns — you guessed it — an ArrayBuffer! An ArrayBuffer is just a container for binary data; it’s an efficient way to move lots of data around in JavaScript.
  • We then send the ArrayBuffer to our audioContext via the decodeAudioData() method. decodeAudioData() takes an ArrayBuffer and returns an AudioBuffer, which is a specialized ArrayBuffer for reading audio data. Did you know that browsers came with all these convenient objects? I definitely did not when I started this project.
  • Finally, we send our AudioBuffer off to be visualized.

Filtering the data

To visualize our AudioBuffer, we need to reduce the amount of data we’re working with. Like I mentioned before, we started off with millions of data points, but we’ll have far fewer in our final visualization.

First, let’s limit the channels we are working with. A channel represents the audio sent to an individual speaker. In stereo sound, there are two channels; in 5.1 surround sound, there are six. AudioBuffer has a built-in method to do this: getChannelData(). Call audioBuffer.getChannelData(0), and we’ll be left with one channel’s worth of data.

Next, the hard part: loop through the channel’s data, and select a smaller set of data points. There are a few ways we could go about this. Let’s say I want my final visualization to have 70 bars; I can divide up the audio data into 70 equal parts, and look at a data point from each one.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // Number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    filteredData.push(rawData[i * blockSize]);
  }
  return filteredData;
}

This was the first approach I took. To get an idea of what the filtered data looks like, I put the result into a spreadsheet and charted it.

The output caught me off guard! It doesn’t look like the visualization we’re emulating at all. There are lots of data points that are close to, or at zero. But that makes a lot of sense: in a podcast, there is a lot of silence between words and sentences. By only looking at the first sample in each of our blocks, it’s highly likely that we’ll catch a very quiet moment.

Let’s modify the algorithm to find the average of the samples. And while we’re at it, we should take the absolute value of our data, so that it’s all positive.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // the number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    let blockStart = blockSize * i; // the location of the first sample in the block
    let sum = 0;
    for (let j = 0; j < blockSize; j++) {
      sum = sum + Math.abs(rawData[blockStart + j]); // find the sum of all the samples in the block
    }
    filteredData.push(sum / blockSize); // divide the sum by the block size to get the average
  }
  return filteredData;
}

Let’s see what that data looks like.

This is great. There’s only one thing left to do: because we have so much silence in the audio file, the resulting averages of the data points are very small. To make sure this visualization works for all audio files, we need to normalize the data; that is, change the scale of the data so that the loudest samples measure as 1.

const normalizeData = filteredData => {
  const multiplier = Math.pow(Math.max(...filteredData), -1);
  return filteredData.map(n => n * multiplier);
}

This function finds the largest data point in the array with Math.max(), takes its inverse with Math.pow(n, -1), and multiplies each value in the array by that number. This guarantees that the largest data point will be set to 1, and the rest of the data will scale proportionally.

Now that we have the right data, let’s write the function that will visualize it.

Visualizing the data

To create the visualization, we’ll be using the JavaScript Canvas API. This API draws graphics into an HTML <canvas> element. The first step to using the Canvas API is similar to the Web Audio API.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas
};

This code finds the <canvas> element on the page, and checks the browser’s pixel ratio (essentially the screen’s resolution) to make sure our graphic will be drawn at the right size. We then get the context of the canvas (its individual set of methods and values). We calculate the pixel dimensions of the canvas, factoring in the pixel ratio and adding in some padding. Lastly, we change the coordinate system of the <canvas>; by default, (0,0) is in the top-left of the box, but we can save ourselves a lot of math by setting (0, 0) to be in the middle of the left edge.

Now let’s draw some lines! First, we’ll create a function that will draw an individual segment.

const drawLineSegment = (ctx, x, y, width, isEven) => {
  ctx.lineWidth = 1; // how thick the line is
  ctx.strokeStyle = "#fff"; // what color our line is
  ctx.beginPath();
  y = isEven ? y : -y;
  ctx.moveTo(x, 0);
  ctx.lineTo(x, y);
  ctx.arc(x + width / 2, y, width / 2, Math.PI, 0, isEven);
  ctx.lineTo(x + width, 0);
  ctx.stroke();
};

The Canvas API uses a concept called “turtle graphics.” Imagine that the code is a set of instructions being given to a turtle with a marker. In basic terms, the drawLineSegment() function works as follows:

  1. Start at the center line, x = 0.
  2. Draw a vertical line. Make the height of the line relative to the data.
  3. Draw a half-circle the width of the segment.
  4. Draw a vertical line back to the center line.

Most of the commands are straightforward: ctx.moveTo() and ctx.lineTo() move the turtle to the specified coordinate, without drawing or while drawing, respectively.

Line 5, y = isEven ? y : -y, tells our turtle whether to draw down or up from the center line. The segments alternate between being above and below the center line so that they form a smooth wave. In the world of the Canvas API, negative y values are further up than positive ones. This is a bit counter-intuitive, so keep it in mind as a possible source of bugs.

On line 8, we draw a half-circle. ctx.arc() takes six parameters:

  • The x and y coordinates of the center of the circle
  • The radius of the circle
  • The place in the circle to start drawing (Math.PI or π is the location, in radians, of 9 o’clock)
  • The place in the circle to finish drawing (0 in radians represents 3 o’clock)
  • A boolean value telling our turtle to draw either counterclockwise (if true) or clockwise (if false). Using isEven in this last argument means that we’ll draw the top half of a circle — counterclockwise from 9 o’clock to 3 o’clock — for even-numbered segments, and the bottom half for odd-numbered segments.

OK, back to the draw() function.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas

  // draw the line segments
  const width = canvas.offsetWidth / normalizedData.length;
  for (let i = 0; i < normalizedData.length; i++) {
    const x = width * i;
    let height = normalizedData[i] * canvas.offsetHeight - padding;
    if (height < 0) {
      height = 0;
    } else if (height > canvas.offsetHeight / 2) {
      height = canvas.offsetHeight / 2;
    }
    drawLineSegment(ctx, x, height, width, (i + 1) % 2);
  }
};

After our previous setup code, we need to calculate the pixel width of each line segment. This is the canvas’s on-screen width, divided by the number of segments we’d like to display.

Then, a for-loop goes through each entry in the array, and draws a line segment using the function we defined earlier. We set the x value to the current iteration’s index, times the segment width. height, the desired height of the segment, comes from multiplying our normalized data by the canvas’s height, minus the padding we set earlier. We check a few cases: subtracting the padding might have pushed height into the negative, so we re-set that to zero. If the height of the segment will result in a line being drawn off the top of the canvas, we re-set the height to a maximum value.

We pass in the segment width, and for the isEven value, we use a neat trick: (i + 1) % 2 means “find the remainder of i + 1 divided by 2.” We check i + 1 because our counter starts at 0. If i + 1 is even, its remainder will be zero (or false). If i + 1 is odd, its remainder will be 1 (or true).

And that’s all she wrote. Let’s put it all together. Here’s the whole script, in all its glory.

See the Pen Audio waveform visualizer by Matthew Ström (@matthewstrom) on CodePen.

In the visualizeAudio() function, we’ve added a few functions to the final call: draw(normalizeData(filterData(audioBuffer))). This chain filters, normalizes, and finally draws the audio we get back from the server.

If everything has gone according to plan, your page should look like this:

Notes on performance

Even with optimizations, this script is still likely running hundreds of thousands of operations in the browser. Depending on the browser’s implementation, this can take many seconds to finish, and will have a negative impact on other computations happening on the page. It also downloads the whole audio file before drawing the visualization, which consumes a lot of data. There are a few ways that we could improve the script to resolve these issues:

  1. Analyze the audio on the server side. Since audio files don’t change often, we can take advantage of server-side computing resources to filter and normalize the data. Then, we only have to transmit the smaller data set; no need to download the mp3 to draw the visualization!
  2. Only draw the visualization when a user needs it. No matter how we analyze the audio, it makes sense to defer the process until well after page load. We could either wait until the element is in view using an intersection observer (there’s a rough sketch of this after the list), or delay even longer until a user interacts with the podcast player.
  3. Progressive enhancement. While exploring Megaphone’s podcast player, I discovered that their visualization is just a facade — it’s the same waveform for every podcast. This could serve as a great default to our (vastly superior) design. Using the principles of progressive enhancement, we could load a default image as a placeholder. Then, we can check to see if it makes sense to load the real waveform before initiating our script. If the user has JavaScript disabled, their browser doesn’t support the Web Audio API, or they have the save-data header set, nothing is broken.
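Here is a rough sketch of that second idea using an IntersectionObserver, so the expensive work only happens once the player scrolls into view. (The .podcast-player selector and data-audio-src attribute are made up for the example; visualizeAudio() is the function defined earlier.)

const player = document.querySelector(".podcast-player"); // hypothetical wrapper around our canvas
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    visualizeAudio(player.dataset.audioSrc); // fetch, filter, normalize, and draw only when visible
    obs.disconnect(); // the visualization only needs to be drawn once
  });
});
observer.observe(player);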

I’d love to hear any thoughts y’all have on optimization, too.

Some closing thoughts

This is a very, very impractical way of visualizing audio. It runs on the client side, processing millions of data points into a fairly straightforward visualization.

But it’s cool! I learned a lot in writing this code, and even more in writing this article. I refactored a lot of the original project and trimmed the whole thing in half. Projects like this might not ever go on to see a production codebase, but they are unique opportunities to develop new skills and a deeper understanding of some of the neat APIs modern browsers support.

I hope this was a helpful tutorial. If you have ideas of how to improve it, or any cool variations on the theme, please reach out! I’m @ilikescience on Twitter.
