Category: Design

Oh Hey, Padding Percentage is Based on the Parent Element’s Width

I learned something about percentage-based (%) padding today that I had totally wrong in my head! I always thought that percentage padding was based on the element itself. So if an element is 1,000 pixels wide with padding-top: 50%, that padding is 500 pixels. It’s weird having top padding based on width, but that’s how it works — but only sorta. The 50% is based on the parent element’s width, not itself. That’s the part that confused me.

The only time I’ve ever messed with percentage padding is to do the aspect ratio boxes trick. The element is of fluid width, but you want to maintain an aspect ratio, hence the percentage padding trick. In that scenario, the width of the element is the width of the parent, so my incorrect mental model just never mattered.
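To make the rule concrete, here is a tiny JavaScript model of it (a toy calculation, not browser code; the function name is made up for illustration):

```javascript
// Toy model of percentage padding: every padding-* percentage
// (including padding-top) resolves against the PARENT's width.
function resolvePaddingPx(parentWidthPx, paddingPercent) {
  return parentWidthPx * (paddingPercent / 100);
}

// Parent is 1000px wide. Even if the child is narrower (say width: 40%),
// padding-top: 50% still computes to 500px, because the 50% is taken
// from the parent's 1000px, not from the child's own width.
console.log(resolvePaddingPx(1000, 50)); // 500
```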

It doesn’t take much to prove it:

See the Pen "% padding is based on parent, not itself" by Chris Coyier (@chriscoyier) on CodePen.

Thanks to Cameron Clark for the email about this:

The difference is moot when you have a block element that expands to fill the width of its parent completely, as in every example used in this article. But when you set any width on the element other than 100%, this misunderstanding will lead to baffling results.

They pointed to this article by Aayush Arora, who claims that only 1% of developers knew this.

The post Oh Hey, Padding Percentage is Based on the Parent Element’s Width appeared first on CSS-Tricks.


An Early Look at the Vue 3 Composition API in the Wild

I recently had an opportunity to try the new Vue Composition API in a real project to check where it might be useful and how we could use it in the future.

Until now, when creating a new component, we used the Options API. That API forces us to separate the component’s code by options, meaning that we need to keep all reactive data in one place (data), all computed properties in one place (computed), all methods in one place (methods), and so on.

While this is handy and readable for smaller components, it becomes painful when a component grows more complicated and deals with multiple functionalities. Usually, the logic related to one specific functionality involves some reactive data, a computed property, and a method or a few of them; sometimes it also involves component lifecycle hooks. That makes you constantly jump between different options in the code while working on a single logical concern.

The other issue that you may have encountered when working with Vue is how to extract common logic that can be reused by multiple components. Vue already has a few options for that, but all of them have their own drawbacks (e.g. mixins and scoped slots).

The Composition API brings a new way of creating components, separating code, and extracting reusable pieces of code.

Let’s start with code composition within a component.

Code composition

Imagine you have a main component that sets up a few things for your whole Vue app (like the layout in Nuxt). It deals with the following things:

  • setting locale
  • checking if the user is still authenticated and redirecting them if not
  • preventing the user from reloading the app too many times
  • tracking user activity and reacting when the user is inactive for a specific period of time
  • listening on an event using EventBus (or window object event)

Those are just a few of the things this component could do. You can probably imagine a more complex component, but this serves the purpose of the example. For the sake of readability, I am just using the names of the properties and methods without their actual implementations.

This is how the component looks using the Options API:

<template>
  <div id="app">
    ...
  </div>
</template>

<script>
export default {
  name: 'App',

  data() {
    return {
      userActivityTimeout: null,
      lastUserActivityAt: null,
      reloadCount: 0
    }
  },

  computed: {
    isAuthenticated() {...},
    locale() {...}
  },

  watch: {
    locale(value) {...},
    isAuthenticated(value) {...}
  },

  async created() {
    const initialLocale = localStorage.getItem('locale')
    await this.loadLocaleAsync(initialLocale)
  },

  mounted() {
    EventBus.$on(MY_EVENT, this.handleMyEvent)

    this.setReloadCount()
    this.blockReload()

    this.activateActivityTracker()
    this.resetActivityTimeout()
  },

  beforeDestroy() {
    this.deactivateActivityTracker()
    clearTimeout(this.userActivityTimeout)
    EventBus.$off(MY_EVENT, this.handleMyEvent)
  },

  methods: {
    activateActivityTracker() {...},
    blockReload() {...},
    deactivateActivityTracker() {...},
    handleMyEvent() {...},
    async loadLocaleAsync(selectedLocale) {...},
    redirectUser() {...},
    resetActivityTimeout() {...},
    setI18nLocale(locale) {...},
    setReloadCount() {...},
    userActivityThrottler() {...}
  }
}
</script>

As you can see, each option contains parts from all functionalities. There is no clear separation between them and that makes the code hard to read, especially if you are not the person who wrote it and you are looking at it for the first time. It is very hard to find which method is used by which functionality.

Let’s look at it again but identify the logical concerns as comments. Those would be:

  • Activity tracker
  • Reload blocker
  • Authentication check
  • Locale
  • Event Bus registration
<template>
  <div id="app">
    ...
  </div>
</template>

<script>
export default {
  name: 'App',

  data() {
    return {
      userActivityTimeout: null, // Activity tracker
      lastUserActivityAt: null, // Activity tracker
      reloadCount: 0 // Reload blocker
    }
  },

  computed: {
    isAuthenticated() {...}, // Authentication check
    locale() {...} // Locale
  },

  watch: {
    locale(value) {...}, // Locale
    isAuthenticated(value) {...} // Authentication check
  },

  async created() {
    const initialLocale = localStorage.getItem('locale') // Locale
    await this.loadLocaleAsync(initialLocale) // Locale
  },

  mounted() {
    EventBus.$on(MY_EVENT, this.handleMyEvent) // Event Bus registration

    this.setReloadCount() // Reload blocker
    this.blockReload() // Reload blocker

    this.activateActivityTracker() // Activity tracker
    this.resetActivityTimeout() // Activity tracker
  },

  beforeDestroy() {
    this.deactivateActivityTracker() // Activity tracker
    clearTimeout(this.userActivityTimeout) // Activity tracker
    EventBus.$off(MY_EVENT, this.handleMyEvent) // Event Bus registration
  },

  methods: {
    activateActivityTracker() {...}, // Activity tracker
    blockReload() {...}, // Reload blocker
    deactivateActivityTracker() {...}, // Activity tracker
    handleMyEvent() {...}, // Event Bus registration
    async loadLocaleAsync(selectedLocale) {...}, // Locale
    redirectUser() {...}, // Authentication check
    resetActivityTimeout() {...}, // Activity tracker
    setI18nLocale(locale) {...}, // Locale
    setReloadCount() {...}, // Reload blocker
    userActivityThrottler() {...} // Activity tracker
  }
}
</script>

See how hard it is to untangle all of those? 🙂

Now imagine you need to make a change in one functionality (e.g. activity tracking logic). Not only do you need to know which elements are related to that logic, but even when you know, you still need to jump up and down between different component options.

Let’s use the Composition API to separate the code by logical concerns. To do that, we create a single function for the logic related to each specific functionality. This is what we call a composition function.

// Activity tracking logic
function useActivityTracker() {
  const userActivityTimeout = ref(null)
  const lastUserActivityAt = ref(null)

  function activateActivityTracker() {...}
  function deactivateActivityTracker() {...}
  function resetActivityTimeout() {...}
  function userActivityThrottler() {...}

  onBeforeMount(() => {
    activateActivityTracker()
    resetActivityTimeout()
  })

  onUnmounted(() => {
    deactivateActivityTracker()
    clearTimeout(userActivityTimeout.value)
  })
}

// Reload blocking logic
function useReloadBlocker(context) {
  const reloadCount = ref(null)

  function blockReload() {...}
  function setReloadCount() {...}

  onMounted(() => {
    setReloadCount()
    blockReload()
  })
}

// Locale logic
function useLocale(context) {
  async function loadLocaleAsync(selectedLocale) {...}
  function setI18nLocale(locale) {...}

  watch(() => {
    const locale = ...
    loadLocaleAsync(locale)
  })

  // No need for a 'created' hook; all logic that runs in the setup
  // function is placed between the beforeCreate and created hooks
  const initialLocale = localStorage.getItem('locale')
  loadLocaleAsync(initialLocale)
}

// Event bus listener registration
import EventBus from '@/event-bus'

function useEventBusListener(eventName, handler) {
  onMounted(() => EventBus.$on(eventName, handler))
  onUnmounted(() => EventBus.$off(eventName, handler))
}

As you can see, we can declare reactive data (ref / reactive), computed props, methods (plain functions), watchers (watch) and lifecycle hooks (onMounted / onUnmounted). Basically everything you normally use in a component.

We have two options when it comes to where to keep the code. We can leave it inside the component or extract it into a separate file. Since the Composition API is not officially there yet, there are no best practices or rules on how to deal with it. The way I see it, if the logic is tightly coupled to a specific component (i.e. it won’t be reused anywhere else), and it can’t live without the component itself, I suggest leaving it within the component. On the flip side, if it is general functionality that will likely be reused, I suggest extracting it to a separate file. However, if we want to keep it in a separate file, we need to remember to export the function from the file and import it in our component.

This is what our component looks like using the newly created composition functions:

<template>
  <div id="app">
    ...
  </div>
</template>

<script>
export default {
  name: 'App',

  setup(props, context) {
    useEventBusListener(MY_EVENT, handleMyEvent)
    useActivityTracker()
    useReloadBlocker(context)
    useLocale(context)

    const isAuthenticated = computed(() => ...)

    watch(() => {
      if (!isAuthenticated) {...}
    })

    function handleMyEvent() {...}

    function useLocale() {...}
    function useActivityTracker() {...}
    function useEventBusListener() {...}
    function useReloadBlocker() {...}
  }
}
</script>

This gives us a single function for each logical concern. If we want to use any specific concern, we need to call the related composition function in the new setup function.

Imagine again that you need to make some change in activity tracking logic. Everything related to that functionality lives in the useActivityTracker function. Now you instantly know where to look and jump to the right place to see all the related pieces of code. Beautiful!

Extracting reusable pieces of code

In our case, the Event Bus listener registration looks like a piece of code we can use in any component that listens to events on Event Bus.

As mentioned before, we can keep the logic related to a specific functionality in a separate file. Let’s move our Event Bus listener setup into a separate file.

// composables/useEventBusListener.js
import EventBus from '@/event-bus'

export function useEventBusListener(eventName, handler) {
  onMounted(() => EventBus.$on(eventName, handler))
  onUnmounted(() => EventBus.$off(eventName, handler))
}

To use it in a component, we need to make sure we export our function (named or default) and import it in a component.

<template>
  <div id="app">
    ...
  </div>
</template>

<script>
import { useEventBusListener } from '@/composables/useEventBusListener'

export default {
  name: 'MyComponent',

  setup(props, context) {
    useEventBusListener(MY_EVENT, myEventHandled)
    useEventBusListener(ANOTHER_EVENT, myAnotherHandled)
  }
}
</script>

That’s it! We can now use that in any component we need.

Wrapping up

There is an ongoing discussion about the Composition API. This post has no intention to promote any side of the discussion. It is more about showing when it might be useful and in what cases it brings additional value.

I think it is always easier to understand a concept with a real-life example like the one above. There are more use cases and, the more you use the new API, the more patterns you will see. This post merely shows a few basic patterns to get you started.

Let’s go again through the presented use cases and see where the Composition API can be useful:

General features that can live on their own without tight coupling to any specific component

  • All logic related to a specific feature in one file
  • Keep it in @/composables/*.js and import it in components
  • Examples: Activity Tracker, Reload Blocker, and Locale

Reusable features that are used in multiple components

  • All logic related to a specific feature in one file
  • Keep it in @/composables/*.js and import in components
  • Examples: Event Bus listener registration, window event registration, common animation logic, common library usage

Code organization within component

  • All logic related to a specific feature in one function
  • Keep the code in a composition function within the component
  • The code related to the same logical concern is in the same place (i.e. there’s no need to jump between data, computed, methods, lifecycle hooks, etc.)

Remember: This is all a work-in-progress!

The Vue Composition API is currently at the work-in-progress stage and is subject to change. Nothing in the examples above is certain, and both the syntax and use cases may change. It is intended to ship with Vue 3.0. In the meantime, you can check out vue-use-web for a collection of composition functions that are expected to be included in Vue 3 but can be used with the Composition API in Vue 2.

If you want to experiment with the new API, you can use the @vue/composition-api library.

The post An Early Look at the Vue 3 Composition API in the Wild appeared first on CSS-Tricks.


Thoughts After Looking at the Web Almanac’s Chapter on CSS

Woah, I didn’t see this coming! The HTTP Archive dropped this big “state of the web” report called Web Almanac with guest writers exploring data from 5.8 million websites.

Una Kravetz and Adam Argyle wrote the CSS chapter. The point is to squeeze a digestible amount of insight out of a mountain’s worth of data. So there is some irony here with me squeezing in my thoughts about this chapter for an even quicker read, but hey, here we go.

  • 83% of sites make use of rgba() but only 22% use rgb(). That entirely makes sense to me, as rgb() isn’t a particularly useful color format, if you ask me. It’s the “a” (alpha) that gives the format the ability to control transparency, which only recently became available in the most popular format, hex, in the form of 8-digit hex codes. But even then, it isn’t as nice as rgba(). hsla() is arguably the nicest color format.
  • Definitely not surprising that white and black are the two most popular named colors. I use them, by name, a lot. I even make commits to CSS changing #FFF to white and #000 to black because I think the names carry less mental overhead.
  • em is twice as popular as rem. Wow. I’m a rem guy myself, just because I find it less finicky and more predictable. In theory, I still like the idea of px at the Root, rem for Components, and em for Text Elements, but I’m not sure I’ve ever pulled it off that cleanly.
  • Classes are by far the leader in selector types. Factoring in how many are used, they have over a 10x lead over anything else. I like to see that. They have a low-but-not-too-low specificity value. They have nice APIs for manipulating them. Their entire purpose is to be a styling hook. They are element-agnostic. It’s just the right way to do styling. The flatter you keep styles, the fewer headaches you’ll have. A little more surprising to me is the fact that the average number of classes per element is one. Makes me really wanna know the decimal, though. Was it 1.1? 1.4? 1.00001?
  • Holy buckets. Flexbox on half of sites and grid only on two percent?! The browser support isn’t that different. I’m well aware they are both very useful and can be used together and are for different things, but I find grid generally more useful and personally use it more often than flexbox.
  • I would have guessed the median number of loaded fonts on a site to be 0, but it’s 3! I think of CSS-Tricks as having one (which is Rubik at the time of writing, used only for titles; the body font of this site is system-ui). But really, it’s 4, because of preloading and subsetting and bold versus regular and such. I wonder when variable fonts will start having an impact. I would think they would lower this number over time. Open Sans and Roboto see triple the usage of any other loaded font, and the whole list is Google Fonts stuff, other than some instances of Font Awesome.
  • It’s funny how some major libraries can skew stats at such a global/universal scale for their use of certain features. I remember how YouTube’s play button used to “morph” into a pause button using an SVG technology called SMIL. But because YouTube is so huge, it could look like a very high percentage of internet traffic includes SMIL, when it’s really just one button on one site. filter is in that report. While filters are cool, it’s really the fact that it happens to be within YouTube embeds and Font Awesome’s stylesheet that the percentage of sites using it (78%) looks astonishingly high.
  • Of pages that make super heavy use of transitions and animations, transitions are about twice as heavily used, but transitions are used six times more at the 50th percentile. That feels about right to me. Transitions are generally more useful, but the more animation you are doing, the more you reach for advanced tools like keyframes.
  • Looks like most websites have approximately 45 media queries on them. It’s not totally clear if those are different media queries, or if they could be the same media queries repeated elsewhere in the stylesheet. That seems like a lot if they are all different, so I suspect it’s just a tooling thing where people use nested media queries for authoring clarity and they bubble out but don’t get combined because that’s too weird with specificity. I’d be interested to see how many unique media queries people use. The most fascinating thing to me about the media query data is that min-width and max-width are neck-and-neck in usage. I would have put max-width far ahead if I was guessing.
  • About six stylesheets per site. It’s probably too hard to tell because assets like this are so often hosted on CDNs, but I’d love to know how many are hand-authored for the site, and how many are from third parties. Is the distribution like three and three or like one and five?
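On that media query point: counting the unique media queries in a stylesheet, as opposed to repeated ones, only takes a quick pass over the CSS text. This is a naive regex sketch, not a real CSS parser, and the helper name is my own:

```javascript
// Naive count of distinct @media preludes in a stylesheet string.
// A real CSS parser would handle nesting and comments properly.
function uniqueMediaQueries(cssText) {
  const matches = cssText.match(/@media[^{]+/g) || [];
  const normalized = matches.map((m) => m.replace(/\s+/g, ' ').trim());
  return new Set(normalized);
}

const css = `
@media (min-width: 600px) { a { color: red; } }
@media (min-width: 600px) { b { color: blue; } }
@media (max-width: 900px) { c { color: green; } }
`;
// Three @media blocks, but only two distinct queries.
console.log(uniqueMediaQueries(css).size); // 2
```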

There is a lot more in the report, and CSS is just one of twenty chapters. So go digging!

The post Thoughts After Looking at the Web Almanac’s Chapter on CSS appeared first on CSS-Tricks.


scrapestack: An API for Scraping Sites

(This is a sponsored post.)

Not every site has an API for accessing its data. Most don’t, in fact. If you need to pull that data, one approach is to “scrape” it. That is, load the page in a web browser (one that you automate), find what you are looking for in the DOM, and take it.

You can do this yourself if you want to deal with the cost, maintenance, and technical debt. For example, this is one of the big use-cases for “headless” browsers, like how Puppeteer can spin up and control headless Chrome.

Or, you can use a tool like scrapestack that is a ready-to-use API that not only does the scraping for you, but likely does it better, faster, and with more options than trying to do it yourself.

Say my goal is to pull the latest completed meetup from a Meetup.com page. Meetup.com has an API, but it’s pricey and requires OAuth and stuff. All we need is the name and link of a past meetup here, so let’s just yank it off the page.

We can see what we need in the DOM:

To have a play, let’s scrape it with the scrapestack API client-side with jQuery:

$.get('https://api.scrapestack.com/scrape',
  {
    access_key: 'MY_API_KEY',
    url: 'https://www.meetup.com/BendJS/'
  },
  function(websiteContent) {
    // we have the entire site's HTML here!
  }
);

Within that callback, I can now also use jQuery to traverse the DOM, snagging the pieces I want, and constructing what I need on our site:

// Get what we want
let event = $(websiteContent)
  .find(".groupHome-eventsList-pastEvents .eventCard")
  .first();
let eventTitle = event
  .find(".eventCard--link")[0].innerText;
let eventLink =
  `https://www.meetup.com/` +
  event.find(".eventCard--link").attr("href");

// Use it on the page
$("#event").append(`
  <a href="${eventLink}">${eventTitle}</a>
`);

In real usage, if we were doing it client-side like this, we’d make use of some rudimentary storage so we wouldn’t have to hit the API on every page load, like sticking the result in localStorage and invalidating after a few days or something.
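That rudimentary storage idea might look something like this. A sketch only: the plain `cache` object stands in for localStorage, and the names and the three-day TTL are my own choices:

```javascript
// Cache a scraped result with a timestamp; treat it as a miss once stale.
const TTL_MS = 3 * 24 * 60 * 60 * 1000; // invalidate after three days

function setCached(cache, key, value, now) {
  cache[key] = JSON.stringify({ savedAt: now, value });
}

function getCached(cache, key, now) {
  const raw = cache[key];
  if (!raw) return null;
  const { savedAt, value } = JSON.parse(raw);
  return now - savedAt < TTL_MS ? value : null; // null forces a re-scrape
}

// Usage: only hit the scraping API when the cache misses.
const cache = {}; // swap in window.localStorage (setItem/getItem) in the browser
setCached(cache, 'bendjs-html', '<html>…</html>', Date.now());
console.log(getCached(cache, 'bendjs-html', Date.now())); // '<html>…</html>'
```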

It works!

It’s actually much more likely that we do our scraping server-side. For one thing, that’s the way to protect your API keys, which is your responsibility, and not really possible on a public-facing site if you’re using the API directly client-side.

Myself, I’d probably make a cloud function to do it, so I can stay in JavaScript (Node.js), and have the opportunity to tuck the data in storage somewhere.
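Whatever server-side flavor you pick, the request itself is the same GET the jQuery example made, just with the key held server-side. A minimal sketch of building that request URL (the helper name is mine; the endpoint and parameters come from the example above):

```javascript
// Build the scrapestack request URL on the server, where the API key lives.
function buildScrapeUrl(targetUrl, apiKey) {
  return (
    'https://api.scrapestack.com/scrape' +
    `?access_key=${encodeURIComponent(apiKey)}` +
    `&url=${encodeURIComponent(targetUrl)}`
  );
}

console.log(buildScrapeUrl('https://www.meetup.com/BendJS/', 'MY_API_KEY'));
// https://api.scrapestack.com/scrape?access_key=MY_API_KEY&url=https%3A%2F%2Fwww.meetup.com%2FBendJS%2F
```

From there, the cloud function fetches that URL, traverses the returned HTML, and stashes the result in storage.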

I’d say go check out the documentation and see if this isn’t the right answer next time you need to do some scraping. You get 10,000 requests on the free plan to try it out anyway, and it jumps up a ton on any of the paid plans with more features.

Direct Link to ArticlePermalink

The post scrapestack: An API for Scraping Sites appeared first on CSS-Tricks.


CSS-Tricks Chronicle XXXVII

Chronicle posts are opportunities for me to round-up things that I haven’t gotten a chance to post about yet, rounded up together. It’s stuff like podcasts I’ve had the good fortune of being on, conferences I’ve been at or are going to be at, happenings at ShopTalk and CodePen, and more.

My talk at JAMstack_conf

We recorded a live episode of ShopTalk Show as well:

A guest on The Product Business Podcast

Happenings at CodePen

As I write and publish this, we’re rolling out a really cool new UI feature. Wherever you’re browsing CodePen, you see grids of items. For example, a 6-up grid of Pens. Click them and it takes you to the Pen Editor (there are some shortcuts, like clicking the views number takes you to Full Page View and clicking the comments number takes you to Details View). Now, there is a prominent action that will expand the Pen into a modal right on that page. This will allow you to play with the Pen, see its code, see the description, tags, comments… really everything related to that Pen, and without leaving the page you were on. It’s a faster, easier, and more fun way to browse around CodePen. If you’re not PRO, there are some ads as part of it, which helped justify it from a business perspective.

Other newer features include any-size Teams, user blocking, and private-by-default.

Oh! And way nicer Collections handling:

Part of life at CodePen is also all the things we do very consistently week after week, like publish our weekly newsletter The CodePen Spark, publish our podcast CodePen Radio, and produce weekly Challenges to give everyone a fresh prompt to build something from if you could use the nudge.

Upgrading to PRO is the best thing you can do to support me and these sites.

My talk at FITC’s Web Unleashed

My talk at Craft CMS’s dotall Conf

I was on a panel discussion there as well: The Future of Web Development Panel with Ryan Irelan, Andrew Welch, Matsuko Friedland, Marion Newlevant, and myself. Here’s that video:

Conferences

I don’t have any more conferences in 2019, but I’ll be at a handful of them in 2020 I’m sure. Obviously I’m pretty interested in anything that gets into front-end web development.

Remember we have a whole calendar site for upcoming front-end development-related conferences! Please submit any you happen to see missing.

The only ones I have booked firmly so far are Smashing Conf in April / San Francisco and June / Austin.

Happenings at ShopTalk

We don’t even really have “seasons” on ShopTalk Show. Instead, we’re just really consistent and put out a show every week. We’ll be skipping just one show over the holidays (that’s a little nuts, honestly, we should plan to take a longer break next year).

We’re edging extremely close to hitting episode #400. I think it’ll end up being in February. Considering we do about a show a week, that’s getting on eight years. Questions like: Are we played out? Are people finding us still? Are people afraid to jump in? Are people as engaged with the show this far in? – have been going through my mind. But anytime we mention stuff along those lines on the show, we hear from lots of people just starting to listen, having no trouble, and enjoying it. I think having a “brand” as established as we’ve done with ShopTalk Show is ultimately a good thing and worth keeping on largely as-is. We might not be as hot’n’fresh as some new names in podcasting, but even I find our show consistently interesting.

Some of our top-downloaded shows in the last few months:

Ducktapes

I was on an episode of the Duck Tapes Podcast they called SVG with Chris Coyier:

The post CSS-Tricks Chronicle XXXVII appeared first on CSS-Tricks.


When to Use SVG vs. When to Use Canvas

SVG and canvas are both technologies that can draw stuff in web browsers, so they are worth comparing and understanding when one is more suitable than the other. Even a light understanding of them makes choosing one over the other pretty clear.

  • A little flat-color icon? That’s clearly SVG territory.
  • An interactive console-like game? That’s clearly canvas territory.

I know we didn’t cover why yet, but I hope that will become clear as we dig into it.

SVG is vector and declarative

If you know you need vector art, SVG is the choice. Vector art is visually crisp and tends to be a smaller file size than raster graphics like JPG.

That makes logos a very common SVG use case. SVG code can go right in the HTML and reads like declarative drawing instructions:

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
</svg>

If you care a lot about the flexibility and responsiveness of the graphic, SVG is the way.

Canvas is a JavaScript drawing API

You put a <canvas> element in HTML, then do the drawing in JavaScript. In other words, you issue commands to tell it how to draw (which is more imperative than declarative).

<canvas id="myCanvas" width="578" height="200"></canvas>
<script>
  var canvas = document.getElementById('myCanvas');
  var context = canvas.getContext('2d');
  var centerX = canvas.width / 2;
  var centerY = canvas.height / 2;
  var radius = 70;

  context.beginPath();
  context.arc(centerX, centerY, radius, 0, 2 * Math.PI, false);
  context.fillStyle = 'green';
  context.fill();
</script>

SVG is in the DOM

If you’re familiar with DOM events like click and mouseDown and whatnot, those are available in SVG as well. A <circle> isn’t terribly different than a <div> in that respect.

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
  <script>
    document.querySelector('circle').addEventListener('click', e => {
      e.target.style.fill = "red";
    });
  </script>
</svg>

SVG for accessibility

You can have a text alternative for canvas:

<canvas aria-label="Hello ARIA World" role="img"></canvas>

You can do that in SVG too, but since SVG and its guts can be right in the DOM, we generally think of SVG as being what you use if you’re trying to build an accessible experience. Explained another way, you can build an SVG that assistive technology can access and find links and sub-elements with their own auditory explanations and such.

Text is also firmly in SVG territory. SVG literally has a <text> element, which is accessible and visually crisp — unlike canvas where text is typically blurry.

Canvas for pixels

As you’ll see in Sarah Drasner’s comparison below, canvas is a way of saying dance, pixels, dance! That’s a fun way of explaining the concept that drives it home better than any dry technical sentiment could.

Highly interactive work with lots and lots of complex detail and gradients is the territory of canvas. You’ll see a lot more games built with canvas than SVG for this reason, although there are always exceptions (note the simple vector-y-ness of this game).

CSS can play with SVG

We saw above that SVG can be in the DOM and that JavaScript can get in there and do stuff. The story is similar with CSS.

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
  <style>
    circle { fill: blue; }
  </style>
</svg>

Note how I’ve put the <script> and <style> blocks within the <svg> for these examples, which is valid. But assuming you’ve put the SVG literally in the HTML, you could move those out, or have other external CSS and JavaScript do the same thing.

We have a massive guide of SVG Properties and CSS. But what is great to know is that the stuff that CSS is great at is still possible in SVG, like :hover states and animation!

See the Pen "ExxbpBE" by Chris Coyier (@chriscoyier) on CodePen.

Combinations

Technically, they aren’t entirely mutually exclusive. An <svg> can be painted to a <canvas>.

As Blake Bowen proves, you can even keep the SVG on the canvas very crisp!

See the Pen "SVG vs Canvas Scaling" by Blake Bowen (@osublake) on CodePen.

Ruth John’s comparison

See the Pen "SVG vs Canvas" by Rumyra (@Rumyra) on CodePen.

Sarah Drasner’s comparison

Tablized from this tweet.

DOM/Virtual DOM
  • Pros: great for UI/UX animation; great for SVG that is resolution independent; easier to debug
  • Cons: tanks with lots of objects; you have to care about the way you animate

Canvas
  • Pros: dance, pixels, dance!; great for really impressive 3D or immersive stuff; movement of tons of objects
  • Cons: harder to make accessible; not resolution independent out of the box; breaks to nothing

Shirley Wu’s comparison

Tablized from this tweet.

SVG
  • Pros: easy to get started; easier to register user interactions; easy to animate
  • Cons: potentially complex DOM; not performant for a large number of elements

Canvas
  • Pros: very performant; easy to update
  • Cons: more work to get started; more work to handle interactions; have to write custom animations

Many folks consider scenarios with a lot of objects (1,000+, as Shirley says) to be the territory of canvas.

SVG is the default choice; canvas is the backup

A strong opinion, but it feels right to me:

Wrap up

So, if we revisit those first two bullet points…

  • A little flat-color icon? SVG goes in the DOM, so something like an icon inside a button makes a lot of sense for SVG — not to mention it can be controlled with CSS and have regular JavaScript event handlers and stuff
  • An interactive console-like game? That will have lots and lots of moving elements, complex animation and interaction, performance considerations. Those are things that canvas excels at.

And yet there is a bunch of middle ground here. As a day-to-day web designer/developer kinda guy, I find SVG far more useful on a practical level. I’m not sure I’ve done any production work in canvas ever. Part of that is because I don’t know canvas nearly as well. I wrote a book on SVG, so I’ve done far more research on that side, but I know enough that SVG is doing the right stuff for my needs.

The post When to Use SVG vs. When to Use Canvas appeared first on CSS-Tricks.

CSS-Tricks

Making an Audio Waveform Visualizer with Vanilla JavaScript

As a UI designer, I’m constantly reminded of the value of knowing how to code. I pride myself on thinking of the developers on my team while designing user interfaces. But sometimes, I step on a technical landmine.

A few years ago, as the design director of wsj.com, I was helping to re-design the Wall Street Journal’s podcast directory. One of the designers on the project was working on the podcast player, and I came upon Megaphone’s embedded player.

I previously worked at SoundCloud and knew that these kinds of visualizations were useful for users who skip through audio. I wondered if we could achieve a similar look for the player on the Wall Street Journal’s site.

The answer from engineering: definitely not. Given timelines and restraints, it wasn’t a possibility for that project. We eventually shipped the redesigned pages with a much simpler podcast player.

But I was hooked on the problem. Over nights and weekends, I hacked away trying to achieve this
effect. I learned a lot about how audio works on the web, and ultimately was able to achieve the look with less than 100 lines of JavaScript!

It turns out that this example is a perfect way to get acquainted with the Web Audio API, and how to visualize audio data using the Canvas API.

But first, a lesson in how digital audio works

In the real, analog world, sound is a wave. As sound travels from a source (like a speaker) to your ears, it compresses and decompresses air in a pattern that your ears and brain hear as music, or speech, or a dog’s bark, etc. etc.

An analog sound wave is a smooth, continuous function.

But in a computer’s world of electronic signals, sound isn’t a wave. To turn a smooth, continuous wave into data that it can store, computers do something called sampling. Sampling means measuring the sound waves hitting a microphone thousands of times every second, then storing those data points. When playing back audio, your computer reverses the process: it recreates the sound, one tiny split-second of audio at a time.

A digital sound file is made up of tiny slices of the original audio, roughly re-creating the smooth continuous wave.

The number of data points in a sound file depends on its sample rate. You might have seen this number before; the typical sample rate for mp3 files is 44.1 kHz. This means that, for every second of audio, there are 44,100 individual data points. For stereo files, there are 88,200 every second — 44,100 for the left channel, and 44,100 for the right. That means a 30-minute podcast has 158,760,000 individual data points describing the audio!
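That arithmetic is easy to sanity-check, using the same figures quoted above:

```javascript
// Data points in a 30-minute stereo recording at a 44.1 kHz sample rate
const sampleRate = 44100; // samples per second, per channel
const channels = 2;       // stereo: left and right
const seconds = 30 * 60;  // a 30-minute podcast
const dataPoints = sampleRate * channels * seconds;
console.log(dataPoints); // 158760000
```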

How can a web page read an mp3?

Over the past nine years, the W3C (the folks who help maintain web standards) have developed the Web Audio API to help web developers work with audio. The Web Audio API is a very deep topic; we’ll hardly crack the surface in this essay. But it all starts with something called the AudioContext.

Think of the AudioContext like a sandbox for working with audio. We can initialize it with a few lines of JavaScript:

// Set up audio context
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();
let currentBuffer = null;

The first line after the comment is necessary because Safari has implemented AudioContext as webkitAudioContext.

Next, we need to give our new audioContext the mp3 file we’d like to visualize. Let’s fetch it using… fetch()!

const visualizeAudio = url => {
  fetch(url)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
    .then(audioBuffer => visualize(audioBuffer));
};

This function takes a URL, fetches it, then transforms the Response object a few times.

  • First, it calls the arrayBuffer() method, which returns — you guessed it — an ArrayBuffer! An ArrayBuffer is just a container for binary data; it’s an efficient way to move lots of data around in JavaScript.
  • We then send the ArrayBuffer to our audioContext via the decodeAudioData() method. decodeAudioData() takes an ArrayBuffer and returns an AudioBuffer, which is a specialized ArrayBuffer for reading audio data. Did you know that browsers came with all these convenient objects? I definitely did not when I started this project.
  • Finally, we send our AudioBuffer off to be visualized.

Filtering the data

To visualize our AudioBuffer, we need to reduce the amount of data we’re working with. Like I mentioned before, we started off with millions of data points, but we’ll have far fewer in our final visualization.

First, let’s limit the channels we are working with. A channel represents the audio sent to an individual speaker. In stereo sound, there are two channels; in 5.1 surround sound, there are six. AudioBuffer has a built-in method to do this: getChannelData(). Call audioBuffer.getChannelData(0), and we’ll be left with one channel’s worth of data.

Next, the hard part: loop through the channel’s data, and select a smaller set of data points. There are a few ways we could go about this. Let’s say I want my final visualization to have 70 bars; I can divide up the audio data into 70 equal parts, and look at a data point from each one.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // Number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    filteredData.push(rawData[i * blockSize]);
  }
  return filteredData;
};
This was the first approach I took. To get an idea of what the filtered data looks like, I put the result into a spreadsheet and charted it.

The output caught me off guard! It doesn’t look like the visualization we’re emulating at all. There are lots of data points that are close to, or at zero. But that makes a lot of sense: in a podcast, there is a lot of silence between words and sentences. By only looking at the first sample in each of our blocks, it’s highly likely that we’ll catch a very quiet moment.

Let’s modify the algorithm to find the average of the samples. And while we’re at it, we should take the absolute value of our data, so that it’s all positive.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // The number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    let blockStart = blockSize * i; // The location of the first sample in the block
    let sum = 0;
    for (let j = 0; j < blockSize; j++) {
      sum = sum + Math.abs(rawData[blockStart + j]); // Find the sum of all the samples in the block
    }
    filteredData.push(sum / blockSize); // Divide the sum by the block size to get the average
  }
  return filteredData;
};

Let’s see what that data looks like.

This is great. There’s only one thing left to do: because we have so much silence in the audio file, the resulting averages of the data points are very small. To make sure this visualization works for all audio files, we need to normalize the data; that is, change the scale of the data so that the loudest samples measure as 1.

const normalizeData = filteredData => {
  const multiplier = Math.pow(Math.max(...filteredData), -1);
  return filteredData.map(n => n * multiplier);
};

This function finds the largest data point in the array with Math.max(), takes its inverse with Math.pow(n, -1), and multiplies each value in the array by that number. This guarantees that the largest data point will be set to 1, and the rest of the data will scale proportionally.
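To see the effect on a tiny data set (the numbers here are made up purely for illustration):

```javascript
// Normalizing a small made-up data set so its loudest sample becomes 1
const filteredData = [0.1, 0.25, 0.5];
const multiplier = Math.pow(Math.max(...filteredData), -1); // 1 / 0.5 = 2
const normalized = filteredData.map(n => n * multiplier);
console.log(normalized); // [0.2, 0.5, 1]
```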

Now that we have the right data, let’s write the function that will visualize it.

Visualizing the data

To create the visualization, we’ll be using the JavaScript Canvas API. This API draws graphics into an HTML <canvas> element. The first step to using the Canvas API is similar to the Web Audio API.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas
};

This code finds the <canvas> element on the page, and checks the browser’s pixel ratio (essentially the screen’s resolution) to make sure our graphic will be drawn at the right size. We then get the context of the canvas (its individual set of methods and values). We calculate the pixel dimensions of the canvas, factoring in the pixel ratio and adding in some padding. Lastly, we change the coordinate system of the <canvas>; by default, (0,0) is in the top-left of the box, but we can save ourselves a lot of math by setting (0, 0) to be in the middle of the left edge.

Now let’s draw some lines! First, we’ll create a function that will draw an individual segment.

const drawLineSegment = (ctx, x, y, width, isEven) => {
  ctx.lineWidth = 1; // How thick the line is
  ctx.strokeStyle = "#fff"; // What color our line is
  ctx.beginPath();
  y = isEven ? y : -y;
  ctx.moveTo(x, 0);
  ctx.lineTo(x, y);
  ctx.arc(x + width / 2, y, width / 2, Math.PI, 0, isEven);
  ctx.lineTo(x + width, 0);
  ctx.stroke();
};

The Canvas API uses a concept called “turtle graphics.” Imagine that the code is a set of instructions being given to a turtle with a marker. In basic terms, the drawLineSegment() function works as follows:

  1. Start at the center line, x = 0.
  2. Draw a vertical line. Make the height of the line relative to the data.
  3. Draw a half-circle the width of the segment.
  4. Draw a vertical line back to the center line.

Most of the commands are straightforward: ctx.moveTo() and ctx.lineTo() move the turtle to the specified coordinate, without drawing or while drawing, respectively.

Line 5, y = isEven ? y : -y, tells our turtle whether to draw down or up from the center line. The segments alternate between being above and below the center line so that they form a smooth wave. In the world of the Canvas API, negative y values are further up than positive ones. This is a bit counter-intuitive, so keep it in mind as a possible source of bugs.

On line 8, we draw a half-circle. ctx.arc() takes six parameters:

  • The x and y coordinates of the center of the circle
  • The radius of the circle
  • The place in the circle to start drawing (Math.PI or π is the location, in radians, of 9 o’clock)
  • The place in the circle to finish drawing (0 in radians represents 3 o’clock)
  • A boolean value telling our turtle to draw either counterclockwise (if true) or clockwise (if false). Using isEven in this last argument means that we’ll draw the top half of a circle — clockwise from 9 o’clock to 3 o’clock — for even-numbered segments, and the bottom half for odd-numbered segments.

OK, back to the draw() function.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas

  // Draw the line segments
  const width = canvas.offsetWidth / normalizedData.length;
  for (let i = 0; i < normalizedData.length; i++) {
    const x = width * i;
    let height = normalizedData[i] * canvas.offsetHeight - padding;
    if (height < 0) {
      height = 0;
    } else if (height > canvas.offsetHeight / 2) {
      height = canvas.offsetHeight / 2;
    }
    drawLineSegment(ctx, x, height, width, (i + 1) % 2);
  }
};

After our previous setup code, we need to calculate the pixel width of each line segment. This is the canvas’s on-screen width, divided by the number of segments we’d like to display.

Then, a for-loop goes through each entry in the array, and draws a line segment using the function we defined earlier. We set the x value to the current iteration’s index, times the segment width. height, the desired height of the segment, comes from multiplying our normalized data by the canvas’s height, minus the padding we set earlier. We check a few cases: subtracting the padding might have pushed height into the negative, so we re-set that to zero. If the height of the segment will result in a line being drawn off the top of the canvas, we re-set the height to a maximum value.
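The clamping logic can be pulled out into a small pure function (a hypothetical helper, not part of the original script) to make the three cases easier to see:

```javascript
// Clamp a segment height between 0 and half the canvas height
const clampHeight = (normalized, canvasHeight, padding) => {
  const height = normalized * canvasHeight - padding;
  if (height < 0) return 0; // subtracting the padding pushed it negative
  return Math.min(height, canvasHeight / 2); // don't draw off the top of the canvas
};

console.log(clampHeight(0.05, 200, 20)); // 0 (10 - 20 is negative)
console.log(clampHeight(0.5, 200, 20));  // 80
console.log(clampHeight(1, 200, 20));    // 100 (capped at 200 / 2)
```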

We pass in the segment width, and for the isEven value, we use a neat trick: (i + 1) % 2 means “find the remainder of i + 1 divided by 2.” We check i + 1 because our counter starts at 0. If i + 1 is even, its remainder will be zero (or false). If i + 1 is odd, its remainder will be 1 (or true).
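A quick way to convince yourself the trick works:

```javascript
// (i + 1) % 2 alternates 1, 0, 1, 0, ... as i counts up from 0,
// which is why consecutive segments flip above and below the center line
const parity = [];
for (let i = 0; i < 6; i++) {
  parity.push((i + 1) % 2);
}
console.log(parity); // [1, 0, 1, 0, 1, 0]
```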

And that’s all she wrote. Let’s put it all together. Here’s the whole script, in all its glory.

See the Pen
Audio waveform visualizer
by Matthew Ström (@matthewstrom)
on CodePen.

In the drawAudio() function, we’ve added a few functions to the final call: draw(normalizeData(filterData(audioBuffer))). This chain filters, normalizes, and finally draws the audio we get back from the server.

If everything has gone according to plan, your page should look like this:

Notes on performance

Even with optimizations, this script is still likely running hundreds of thousands of operations in the browser. Depending on the browser’s implementation, this can take many seconds to finish, and will have a negative impact on other computations happening on the page. It also downloads the whole audio file before drawing the visualization, which consumes a lot of data. There are a few ways that we could improve the script to resolve these issues:

  1. Analyze the audio on the server side. Since audio files don’t change often, we can take advantage of server-side computing resources to filter and normalize the data. Then, we only have to transmit the smaller data set; no need to download the mp3 to draw the visualization!
  2. Only draw the visualization when a user needs it. No matter how we analyze the audio, it makes sense to defer the process until well after page load. We could either wait until the element is in view using an intersection observer, or delay even longer until a user interacts with the podcast player.
  3. Progressive enhancement. While exploring Megaphone’s podcast player, I discovered that their visualization is just a facade — it’s the same waveform for every podcast. This could serve as a great default to our (vastly superior) design. Using the principles of progressive enhancement, we could load a default image as a placeholder. Then, we can check to see if it makes sense to load the real waveform before initiating our script. If the user has JavaScript disabled, their browser doesn’t support the Web Audio API, or they have the save-data header set, nothing is broken.

I’d love to hear any thoughts y’all have on optimization, too.

Some closing thoughts

This is a very, very impractical way of visualizing audio. It runs on the client side, processing millions of data points into a fairly straightforward visualization.

But it’s cool! I learned a lot in writing this code, and even more in writing this article. I refactored a lot of the original project and trimmed the whole thing in half. Projects like this might not ever go on to see a production codebase, but they are unique opportunities to develop new skills and a deeper understanding of some of the neat APIs modern browsers support.

I hope this was a helpful tutorial. If you have ideas of how to improve it, or any cool variations on the theme, please reach out! I’m @ilikescience on Twitter.

The post Making an Audio Waveform Visualizer with Vanilla JavaScript appeared first on CSS-Tricks.


A Super Weird CSS Bug That Affects Text Selection

You know how you can style (to some degree) selected text with ::selection? Well, Jeff Starr uncovered a heck of a weird CSS bug.

If you:

  1. Leave that selector empty
  2. Link it from an external stylesheet (rather than <style> block)

Selecting text will have no style at all. 😳😬😕

In other words, if you <link rel="stylesheet" ...> up some CSS that includes these empty selectors:

::-moz-selection { }
::selection { }

Then text appears to be un-selectable. You actually can still select the text, so it’s a bit like you did ::selection { background: transparent; } rather than user-select: none;.
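If you run into this, one workaround (a sketch based on the behavior described above, not something the original post prescribes) is to make sure the rules aren’t empty — either delete them, or give them at least one declaration:

```css
/* Either delete the empty rules, or give them a real declaration */
::-moz-selection { background: #b3d4fc; }
::selection { background: #b3d4fc; }
```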

The fact that it behaves that way in most browsers (Safari being a lone exception) makes it seem like that’s how it is specced, but I’m calling it a bug. A selector with zero properties in it should essentially be ignored, rather than doing something so heavy-handed.

Jeff made a demo. I did as well to confirm it.

See the Pen
Invisible Text Selection Bug
by Chris Coyier (@chriscoyier)
on CodePen.

The post A Super Weird CSS Bug That Affects Text Selection appeared first on CSS-Tricks.


Disabled buttons suck

In this oldie but goodie, Hampus Sethfors digs into why disabled buttons are troubling for usability reasons and he details one example where this was pretty annoying for him. The same has happened to me recently where I clicked a button that looked like a secondary button and… nothing happened.

Here’s another reason why disabled buttons are bad though:

Disabled buttons usually have call to action words on them, like “Send”, “Order” or “Add friend”. Because of that, they often match what users want to do. So people will try to click them.

What’s the alternative? Throw errors? Highlight the precise field that the user needs to update? All those are certainly very helpful to show what the problem is and how to fix it. To play devil’s advocate for a second though, Daniel Koster wrote about how disabled buttons don’t have to suck which is sort of interesting.

But whatever the case, let’s just not get into the topic of disabled links.

Direct Link to ArticlePermalink

The post Disabled buttons suck appeared first on CSS-Tricks.


Pac-Man… in CSS!

You all know the famous Pac-Man video game, right? The game is fun and building an animated Pac-Man character in HTML and CSS is just as fun! I’ll show you how to create one while leveraging the powers of the clip-path property.

See the Pen
Animated Pac-Man
by Maks Akymenko (@maximakymenko)
on CodePen.

Are you excited? Let’s get to it!

First, let’s bootstrap the project

We only need two files for our project: index.html and style.css. We could do this manually by creating an empty folder with the files inside. Or, we can use this as an excuse to work with the command line, if you’d like:

mkdir pacman
cd pacman
touch index.html && touch style.css

Set up the baseline styles

Go to style.css and add basic styling for your project. You could also use things like reset.css and normalize.css to reset the browser styling, but our project is simple and straightforward, so we won’t do much here. One thing you’ll want to do for sure is use Autoprefixer to help with cross-browser compatibility.

We’re basically setting the body to the full width and height of the viewport and centering things right smack dab in the middle of it. Things like the background color and fonts are purely aesthetic.

@import url('https://fonts.googleapis.com/css?family=Slabo+27px&display=swap');

*, *:after, *:before {
  box-sizing: border-box;
}

body {
  background: #000;
  color: #fff;
  padding: 0;
  margin: 0;
  font-family: 'Slabo 27px', serif;
  display: flex;
  height: 100vh;
  justify-content: center;
  align-items: center;
}

Behold, Pac-Man in HTML!

Do you remember how Pac-Man looks? He’s essentially a yellow circle with an opening in the circle for a mouth. He’s a two-dimensional dot-eating machine!

So he has a body (or is he just a head?) and a mouth. He doesn’t even have eyes, but we’ll give him one anyway.

This’ll be our HTML markup:

<div class="pacman">
  <div class="pacman__eye"></div>
  <div class="pacman__mouth"></div>
</div>

Dressing up Pac-Man with CSS

The most interesting part starts! Go to style.css and create the styles for Pac-Man.

First, we’ll create his body/head/whatever-that-is. Again, that’s merely a circle with a yellow background:

.pacman {
  width: 100px;
  height: 100px;
  border-radius: 50%;
  background: #f2d648;
  position: relative;
  margin-top: 20px;
}

His (non-existent) eye is pretty much the same — a circle, but smaller and dark gray. We’ll give it absolute positioning so we can place it right where it needs to be:

.pacman__eye {
  position: absolute;
  width: 10px;
  height: 10px;
  border-radius: 50%;
  top: 20px;
  right: 40px;
  background: #333333;
}

Now we have the basic shape!

See the Pen
Mouthless Pac-Man
by CSS-Tricks (@css-tricks)
on CodePen.

Using clip-path to draw the mouth

Pretty straightforward so far, right? If our Pac-Man is going to eat some dots (and chase some ghosts), he’s going to need a mouth. We’re going to create yet another circle, but make it black this time and overlay it right on top of his yellow head. Then we’re going to use the clip-path property to cut out a slice of it — sort of like an empty pie container except for one last piece of pie.

The dotted border shows where the black circle has been cut out, leaving only a slice of it to be Pac-Man’s mouth.
.pacman__mouth {
  background: #000;
  position: absolute;
  width: 100%;
  height: 100%;
  clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
}

Why a polygon and all those percentages?!?! Notice that we’ve already established the width and height of the element. The polygon() function lets us draw a free-form shape inside the bounds of the element and that shape serves as a mask that only displays that portion of the element. So, we’re using clip-path and telling it we want a shape (polygon()) that contains a series of points at specific positions (100% 74%, 44% 48%, 100% 21%).

The clip-path property can be hard to grok. The CSS-Tricks Almanac helps explain it. There’s also the cool Clippy app that makes it easy to draw clip-path shapes and export the CSS, which is what I did for this tutorial.

So far, so good:

See the Pen
Pac-Man with Mouth
by CSS-Tricks (@css-tricks)
on CodePen.

Make Pac-Man eat

We’ve got a pretty good-looking Pac-Man, but it would be way cooler if there were a way for him to chew his food. I mean, maybe he wants to swallow his food whole, but we’re not going to allow that. Let’s make his mouth open and close instead.

All we need to do is animate the clip-path property, and we’ll use @keyframes for that. I’m naming this animation eat:

@keyframes eat {
  0% {
    clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
  }
  25% {
    clip-path: polygon(100% 60%, 44% 48%, 100% 40%);
  }
  50% {
    clip-path: polygon(100% 50%, 44% 48%, 100% 50%);
  }
  75% {
    clip-path: polygon(100% 59%, 44% 48%, 100% 35%);
  }
  100% {
    clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
  }
}

Again, I used the Clippy app to get the values; however, feel free to pass in your own. Maybe you’ll be able to make the animation even smoother!

We’ve got our keyframes in place, so let’s add it to our .pacman class. We could use the shorthand animation property, but I’ve broken out the properties to make things more self-explanatory so you can see what’s going on:

animation-name: eat;
animation-duration: 0.7s;
animation-iteration-count: infinite;
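For reference, those three longhand properties collapse into a single shorthand declaration:

```css
/* The same animation using the shorthand property */
.pacman {
  animation: eat 0.7s infinite;
}
```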

There we go!

See the Pen
Chewing Pac-Man
by CSS-Tricks (@css-tricks)
on CodePen.

We’ve gotta feed Pac-Man

If Pac-Man can chomp, why not give him some food to eat! Let’s slightly modify our HTML markup to include some food:

<div class="pacman">
  <div class="pacman__eye"></div>
  <div class="pacman__mouth"></div>
  <div class="pacman__food"></div>
</div>

…and let’s style it up. After all, food needs to be appetizing to the eyes as well as the mouth! We’re going to make yet another circle because that’s what the game uses.

.pacman__food {
  position: absolute;
  width: 15px;
  height: 15px;
  background: #fff;
  border-radius: 50%;
  top: 40%;
  left: 120px;
}

See the Pen
Chewing Pac-Man with Teasing Food
by CSS-Tricks (@css-tricks)
on CodePen.

Aw, poor Pac-Man sees the food, but is unable to eat it. Let’s make the food come to him using another sprinkle of CSS animation:

@keyframes food {
  0% {
    transform: translateX(0);
    opacity: 1;
  }
  100% {
    transform: translateX(-50px);
    opacity: 0;
  }
}

Now we only need to pass this animation to our .pacman__food class.

animation-name: food;
animation-duration: 0.7s;
animation-iteration-count: infinite;

We have a happy, eating Pac-Man!

See the Pen
Pac-Man Eating
by CSS-Tricks (@css-tricks)
on CodePen.

Like before, the animation took some tweaking on my end to get it just right. What’s happening is the food starts away from Pac-Man at full opacity, then slides closer to him using transform: translateX() to move from left to right and disappears with zero opacity. Then it’s set to run infinitely so he eats all day, every day!


That’s a wrap! It’s fun to take little things like this and see how HTML and CSS can be used to re-create them. I’d like to see your own Pac-Man (or Ms. Pac-Man!). Share in the comments!

The post Pac-Man… in CSS! appeared first on CSS-Tricks.
