
Taming the Cascade With BEM and Modern CSS Selectors

BEM. Like seemingly all techniques in the world of front-end development, writing CSS in a BEM format can be polarizing. But it is – at least in my Twitter bubble – one of the better-liked CSS methodologies.

Personally, I think BEM is good, and I think you should use it. But I also get why you might not.

Regardless of your opinion on BEM, it offers several benefits, the biggest being that it helps avoid specificity clashes in the CSS Cascade. That’s because, if used properly, any selectors written in a BEM format should have the same specificity score (0,1,0). I’ve architected the CSS for plenty of large-scale websites over the years (think government, universities, and banks), and it’s on these larger projects where I’ve found that BEM really shines. Writing CSS is much more fun when you have confidence that the styles you’re writing or editing aren’t affecting some other part of the site.

There are actually exceptions where it is deemed totally acceptable to add specificity. For instance: the :hover and :focus pseudo-classes. Attached to a BEM class, they bring the selector up to a specificity score of 0,2,0. Another is pseudo elements — like ::before and ::after — which bring a class selector up to a specificity score of 0,1,1. For the rest of this article though, let’s assume we don’t want any other specificity creep. 🤓
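
To make those scores concrete, here’s a quick sketch using made-up class names; the pseudo-class and pseudo-element each nudge a plain 0,1,0 BEM class up to the scores mentioned above:

/* 0,2,0: one class plus one pseudo-class */
.button--primary:hover {
  background-color: darkblue;
}

/* 0,1,1: one class plus one pseudo-element */
.card__title::before {
  content: "★ ";
}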

But I’m not really here to sell you on BEM. Instead, I want to talk about how we can use it alongside modern CSS selectors — think :is(), :has(), :where(), etc. — to gain even more control of the Cascade.

What’s this about modern CSS selectors?

The CSS Selectors Level 4 spec gives us some powerful new(ish) ways to select elements. Some of my favorites include :is(), :where(), and :not(), each of which is supported by all modern browsers and is safe to use on almost any project nowadays.

:is() and :where() are basically the same thing except for how they impact specificity. Specifically, :where() always has a specificity score of 0,0,0. Yep, even :where(button#widget.some-class) has no specificity. Meanwhile, the specificity of :is() is the element in its argument list with the highest specificity. So, already we have a Cascade-wrangling distinction between two modern selectors that we can work with.
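
To make that distinction concrete, here’s a small sketch (the selectors are made up). Both rules match exactly the same links, but their specificity is wildly different:

/* Specificity: 1,0,1 because :is() takes its most specific argument (#widget) */
:is(button, #widget, .some-class) a {
  color: red;
}

/* Specificity: 0,0,1 because only the trailing `a` counts; :where() contributes nothing */
:where(button, #widget, .some-class) a {
  color: blue;
}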

The incredibly powerful :has() relational pseudo-class is also rapidly gaining browser support (and is the biggest new feature of CSS since Grid, in my humble opinion). However, at the time of writing, browser support for :has() isn’t quite good enough for use in production just yet.

Lemme stick one of those pseudo-classes in my BEM and…

/* ❌ specificity score: 0,2,0 */
.something:not(.something--special) {
  /* styles for all somethings, except for the special somethings */
}

Whoops! See that specificity score? Remember, with BEM we ideally want our selectors to all have a specificity score of 0,1,0. Why is 0,2,0 bad? Consider this same example, expanded:

.something:not(.something--special) {
  color: red;
}
.something--special {
  color: blue;
}

Even though the second selector is last in the source order, the first selector’s higher specificity (0,2,0) wins, and the color of .something--special elements will be set to red. That is, assuming your BEM is written properly and the selected element has both the .something base class and .something--special modifier class applied to it in the HTML.

Used carelessly, these pseudo-classes can impact the Cascade in unexpected ways. And it’s these sorts of inconsistencies that can create headaches down the line, especially on larger and more complex codebases.

Dang. So now what?

Remember what I was saying about :where() and the fact that its specificity is zero? We can use that to our advantage:

/* ✅ specificity score: 0,1,0 */
.something:where(:not(.something--special)) {
  /* etc. */
}

The first part of this selector (.something) gets its usual specificity score of 0,1,0. But :where() — and everything inside it — has a specificity of 0, which does not increase the specificity of the selector any further.

:where() allows us to nest

Folks who don’t care as much as me about specificity (and that’s probably a lot of people, to be fair) have had it pretty good when it comes to nesting. With some carefree keyboard strokes, we may wind up with CSS like this (note that I’m using Sass for brevity):

.card { ... }

.card--featured {
  /* etc. */

  .card__title { ... }
  .card__img { ... }
}

.card__title { ... }
.card__img { ... }

In this example, we have a .card component. When it’s a “featured” card (using the .card--featured class), the card’s title and image need to be styled differently. But, as we now know, the code above results in a specificity score that is inconsistent with the rest of our system.

A die-hard specificity nerd might have done this instead:

.card { ... }
.card--featured { ... }
.card__title { ... }
.card__title--featured { ... }
.card__img { ... }
.card__img--featured { ... }

That’s not so bad, right? Frankly, this is beautiful CSS.

There is a downside in the HTML though. Seasoned BEM authors are probably painfully aware of the clunky template logic that’s required to conditionally apply modifier classes to multiple elements. In this example, the HTML template needs to conditionally add the --featured modifier class to three elements (.card, .card__title, and .card__img) though probably even more in a real-world example. That’s a lot of if statements.

The :where() selector can help us write a lot less template logic — and fewer BEM classes to boot — without adding to the level of specificity.

.card { ... }
.card--featured { ... }

.card__title { ... }
:where(.card--featured) .card__title { ... }

.card__img { ... }
:where(.card--featured) .card__img { ... }

Here’s the same thing but in Sass (note the trailing ampersands):

.card { ... }
.card--featured { ... }
.card__title {
  /* etc. */
  :where(.card--featured) & { ... }
}
.card__img {
  /* etc. */
  :where(.card--featured) & { ... }
}

Whether or not you should opt for this approach over applying modifier classes to the various child elements is a matter of personal preference. But at least :where() gives us the choice now!

What about non-BEM HTML?

We don’t live in a perfect world. Sometimes you need to deal with HTML that is outside of your control. For instance, a third-party script that injects HTML that you need to style. That markup often isn’t written with BEM class names. In some cases that markup doesn’t use classes at all, but IDs!

Once again, :where() has our back. This solution is slightly hacky, as we need to reference the class of an element somewhere further up the DOM tree that we know exists.

/* ❌ specificity score: 1,0,0 */
#widget {
  /* etc. */
}

/* ✅ specificity score: 0,1,0 */
.page-wrapper :where(#widget) {
  /* etc. */
}

Referencing a parent element feels a little risky and restrictive though. What if that parent class changes or isn’t there for some reason? A better (but perhaps equally hacky) solution would be to use :is() instead. Remember, the specificity of :is() is equal to the most specific selector in its selector list.

So, instead of referencing a class we know (or hope!) exists with :where(), as in the above example, we could reference a made up class and the <body> tag.

/* ✅ specificity score: 0,1,0 */
:is(.dummy-class, body) :where(#widget) {
  /* etc. */
}

The ever-present body will help us select our #widget element, and the presence of the .dummy-class class inside the same :is() gives the body selector the same specificity score as a class (0,1,0)… and the use of :where() ensures the selector doesn’t get any more specific than that.

That’s it!

That’s how we can leverage the modern specificity-managing features of the :is() and :where() pseudo-classes alongside the specificity collision prevention that we get when writing CSS in a BEM format. And in the not too distant future, once :has() gains Firefox support (it’s currently supported behind a flag at the time of writing) we’ll likely want to pair it with :where() to undo its specificity.
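For instance, once :has() is broadly available, a sketch like this (with made-up class names) would keep a :has()-based rule at our usual 0,1,0, where .card:has(.card__badge) on its own would score 0,2,0:

.card:where(:has(.card__badge)) {
  border: 2px solid gold;
}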

Whether you go all-in on BEM naming or not, I hope we can agree that having consistency in selector specificity is a good thing!



The Optional Chaining Operator, “Modern” Browsers, and My Mom

Jim Nielsen’s mom couldn’t open a website. Jim worked on confirming the issue and documented how he got to the bottom of it:

“[…] well it can’t be a browser issue. It’s not like my Mom is using Internet Explorer! She has relatively modern tech: an iPad (Safari) and a Chromebox (Google Chrome).”

But the more I thought about it—a website that works on some devices but not on others—the more I realized this had to be a browser issue.

So I looked at the version of Chrome on my parents’ computer. Version 76! I knew we were at ninety-something in 2022, so I figured that was the culprit. “I’ll just update Chrome,” I thought.

Turns out, you can’t.

I absolutely celebrate the idea of evergreen browsers. It’s one of the absolute most important things that has happened to the web in recent-ish years. It enables a much quicker evolution for the web, and all browsers are taking advantage of it.

But even browsers that I think of as evergreen aren’t always. Eventually, hardware limits the software. The logic isn’t as simple as “if Chrome, then evergreen,” for example.

Safari normally updates via system updates, but in this case it was a first-generation iPad Air stuck on iOS 12, and no more updates were possible for what Apple considers a “vintage” device. Same deal with a Chromebook stuck at Chrome 76.

A couple of little optional chaining question mark (?.) characters borked the whole dang site. Unfortunate. That “serve two bundles, modern and legacy” idea is still pretty smart.
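
For reference, that idea is typically implemented with the module/nomodule pattern. A quick sketch, with placeholder file names:

<!-- Browsers that understand ES modules load the modern bundle -->
<script type="module" src="/js/bundle.modern.js"></script>

<!-- Older browsers ignore type="module" and run the transpiled fallback instead -->
<script nomodule src="/js/bundle.legacy.js"></script>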


Speaking of moms, I was reminded of an older episode of ShopTalk we did with Paul Irish’s mom that has a lot of those “regular person using the internet” vibes.


Less Absolute Positioning With Modern CSS

Ahmad Shadeed blogs the sentiment that we might not need to lean on position: absolute as much as we might have in the past. For one thing: stacking elements. For example, if you have a stack of elements that should all go on top of each other…

.stack {
  display: grid;
}
.stack > * {
  grid-area: 1 / -1;
}

All the elements occupy the same grid cell at that point, but you can still use alignment and justification to move stuff around and get it looking and behaving how you want.

What you are really saying with position: absolute is, “I want this element to be entirely removed from the flow such that it doesn’t affect other elements and other elements don’t affect it.” Sometimes you do, but arguably less often than your existing CSS muscle memory would have you believe.

I’ll snag one of Ahmad’s ideas here:

Both the tag and the title are positioned in a way we might automatically think of using absolute positioning. But again, something like CSS Grid has all of the alignment features we need to not only stack them vertically, but place them right where we want.
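
Here’s a rough sketch of that idea (the class names and placement are mine, not Ahmad’s exact demo). The image, tag, and title all share one grid cell, and alignment properties do the positioning that position: absolute used to:

.card {
  display: grid;
}

/* Every direct child occupies the same cell */
.card > * {
  grid-area: 1 / -1;
}

/* Pin the tag to the top-left corner of the card */
.card__tag {
  place-self: start;
  margin: 0.5rem;
}

/* Pin the title to the bottom-left of the card */
.card__title {
  align-self: end;
  justify-self: start;
  padding: 0.5rem;
}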


How You Might Build a Modern Day Webring

I’m sure different people picture different things when they think about webrings, so let me clarify what I picture. I see an element on a website that:

  1. Signifies this site is part of a webring
  2. Allows you to move to the next or previous site of the webring
  3. Maybe has other functionality like going to a “random” site or seeing the complete list

But then another major thing:

  4. Site owners don’t have to do much. They just plop it on the site and a functional webring UI is there.

So like this:

A Calvin & Hobbes webring UI that comes up all the time when you search the web about webrings

How did it used to work? You know what? I have no idea. My guess is that it was an ancient <frameset><frame /></frameset> situation, but this stuff is before my time a bit. How might we do this today?

Well, we could use an <iframe>, I guess. That’s what sites like YouTube do when they give “embed code” as an HTML snippet. Sites like Twitter and CodePen give you a <div> (or whatever semantic HTML) and a <script>, so that there can be fallback content and the script enhances it into an <iframe>. An <iframe> might be fine, as it asks very little of the site owner, but they are known to be fairly bad for performance. It’s a whole document inside another document, after all. Plus, they don’t offer much by the way of customization. You get what you get.

Another problem with an iframe is that… how would it know what site it is currently embedded on? Maybe a URL parameter? Maybe a postMessage from the parent page? Not super clean if you ask me.

I think a Web Component might be the way to go here, as long as we’re thinking modern. We could make a custom element like <webring-*>. Let’s do that, and make it for CSS sites specifically. That’ll give us the opportunity to send in the current site with an attribute, like this:

<webring-css site="http://css-tricks.com">
  This is gonna boot itself up into webring in a minute.
</webring-css>

That solves the technology choice. Now we need to figure out the global place to store the data. Because, well, a webring needs to be able to be updated. Sites need to be able to be added and removed from the webring without the other sites on the webring needing to do anything.

For fairly simple data like this, a JSON file on GitHub seems to be a perfectly modern choice. Let’s do that.

Now everyone can see all the sites in the webring in a fairly readable fashion. Plus, they could submit Pull Requests against it to add/remove sites (feel free).

Getting that data from our web component is a fetch away:

fetch(`https://raw.githubusercontent.com/CSS-Tricks/css-webring/main/webring.json`)
  .then((response) => response.json())
  .then((sites) => {
    // Got the data.
  });

We’ll fire that off when our web component mounts. Let’s scaffold that…

const DATA_FOR_WEBRING = `https://raw.githubusercontent.com/CSS-Tricks/css-webring/main/webring.json`;

const template = document.createElement("template");
template.innerHTML = `
<style>
  /* styles */
</style>

<div class="webring">
  <!-- content -->
</div>`;

class WebRing extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: "open" });
    this.shadowRoot.appendChild(template.content.cloneNode(true));

    fetch(DATA_FOR_WEBRING)
      .then((response) => response.json())
      .then((sites) => {
        // update template with data
      });
  }
}

window.customElements.define("webring-css", WebRing);

The rest of this isn’t so terribly interesting that I feel like we need to go through it step by step. I’ll just blog-sketch it for you (there’s a rough sketch of the fetch callback after the list):

  1. Pull the attribute off the web component so we can see what the current site is
  2. Match the current site in the data
  3. Build out Next, Previous, and Random links from the data in a template
  4. Update the HTML in the template
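
A rough, hypothetical sketch of those four steps, slotted into the connectedCallback fetch from the scaffold above (the site property names and markup are placeholders, not the actual CSS-Tricks webring code):

fetch(DATA_FOR_WEBRING)
  .then((response) => response.json())
  .then((sites) => {
    // 1. Pull the attribute off the web component
    const currentSite = this.getAttribute("site");

    // 2. Match the current site in the data
    const index = sites.findIndex((site) => site.url === currentSite);

    // 3. Build Next, Previous, and Random links from the data
    const prev = sites[(index - 1 + sites.length) % sites.length];
    const next = sites[(index + 1) % sites.length];
    const random = sites[Math.floor(Math.random() * sites.length)];

    // 4. Update the HTML in the template
    this.shadowRoot.querySelector(".webring").innerHTML = `
      <a href="${prev.url}">Previous</a>
      <a href="${random.url}">Random</a>
      <a href="${next.url}">Next</a>
    `;
  });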

And voilà!

Could you do a lot more with this, like error handling, better design, and better everything?

Yes.



In Defense of Tables and Floats in Modern Day Development

Twenty-plus years ago, tables were the main way web pages were created in HTML. It gave web builders consistent control of constructing pages with some “design.” No longer did sites only have to be top-to-bottom in a linear manner — they could be set up with columns that align left-to-right and top-to-bottom. Back then, it was seen as a huge breakthrough.

Tables, however, were never designed to lay out pages and, in fact, have all sorts of problems when used that way today. It was a convenient hack, but at the time, a very welcome one, particularly for those trying to achieve a super-specific layout that previous ways couldn’t handle.

Fast-forward to modern days and it’s now obvious that there were tons of issues with the table layout approach. Accessibility is a big one. <table>, <th>, <tr>, and <td> elements aren’t exactly accessible, especially when they’re nested several levels deep. Screen readers — the devices that read web content and serve as a measure of accessibility compliance — struggle to parse them into cohesive blocks of content. That’s not to say tables are bad; they simply were never intended as a layout mechanism.

Check out this table layout. Feel free to run it through VoiceOver or whatever screen reading software you have access to.

Yes, that example looks very much like a typical website layout, but it’s crafted solely with a table. You can see how quickly it becomes bloated and inaccessible the very moment we start using it for anything other than tabular data.

So after more than 20 years of being put through the wringer, you might think we should avoid tables altogether. If you’ve never shipped a table-based layout, you’ve undoubtedly heard war stories from those of us who have, and those stories are never kind. It’s like we’ve sort of made tables the “Internet Explorer of HTML elements.”

But that’s not totally fair because tables do indeed fill a purpose on the web and they are indeed accessible when they are used correctly.

Tables are designed to handle data that is semantically related and is best presented in a linear-like format. So, yes, we can use tables today in the year 2020, and that will likely continue to be true many years from now.

Here’s a table being used to display exactly what it’s intended to: tabular data!
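
A minimal sketch of what that markup looks like (the data is made up). The caption and scope attributes are what give screen readers the context they need:

<table>
  <caption>Monthly visits by browser</caption>
  <thead>
    <tr>
      <th scope="col">Browser</th>
      <th scope="col">Visits</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">Firefox</th>
      <td>12,480</td>
    </tr>
    <tr>
      <th scope="row">Chrome</th>
      <td>41,093</td>
    </tr>
  </tbody>
</table>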

With the push toward web standards in the early 2000s, tables were pushed aside as a layout solution in favor of other approaches, most notably the CSS float property. Designers and developers alike rejoiced because, for the first time, we had a true separation of concerns that let markup do the markup-y things it needs to do, and CSS to do the visual stuff it needs to do. That made code both cleaner and way easier to maintain and, as a result, we could actually focus on true standards, like accessibility, and even other practices, like SEO.

See (or rather hear) the difference in this example?

Many of us have worked with floats in the past. They were originally designed to allow content to flow around images that are floated either to the left or right, and still be in the document flow. Now that we’ve gotten newer layout features — again, like grid and flexbox — floats, too, have sort of fallen by the wayside, perhaps either because there are better ways to accomplish what they do, or because they also got the same bad rap as tables after being (ab)used for a long time.

But floats are still useful and relevant! In fact, we have to use them for the shape-outside property to work.

A legitimate float use case could be for wrapping content around a styled <blockquote>.
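
Something along these lines, as a quick sketch (not the article’s exact demo):

/* Pull the quote out of the text column and let paragraphs wrap around it */
blockquote {
  float: right;
  width: 18rem;
  margin: 0 0 1rem 1.5rem;
  padding: 1rem;
  border-left: 4px solid steelblue;
}

/* And remember: shape-outside only works on floated elements */
.portrait {
  float: left;
  shape-outside: circle(50%);
  clip-path: circle(50%);
  margin-right: 1rem;
}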

CSS features like grid, flexbox, and multicolumn layouts are among the wonderful tools we have to work with these days. With even more layout possibilities, cleaner and more accessible code, they will remain our go-to layout approaches for many years to come.

No hacks or extra code in this flexbox example of the same layout we’ve looked at throughout this article:
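
The gist of it is something like this (a rough sketch with made-up class names, not the exact embedded demo):

.layout {
  display: flex;
  flex-wrap: wrap;
}

/* Header, nav, and footer each take a full row */
.layout > header,
.layout > nav,
.layout > footer {
  flex: 1 1 100%;
}

/* Content and sidebar share the middle row, roughly 3:1 */
.layout > main {
  flex: 3 1 20rem;
}

.layout > aside {
  flex: 1 1 10rem;
}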


So, next time you find yourself considering tables or floats, reach for them with confidence! Well, when you know the situation aligns with their intended use. It’s not like I’m expecting you to walk away from this with a reinvigorated enthusiasm for tables and floats; only that, when used correctly, they are perfectly valid techniques, and even continue to be indispensable parts of our overall toolset.



10 modern layouts in 1 line of CSS


“The Modern Web”

A couple of interesting articles making the rounds:

I like Tom’s assertion that React (which he’s using as a stand-in for JavaScript frameworks in general) has an ideal usage:

There is a sweet spot of React: in moderately interactive interfaces. Complex forms that require immediate feedback, UIs that need to move around and react instantly. That’s where it excels.

If there is anything I hope for the world of web design and development, it’s that we get better at picking the right tools for the job.

I heard several people hone in on this:

I can, for example, guarantee that this blog is faster than any Gatsby blog (and much love to the Gatsby team) because there is nothing that a React static site can do that will make it faster than a non-React static site.

One reaction was hell yes. React is a bunch of JavaScript and it does lots of stuff, but does not grant superpowers that make the web faster than it was without it. Another reaction was: well, it actually does. That’s kind of the whole point of SPAs: not needing to reload the page. Instead, we’re able to make a trimmed network request for the new data needed for a new page and re-render only what is necessary.

Rich digs into that even more:

When I tap on a link on Tom’s JS-free website, the browser first waits to confirm that it was a tap and not a brush/swipe, then makes a request, and then we have to wait for the response. With a framework-authored site with client-side routing, we can start to do more interesting things. We can make informed guesses based on analytics about which things the user is likely to interact with and preload the logic and data for them. We can kick off requests as soon as the user first touches (or hovers) the link instead of waiting for confirmation of a tap — worst case scenario, we’ve loaded some stuff that will be useful later if they do tap on it. We can provide better visual feedback that loading is taking place and a transition is about to occur. And we don’t need to load the entire contents of the page — often, we can make do with a small bit of JSON because we already have the JavaScript for the page. This stuff gets fiendishly difficult to do by hand.

That’s what makes this stuff so easy to argue about. Everyone has good points. When we try to speak on behalf of the entire web, it’s tough for us all to agree. But the web is too big for broad, sweeping assertions.

Do people reach for React-powered SPAs too much? Probably, but that’s not without reason. There is innovation there that draws people in. The question is, how can we improve it?

From a front-of-the-front-end perspective, the fact that front-end frameworks like React encourage (nay, demand) us to write a front-end in components is compelling all by itself.

There is optimism and pessimism in both posts. The ending sentences of both are starkly different.


Avoid Heavy Babel Transformations by (Sometimes) Not Writing Modern JavaScript

It’s hard to imagine writing production-ready JavaScript without a tool like Babel. It’s been an undisputed game-changer in making modern code accessible to a wide range of users. With this challenge largely out of the way, there’s not much holding us back from really leaning into the features that modern specifications have to offer.

But at the same time, we don’t want to lean in too hard. If you take an occasional peek into the code your users are actually downloading, you’ll notice that sometimes, seemingly straightforward Babel transformations can be especially bloated and complex. And in a lot of those cases, you can perform the same task using a simple, “old school” approach — without the heavy baggage that can come from preprocessing.

Let’s take a closer look at what I’m talking about using Babel’s online REPL — a great tool for quickly testing transformations. Targeting browsers that don’t support ES2015+, we’ll use it to highlight just a few of the times when you (and your users) might be better off choosing an “old school” way to do something in JavaScript, despite a “new” approach popularized by modern specifications.

As we go along, keep in mind that this is less about “old vs. new” and more about choosing the best implementation that gets the job done while bypassing any expected side effects of our build processes.

Let’s build!

Preprocessing a for..of loop

The for..of loop is a flexible, modern means of looping over iterable collections. It’s often used in a way very similar to a traditional for loop, which may lead you to think that Babel’s transformation would be simple and predictable, especially if you’re just using it with an array. Not quite. The code we write may only be 98 bytes:

function getList() {
  return [1, 2, 3];
}

for (let value of getList()) {
  console.log(value);
}

But the output results in 1.8kb (a 1736% increase!):

 "use strict";  function _createForOfIteratorHelper(o) { if (typeof Symbol === "undefined" || o[Symbol.iterator] == null) { if (Array.isArray(o) || (o = _unsupportedIterableToArray(o))) { var i = 0; var F = function F() {}; return { s: F, n: function n() { if (i >= o.length) return { done: true }; return { done: false, value: o[i++] }; }, e: function e(_e) { throw _e; }, f: F }; } throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); } var it, normalCompletion = true, didErr = false, err; return { s: function s() { it = o[Symbol.iterator](); }, n: function n() { var step = it.next(); normalCompletion = step.done; return step; }, e: function e(_e2) { didErr = true; err = _e2; }, f: function f() { try { if (!normalCompletion && it.return != null) it.return(); } finally { if (didErr) throw err; } } }; }  function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$  /.test(n)) return _arrayLikeToArray(o, minLen); }  function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }  function getList() {   return [1, 2, 3]; }  var _iterator = _createForOfIteratorHelper(getList()),     _step;  try {   for (_iterator.s(); !(_step = _iterator.n()).done;) {     var value = _step.value;     console.log(value);   } } catch (err) {   _iterator.e(err); } finally {   _iterator.f(); }

Why didn’t it just use a for loop for this? It’s an array! Apparently, in this case, Babel doesn’t know it’s handling an array. All it knows is that it’s working with a function that could return any iterable (array, string, NodeList), and it needs to be ready for whatever that value could be, based on the ECMAScript specification for the for..of loop.

We could drastically slim the transformation by explicitly passing an array to it, but that’s not always easy in a real application. So, to leverage the benefits of loops (like break and continue statements), while confidently keeping bundle size slim, we might just reach for the for loop. Sure, it’s old school, but it gets the job done.

function getList() {
  return [1, 2, 3];
}

var list = getList();

for (var i = 0; i < list.length; i++) {
  console.log(list[i]);
}

Dave Rupert blogged about this exact situation a few years ago and found that forEach, even polyfilled, was a good solution for him.

Preprocessing Array […Spread]

Similar deal here. The spread operator can be used with more than one class of objects (not just arrays), so when Babel isn’t aware of the type of data it’s dealing with, it needs to take precautions. Unfortunately, those precautions can result in some serious byte bloat.

Here’s the input, weighing in at a slim 81 bytes:

function getList () {
  return [4, 5, 6];
}

console.log([1, 2, 3, ...getList()]);

The output balloons to 1.3kb:

"use strict";  function _toConsumableArray(arr) { return _arrayWithoutHoles(arr) || _iterableToArray(arr) || _unsupportedIterableToArray(arr) || _nonIterableSpread(); }  function _nonIterableSpread() { throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); }  function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$  /.test(n)) return _arrayLikeToArray(o, minLen); }  function _iterableToArray(iter) { if (typeof Symbol !== "undefined" && Symbol.iterator in Object(iter)) return Array.from(iter); }  function _arrayWithoutHoles(arr) { if (Array.isArray(arr)) return _arrayLikeToArray(arr); }  function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }  function getList() {   return [4, 5, 6]; }  console.log([1, 2, 3].concat(_toConsumableArray(getList())));

Instead, we could cut to the chase and just use concat(). The difference in the amount of code you need to write isn’t significant, it does exactly what it’s intended to do, and there’s no need to worry about that extra bloat.

function getList () {
  return [4, 5, 6];
}

console.log([1, 2, 3].concat(getList()));

A more common example: Looping over a NodeList

You might have seen this more than a few times. We often need to query for several DOM elements and loop over the resulting NodeList. In order to use forEach on that collection, it’s common to spread it into an array.

[...document.querySelectorAll('.my-class')].forEach(function (node) {
  // do something
});

But like we saw, this makes for some heavy output. As an alternative, there’s nothing wrong with running that NodeList through a method on the Array prototype, like slice. Same result, but far less baggage:

[].slice.call(document.querySelectorAll('.my-class')).forEach(function (node) {
  // do something
});

A note about “loose” mode

It’s worth calling out that some of this array-related bloat can also be avoided by leveraging @babel/preset-env‘s loose mode, which compromises in staying totally true to the semantics of modern ECMAScript, but offers the benefit of slimmer output. In many situations, that might work just fine, but you’re also necessarily introducing risk into your application that you may come to regret later on. After all, you’re telling Babel to make some rather bold assumptions about how you’re using your code. 
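
If you do weigh that trade-off and decide it’s worth it, enabling loose mode is a small change to the preset’s options, something like:

// babel.config.js
module.exports = {
  presets: [
    ["@babel/preset-env", { loose: true }]
  ]
};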

The main takeaway here is that sometimes, it might be more suitable to be more intentional about the features you use, rather than investing more time into tweaking your build process and potentially wrestling with unseen consequences later.

Preprocessing default parameters

This is a more predictable operation, but when it’s repeatedly used throughout a codebase, the bytes can add up. ES2015 introduced default parameter values, which tidy up a function’s signature when it accepts optional arguments. Here we are at 75 bytes:

function getName(name = "my friend") {
  return `Hello, ${name}!`;
}

But Babel can be a little more verbose than expected with its transformation, resulting in 169 bytes:

"use strict"; 
 function getName() {   var name = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : "my friend";   return "Hello, ".concat(name, "!"); }

As an alternative, we could avoid using the arguments object altogether, and simply check if a parameter is undefined. We lose the self-documenting nature that default parameters provide, but if we’re really pinching bytes, it might be worth it. And depending on the use case, we might even be able to get away with checking for a falsey value to slim it down even more.

function getName(name) {
  name = name || "my friend";
  return `Hello, ${name}!`;
}

Preprocessing async/await

The syntactic sugar of async/await over the Promise API is one of my favorite additions to JavaScript. Even so, out of the box, Babel can make quite the mess out of it.

157 bytes to write:

async function fetchSomething(url) {
  const response = await fetch(url);
  return await response.json();
}

fetchSomething("https://google.com");

1.5kb when compiled:

"use strict";  function asyncGeneratorStep(gen, resolve, reject, _next, _throw, key, arg) { try { var info = gen[key](arg); var value = info.value; } catch (error) { reject(error); return; } if (info.done) { resolve(value); } else { Promise.resolve(value).then(_next, _throw); } }  function _asyncToGenerator(fn) { return function () { var self = this, args = arguments; return new Promise(function (resolve, reject) { var gen = fn.apply(self, args); function _next(value) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "next", value); } function _throw(err) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "throw", err); } _next(undefined); }); }; }  function fetchSomething(_x) {   return _fetchSomething.apply(this, arguments); }  function _fetchSomething() {   _fetchSomething = _asyncToGenerator( /*#__PURE__*/regeneratorRuntime.mark(function _callee(url) {     var response;     return regeneratorRuntime.wrap(function _callee$  (_context) {       while (1) {         switch (_context.prev = _context.next) {           case 0:             _context.next = 2;             return fetch(url);            case 2:             response = _context.sent;             _context.next = 5;             return response.json();            case 5:             return _context.abrupt("return", _context.sent);            case 6:           case "end":             return _context.stop();         }       }     }, _callee);   }));   return _fetchSomething.apply(this, arguments); }  fetchSomething("https://google.com");

You’ll notice that Babel doesn’t convert async code into promises out of the box. Instead, it’s transformed into generators that rely on the regenerator-runtime library, making for a lot more code than what’s written in our IDE. Thankfully, it’s possible to go the Promise route by means of a plugin, like babel-plugin-transform-async-to-promises. Instead of that 1.5kb output, we end up with much less, at 638 bytes:

"use strict"; 
 function _await(value, then, direct) {   if (direct) {     return then ? then(value) : value;   } 
   if (!value || !value.then) {     value = Promise.resolve(value);   } 
   return then ? value.then(then) : value; } 
 var fetchSomething = _async(function (url) {   return _await(fetch(url), function (response) {     return _await(response.json());   }); }); 
 function _async(f) {   return function () {     for (var args = [], i = 0; i < arguments.length; i++) {       args[i] = arguments[i];     } 
     try {       return Promise.resolve(f.apply(this, args));     } catch (e) {       return Promise.reject(e);     }   }; }

But, like mentioned before, there’s risk in relying on a plugin to ease pain like this. When doing so, we’re impacting transformations in the entire project, and also introducing another build dependency. Instead, we could consider just sticking with the Promise API.

function fetchSomething(url) {
  return new Promise(function (resolve) {
    fetch(url).then(function (response) {
      return response.json();
    }).then(function (data) {
      return resolve(data);
    });
  });
}

Preprocessing classes

For more syntactic sugar, there’s the class syntax introduced with ES2015, which provides a streamlined way to leverage JavaScript’s prototypical inheritance. But if we’re using Babel to transpile for older browsers, there’s nothing sweet about the output.

The input is only 120 bytes:

class Robot {
  constructor(name) {
    this.name = name;
  }

  speak() {
    console.log(`I'm ${this.name}!`);
  }
}

But the output results in 989 bytes:

"use strict";  function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }  function _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }  function _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }  var Robot = /*#__PURE__*/function () {   function Robot(name) {     _classCallCheck(this, Robot);      this.name = name;   }    _createClass(Robot, [{     key: "speak",     value: function speak() {       console.log("I'm ".concat(this.name, "!"));     }   }]);    return Robot; }();

Much of the time, unless you’re doing some fairly involved inheritance, it’s straightforward enough to use a pseudoclassical approach. It requires slightly less code to write, and the resulting interface is virtually identical to a class.

function Robot(name) {
  this.name = name;

  this.speak = function() {
    console.log(`I'm ${this.name}!`);
  }
}

const rob = new Robot("Bob");
rob.speak(); // "I'm Bob!"

Strategic considerations

Keep in mind that, depending on your application’s audience, a lot of what you’re reading here might mean that your strategies to keep bundles slim may take different shapes.

For example, your team might have already made a deliberate decision to drop support for Internet Explorer and other “legacy” browsers (which is becoming more and more common, given that the vast majority of browsers support ES2015+). If that’s the case, your time might best be spent in auditing the list of browsers your build system is targeting, or making sure you’re not shipping unnecessary polyfills.
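
As a concrete example of that kind of audit, the relevant knobs live in the preset’s options. The targets below are purely illustrative, not a recommendation:

// babel.config.js
module.exports = {
  presets: [
    ["@babel/preset-env", {
      // Only transpile for the browsers you actually need to support
      targets: "defaults, not IE 11",
      // Only include the polyfills your code actually uses
      useBuiltIns: "usage",
      corejs: 3
    }]
  ]
};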

And even if you are still obligated to support older browsers (or maybe you love some of the modern APIs too much to give them up), there are other options to enable you to ship heavy, preprocessed bundles only to the users that need them, like a differential serving implementation.

The important thing isn’t so much about which strategy (or strategies) your team chooses to prioritize, but more about intentionally making those decisions in light of the code being spit out by your build system. And that all starts by cracking open that dist directory to take a peek.

Pop open that hood

I’m a big fan of the new features modern JavaScript continues to provide. They make for applications that are easier to write, maintain, scale, and especially read. But as long as writing JavaScript means preprocessing JavaScript, it’s important to make sure that we have a finger on the pulse of what these features mean for the users that we ultimately aim to serve.

And that means popping the hood of your build process once in a while. At best, you might be able to avoid especially hefty Babel transformations by using a simpler, “classic” alternative. And at worst, you’ll come to better understand (and appreciate) the work that Babel does all the more.


Modern CSS Solutions for Old CSS Problems

This is a hell of a series by Stephanie Eckles. It’s a real pleasure watching CSS evolve and solve problems in clear and elegant ways.

Just today I ran across this little jab at CSS in a StackOverflow answer from 2013.

Typical CSS. They provide CSS animations so it's not done in Javascript, and the styling is all in one place, but then if you want to do anything more than the bare basics then you have to implement a maze of hacks. Why don't they just implement things that make it easier for developers? – Jonathan. Aug 23 '13 at 13:52

This particular jab was about CSS lacking a way to pause between @keyframe animations, which is still not something CSS can do without hacks. Aside from hand-wavy and ignorable “CSS is bad” statements, I see a lot less of this. CSS is just getting better.
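
For reference, the classic hack is to bake the pause into the keyframe percentages themselves. A quick sketch: a one-second pulse followed by a three-second pause, faked by stretching the animation to four seconds and holding still for the last 75% of the timeline:

.pulse {
  animation: pulse-with-pause 4s infinite;
}

@keyframes pulse-with-pause {
  /* The actual pulse happens in the first quarter of the timeline... */
  0% { transform: scale(1); }
  12.5% { transform: scale(1.15); }
  25% { transform: scale(1); }
  /* ...and the rest of the timeline just holds the resting state */
  100% { transform: scale(1); }
}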


The Unseen Performance Costs of Modern CSS-in-JS Libraries

This article is full of a bunch of data from Aggelos Arvanitakis. But lemme just focus on his final bit of advice:

Investigate whether a zero-runtime CSS-in-JS library can work for your project. Sometimes we tend to prefer writing CSS in JS for the DX (developer experience) it offers, without a need to have access to an extended JS API. If your app doesn’t need support for theming and doesn’t make heavy and complex use of the css prop, then a zero-runtime CSS-in-JS library might be a good candidate.

“Zero-runtime” meaning you author your styles in a CSS-in-JS syntax, but what is produced is .css files like any other CSS preprocessor would produce. This shifts the tool into a totally different category. It’s a developer tool only, rather than a tool where the user of the website pays the price of using it.

The flagship zero-runtime CSS-in-JS library is Linaria. I think the syntax looks really nice.

import { css } from 'linaria';
import fonts from './fonts';

const header = css`
  text-transform: uppercase;
  font-family: ${fonts.heading};
`;

<h1 className={header}>Hello world</h1>;

I’m also a fan of the “just do scoping for me” ability that CSS Modules brings, which can be done zero-runtime style.
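
If you haven’t used CSS Modules, the idea is roughly this (file names made up, and assuming a bundler configured for CSS Modules):

// Button.js
// button.module.css contains: .button { background: rebeccapurple; color: white; }
// At build time the class name is hashed into something unique, so it can't
// collide with a .button class anywhere else in the app.
import styles from "./button.module.css";

export function Button({ children }) {
  return <button className={styles.button}>{children}</button>;
}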
