All About mailto: Links

You can make a garden variety anchor link (<a>) open up a new email. Let’s take a little journey into this feature. It’s pretty easy to use, but as with anything web, there are lots of things to consider.

The basic functionality

<a href="mailto:someone@yoursite.com">Email Us</a>

It works!

But we immediately run into a handful of UX issues. One of them is that clicking that link surprises some people in a way they don’t like. Sort of the same way clicking on a link to a PDF opens a file instead of a web page. Le sigh. We’ll get to that in a bit.

“Open in new tab” sometimes does matter.

If a user has their default mail client (e.g. Outlook, Apple Mail, etc.) set up to be a native app, it doesn’t really matter. They click a mailto: link, that application opens up, a new email is created, and it behaves the same whether you’ve attempted to open that link in a new tab or not.

But if a user has a browser-based email client set up, it does matter. For example, you can allow Gmail to be your default email handler on Chrome. In that case, the link behaves like any other link, in that if you don’t open in a new tab, the page will redirect to Gmail.
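
For what it’s worth, browser-based clients usually get that handler status by registering themselves for the mailto: scheme. Here’s a hedged sketch of how a webmail app might do that (the URL is made up):

// Runs on the webmail app's own origin; the browser prompts the user to confirm.
// "%s" is replaced with the full mailto: URL whenever a mailto: link is clicked.
navigator.registerProtocolHandler(
  'mailto',
  'https://mail.example.com/compose?to=%s',
  'Example Mail'
);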

I’m a little on the fence about it. I’ve weighed in on opening links in new tabs before, but not specifically about opening emails. I’d say I lean a bit toward using target="_blank" on mail links, despite my feelings on using it in other scenarios.

<a href="mailto:someone@yoursite.com" target="_blank" rel="noopener noreferrer">Email Us</a>

Adding a subject and body

This is somewhat rare to see for some reason, but mailto: links can define the email subject and body content as well. They are just query parameters!

mailto:chriscoyier@gmail.com?subject=Important!&body=Hi.
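
One gotcha: the subject and body live inside a URL, so spaces, line breaks, and characters like & need to be URL-encoded. A small sketch of building the href in JavaScript (the address and copy are placeholders):

const subject = 'Important! Quick question';
const body = 'Hi,\n\nJust following up on this.';

const href = 'mailto:someone@yoursite.com'
  + '?subject=' + encodeURIComponent(subject)
  + '&body=' + encodeURIComponent(body);

// e.g. drop it onto an existing link
document.querySelector('.email-us').href = href;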

Add copy and blind copy support

You can send to multiple email addresses, and even carbon copy (CC), and blind carbon copy (BCC) people on the email. The trick is more query parameters and comma-separating the email addresses.

mailto:someone@yoursite.com?cc=someoneelse@theirsite.com,another@thatsite.com,me@mysite.com&bcc=lastperson@theirsite.com

This site is awful handy

mailtolink.me will help generate email links.

Use a <form> to let people craft the email first

I’m not sure how useful this is, but it’s an interesting curiosity that you can make a <form> do a GET, which is basically a redirect to a URL — and that URL can be in the mailto: format with query params populated by the inputs! It can even open in a new tab.
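
Here’s roughly what that could look like (a minimal sketch; the input names map directly to the mailto: query parameters):

<form action="mailto:someone@yoursite.com" method="get" target="_blank">
  <label for="subject">Subject</label>
  <input type="text" id="subject" name="subject">

  <label for="body">Message</label>
  <textarea id="body" name="body"></textarea>

  <button type="submit">Compose email</button>
</form>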

See the Pen Use a <form> to make an email by Chris Coyier (@chriscoyier) on CodePen.

People don’t like surprises

Because mailto: links are valid anchor links like any other, they are typically styled exactly the same. But clicking them clearly produces very different results. It may be worthwhile to indicate mailto: links in a special way.

If you use an actual email address as the link, that’s probably a good indication:

<a href="mailto:chriscoyier@gmail.com">chriscoyier@gmail.com</a>

Or you could use CSS to help explain with a little emoji story:

a[href^="mailto:"]::after {   content: " (&#x1f4e8;&#x2197;&#xfe0f;)"; }

If you really dislike mailto: links, there is a browser extension for you.

https://ihatemailto.com/

I dig how it doesn’t just block them, but copies the email address to your clipboard and tells you that’s what it did.


Advanced Tooling for Web Components

Over the course of the last four articles in this five-part series, we’ve taken a broad look at the technologies that make up the Web Components standards. First, we looked at how to create HTML templates that could be consumed at a later time. Second, we dove into creating our own custom element. After that, we encapsulated our element’s styles and selectors into the shadow DOM, so that our element is entirely self-contained.

We’ve explored how powerful these tools can be by creating our own custom modal dialog, an element that can be used in most modern application contexts regardless of the underlying framework or library. In this article, we will look at how to consume our element in the various frameworks and look at some advanced tooling to really ramp up your Web Component skills.

Article Series:

  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM
  5. Advanced Tooling for Web Components (This post)

Framework agnostic

Our dialog component works great in almost any framework or even without one. (Granted, if JavaScript is disabled, the whole thing is for naught.) Angular and Vue treat Web Components as first-class citizens: the frameworks have been designed with web standards in mind. React is slightly more opinionated, but not impossible to integrate.

Angular

First, let’s take a look at how Angular handles custom elements. By default, Angular will throw a template error whenever it encounters an element it doesn’t recognize (i.e. anything that isn’t a default browser element or a component defined by Angular). This behavior can be changed by including the CUSTOM_ELEMENTS_SCHEMA.

…allows an NgModule to contain the following:

  • Non-Angular elements named with dash case (-).
  • Element properties named with dash case (-). Dash case is the naming convention for custom elements.

Angular Documentation

Consuming this schema is as simple as adding it to a module:

import { NgModule, CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';

@NgModule({
  /** Omitted */
  schemas: [ CUSTOM_ELEMENTS_SCHEMA ]
})
export class MyModuleAllowsCustomElements {}

That’s it. After this, Angular will allow us to use our custom element wherever we want with the standard property and event bindings:

<one-dialog [open]="isDialogOpen" (dialog-closed)="dialogClosed($event)">
  <span slot="heading">Heading text</span>
  <div>
    <p>Body copy</p>
  </div>
</one-dialog>

Vue

Vue’s compatibility with Web Components is even better than Angular’s as it doesn’t require any special configuration. Once an element is registered, it can be used with Vue’s default templating syntax:

<one-dialog v-bind:open="isDialogOpen" v-on:dialog-closed="dialogClosed">
  <span slot="heading">Heading text</span>
  <div>
    <p>Body copy</p>
  </div>
</one-dialog>

One caveat with both Angular and Vue, however, is their default form controls. If we wish to use something like reactive forms or [(ngModel)] in Angular, or v-model in Vue, on a custom element that contains a form control, we will need to set up that plumbing ourselves, which is beyond the scope of this article.

React

React is slightly more complicated than Angular. React’s virtual DOM effectively takes a JSX tree and renders it as a large object. So, instead of directly modifying attributes on HTML elements like Angular or Vue, React uses an object syntax to track changes that need to be made to the DOM and updates them in bulk. This works just fine in most cases. Our dialog’s open attribute is bound to its property and will respond perfectly well to changing props.

The catch comes when we start to look at the CustomEvent dispatched when our dialog closes. React implements a series of native event listeners for us with their synthetic event system. Unfortunately, that means that controls like onDialogClosed won’t actually attach event listeners to our component, so we have to find some other way.

The most obvious means of adding custom event listeners in React is by using DOM refs. In this model, we can reference our HTML node directly. The syntax is a bit verbose, but works great:

import React, { Component, createRef } from 'react';

export default class MyComponent extends Component {
  constructor(props) {
    super(props);
    // Create the ref
    this.dialog = createRef();
    // Bind our method to the instance
    this.onDialogClosed = this.onDialogClosed.bind(this);

    this.state = {
      open: false
    };
  }

  componentDidMount() {
    // Once the component mounts, add the event listener
    this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed);
  }

  componentWillUnmount() {
    // When the component unmounts, remove the listener
    this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed);
  }

  onDialogClosed(event) { /** Omitted **/ }

  render() {
    return <div>
      <one-dialog open={this.state.open} ref={this.dialog}>
        <span slot="heading">Heading text</span>
        <div>
          <p>Body copy</p>
        </div>
      </one-dialog>
    </div>
  }
}

Or, we can use stateless functional components and hooks:

import React, { useState, useEffect, useRef } from 'react';

export default function MyComponent(props) {
  const [ dialogOpen, setDialogOpen ] = useState(false);
  const oneDialog = useRef(null);
  const onDialogClosed = event => console.log(event);

  useEffect(() => {
    oneDialog.current.addEventListener('dialog-closed', onDialogClosed);
    return () => oneDialog.current.removeEventListener('dialog-closed', onDialogClosed)
  });

  return <div>
    <button onClick={() => setDialogOpen(true)}>Open dialog</button>
    <one-dialog ref={oneDialog} open={dialogOpen}>
      <span slot="heading">Heading text</span>
      <div>
        <p>Body copy</p>
      </div>
    </one-dialog>
  </div>
}

That’s not bad, but you can see how reusing this component could quickly become cumbersome. Luckily, we can export a default React component that wraps our custom element using the same tools.

import React, { Component, createRef } from 'react';
import PropTypes from 'prop-types';

export default class OneDialog extends Component {
  constructor(props) {
    super(props);
    // Create the ref
    this.dialog = createRef();
    // Bind our method to the instance
    this.onDialogClosed = this.onDialogClosed.bind(this);
  }

  componentDidMount() {
    // Once the component mounts, add the event listener
    this.dialog.current.addEventListener('dialog-closed', this.onDialogClosed);
  }

  componentWillUnmount() {
    // When the component unmounts, remove the listener
    this.dialog.current.removeEventListener('dialog-closed', this.onDialogClosed);
  }

  onDialogClosed(event) {
    // Check to make sure the prop is present before calling it
    if (this.props.onDialogClosed) {
      this.props.onDialogClosed(event);
    }
  }

  render() {
    const { children, onDialogClosed, ...props } = this.props;
    return <one-dialog {...props} ref={this.dialog}>
      {children}
    </one-dialog>
  }
}

OneDialog.propTypes = {
  children: PropTypes.oneOfType([
    PropTypes.arrayOf(PropTypes.node),
    PropTypes.node
  ]).isRequired,
  onDialogClosed: PropTypes.func
};

…or again as a stateless, functional component:

import React, { useRef, useEffect } from 'react';
import PropTypes from 'prop-types';

export default function OneDialog(props) {
  const { children, onDialogClosed, ...restProps } = props;
  const oneDialog = useRef(null);

  useEffect(() => {
    onDialogClosed ? oneDialog.current.addEventListener('dialog-closed', onDialogClosed) : null;
    return () => {
      onDialogClosed ? oneDialog.current.removeEventListener('dialog-closed', onDialogClosed) : null;
    };
  });

  return <one-dialog ref={oneDialog} {...restProps}>{children}</one-dialog>
}

Now we can use our dialog natively in React, but still keep the same API across all our applications (and still drop classes, if that’s your thing).

import React, { useState } from 'react';
import OneDialog from './OneDialog';

export default function MyComponent(props) {
  const [open, setOpen] = useState(false);
  return <div>
    <button onClick={() => setOpen(true)}>Open dialog</button>
    <OneDialog open={open} onDialogClosed={() => setOpen(false)}>
      <span slot="heading">Heading text</span>
      <div>
        <p>Body copy</p>
      </div>
    </OneDialog>
  </div>
}

Advanced tooling

There are a number of great tools for authoring your own custom elements. Searching through npm reveals a multitude of tools for creating highly-reactive custom elements (including my own pet project), but the most popular today by far is lit-html from the Polymer team and, more specifically for Web Components, LitElement.

LitElement is a custom elements base class that provides a series of APIs for doing all of the things we’ve walked through so far. It can be run in a browser without a build step, but if you enjoy using future-facing tools like decorators, there are utilities for that as well.

Before diving into how to use lit or LitElement, take a minute to familiarize yourself with tagged template literals, which are a special kind of function called on template literal strings in JavaScript. These functions take in an array of strings and a collection of interpolated values and can return anything you might want.

function tag(strings, ...values) {
  console.log({ strings, values });
  return true;
}

const who = 'world';

tag`hello ${who}`;

/** would log out { strings: ['hello ', ''], values: ['world'] } and return true **/

What LitElement gives us is live, dynamic updating of anything passed into that values array, so as a property updates, the element’s render function is called and the resulting DOM is re-rendered.

import { LitElement, html } from 'lit-element';

class SomeComponent extends LitElement {
  static get properties() {
    return {
      now: { type: String }
    };
  }

  connectedCallback() {
    // Be sure to call the super
    super.connectedCallback();
    this.interval = window.setInterval(() => {
      this.now = Date.now();
    });
  }

  disconnectedCallback() {
    super.disconnectedCallback();
    window.clearInterval(this.interval);
  }

  render() {
    return html`<h1>It is ${this.now}</h1>`;
  }
}

customElements.define('some-component', SomeComponent);

See the Pen LitElement now example by Caleb Williams (@calebdwilliams) on CodePen.

What you will notice is that we have to define any property we want LitElement to watch using the static properties getter. Using that API tells the base class to call render whenever a change is made to the component’s properties. render, in turn, will update only the nodes that need to change.
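
As a quick illustration (assuming the some-component element from above is already on the page):

const el = document.querySelector('some-component');
// `now` is declared in the static `properties` getter, so assigning to it
// schedules a re-render of only the parts of the template that use it.
el.now = new Date().toLocaleTimeString();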

So, for our dialog example, it would look like this using LitElement:

See the Pen Dialog example using LitElement by Caleb Williams (@calebdwilliams) on CodePen.

There are several variants of lit-html available, including Haunted, a React hooks-style library for Web Components that can also make use of virtual components using lit-html as a base.

At the end of the day, most of the modern Web Components tools are various flavors of what LitElement is: a base class that abstracts common logic away from our components. Among the other flavors are Stencil, SkateJS, Angular Elements and Polymer.

What’s next

Web Components standards are continuing to evolve, and new features are being discussed and added to browsers on an ongoing basis. Soon, Web Component authors will have APIs for interacting with web forms at a high level (along with other element internals that are beyond the scope of these introductory articles), native HTML and CSS module imports, native template instantiation and updating controls, and more, all of which can be tracked on the W3C web components issues board on GitHub.

These standards are ready to adopt into our projects today with the appropriate polyfills for legacy browsers and Edge. And while they may not replace your framework of choice, they can be used alongside it to augment your and your organization’s workflows.


10 Website Templates To Boost Your Restaurant Business


Technical Debt is Like Tetris

Here’s a wonderful post by Eric Higgins all about refactoring and technical debt. He compares working through giant refactoring projects to playing Tetris:

Similar to running a business, Tetris gets harder the longer you play. Pieces move faster and it becomes harder to keep up.

Similar to running a business, you can never win Tetris. There is no true finish line. You only control how quickly you lose.

Similar to running a business, allowing too many gaps to build up in Tetris will cause you to lose.

I love this comparison, despite my mediocre Tetris skills. It does feel like even “easy” development becomes harder as technical debt grows on a project, much the same way Tetris pieces gain speed and provide little time to react as the stack grows. However, I do think perhaps I have a more optimistic view of technical debt overall. If you work slowly and carefully then you can build up a culture of refactoring and gather momentum over time.



Using <details> for Menus and Dialogs is an Interesting Idea

One of the most empowering things you can learn as a new front-end developer who is starting to learn JavaScript is to change classes. If you can change classes, you can use your CSS skills to control a lot on a page. Toggle a class to one thing, style it this way, toggle to another class (or remove it) and style it another way.

But there is an HTML element that also does toggles! <details>! For example, it’s definitely the quickest way to build an accordion UI.

Extending that toggle-based thinking, what is a user menu if not a single accordion? Same with modals. If we went that route, we could make JavaScript optional on those dynamic things. That’s exactly what GitHub did with their menu.
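
A bare-bones version of that idea (a rough sketch, not GitHub’s actual markup) could be as simple as this, with no JavaScript required to open and close it:

<details class="user-menu">
  <summary>Your account</summary>
  <ul>
    <li><a href="/profile">Profile</a></li>
    <li><a href="/settings">Settings</a></li>
    <li><a href="/logout">Sign out</a></li>
  </ul>
</details>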

Inside the <details> element, GitHub uses some Web Components (that do require JavaScript) to do some bonus stuff, but they aren’t required for basic menu functionality. That means the menu is resilient and instantly interactive when the page is rendered.

Mu-An Chiou, a web systems engineer at GitHub who spearheaded this, has a presentation all about this!

The worst strike against <details> is its browser support in Edge, but I guess we won’t have to worry about that soon, as Edge will be using Chromium… soon? Does anyone know?


It’s pretty cool how Netlify CMS works with any flat file site generator

Little confession here: when I first saw Netlify CMS at a glance, I thought: cool, maybe I’ll try that someday when I’m exploring CMSs for a new project. Then as I looked at it with fresh eyes: I can already use this! It’s a true CMS in that it adds a content management UI on top of any static site generator that works from flat files! Think of how you might build a site from markdown files with Gatsby, Jekyll, Hugo, Middleman, etc. You can create and edit Markdown files and the site’s build process runs and the site is created.

Netlify CMS gives you (or anyone you set it up for) a way to create/edit those Markdown files without having to use a code editor or know about Pull Requests on GitHub or anything. It’s a little in-browser app that gives you a UI and does the file manipulation and Git stuff behind the scenes.

Here’s an example.

Our conferences website is a perfect site to build with a static site generator.

It’s on GitHub, so it’s open to Pull Requests, and each conference is a Markdown file.

That’s pretty cool already. The community has contributed 77 Pull Requests, really fleshing out the site’s content, design, accessibility, and features!

I used 11ty to build the site, which works great with building out those Markdown files into a site, using Nunjucks templates. Very satisfying combo, I found, after a slight, mostly configuration-related learning curve.

Enter Netlify CMS.

But as comfortable as you or I might be with a quick code edit and Pull Request, not everybody is. And even I have to agree that going to a URL quick, editing some copy in input fields, and clicking a save button is the easiest possible way to manage content.

That CMS UI is exactly what Netlify CMS gives you. Wanna see the entire commit for adding Netlify CMS?

It’s two files! That still kinda blows my mind. It’s a little SPA React app that’s entirely configurable with one file.
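
For reference, those two files typically look something like this (a hedged sketch with made-up collection fields, not the actual commit):

<!-- admin/index.html: loads the Netlify CMS single-page app -->
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Content Manager</title>
</head>
<body>
  <script src="https://unpkg.com/netlify-cms@^2.0.0/dist/netlify-cms.js"></script>
</body>
</html>

# admin/config.yml: tells the CMS where content lives and what the editing UI offers
backend:
  name: git-gateway   # commits changes through Netlify's Git Gateway
  branch: master

media_folder: "images/uploads"

collections:
  - name: "conferences"
    label: "Conferences"
    folder: "_conferences"   # where the Markdown files live
    create: true
    fields:
      - { label: "Title", name: "title", widget: "string" }
      - { label: "Date", name: "date", widget: "datetime" }
      - { label: "Body", name: "body", widget: "markdown" }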

Cutting to the chase here, once it is installed, I now have a totally customized UI for editing the conferences on the site available right on the production site.

Netlify CMS doesn’t do anything forceful or weird, like attempt to edit the HTML on the production site directly. It works right into the workflow in the same exact way that you would if you were editing files in a code editor and committing in Git.

Auth & Git

You use Netlify CMS on your production site, which means you need authentication so that just you (and the people you want) have access to it. Netlify Identity makes that a snap. You just flip it on from your Netlify settings and it works.

I activated GitHub Auth so I could make logging in one-click for me.

The Git magic happens through a technology called Git Gateway. You don’t have to understand it (I don’t really), you just enable it in Netlify as part of Netlify Identity, and it forms the connection between your site and the Git repository.

Now when you create/edit content, actual Markdown files are created and edited (and whatever else is involved, like images!) and the change happens right in the Git repository.


I made this the footer of the site cause heck yeah.


Encapsulating Style and Structure with Shadow DOM

This is part four of a five-part series discussing the Web Components specifications. In part one, we took a 10,000-foot view of the specifications and what they do. In part two, we set out to build a custom modal dialog and created the HTML template for what would evolve into our very own custom HTML element in part three.

Article Series:

  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM (This post)
  5. Advanced Tooling for Web Components (Coming soon!)

If you haven’t read those articles, you would be well advised to do so now before proceeding, as this article will continue to build upon the work we’ve done there.

When we last looked at our dialog component, it had a specific shape, structure and behaviors; however, it relied heavily on the outside DOM and required that consumers of our element understand its general shape and structure, not to mention author all of their own styles (which would eventually modify the document’s global styles). And because our dialog relied on the contents of a template element with an id of “one-dialog”, each document could only have one instance of our modal.

The current limitations of our dialog component aren’t necessarily bad. Consumers who have an intimate knowledge of the dialog’s inner workings can easily consume and use the dialog by creating their own <template> element and defining the content and styles they wish to use (even relying on global styles defined elsewhere). However, we might want to provide more specific design and structural constraints on our element to accommodate best practices, so in this article, we will be incorporating the shadow DOM to our element.

What is the shadow DOM?

In our introduction article, we said that the shadow DOM was “capable of isolating CSS and JavaScript, almost like an <iframe>.” Like an <iframe>, selectors and styles inside of a shadow DOM node don’t leak outside of the shadow root and styles from outside the shadow root don’t leak in. There are a few exceptions that inherit from the parent document, like font family and document font sizes (e.g. rem) that can be overridden internally.

Unlike an <iframe>, however, all shadow roots still exist in the same document so that all code can be written inside a given context but not worry about conflicts with other styles or selectors.

Adding the shadow DOM to our dialog

To add a shadow root (the base node/document fragment of the shadow tree), we need to call our element’s attachShadow method:

class OneDialog extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.close = this.close.bind(this);
  }
}

By calling attachShadow with mode: 'open', we are telling our element to save a reference to the shadow root on the element.shadowRoot property. attachShadow always returns a reference to the shadow root, but here we don’t need to do anything with that.

If we had called the method with mode: 'closed', no reference would have been stored on the element and we would have to create our own means of storage and retrieval using a WeakMap or Object, setting the node itself as the key and the shadow root as the value.

const shadowRoots = new WeakMap();

class ClosedRoot extends HTMLElement {
  constructor() {
    super();
    const shadowRoot = this.attachShadow({ mode: 'closed' });
    shadowRoots.set(this, shadowRoot);
  }

  connectedCallback() {
    const shadowRoot = shadowRoots.get(this);
    shadowRoot.innerHTML = `<h1>Hello from a closed shadow root!</h1>`;
  }
}

We could also save a reference to the shadow root on our element itself, using a Symbol or other key to try to make the shadow root private.
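
That might look something like this (a minimal sketch; the Symbol key only makes the reference awkward to reach from the outside, not truly private):

const shadowKey = Symbol('shadowRoot');

class ClosedRoot extends HTMLElement {
  constructor() {
    super();
    // Stash the closed shadow root on the instance under a Symbol key
    this[shadowKey] = this.attachShadow({ mode: 'closed' });
  }

  connectedCallback() {
    this[shadowKey].innerHTML = `<h1>Hello from a closed shadow root!</h1>`;
  }
}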

In general, the closed mode for shadow roots exists for native elements that use the shadow DOM in their implementation (like <audio> or <video>). Further, when unit testing our elements, we might not have access to the shadowRoots object, making it impossible for us to target changes inside our element, depending on how our library is architected.

There might be some legitimate use cases for user-land closed shadow roots, but they are few and far between, so we’ll stick with the open shadow root for our dialog.

After implementing the new open shadow root, you might notice now that our element is completely broken when we try to run it:

See the Pen Dialog example using template with shadow root by Caleb Williams (@calebdwilliams) on CodePen.

This is because all of the content we had before was added to and manipulated in the traditional DOM (what we’ll call the light DOM). Now that our element has a shadow DOM attached, there is no outlet for the light DOM to render. Let’s start fixing these issues by moving our content to the shadow DOM:

class OneDialog extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.close = this.close.bind(this);
  }

  connectedCallback() {
    const { shadowRoot } = this;
    const template = document.getElementById('one-dialog');
    const node = document.importNode(template.content, true);
    shadowRoot.appendChild(node);

    shadowRoot.querySelector('button').addEventListener('click', this.close);
    shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
    this.open = this.open;
  }

  disconnectedCallback() {
    this.shadowRoot.querySelector('button').removeEventListener('click', this.close);
    this.shadowRoot.querySelector('.overlay').removeEventListener('click', this.close);
  }

  set open(isOpen) {
    const { shadowRoot } = this;
    shadowRoot.querySelector('.wrapper').classList.toggle('open', isOpen);
    shadowRoot.querySelector('.wrapper').setAttribute('aria-hidden', !isOpen);
    if (isOpen) {
      this._wasFocused = document.activeElement;
      this.setAttribute('open', '');
      document.addEventListener('keydown', this._watchEscape);
      this.focus();
      shadowRoot.querySelector('button').focus();
    } else {
      this._wasFocused && this._wasFocused.focus && this._wasFocused.focus();
      this.removeAttribute('open');
      document.removeEventListener('keydown', this._watchEscape);
    }
  }

  close() {
    this.open = false;
  }

  _watchEscape(event) {
    if (event.key === 'Escape') {
      this.close();
    }
  }
}

customElements.define('one-dialog', OneDialog);

The major changes to our dialog so far are actually relatively minimal, but they carry a lot of impact. For starters, all of our selectors (including our style definitions) are internally scoped. For example, our dialog template only has one button internally, so our CSS only targets button { ... }, and those styles don’t bleed out to the light DOM.

We are, however, still reliant on the template that is external to our element. Let’s change that by removing the markup from our template and dropping it into our shadow root’s innerHTML.

See the Pen Dialog example using only shadow root by Caleb Williams (@calebdwilliams) on CodePen.

Including content from the light DOM

The shadow DOM specification includes a means for allowing content from outside the shadow root to be rendered inside of our custom element. For those of you who remember AngularJS, this is a similar concept to ng-transclude or using props.children in React. With Web Components, this is done using the <slot> element.

A simple example would look like this:

<div>
  <span>world <!-- this would be inserted into the slot element below --></span>
  <#shadow-root><!-- pseudo code -->
    <p>Hello <slot></slot></p>
  </#shadow-root>
</div>

A given shadow root can have any number of slot elements, which can be distinguished with a name attribute. The first slot inside the shadow root without a name will be the default slot, and all content not otherwise assigned will flow inside that node. Our dialog really needs two slots: a heading and some content (which we’ll make default).
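
From the consumer’s side, that could look something like this (a hedged sketch; content with slot="heading" is assigned to the named slot and everything else falls into the default one):

<one-dialog>
  <span slot="heading">Heading text</span>
  <div>
    <p>Anything without a slot attribute ends up in the default slot.</p>
  </div>
</one-dialog>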

See the Pen Dialog example using shadow root and slots by Caleb Williams (@calebdwilliams) on CodePen.

Go ahead and change the HTML portion of our dialog and see the result. Any content inside of the light DOM is inserted into the slot to which it is assigned. Slotted content remains inside the light DOM although it is rendered as if it were inside the shadow DOM. This means that these elements are still fully style-able by a consumer who might want to control the look and feel of their content.

A shadow root’s author can style content inside the light DOM to a limited extent using the CSS ::slotted() pseudo-selector; however, the DOM tree inside slotted is collapsed, so only simple selectors will work. In other words, we wouldn’t be able to style a <strong> element inside a <p> element within the flattened DOM tree in our previous example.
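
As a rough illustration of that limitation (assuming the slotted markup from above):

/* inside the shadow root's <style> */
::slotted(span) {
  font-weight: bold; /* works: a single compound selector */
}

::slotted(p strong) {
  color: tomato; /* ::slotted() only accepts compound selectors, so this rule is dropped */
}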

The best of both worlds

Our dialog is in a good state now: it has encapsulated, semantic markup, styles and behavior; however, some consumers of our dialog might still want to define their own template. Fortunately, by combining two techniques we’ve already learned, we can allow authors to optionally define an external template.

To do this, we will allow each instance of our component to reference an optional template ID. To start, we need to define a getter and setter for our component’s template.

get template() {
  return this.getAttribute('template');
}

set template(template) {
  if (template) {
    this.setAttribute('template', template);
  } else {
    this.removeAttribute('template');
  }
  this.render();
}

Here we’re doing much the same thing that we did with our open property by tying it directly to its corresponding attribute. But at the bottom, we’re introducing a new method to our component: render. We are going to use our render method to insert our shadow DOM’s content and remove that behavior from the connectedCallback; instead, we will call render when our element is connected:

connectedCallback() {
  this.render();
}

render() {
  const { shadowRoot, template } = this;
  const templateNode = document.getElementById(template);
  shadowRoot.innerHTML = '';
  if (templateNode) {
    const content = document.importNode(templateNode.content, true);
    shadowRoot.appendChild(content);
  } else {
    shadowRoot.innerHTML = `<!-- template text -->`;
  }
  shadowRoot.querySelector('button').addEventListener('click', this.close);
  shadowRoot.querySelector('.overlay').addEventListener('click', this.close);
  this.open = this.open;
}

Our dialog now has some really basic default stylings, but also gives consumers the ability to define a new template for each instance. If we wanted, we could even use attributeChangedCallback to make this component update based on the template it’s currently pointing to:

static get observedAttributes() { return ['open', 'template']; }

attributeChangedCallback(attrName, oldValue, newValue) {
  if (newValue !== oldValue) {
    switch (attrName) {
      /** Boolean attributes */
      case 'open':
        this[attrName] = this.hasAttribute(attrName);
        break;
      /** Value attributes */
      case 'template':
        this[attrName] = newValue;
        break;
    }
  }
}

See the Pen Dialog example using shadow root, slots and template by Caleb Williams (@calebdwilliams) on CodePen.

In the demo above, changing the template attribute on our <one-dialog> element will alter which design is being used when the element is rendered.

Strategies for styling the shadow DOM

Currently, the only reliable way to style a shadow DOM node is by adding a <style> element to the shadow root’s inner HTML. This works fine in almost every case as browsers will de-duplicate stylesheets across these components, where possible. This does tend to add a bit of memory overhead, but generally not enough to notice.
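
In practice, that just means the component writes its own <style> tag as part of its shadow markup; here’s a minimal sketch (the selectors are illustrative):

this.shadowRoot.innerHTML = `
  <style>
    :host { display: block; }
    button { /* only matches buttons inside this shadow root */ }
  </style>
  <button>Close</button>
`;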

Inside of these style tags, we can use CSS custom properties to provide an API for styling our components. Custom properties can pierce the shadow boundary and affect content inside a shadow node.

“Can we use a <link> element inside of a shadow root?” you might ask. And, in fact, we can. The trouble comes when trying to reuse this component across multiple applications as the CSS file might not be saved in a consistent location throughout all apps. However, if we are certain as to the element’s stylesheet location, then using <link> is an option. The same holds true for including an @import rule in a style tag.

CSS custom properties

One of the benefits of using CSS custom properties — also called CSS variables — is that they bleed through the shadow DOM. This is by design, giving component authors a surface for allowing theming and styling of their components from the outside. It is important to note, however, that since CSS cascades, changes to custom properties made inside a shadow root do not bleed back up.
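
For example, a consumer could theme our dialog from the light DOM with something like this (a hedged sketch; these property names are made up rather than part of the dialog we built):

/* in the page's global stylesheet */
one-dialog {
  --dialog-background: #fefefe;
  --dialog-heading-color: rebeccapurple;
}

/* inside the dialog's shadow DOM <style> */
.wrapper {
  background: var(--dialog-background, #fff); /* falls back to #fff if unset */
}

h1 {
  color: var(--dialog-heading-color, inherit);
}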

See the Pen CSS custom properties and shadow DOM by Caleb Williams (@calebdwilliams) on CodePen.

Go ahead and comment out or remove the variables set in the CSS panel of the demo above and see how this impacts the rendered content. Afterward, take a look at the styles in the shadow DOM’s innerHTML and you’ll see how the shadow DOM can define its own property that won’t affect the light DOM.

Constructible stylesheets

At the time of this writing, there is a proposed web feature, constructible stylesheets, that will allow for more modular styling of shadow DOM and light DOM elements. It has already landed in Chrome 73 and received positive signaling from Mozilla.

This feature would allow authors to define stylesheets in their JavaScript files similar to how they would write normal CSS and share those styles across multiple nodes. So, a single stylesheet could be appended to multiple shadow roots and potentially the document as well.

const everythingTomato = new CSSStyleSheet();
everythingTomato.replace('* { color: tomato; }');

document.adoptedStyleSheets = [everythingTomato];

class SomeComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    // Adopt the same sheet inside the shadow root
    this.shadowRoot.adoptedStyleSheets = [everythingTomato];
  }

  connectedCallback() {
    this.shadowRoot.innerHTML = `<h1>CSS colors are fun</h1>`;
  }
}

In the above example, the everythingTomato stylesheet would be simultaneously applied to the shadow root and to the document’s body. This feature would be very useful for teams creating design systems and components that are intended to be shared across multiple applications and frameworks.

In the next demo, we can see a really basic example of how this can be utilized and the power that constructible stylesheets offer.

See the Pen Construct style sheets demo by Caleb Williams (@calebdwilliams) on CodePen.

In this demo, we construct two stylesheets and append them to the document and to the custom element. After three seconds, we remove one stylesheet from our shadow root. For those three seconds, however, the document and the shadow DOM share the same stylesheet. Using the polyfill included in that demo, there are actually two style elements present, but Chrome Canary runs this natively.

That demo also includes a form for showing how a sheet’s rules can easily and effectively be changed asynchronously as needed. This addition to the web platform can be a powerful ally for those creating design systems that span multiple frameworks or site authors who want to provide themes for their websites.

There is also a proposal for CSS Modules that could eventually be used with the adoptedStyleSheets feature. If implemented in its current form, this proposal would allow importing CSS as a module much like ECMAScript modules:

import styles from './styles.css';

class SomeComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.adoptedStyleSheets = [styles];
  }
}

Part and theme

Other features in the works for styling Web Components are the ::part() and ::theme() pseudo-selectors. The ::part() specification will allow authors to define parts of their custom elements that have a surface for styling:

class SomeOtherComponent extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot.innerHTML = `
      <style>h1 { color: rebeccapurple; }</style>
      <h1>Web components are <span part="description">AWESOME</span></h1>
    `;
  }
}

customElements.define('other-component', SomeOtherComponent);

In our global CSS, we could target any element that has a part called description by invoking the CSS ::part() selector.

other-component::part(description) {
  color: tomato;
}

In the above example, the primary message of the <h1> tag would be in a different color than the description part, giving custom element authors the ability to expose styling APIs for their components and maintain control over the pieces they want to maintain control over.

The difference between ::part() and ::theme() is that ::part() must be specifically selected whereas ::theme() can be nested at any level. The following would have the same effect as the above CSS, but would also work for any other element that included a part="description" in the entire document tree.

:root::theme(description) {
  color: tomato;
}

Like constructible stylesheets, ::part() has landed in Chrome 73.

Wrapping up

Our dialog component is now complete, more or less. It includes its own markup, styles (without any outside dependencies) and behaviors. This component can now be included in projects that use any current or future framework because it is built against the browser specifications instead of third-party APIs.

Some of the core controls are a little verbose and do rely on at least a moderate knowledge of how the DOM works. In our final article, we will discuss higher-level tooling and how to incorporate our element with popular frameworks.

Article Series:

  1. An Introduction to Web Components
  2. Crafting Reusable HTML Templates
  3. Creating a Custom Element from Scratch
  4. Encapsulating Style and Structure with Shadow DOM (This post)
  5. Advanced Tooling for Web Components (Coming soon!)


Visual Composer’s New Name: and Why An Explanation Is in Order


Blurred Borders in CSS

Say we want to target an element and just visually blur the border of it. There is no simple, single built-in web platform feature we can reach for. But we can get it done with a little CSS trickery.

Here’s what we’re after:

Screenshot of an element with a background image that shows oranges on a wooden table. The border of this element is blurred.
The desired result.

Let’s see how we can code this effect, how we can enhance it with rounded corners, extend support so it works cross-browser, what the future will bring in this department and what other interesting results we can get starting from the same idea!

Coding the basic blurred border

We start with an element on which we set some dummy dimensions, a partially transparent (just slightly visible) border and a background whose size is relative to the border-box, but whose visibility we restrict to the padding-box:

$b: 1.5em; // border-width

div {
  border: solid $b rgba(#000, .2);
  height: 50vmin;
  max-width: 13em;
  max-height: 7em;
  background: url(oranges.jpg) 50%/ cover
                border-box /* background-origin */
                padding-box /* background-clip */;
}

The box specified by background-origin is the box whose top left corner is the 0 0 point for background-position and also the box that background-size (set to cover in our case) is relative to. The box specified by background-clip is the box within whose limits the background is visible.

The initial values are padding-box for background-origin and border-box for background-clip, so we need to specify them both in this case.

If you need a more in-depth refresher on background-origin and background-clip, you can check out this detailed article on the topic.

The code above gives us the following result:

See the Pen by thebabydino (@thebabydino) on CodePen.

Next, we add an absolutely positioned pseudo-element that covers its entire parent’s border-box and is positioned behind (z-index: -1). We also make this pseudo-element inherit its parent’s border and background, then we change the border-color to transparent and the background-clip to border-box:

$b: 1.5em; // border-width

div {
  position: relative;
  /* same styles as before */

  &:before {
    position: absolute;
    z-index: -1;
    /* go outside padding-box by
     * a border-width ($b) in every direction */
    top: -$b; right: -$b; bottom: -$b; left: -$b;
    border: inherit;
    border-color: transparent;
    background: inherit;
    background-clip: border-box;
    content: ''
  }
}

Now we can also see the background behind the barely visible border:

See the Pen by thebabydino (@thebabydino) on CodePen.

Alright, you may already be seeing where this is going! The next step is to blur() the pseudo-element. Since this pseudo-element is only visible underneath the partially transparent border (the rest is covered by its parent’s padding-box-restricted background), it follows that the border area is the only area of the image we see blurred.

See the Pen by thebabydino (@thebabydino) on CodePen.

We’ve also brought the alpha of the element’s border-color down to .03 because we want the blurriness to be doing most of the job of highlighting where the border is.

This may look done, but there’s something I still don’t like: the edges of the pseudo-element are now blurred as well. So let’s fix that!

One convenient thing when it comes to the order browsers apply properties in is that filters are applied before clipping. While this is not what we want and makes us resort to inconvenient workarounds in a lot of other cases… right here, it proves to be really useful!

It means that, after blurring the pseudo-element, we can clip it to its border-box!

My preferred way of doing this is by setting clip-path to inset(0) because… it’s the simplest way of doing it, really! polygon(0 0, 100% 0, 100% 100%, 0 100%) would be overkill.
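
Put together, the pseudo-element additions for this step might look something like this (a sketch; 9px is the blur radius used in the demos):

div {
  /* same styles as before */

  &:before {
    /* same styles as before */
    filter: blur(9px);   /* blur the pseudo-element's copy of the background */
    clip-path: inset(0); /* then clip the blur at the border-box edges */
  }
}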

See the Pen by thebabydino (@thebabydino) on CodePen.

In case you’re wondering why not set the clip-path on the actual element instead of setting it on the :before pseudo-element, this is because setting clip-path on the element would make it a stacking context. This would force all its child elements (and consequently, its blurred :before pseudo-element as well) to be contained within it and, therefore, in front of its background. And then no nuclear z-index or !important could change that.

We can prettify this by adding some text with a nicer font, a box-shadow and some layout properties.

What if we have rounded corners?

The best thing about using inset() instead of polygon() for the clip-path is that inset() can also accommodate for any border-radius we may want!

And when I say any border-radius, I mean it! Check this out!

div {
  --r: 15% 75px 35vh 13vw/ 3em 5rem 29vmin 12.5vmax;
  border-radius: var(--r);
  /* same styles as before */

  &:before {
    /* same styles as before */
    border-radius: inherit;
    clip-path: inset(0 round var(--r));
  }
}

It works like a charm!

See the Pen by thebabydino (@thebabydino) on CodePen.

Extending support

Some mobile browsers still need the -webkit- prefix for both filter and clip-path, so be sure to include those versions too. Note that they are included in the CodePen demos embedded here, even though I chose to skip them in the code presented in the body of this article.

Alright, but what if we need to support Edge? clip-path doesn’t work in Edge, but filter does, which means we do get the blurred border, but no sharp cut limits.

Well, if we don’t need corner rounding, we can use the deprecated clip property as a fallback. This means adding the following line right before the clip-path ones:

clip: rect(0 100% 100% 0)

And our demo now works in Edge… sort of! The right, bottom and left edges are cut sharply, but the top one still remains blurred (only in the Debug mode of the Pen, all seems fine for the iframe in the Editor View). And opening DevTools or right clicking in the Edge window or clicking anywhere outside this window makes the effect of this property vanish. Bug of the month right there!

Alright, since this is so unreliable and it doesn’t even help us if we want rounded corners, let’s try another approach!

This is a bit like scratching behind the left ear with the right foot (or the other way around, depending on which side is your more flexible one), but it’s the only way I can think of to make it work in Edge.

Some of you may have already been screaming at the screen something like “but Ana… overflow: hidden!” and yes, that’s what we’re going for now. I’ve avoided it initially because of the way it works: it cuts out all descendant content outside the padding-box. Not outside the border-box, as we’ve done by clipping!

This means we need to ditch the real border and emulate it with padding, which I’m not exactly delighted about because it can lead to more complications, but let’s take it one step at a time!

As far as code changes are concerned, the first thing we do is remove all border-related properties and set the border-width value as the padding. We then set overflow: hidden and restrict the background of the actual element to the content-box. Finally, we reset the pseudo-element’s background-clip to the padding-box value and zero its offsets.

$fake-b: 1.5em; // fake border-width

div {
  /* same styles as before */
  overflow: hidden;
  padding: $fake-b;
  background: url(oranges.jpg) 50%/ cover
                padding-box /* background-origin */
                content-box /* background-clip */;

  &:before {
    /* same styles as before */
    top: 0; right: 0; bottom: 0; left: 0;
    background: inherit;
    background-clip: padding-box;
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

If we want that barely visible “border” overlay, we need another background layer on the actual element:

$fake-b: 1.5em; // fake border-width
$c: rgba(#000, .03);

div {
  /* same styles as before */
  overflow: hidden;
  padding: $fake-b;
  --img: url(oranges.jpg) 50%/ cover;
  background: var(--img)
                padding-box /* background-origin */
                content-box /* background-clip */,
              linear-gradient($c, $c);

  &:before {
    /* same styles as before */
    top: 0; right: 0; bottom: 0; left: 0;
    background: var(--img);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

We can also add rounded corners with no hassle:

See the Pen by thebabydino (@thebabydino) on CodePen.

So why didn’t we do this from the very beginning?!

Remember when I said a bit earlier that not using an actual border can complicate things later on?

Well, let’s say we want to have some text. With the first method, using an actual border and clip-path, all it takes to prevent the text content from touching the blurred border is adding a padding (of let’s say 1em) on our element.

See the Pen by thebabydino (@thebabydino) on CodePen.

But with the overflow: hidden method, we’ve already used the padding property to create the blurred “border”. Increasing its value doesn’t help because it only increases the fake border’s width.

We could add the text into a child element. Or we could also use the :after pseudo-element!

The way this works is pretty similar to the first method, with the :after replacing the actual element. The difference is we clip the blurred edges with overflow: hidden instead of clip-path: inset(0), and the padding on the actual element is the pseudos’ border-width ($b) plus whatever padding value we want:

$b: 1.5em; // border-width

div {
  overflow: hidden;
  position: relative;
  padding: calc(1em + #{$b});
  /* prettifying styles */

  &:before, &:after {
    position: absolute;
    z-index: -1; /* put them *behind* parent */
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    border: solid $b rgba(#000, .03);
    background: url(oranges.jpg) 50%/ cover
                  border-box /* background-origin */
                  padding-box /* background-clip */;
    content: ''
  }

  &:before {
    border-color: transparent;
    background-clip: border-box;
    filter: blur(9px);
  }
}

See the Pen by thebabydino (@thebabydino) on CodePen.

What about having both text and some pretty extreme rounded corners? Well, that’s something we’ll discuss in another article – stay tuned!

What about backdrop-filter?

Some of you may be wondering (as I was when I started toying with various ideas in order to try to achieve this effect) whether backdrop-filter isn’t an option.

Well, yes and no!

Technically, it is possible to get the same effect, but since Firefox doesn’t yet implement it, we’re cutting out Firefox support if we choose to take this route. Not to mention this approach also forces us to use both pseudo-elements if we want the best support possible for the case when our element has some text content (which means we need the pseudos and their padding-box area background to show underneath this text).

For those who don’t yet know what backdrop-filter does: it filters out what can be seen through the (partially) transparent parts of the element we apply it on.

The way we need to go about this is the following: both pseudo-elements have a transparent border and a background positioned and sized relative to the padding-box. We restrict the background of pseudo-element on top (the :after) to the padding-box.

Now the :after doesn’t have a background in the border area anymore and we can see through to the :before pseudo-element behind it there. We set a backdrop-filter on the :after and maybe even change that border-color from transparent to slightly visible. The bottom (:before) pseudo-element’s background that’s still visible through the (partially) transparent, barely distinguishable border of the :after above gets blurred as a result of applying the backdrop-filter.

$b: 1.5em; // border-width

div {
  overflow: hidden;
  position: relative;
  padding: calc(1em + #{$b});
  /* prettifying styles */

  &:before, &:after {
    position: absolute;
    z-index: -1; /* put them *behind* parent */
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    border: solid $b transparent;
    background: $url 50%/ cover
                  /* background-origin & -clip */
                  border-box;
    content: ''
  }

  &:after {
    border-color: rgba(#000, .03);
    background-clip: padding-box;
    backdrop-filter: blur(9px); /* no Firefox support */
  }
}

Remember that the live demo for this doesn’t currently work in Firefox and needs the Experimental Web Platform features flag enabled in chrome://flags in order to work in Chrome.

Eliminating one pseudo-element

This is something I wouldn’t recommend doing in the wild because it cuts out Edge support as well, but we do have a way of achieving the result we want with just one pseudo-element.

We start by setting the image background on the element (we don’t really need to explicitly set a border as long as we include its width in the padding) and then a partially transparent, barely visible background on the absolutely positioned pseudo-element that’s covering its entire parent. We also set the backdrop-filter on this pseudo-element.

$b: 1.5em; // border-width

div {
  position: relative;
  padding: calc(1em + #{$b});
  background: url(oranges.jpg) 50%/ cover;
  /* prettifying styles */

  &:before {
    position: absolute;
    /* zero all offsets */
    top: 0; right: 0; bottom: 0; left: 0;
    background: rgba(#000, .03);
    backdrop-filter: blur(9px); /* no Firefox support */
    content: ''
  }
}

Alright, but this blurs out the entire element behind the almost transparent pseudo-element, including its text. And it’s no bug, this is what backdrop-filter is supposed to do.

Screenshot: the problem at hand.

In order to fix this, we need to get rid of (not make transparent, that’s completely useless in this case) the inner rectangle (whose edges are a distance $b away from the border-box edges) of the pseudo-element.

We have two ways of doing this.

The first way (live demo) is with clip-path and the zero-width tunnel technique:

$b: 1.5em; // border-width
$o: calc(100% - #{$b});

div {
  /* same styles as before */

  &:before {
    /* same styles as before */

    /* doesn't work in Edge */
    clip-path: polygon(0 0, 100% 0, 100% 100%, 0 100%,
                       0 0,
                       #{$b $b}, #{$b $o}, #{$o $o}, #{$o $b},
                       #{$b $b});
  }
}

The second way (live demo) is with two composited mask layers (note that, in this case, we need to explicitly set a border on our pseudo):

$b: 1.5em; // border-width

div {
  /* same styles as before */

  &:before {
    /* same styles as before */

    border: solid $b transparent;

    /* doesn't work in Edge */
    --fill: linear-gradient(red, red);
    -webkit-mask: var(--fill) padding-box,
                  var(--fill);
    -webkit-mask-composite: xor;
            mask: var(--fill) padding-box exclude,
                  var(--fill);
  }
}

Since neither of these two properties works in Edge, this means support is now limited to WebKit browsers (and we still need to enable the Experimental Web Platform features flag for backdrop-filter to work in Chrome).

Future (and better!) solution

This is not implemented by any browser at this point, but the spec mentions a filter() function that will allow us to apply filters on individual background layers. This would eliminate the need for a pseudo-element and would reduce the code needed to achieve this effect to two CSS declarations!

border: solid 1.5em rgba(#000, .03);
background: $url
              border-box /* background-origin */
              padding-box /* background-clip */,
            filter($url, blur(9px))
              /* background-origin & background-clip */
              border-box

If you think this is something useful to have, you can add your use cases and track implementation progress for both Chrome and Firefox.

More border filter options

I’ve only talked about blurring the border up to now, but this technique works for pretty much any CSS filter (save for drop-shadow() which wouldn’t make much sense in this context). You can play with switching between them and tweaking values in the interactive demo below:

See the Pen by thebabydino (@thebabydino) on CodePen.

And all we’ve done so far has used just one filter function, but we can also chain them and then the possibilities are endless – what cool effects can you come up with this way?

See the Pen by thebabydino (@thebabydino) on CodePen.


Some Notes About Accessibility

Earlier this month Eric Bailey wrote about the current state of accessibility on the web and why it felt like fighting an uphill battle:

As someone with a good deal of interest in the digital accessibility space, I follow WebAIM’s work closely. Their survey results are priceless insights into how disabled people actually use the web, so when the organization speaks with authority on a subject, I listen.

WebAIM’s accessibility analysis of the top 1,000,000 homepages was released to the public on February 27, 2019. I’ve had a few days to process it, and frankly, it’s left me feeling pretty depressed. In a sea of already demoralizing findings, probably the most notable one is that pages containing ARIA—a specialized language intended to aid accessibility—are actually more likely to have accessibility issues.

Following up from that post, Ethan Marcotte jotted down his thoughts on the matter and about who has the responsibility to fix these issues in the long run:

Organizations like WebAIM have, alongside countless other non-profits and accessibility advocates, been showing us how we could make the web live up to its promise as a truly universal medium, one that could be accessed by anyone, anywhere, regardless of ability or need. And we failed.

I say we quite deliberately. This is on us: on you, and on me. And, look, I realize it may sting to read that. Hell, my work is constantly done under deadline, the way I work seems to change every month, and it can feel hard to find the time to learn more about accessibility. And maybe you feel the same way. But the fact remains that we’ve created a web that’s actively excluding people, and at a vast, terrible scale. We need to meditate on that.

I suppose the lesson I’m taking from this is, well, we need to much, much more than meditating. I agree with Marcy Sutton: accessibility is a civil right, full stop. Improving the state of accessibility on the web is work we have to support. The alternative isn’t an option. Leaving the web in its current state isn’t fair. It isn’t just.

I entirely agree with Ethan here – we all have a responsibility to make the web a better place for everyone and especially when it comes to accessibility where the bar is so very low for us now. This isn’t to say that I know best, because there’s been plenty of times when I’ve dropped the ball when I’m designing something for the web.

What can we do to tackle the widespread issue surrounding web accessibility?

Well, as Eric mentions in his post, it’s first and foremost a problem of education and he points to Firefox and their great accessibility inspector as a tool to help us see and understand accessibility principles in action:

An image of the Firefox Accessibility tool inspecting the CSS-Tricks website

Marco Zehe, who is on the Firefox accessibility team, wrote about what the inspector is and how to use it:

This inspector is not meant as an evaluation tool. It is an inspection tool. So it will not give you hints about low contrast ratios, or other things that would tell you whether your site is WCAG compliant. It helps you inspect your code, helps you understand how your web site is translated into objects for assistive technologies.

Chris also wrote up some of his thoughts a short while ago, including other accessibility testing tools and checklists that can help us get started making more accessible experiences. The important thing to note here is that these tools need to be embedded within our process for web design if they’re going to solve these issues.

We can’t simply blame our tools.

I know the current state of web accessibility is pretty bad and that there’s an enormous amount of work to do for us all, but to be honest, I can’t help but feel a little optimistic. For the first time in my career, I’ve had designers and engineers alike approach me excitedly about accessibility. Each year, there are tons of workshops, articles, meetups, and talks (and I particularly like this talk by Laura Carvajal) on the matter, meaning there’s a growing source of referential content that can teach us to be better.

And I can’t help but think that all of these conversations are a good sign – but now it’s up to us to do the work.
