
Parsing Markdown into an Automated Table of Contents

A table of contents is a list of links that allows you to quickly jump to specific sections of content on the same page. It benefits long-form content by giving the reader a handy overview of what’s covered, along with a convenient way to get there.

This tutorial will show you how to parse long Markdown text to HTML and then generate a list of links from the headings. After that, we will make use of the Intersection Observer API to find out which section is currently active, add a scrolling animation when a link is clicked, and finally, learn how Vue’s <transition-group> allows us to create a nice animated list depending on which section is currently active.

Parsing Markdown

On the web, text content is often delivered in the form of Markdown. If you haven’t used it, there are lots of reasons why Markdown is an excellent choice for text content. We are going to use a Markdown parser called marked, but any other parser will work just as well.

We will fetch our content from a Markdown file on GitHub. After we’ve loaded our Markdown file, all we need to do is call the marked(<markdown>, <options>) function to parse the Markdown to HTML.

async function fetchAndParseMarkdown() {
  const url = 'https://gist.githubusercontent.com/lisilinhart/e9dcf5298adff7c2c2a4da9ce2a3db3f/raw/2f1a0d47eba64756c22460b5d2919d45d8118d42/red_panda.md'
  const response = await fetch(url)
  const data = await response.text()
  const htmlFromMarkdown = marked(data, { sanitize: true });
  return htmlFromMarkdown
}

After we fetch and parse our data, we will pass the parsed HTML to our DOM by replacing the content with innerHTML.

async function init() {
  const $main = document.querySelector('#app');
  const htmlContent = await fetchAndParseMarkdown();
  $main.innerHTML = htmlContent
}

init();

Now that we’ve generated the HTML, we need to transform our headings into a clickable list of links. To find the headings, we will use the DOM function querySelectorAll('h1, h2'), which selects all <h1> and <h2> elements within our markdown container. Then we’ll run through the headings and extract the information we need: the text inside the tags, the depth (which is 1 or 2), and the element ID we can use to link to each respective heading.

function generateLinkMarkup($contentElement) {
  const headings = [...$contentElement.querySelectorAll('h1, h2')]
  const parsedHeadings = headings.map(heading => {
    return {
      title: heading.innerText,
      depth: heading.nodeName.replace(/\D/g, ''),
      id: heading.getAttribute('id')
    }
  })
  console.log(parsedHeadings)
}

This snippet results in an array of elements that looks like this:

[
  {title: "The Red Panda", depth: "1", id: "the-red-panda"},
  {title: "About", depth: "2", id: "about"},
  // ...
]

After getting the information we need from the heading elements, we can use ES6 template literals to generate the HTML elements we need for the table of contents.

First, we loop through all the headings and create <li> elements. If we’re working with an <h2> with depth: 2, we will add an additional padding class, .pl-4, to indent them. That way, we can display <h2> elements as indented subheadings within the list of links.

Finally, we join the array of <li> snippets and wrap it inside a <ul> element.

function generateLinkMarkup($contentElement) {
  // ...
  const htmlMarkup = parsedHeadings.map(h => `
  <li class="${h.depth > 1 ? 'pl-4' : ''}">
    <a href="#${h.id}">${h.title}</a>
  </li>
  `)
  const finalMarkup = `<ul>${htmlMarkup.join('')}</ul>`
  return finalMarkup
}

That’s all we need to generate our link list. Now, we will add the generated HTML to the DOM.

async function init() {
  const $main = document.querySelector('#content');
  const $aside = document.querySelector('#aside');
  const htmlContent = await fetchAndParseMarkdown();
  $main.innerHTML = htmlContent
  const linkHtml = generateLinkMarkup($main);
  $aside.innerHTML = linkHtml
}

Adding an Intersection Observer

Next, we need to find out which part of the content we’re currently reading. Intersection Observers are the perfect choice for this. MDN defines Intersection Observer as follows:

The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport.

So, basically, they allow us to observe the intersection of an element with the viewport or one of its ancestor elements. To create one, we call new IntersectionObserver(), which creates a new observer instance. Whenever we create a new observer, we need to pass it a callback function that is called when the observer has observed an intersection of an element. Travis Almand has a thorough explanation of the Intersection Observer you can read, but what we need for now is a callback function as the first parameter and an options object as the second parameter.

function createObserver() {
  const options = {
    rootMargin: "0px 0px -200px 0px",
    threshold: 1
  }
  const callback = () => { console.log("observed something") }
  return new IntersectionObserver(callback, options)
}

The observer is created, but nothing is being observed at the moment. We will need to observe the heading elements in our Markdown, so let’s loop over them and add them to the observer with the observe() function.

const observer = createObserver()
$headings.map(heading => observer.observe(heading))

Since we want to update our list of links, we will pass it to the observer function as a $links parameter, because we don’t want to re-read the DOM on every update for performance reasons. In the handleObserver function, we find out whether a heading is intersecting with the viewport, then obtain its id and pass it to a function called updateLinks which handles updating the class of the links in our table of contents.

function handleObserver(entries, observer, $links) {
  entries.forEach((entry) => {
    const { target, isIntersecting, intersectionRatio } = entry
    if (isIntersecting && intersectionRatio >= 1) {
      const visibleId = `#${target.getAttribute('id')}`
      updateLinks(visibleId, $links)
    }
  })
}

Let’s write the function to update the list of links. We need to loop through all links, remove the .is-active class if it exists, and add it only to the element that’s actually active.

function updateLinks(visibleId, $links) {
  $links.map(link => {
    let href = link.getAttribute('href')
    link.classList.remove('is-active')
    if(href === visibleId) link.classList.add('is-active')
  })
}

The end of our init() function creates an observer, observes all the headings, and updates the links list so the active link is highlighted when the observer notices a change.

async function init() {
  // Parsing Markdown
  const $aside = document.querySelector('#aside');

  // Generating a list of heading links
  const $headings = [...$main.querySelectorAll('h1, h2')];

  // Adding an Intersection Observer
  const $links = [...$aside.querySelectorAll('a')]
  const observer = createObserver($links)
  $headings.map(heading => observer.observe(heading))
}

Scroll to section animation

The next part is to create a scrolling animation so that, when a link in the table of contents is clicked, the user is scrolled to the heading position rather than abruptly jumping there. This is often called smooth scrolling.

Scrolling animations can be harmful if a user prefers reduced motion, so we should only animate this scrolling behavior if the user hasn’t specified otherwise. With window.matchMedia('(prefers-reduced-motion)'), we can read the user’s preference and adapt our animation accordingly. That means we need a click event listener on each link. Since we need to scroll to the headings, we will also pass our list of $headings and the motionQuery.

const motionQuery = window.matchMedia('(prefers-reduced-motion)');

$links.map(link => {
  link.addEventListener("click",
    (evt) => handleLinkClick(evt, $headings, motionQuery)
  )
})

Let’s write our handleLinkClick function, which is called whenever a link is clicked. First, we need to prevent the default behavior of links, which would be to jump directly to the section. Then we’ll read the href attribute of the clicked link and find the heading with the corresponding id attribute. With a tabindex value of -1 and focus(), we can focus our heading to make the users aware of where they jumped to. Finally, we add the scrolling animation by calling scroll() on our window. 

Here is where our motionQuery comes in. If the user prefers reduced motion, the behavior will be instant; otherwise, it will be smooth. The top option adds a bit of scroll margin to the top of the headings to prevent them from sticking to the very top of the window.

function handleLinkClick(evt, $headings, motionQuery) {
  evt.preventDefault()
  let id = evt.target.getAttribute("href").replace('#', '')
  let section = $headings.find(heading => heading.getAttribute('id') === id)
  section.setAttribute('tabindex', -1)
  section.focus()

  window.scroll({
    behavior: motionQuery.matches ? 'instant' : 'smooth',
    top: section.offsetTop - 20
  })
}

For the last part, we will make use of Vue’s <transition-group>, which is very useful for list transitions. Here is Sarah Drasner’s excellent intro to Vue transitions if you’ve never worked with them before. They are especially great because they provide us with animation lifecycle hooks with easy access to CSS animations.

Vue automatically attaches CSS classes for us when an element is added (v-enter) or removed (v-leave) from a list, as well as classes for when the animation is active (v-enter-active and v-leave-active). This is perfect for our case because we can vary the animation when subheadings are added or removed from our list. To use them, we will need to wrap the <li> elements in our table of contents with a <transition-group> element. The name attribute of the <transition-group> defines what the CSS animation classes will be called, and the tag attribute should be our parent <ul> element.

<transition-group name="list" tag="ul">
  <li v-for="(item, index) in activeHeadings" v-bind:key="item.id">
    <a :href="item.id">
      {{ item.text }}
    </a>
  </li>
</transition-group>

Now we need to add the actual CSS transitions. Whenever an element is entering or leaving, it should animate from being invisible (opacity: 0) and shifted a bit toward the bottom (transform: translateY(10px)).

.list-enter,
.list-leave-to {
  opacity: 0;
  transform: translateY(10px);
}

Then we define which CSS properties we want to animate. For performance reasons, we only want to animate the transform and the opacity properties. CSS allows us to chain the transitions with different timings: the transform should take 0.8 seconds and the fading only 0.4 seconds.

.list-leave-active,
.list-move {
  transition: transform 0.8s, opacity 0.4s;
}

Then we want to add a bit of a delay when a new element is added, so the subheadings fade in after the parent heading has moved up or down. We can make use of the v-enter-active hook to do that:

.list-enter-active {
  transition: transform 0.8s ease 0.4s, opacity 0.4s ease 0.4s;
}

Finally, we can add absolute positioning to the elements that are leaving to avoid sudden jumps when the other elements are animating:

.list-leave-active {
  position: absolute;
}

Since the scrolling interaction is fading elements out and in, it’s advisable to debounce the scrolling interaction in case someone is scrolling very quickly. By debouncing the interaction we can avoid unfinished animations overlapping other animations. You can either write your own debouncing function or simply use the lodash debounce function. For our example the simplest way to avoid unfinished animation updates is to wrap the Intersection Observer callback function with a debounce function and pass the debounced function to the observer.

const debouncedFunction = _.debounce(this.handleObserver)
this.observer = new IntersectionObserver(debouncedFunction, options)
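If you’d rather not pull in lodash for this, a minimal hand-rolled debounce works just as well. This is only a sketch; the 100ms wait is an arbitrary choice.

function debounce(fn, wait = 100) {
  let timeout
  return (...args) => {
    // Restart the timer on every call; fn only runs once things settle down
    clearTimeout(timeout)
    timeout = setTimeout(() => fn(...args), wait)
  }
}

const observer = new IntersectionObserver(debounce(handleObserver), options)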

Here’s the final demo


Again, a table of contents is a great addition to any long-form content. It helps make clear what content is covered and provides quick access to specific content. Using the Intersection Observer and Vue’s list animations on top of it can help to make a table of contents even more interactive and even allow it to serve as an indication of reading progress. But even if you only add a list of links, it will already be a great feature for the user reading your content.



More People Dipping Toes Into Web Monetization

Léonie Watson:

I do think that Coil and Web Monetization are at the vanguard of a quiet revolution.

Here’s me when I’m visiting Léonie’s site:

The Coil browser extension opened, saying "Coil is Paying." Enjoy the pennies!

My Coil subscription ($5/month) doles out money to sites I visit that have monetization set up and installed.

Other Coil subscribers deposit small bits of money directly into my online wallet (I’m using Uphold). I set this up over a year ago and found it all quick and easy to get started. But to be fair, I wasn’t trying to understand every detail of it and I’m still not betting anything major on it. PPK went as far as to say it was user-hostile and I’ll admit he has some good points…

Signing up for payment services is a complete hassle, because you don’t know what you’re doing while being granted the illusion of free choice by picking one of two or three different systems — that you don’t understand and that aren’t explained. Why would I pick EasyMoneyGetter over CoinWare when both of them are black boxes I never heard of?

Also, these services use insane units. Brave use BATs, though to their credit I saw a translation to US$ — but not to any other currency, even though they could have figured out from my IP address that I come from Europe. Coil once informed me I had earned 0.42 XBP without further comment. W? T? F?

Bigger and bigger sites are starting to use it. TechDirt is one example. I’ve got it on CodePen as well.

If this was just a “sprinkle some pennies at sites” play, it would be doomed.

I’m pessimistic about that approach. Micropayments have been done over and over and it hasn’t worked, and I just don’t see it ever having enough legs to do anything meaningful to the industry.

At a quick glance, that’s what this looks like, and that’s how it is behaving right now, and that deserves a little skepticism.

There are two things that make this different:

  1. This has a chance of being a web standard, not something that has to be installed to work.
  2. There are APIs to actually do things based on people transferring money to a site.

Neither of these things is realized yet, but if both of them happen, then meaningful change is much more likely to happen.

With the APIs, a site could say, “You’ll see no ads on this site if you pay us $1/month,” and then write code to make that happen all anonymously. That’s so cool. Removing ads is the most basic and obvious use case, and I hope some people give that a healthy try. I don’t do that on this site, because I think the tech isn’t quite there yet. I’d want to clearly be able to control the dollar-level of when you get that perk (you can’t control how much you give sites on Coil right now), but more importantly, in order to really make good on the promise of not delivering ads, you need to know very quickly if any given user is supporting you at the required level or not. For example, you can’t wait 2600 milliseconds to decide whether ads need to be requested. Well, you can, but you’ll hurt your ad revenue. And you can’t simply request the ads and hide them when you find out, lest you are not really making good on a promise, as trackers’n’stuff will have already done their thing.
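To make that less abstract, here is a rough sketch based on the draft Web Monetization proposal as it stood at the time — it is a proposal, not a settled standard, and the payment pointer and .ad-slot selector below are made-up examples.

<meta name="monetization" content="$wallet.example.com/alice">

<script>
  if (document.monetization) {
    // Only remove ad slots once a payment stream has actually started.
    document.monetization.addEventListener('monetizationstart', () => {
      document.querySelectorAll('.ad-slot').forEach((el) => el.remove())
    })
  }
</script>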

Coil said the right move here is the “100+20” Rule, which I think is smart. It says to give everyone the full value of your site, but then give people extra if they hit monetization thresholds. For example, on this site, if you’re a supporter (not a Coil thing, this is old-school eCommerce), you can download the screencast originals (nobody else can). That’s the kind of thing I’d be happy to unlock via Web Monetization if it became easy to write the code to do that.

Maybe the web really will get monetized at some point and thus fix the original sin of the internet. I’m not really up on where things are in the process, but there is a whole site for it.

I’m not really helping, yet

While I have Coil installed and I’m a fan of all this, what will actually make a difference is having sites that actually do things for users that pay them. Like my video download example above. Maybe recipe sites offer some neat little printable PDF shopping list for people that pay them via Web Monetization. I dunno! Stuff like that! I’m not doing anything cool like that yet, myself.

If this thing gets legs, we’ll see all sorts of creative stuff, and the standard will make it so there is no one service that lords over this. It will be standardized APIs, so there could be a whole ecosystem of online wallets that accept money, services that help dole money out, fancy in-browser features, and site owners doing creative things.



Jumping Into Webmentions With NextJS (or Not)

Webmention is a W3C recommendation last published on January 12, 2017. And what exactly is a Webmention? It’s described as…

[…] a simple way to notify any URL when you mention it on your site. From the receiver’s perspective, it’s a way to request notifications when other sites mention it.

In a nutshell, it’s a way of letting a website know it has been mentioned somewhere, by someone, in some way. The Webmention spec also describes it as a way for a website to let others know it cited them. What that basically boils down to is that your website is an active social media channel, channeling communication from other channels (e.g. Twitter, Instagram, Mastodon, Facebook, etc.).

How does a site implement Webmentions? In some cases, like WordPress, it’s as trivial as installing a couple of plugins. Other cases may not be quite so simple, but it’s still pretty straightforward. In fact, let’s do that now!

Here’s our plan

  1. Declare an endpoint to receive Webmentions
  2. Process social media interactions into Webmentions
  3. Get those mentions into a website/app
  4. Set the outbound Webmentions

Luckily for us, there are services in place that make things extremely simple. Well, except that third point, but hey, it’s not so bad and I’ll walk through how I did it on my own atila.io site.

My site is a server-side blog that’s pre-rendered and written with NextJS. I have opted to make Webmention requests client-side; therefore, it will work easily in any other React app and with very little refactoring in any other JavaScript application.

Step 1: Declare an endpoint to receive Webmentions

In order to have an endpoint we can use to accept Webmentions, we need to either write the script and add to our own server, or use a service such as Webmention.io (which is what I did).

Webmention.io is free and you only need to confirm ownership over the domain you register. Verification can happen a number of ways. I did it by adding a rel="me" attribute to a link in my website to my social media profiles. It only takes one such link, but I went ahead and did it for all of my accounts.

<a
  href="https://twitter.com/atilafassina"
  target="_blank"
  rel="me noopener noreferrer"
>
  @AtilaFassina
</a>

Verifying this way, we also need to make sure there’s a link pointing back to our website in that Twitter profile. Once we’ve done that, we can head back to Webmention.io and add the URL.

This gives us an endpoint for accepting Webmentions! All we need to do now is wire it up as <link> tags in the <head> of our webpages in order to collect those mentions.

<head>
  <link rel="webmention" href="https://webmention.io/{user}/webmention" />
  <link rel="pingback" href="https://webmention.io/{user}/xmlrpc" />
  <!-- ... -->
</head>

Remember to replace {user} with your Webmention.io username.

Step 2: Process social media interactions into Webmentions

We are ready for the Webmentions to start flowing! But wait, we have a slight problem: nobody actually uses them. I mean, I do, you do, Max Böck does, Swyx does, and… that’s about it. So, now we need to start converting all those juicy social media interactions into Webmentions.

And guess what? There’s an awesome free service for it. Fair warning though: you’d better start loving the IndieWeb because we’re about to get all up in it.

Bridgy connects all our syndicated content and converts it into proper Webmentions so we can consume them. With a single sign-on, we can get each of our profiles lined up, one by one.

Step 3: Get those mentions into a website/app

Now it’s our turn to do some heavy lifting. Sure, third-party services can handle all our data, but it’s still up to us to use it and display it.

We’re going to break this up into a few stages. First, we’ll get the number of Webmentions. From there, we’ll fetch the mentions themselves. Then we’ll hook that data up to NextJS (but you don’t have to), and display it.

Get the number of mentions

type TMentionsCountResponse = {
  count: number
  type: {
    like: number
    mention: number
    reply: number
    repost: number
  }
}

That is an example of an object we get back from the Webmention.io endpoint. I formatted the response a bit to better suit our needs. I’ll walk through how I did that in just a bit, but here’s the object we will get:

type TMentionsCount = {
  mentions: number
  likes: number
  total: number
}

The endpoint is located at:

https://webmention.io/api/count.json?target=${post_url}

The request will not fail without it, but the data won’t come either. Both Max Böck and Swyx combine likes with reposts and mentions with replies. On Twitter, they are analogous.

const getMentionsCount = async (postURL: string): TMentionsCount => {
  const resp = await fetch(
    `https://webmention.io/api/count.json?target=${postURL}/`
  )
  const { type, count } = await resp.json()

  return {
    likes: type.like + type.repost,
    mentions: type.mention + type.reply,
    total: count,
  }
}

Get the actual mentions

Before getting to the response, please note that the response is paginated, where the endpoint accepts three parameters in the query:

  • page: the page being requested
  • per-page: the number of mentions to display on the page
  • target: the URL where Webmentions are being fetched

Once we hit https://webmention.io/api/mentions and pass these params, the successful response will be an object with a single key, links, which is an array of mentions matching the type below:

type TMention = {
  source: string
  verified: boolean
  verified_date: string // date string
  id: number
  private: boolean
  data: {
    author: {
      name: string
      url: string
      photo: string // url, hosted in webmention.io
    }
    url: string
    name: string
    content: string // encoded HTML
    published: string // date string
    published_ts: number // ms
  }
  activity: {
    type: 'link' | 'reply' | 'repost' | 'like'
    sentence: string // pure text, shortened
    sentence_html: string // encoded html
  }
  target: string
}

The above data is more than enough to show a comment-like section list on our site. Here’s how the fetch request looks in TypeScript:

const getMentions = async (
  page: string,
  postsPerPage: number,
  postURL: string
): { links: TWebMention[] } => {
  const resp = await fetch(
    `https://webmention.io/api/mentions?page=${page}&per-page=${postsPerPage}&target=${postURL}`
  )
  const list = await resp.json()
  return list.links
}

Hook it all up in NextJS

We’re going to work in NextJS for a moment. It’s all good if you aren’t using NextJS or even have a web app. We already have all the data, so those of you not working in NextJS can simply move ahead to Step 4. The rest of us will meet you there.

As of version 9.3.0, NextJS has three different methods for fetching data:

  1. getStaticProps: fetches data on build time
  2. getStaticPaths: specifies dynamic routes to pre-render based on the fetched data
  3. getServerSideProps: fetches data on each request

Now is the moment to decide at which point we will be making the first request for fetching mentions. We can pre-render the data on the server with the first batch of mentions, or we can make the entire thing client-side. I opted to go client-side.
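If you would rather pre-render the first batch on the server instead, a minimal getStaticProps sketch could look like the following — the post URL is a made-up example, and it reuses the getMentionsCount helper from above.

export async function getStaticProps() {
  // Hypothetical post URL; in a real page this would come from the route or front matter
  const counts = await getMentionsCount('https://example.com/my-post')
  return {
    props: { counts },
  }
}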

If you’re going client-side as well, I recommend using SWR. It’s a custom hook built by the Vercel team that provides good caching, error and loading states — and it even supports React.Suspense.

Display the Webmention count

Many blogs show the number of comments on a post in two places: at the top of a blog post (like this one) and at the bottom, right above a list of comments. Let’s follow that same pattern for Webmentions.

First off, let’s create a component for the count:

const MentionsCounter = ({ postUrl }) => {
  const { t } = useTranslation()
  // Setting a default value for `data` because I don't want a loading state
  // otherwise you could set: if(!data) return <div>loading...</div>
  const { data = {}, error } = useSWR(postUrl, getMentionsCount)

  if (error) {
    return <ErrorMessage>{t('common:errorWebmentions')}</ErrorMessage>
  }

  // The default values cover the loading state
  const { likes = '-', mentions = '-' } = data

  return (
    <MentionCounter>
      <li>
        <Heart title="Likes" />
        <CounterData>{Number.isNaN(likes) ? 0 : likes}</CounterData>
      </li>
      <li>
        <Comment title="Mentions" />{' '}
        <CounterData>{Number.isNaN(mentions) ? 0 : mentions}</CounterData>
      </li>
    </MentionCounter>
  )
}

Thanks to SWR, even though we are using two instances of the WebmentionsCounter component, only one request is made and they both profit from the same cache.

Feel free to peek at my source code to see what’s happening.

Display the mentions

Now that we have placed the component, it’s time to get all that social juice flowing!

As of the time of this writing, useSWRpages is not documented. Add to that the fact that the webmention.io endpoint doesn’t offer collection information in a response (i.e. no offset or total number of pages), and I couldn’t find a way to use SWR here.

So, my current implementation uses a state to keep the current page stored, another state to handle the mentions array, and useEffect to handle the request. The “Load More” button is disabled once the last request brings back an empty array.

const Webmentions = ({ postUrl }) => {
  const { t } = useTranslation()
  const [page, setPage] = useState(0)
  const [mentions, addMentions] = useState([])

  useEffect(() => {
    const fetchMentions = async () => {
      const olderMentions = await getMentions(page, 50, postUrl)
      addMentions((mentions) => [...mentions, ...olderMentions])
    }
    fetchMentions()
  }, [page])

  return (
    <>
      <MentionList>
        {mentions.map((mention, index) => (
          <Mention key={mention.data.author.name + index}>
            <AuthorAvatar src={mention.data.author.photo} lazy />
            <MentionContent>
              <MentionText
                data={mention.data}
                activity={mention.activity.type}
              />
            </MentionContent>
          </Mention>
        ))}
      </MentionList>
      {mentions.length > 0 && (
        <MoreButton
          type="button"
          onClick={() => setPage(page + 1)}
        >
          {t('common:more')}
        </MoreButton>
      )}
    </>
  )
}

The code is simplified to allow focus on the subject of this article. Again, feel free to peek at the full implementation.

Step 4: Handling outbound mentions

Thanks to Remy Sharp, handling outbound mentions from one website to others is quite easy and provides an option for each use case or preference possible.

The quickest and easiest way is to head over to Webmention.app, get an API token, and set up a webhook. Now, if you have an RSS feed in place, the same thing is just as easy with an IFTTT applet, or even a deploy hook.

If you prefer to avoid using yet another third-party service for this feature (which I totally get), Remy has open-sourced a CLI package called wm which can be run as a postbuild script.

But that’s not enough to handle outbound mentions. In order for our mentions to include more than simply the originating URL, we need to add microformats to our information. Microformats are key because it’s a standardized way for sites to distribute content in a way that Webmention-enabled sites can consume.

At their most basic, microformats are a kind of class-based notation in markup that provides extra semantic meaning to each piece. In the case of a blog post, we will use two kinds of microformats:

  • h-entry: the post entry
  • h-card: the author of the post

Most of the required information for h-entry is usually in the header of the page, so the header component may end up looking something like this:

<header class="h-entry">
  <!-- the post date and time -->
  <time datetime="2020-04-22T00:00:00.000Z" class="dt-published">
    2020-04-22
  </time>
  <!-- the post title -->
  <h1 class="p-name">
    Webmentions with NextJS
  </h1>
</header>

And that’s it. If you’re writing in JSX, remember to replace class with className, that datetime is camelCase (dateTime), and that you can generate the value with new Date('2020-04-22').toISOString().
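For example, a JSX version of that header might look something like this sketch, where the title and date props are hypothetical and would come from wherever your post data lives:

const PostHeader = ({ title, date }) => (
  <header className="h-entry">
    {/* the post date and time */}
    <time dateTime={new Date(date).toISOString()} className="dt-published">
      {date}
    </time>
    {/* the post title */}
    <h1 className="p-name">{title}</h1>
  </header>
)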

It’s pretty similar for h-card. In most cases (like mine), author information is below the article. Here’s how my page’s footer looks:

<footer class="h-card">
  <!-- the author name -->
  <span class="p-author">Atila Fassina</span>
  <!-- the author image -->
  <img
    alt="Author’s photograph: Atila Fassina"
    class="u-photo"
    src="/images/internal-avatar.jpg"
    lazy
  />
</footer>

Now, whenever we send an outbound mention from this blog post, it will display the full information to whomever is receiving it.

Wrapping up

I hope this post has helped you get to know more about Webmentions (or even about IndieWeb as a whole), and perhaps even helped you add this feature to your own website or app. If it did, please consider sharing this post to your network. I will be super grateful! 😉



How to Convert a Date String into a Human-Readable Format

I’ll be the first to admit that I’m writing this article, in part, because it’s something I look up often and want to be able to find it next time. Formatting a date string that you get from an API in JavaScript can take many shapes — anything from loading all of Moment.js to have very finite control, or just using a couple of lines to update it. This article is not meant to be comprehensive, but aims to show the most common path of human legibility.

UTC is an extremely common date format. You can generally recognize it by the Z at the end of the string. Here’s an example: 2020-05-25T04:00:00Z. When I bring data in from an API, it’s typically in UTC format.

If I wanted to format the above string in a more readable format, say May 25, 2020, I would do this:

const dateString = '2020-05-14T04:00:00Z'

const formatDate = (date) => {
  const options = { year: "numeric", month: "long", day: "numeric" }
  return new Date(date).toLocaleDateString(undefined, options)
}

Here’s what I’m doing…

First, I’m passing in options for how I want the output to be formatted. There are many, many other options we could pass in there to format the date in different ways. I’m just showing a fairly common example.

const options = { year: "numeric", month: "long", day: "numeric" }

Next, I’m creating a new Date instance that represents a single moment in time in a platform-independent format.

return new Date(date)

Finally, I’m using the .toLocaleDateString() method to apply the formatting options.

return new Date(date).toLocaleDateString(undefined, options)

Note that I passed in undefined. Not defining the value in this case means the date will be formatted according to whatever the default locale is. You can also set it to a certain area/language. Or, for apps and sites that are internationalized, you can pass in what the user has selected (e.g. 'en-US' for the United States, 'de-DE' for Germany, and so forth). There’s a nice npm package that includes a list of more locales and their codes.
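For instance, passing different locale codes to the same formatter gives locale-appropriate output. A quick sketch — the exact day can shift with the runtime’s time zone, since the underlying value is in UTC:

const options = { year: "numeric", month: "long", day: "numeric" }
const date = new Date('2020-05-25T04:00:00Z')

date.toLocaleDateString('en-US', options) // "May 25, 2020"
date.toLocaleDateString('de-DE', options) // "25. Mai 2020"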

Hope that helps you get started! And high fives to future Sarah for not having to look this up again in multiple places. 🤚


First Steps into a Possible CSS Masonry Layout

It’s not at the level of demand as, say, container queries, but being able to make “masonry” layouts in CSS has been a big ask for CSS developers for a long time. Masonry being that kind of layout where unevenly-sized elements are laid out in ragged rows. Sorta like a typical brick wall turned sideways.

The layout is achievable in CSS alone already, but with one big caveat: the items aren’t arranged in rows, they are arranged in columns, which is often a deal-breaker for folks.

/* People usually don't want this */
1  4  6  8
2     7
3  5     9

/* They want this */
1  2  3  4
5  6     7
8     9

If you want that ragged row thing and horizontal source order, you’re in JavaScript territory. Until now, that is, as Firefox rolled this out under a feature flag in Firefox Nightly, as part of CSS grid.

Mats Palmgren:

An implementation of this proposal is now available in Firefox Nightly. It is disabled by default, so you need to load about:config and set the preference layout.css.grid-template-masonry-value.enabled to true to enable it (type “masonry” in the search box on that page and it will show you that pref).

Jen Simmons has created some demos already:

Is this really a grid?

A bit of pushback from Rachel Andrew:

Grid isn’t Masonry, because it’s a grid with strict rows and columns. If you take another look at the layout created by Masonry, we don’t have strict rows and columns. Typically we have defined rows, but the columns act more like a flex layout, or Multicol. The key difference between the layout you get with Multicol and a Masonry layout, is that in Multicol the items are displayed by column. Typically in a Masonry layout you want them displayed row-wise.

[…]

Speaking personally, I am not a huge fan of this being part of the Grid specification. It is certainly compelling at first glance, however I feel that this is a relatively specialist layout mode and actually isn’t a grid at all. It is more akin to flex layout than grid layout.

By placing this layout method into the Grid spec I worry that we then tie ourselves to needing to support the Masonry functionality with any other additions to Grid.

None of this is final yet, and there is active CSS Working Group discussion about it.

As Jen said:

This is an experimental implementation — being discussed as a possible CSS specification. It is NOT yet official, and likely will change. Do not write blog posts saying this is definitely a thing. It’s not a thing. Not yet. It’s an experiment. A prototype. If you have thoughts, chime in at the CSSWG.

Houdini?

Last time there was chatter about native masonry, it was mingled with idea that the CSS Layout API, as part of Houdini, could do this. That is a thing, as you can see by opening this demo (repo) in Chrome Canary.

I’m not totally up to speed on whether Houdini is intended to be a thing so that ideas like this can be prototyped in the browser and ultimately moved out of Houdini, or if the ideas should just stay in Houdini, or what.


Turning a Fixed-Size Object into a Responsive Element

I was in a situation recently where I wanted to show an iPhone on a website. I wanted users to be able to interact with an application demo on this “mock” phone, so it had to be rendered in CSS, not an image. I found a great library called marvelapp/devices.css. The library implemented the device I needed with pure CSS, and they looked great, but there was a problem: the devices it offered were not responsive (i.e. they couldn’t be scaled). An open issue listed a few options, but each had browser incompatibilities or other issues. I set out to modify the library to make the devices responsive.

Here is the final resizable library. Below, we’ll walk through the code involved in creating it.

The original library was written in Sass and implements the devices using elements with fixed sizing in pixels. The authors also provided a straightforward HTML example for each device, including the iPhone X we’ll be working with throughout this article. Here’s a look at the original. Note that the device it renders, while detailed, is rather large and does not change sizes.

Here’s the approach

There are three CSS tricks that I used to make the devices resizable:

  1. calc(), a CSS function that can perform calculations, even when inputs have different units
  2. --size-divisor, a CSS custom property used with the var() function
  3. @media queries separated by min-width

Let’s take a look at each of them.

calc()

The CSS calc() function allows us to change the size of the various aspects of the device. The function takes an expression as an input and returns the evaluation of the function as the output, with appropriate units. If we wanted to make the devices half of their original size, we would need to divide every pixel measurement by 2.

Before:

width: 375px;

After:

width: calc(375px / 2);

The second snippet yields a length half the size of the first snippet. Every pixel measurement in the original library will need to be wrapped in a calc() function for this to resize the whole device.

var(--size-divisor)

A CSS variable must first be declared at the beginning of the file before it can be used anywhere.

:root {
  --size-divisor: 3;
}

With that, this value is accessible throughout the stylesheet with the var() function. This will be exceedingly useful as we will want all pixel counts to be divided by the same number when resizing devices, while avoiding magic numbers and simplifying the code needed to make the devices responsive.

width: calc(375px / var(--size-divisor));

With the value defined above, this would return the original width divided by three, in pixels.

@media

To make the devices responsive, they must respond to changes in screen size. We achieve this using media queries. These queries watch the width of the screen and fire if the screen size crosses given thresholds. I chose the breakpoints based on Bootstrap’s sizes for xs, sm, and so on.

There is no need to set a breakpoint at a minimum width of zero pixels; instead, the :root declaration at the beginning of the file handles these extra-small screens. These media queries must go at the end of the document and be arranged in ascending order of min-width.

@media (min-width: 576px) {
  :root {
    --size-divisor: 2;
  }
}
@media (min-width: 768px) {
  :root {
    --size-divisor: 1.5;
  }
}
@media (min-width: 992px) {
  :root {
    --size-divisor: 1;
  }
}
@media (min-width: 1200px) {
  :root {
    --size-divisor: .67;
  }
}

Changing the values in these queries will adjust the magnitude of resizing that the device undergoes. Note that calc() handles floats just as well as integers.

Preprocessing the preprocessor with Python

With these tools in hand, I needed a way to apply my new approach to the multi-thousand-line library. The resulting file will start with a variable declaration, include the entire library with each pixel measurement wrapped in a calc() function, and end with the above media queries.

Rather than prepare the changes by hand, I created a Python script that automatically converts all of the pixel measurements.

def scale(token):
  if token[-2:] == ';\n':
    return 'calc(' + token[:-2] + ' / var(--size-divisor));\n'
  elif token[-3:] == ');\n':
    return '(' + token[:-3] + ' / var(--size-divisor));\n'
  return 'calc(' + token + ' / var(--size-divisor))'

This function, given a string containing NNpx, returns calc(NNpx / var(--size-divisor));. Throughout the file, there are fortunately only three matches for pixels: NNpx, NNpx; and NNpx);. The rest is just string concatenation. However, these tokens have already been generated by separating each line by space characters.

def build_file(scss):
  out = ':root {\n\t--size-divisor: 3;\n}\n\n'
  for line in scss:
    tokens = line.split(' ')
    for i in range(len(tokens)):
      if 'px' in tokens[i]:
        tokens[i] = scale(tokens[i])
    out += ' '.join(tokens)
  out += "@media (min-width: 576px) {\n  \
    :root {\n\t--size-divisor: 2;\n  \
    }\n}\n\n@media (min-width: 768px) {\n \
    :root {\n\t--size-divisor: 1.5;\n  \
    }\n}\n\n@media (min-width: 992px) { \
    \n  :root {\n\t--size-divisor: 1;\n  \
    }\n}\n\n@media (min-width: 1200px) { \
    \n  :root {\n\t--size-divisor: .67;\n  }\n}"
  return out

This function, which builds the new library, begins by declaring the CSS variable. Then, it iterates through the entire old library in search of pixel measurements to scale. For each of the hundreds of tokens it finds that contain px, it scales that token. As the iteration progresses, the function preserves the original line structure by rejoining the tokens. Finally, it appends the necessary media queries and returns the entire library as a string. A bit of code to run the functions and read and write from files finishes the job.

if __name__ == '__main__':
  f = open('devices_old.scss', 'r')
  scss = f.readlines()
  f.close()
  out = build_file(scss)
  f = open('devices_new.scss', 'w')
  f.write(out)
  f.close()

This process creates a new library in Sass. To create the CSS file for final use, run:

sass devices_new.scss devices.css

This new library offers the same devices, but they are responsive! Here it is:

To read the actual output file, which is thousands of lines, check it out on GitHub.

Other approaches

While the results of this process are pretty compelling, it was a bit of work to get them. Why didn’t I take a simpler approach? Here are three more approaches with their advantages and drawbacks.

zoom

One initially promising approach would be to use zoom to scale the devices. This would uniformly scale the device and could be paired with media queries as with my solution, but would function without the troublesome calc() and variable.

This won’t work for a simple reason: zoom is a non-standard property. Among other limitations, it is not supported in Firefox.

Replace px with em

Another find-and-replace approach prescribes replacing all instances of px with em. Then, the devices shrink and grow according to font size. However, getting them small enough to fit on a mobile display may require minuscule font sizes, smaller than the minimum sizes browsers, like Chrome, enforce. This approach could also run into trouble if a visitor to your website is using assistive technology that increases font size.

This could be addressed by scaling all of the values down by a factor of 100 and then applying standard font sizes. However, that requires just as much preprocessing as this article’s approach, and I think it is more elegant to perform these calculations directly rather than manipulate font sizes.
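For what it’s worth, a sketch of that scaled-down em idea might look like this — the class name and numbers are illustrative only:

.device {
  font-size: 100px; /* 3.75em × 100px = 375px, the original width */
  width: 3.75em;
}

@media (max-width: 576px) {
  .device {
    font-size: 50px; /* half size on small screens */
  }
}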

scale()

The scale() function can change the size of entire objects. The function returns a <transform-function>, which can be given to a transform attribute in a style. Overall, this is a strong approach, but does not change the actual measurement of the CSS device. I prefer my approach, especially when working with complex UI elements rendered on the device’s “screen.”
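In rough terms, the transform-based approach looks something like this sketch, where the 0.5 factor is arbitrary:

.device {
  transform: scale(0.5);      /* visually shrinks the device */
  transform-origin: top left; /* keeps it anchored while scaling */
}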

Chris Coyier prepared an example using this approach.

In conclusion

Hundreds of calc() calls might not be the first tool I would reach for if implementing this library from scratch, but overall, this is an effective approach for making existing libraries resizable. Adding variables and media queries makes the objects responsive. Should the underlying library be updated, the Python script would be able to process these changes into a new version of our responsive library.


Let’s Take a Deep Dive Into the CSS Contain Property

Compared to the past, modern browsers have become really efficient at rendering the tangled web of HTML, CSS, and JavaScript code a typical webpage provides. It takes mere milliseconds to render the code we give it into something people can use.

What could we, as front-end developers, do to actually help the browser be even faster at rendering? There are the usual best practices that are so easy to forget with our modern tooling — especially in cases where we may not have as much control over generated code. We could keep our CSS under control, for instance, with fewer and simpler selectors. We could keep our HTML under control: keep the tree flatter with fewer nodes, and especially fewer children. We could keep our JavaScript under control by being careful with our HTML and CSS manipulations.

Actually, modern frameworks such as Vue and React do help out a good bit with that last part.

I would like to explore a CSS property that we could use to help the browser figure out what calculations it can reduce in priority or maybe even skip altogether.

This property is called contain. Here is how MDN defines this property:

The contain CSS property allows an author to indicate that an element and its contents are, as much as possible, independent of the rest of the document tree. This allows the browser to recalculate layout, style, paint, size, or any combination of them for a limited area of the DOM and not the entire page, leading to obvious performance benefits.

A simple way to look at what this property provides is that we can give hints to the browser about the relationships of the various elements on the page. Not necessarily smaller elements, such as paragraphs or links, but larger groups such as sections or articles. Essentially, we’re talking about container elements that hold content — even content that can be dynamic in nature. Think of a typical SPA where dynamic content is being inserted and removed throughout the page, often independent of other content on the page.

A browser cannot predict the future of layout changes to the webpage that can happen from JavaScript inserting and removing content on the page. Even simple things such as adding a class name to an element, animating a DOM element, or just getting the dimensions of an element can cause a reflow and repaint of the page. Such things can be expensive and should be avoided, or at least reduced as much as possible.

Developers can sort of predict the future because they’ll know about possible future changes based on the UX of the page design, such as when the user clicks on a button it will call for data to be inserted in a div located somewhere in the current view. We know that’s a possibility, but the browser does not. We also know that there’s a distinct possibility that inserting data in that div will not change anything visually, or otherwise, for other elements on the page.

Browser developers have spent a good amount of time optimizing the browser to handle such situations. There are various ways of helping the browser be more efficient in such situations, but more direct hints would be helpful. The contain property gives us a way to provide these hints.

The various ways to contain

The contain property has three values that can be used individually or in combination with one another: size, layout, and paint. It also has two shorthand values for common combinations: strict and content. Let’s cover the basics of each.

Please keep in mind that there are a number of rules and edge cases for each of these that are covered in the spec. I would imagine these will not be of much concern in most situations. Yet, if you get an undesired result, then a quick look at the spec might be handy.

There is also a style containment type in the spec that this article will not cover. The reason being that the style containment type is considered of little value at this time and is currently at-risk of being removed from the spec.

Size containment

size containment is actually a simple one to explain. When a container with this containment is involved in the layout calculations, the browser can skip quite a bit because it ignores the children of that container. It is expected the container will have a set height and width; otherwise, it collapses, and that is the only thing considered in layout of the page. It is treated as if it has no content whatsoever.
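In other words, a size-contained element needs its dimensions declared up front. A minimal sketch, with an arbitrary class name and values:

.widget {
  contain: size;
  width: 300px;  /* without explicit dimensions, the element collapses */
  height: 200px;
}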

Consider that descendants can affect their container in terms of size, depending on the styles of the container. This has to be considered when calculating layout; with size containment, it most likely will not be considered. Once the container’s size has been resolved in relation to the page, then the layout of its descendants will be calculated.

size containment doesn’t really provide much in the way of optimizations. It is usually combined with one of the other values.

Although, one benefit it could provide is helping with JavaScript that alters the descendants of the container based on the size of the container, such as a container query type situation. In some circumstances, altering descendants based on the container’s size can cause the container to change size after the change was done to the descendants. Since a change in the container’s size can trigger another change in the descendants, you could end up with a loop of changes. size containment can help prevent that loop.

Here’s a totally contrived example of this resizing loop concept:

In this example, clicking the start button will cause the red box to start growing, based on the size of the purple parent box, plus five pixels. As the purple box adjusts in size, a resize observer tells the red square to again resize based on the size of the parent. This causes the parent to resize again and so on. The code stops this process once the parent gets above 300 pixels to prevent the infinite loop.

The reset button, of course, puts everything back into place.

Clicking the checkbox “set size containment” sets different dimensions and the size containment on the purple box. Now when you click on the start button, the red box will resize itself based on the width of the purple box. True, it overflows the parent, but the point is that it only resizes the one time and stops; there’s no longer a loop.

If you click on the resize container button, the purple box will grow wider. After the delay, the red box will resize itself accordingly. Clicking the button again returns the purple box back to its original size and then the red box will resize again.

While it is possible to accomplish this behavior without use of the containment, you will miss out on the benefits. If this is a situation that can happen often on the page, the containment helps out with page layout calculations. When the descendants change internal to the containment, the rest of the page behaves as if the changes never happened.

Layout containment

layout containment tells the browser that external elements neither affect the internal layout of the container element, nor does the internal layout of the container element affect external elements. So when the browser does layout calculations, it can assume that the various elements that have the layout containment won’t affect other elements. This can lower the amount of calculations that need to be done.

Another benefit is that related calculations could be delayed or lowered in priority if the container is off-screen or obscured. An example the spec provides is:

[…] for example, if the containing box is near the end of a block container, and you’re viewing the beginning of the block container

The container with layout containment becomes a containing block for absolute or fixed position descendants. This would be the same as applying a relative position to the container. So, keep in mind how the container’s descendants may be affected when applying this type of containment.

On a similar note, the container gets a new stacking context, so z-index can be used the same as if a relative, absolute, or fixed position was applied. Although, setting the top, right, bottom, or left properties has no effect on the container.

Here’s a simple example of this:

Click the box and layout containment is toggled. When layout containment is applied, the two purple lines, which are absolute positioned, will shift to inside the purple box. This is because the purple box becomes a containing block with the containment. Another thing to note is that the container is now stacked on top of the green lines. This is because the container now has a new stacking context and follows those rules accordingly.
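Reduced to CSS, the effect described in that demo boils down to something like this sketch (the selectors are made up for illustration):

.container {
  contain: layout; /* becomes a containing block with its own stacking context */
}

.container .line {
  position: absolute; /* now positioned relative to .container, not the page */
  top: 0;
  left: 0;
}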

Paint containment

paint containment tells the browser none of the children of the container will ever be painted outside the boundaries of the container’s box dimensions. This is similar to placing overflow: hidden; on the container, but with a few differences.

For one, the container gets the same treatment as it does under layout containment: it becomes a containing block with its own stacking context. So, having positioned children inside paint containment will respect the container in terms of placement. If we were to duplicate the layout containment demo above but use paint containment instead, the outcome would be much the same. The difference being that the purple lines would not overflow the container when containment is applied, but would be clipped at the container’s border-box.

Another interesting benefit of paint containment is that the browser can skip that element’s descendants in paint calculations if it can detect that the container itself is not visible within the viewport. If the container is not in the viewport or obscured in some way, then it’s a guarantee that its descendants are not visible as well. As an example, think of a nav menu that normally sits off-screen to the left of the page and it slides in when a button is clicked. When that menu is in its normal state off-screen, the browser just skips trying to paint its contents.
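A sketch of that off-canvas menu idea — the selector, sizes, and transition are made up — where paint containment lets the browser skip painting the nav’s contents while it sits off-screen:

.site-nav {
  contain: paint;
  position: fixed;
  top: 0;
  left: -320px; /* parked off-screen by default */
  width: 320px;
  height: 100vh;
  transition: transform 0.3s ease;
}

.site-nav.is-open {
  transform: translateX(320px); /* slides into view when toggled */
}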

Containments working together

These three containments provide different ways of influencing parts of rendering calculations performed by the browser. size containment tells the browser that this container should not cause positional shifting on the page when its contents change. layout containment tells the browser that this container’s descendants should not cause layout changes in elements outside of its container and vice-versa. paint containment tells the browser that this container’s content will never be painted outside of the container’s dimensions and, if the container is obscured, then don’t bother painting the contents at all.

Since each of these provide different optimizations, it would make sense to combine some of them. The spec actually allows for that. For example, we could use layout and paint together as values of the contain property like this:

.el {
  contain: layout paint;
}

Since this is such an obvious thing to do, the spec actually provides two shorthand values:

Shorthand   Longhand
content     layout paint
strict      layout paint size

The content value will be the most common to use in a web project with a number of dynamic elements, such as large multiple containers that have content changing over time or from user activity.

The strict value would be useful for containers that have a defined size that will never change, even if the content changes. Once in place, it’ll stay the intended size. A simple example of that is a div that contains third-party external advertising content, at industry-defined dimensions, that has no relation to anything else on the page DOM-wise.
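That ad slot case might look something like this sketch, using a common 300×250 unit:

.ad-slot {
  contain: strict; /* shorthand for layout paint size */
  width: 300px;
  height: 250px;
}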

Performance benefits

This part of the article is tough to explain. The problem is that there isn’t much in the way of visuals about the performance benefits. Most of the benefits are behind-the-scenes optimizations that help the browser decide what to do on a layout or paint change.

As an attempt to show the contain property’s performance benefits, I made a simple example that changes the font-size on an element with several children. This sort of change would normally trigger a re-layout, which would also lead to a repaint of the page. The example covers the contain values of none, content, and strict.

The radio buttons change the value of the contain property being applied to the purple box in the center. The “change font-size” button will toggle the font-size of the contents of the purple box by switching classes. Unfortunately, this class change is also a potential trigger for re-layout. If you’re curious, here is a list of situations in JavaScript and then a similar list for CSS that trigger such layout and paint calculations. I bet there’s more than you think.
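In rough terms, the demo’s controls boil down to something like this sketch (the selectors and class name are hypothetical stand-ins, not the demo’s actual code):

const box = document.querySelector('.container');

// The radio buttons switch the contain value applied to the purple box
document.querySelectorAll('input[name="containment"]').forEach((radio) => {
  radio.addEventListener('change', (e) => {
    // e.target.value is 'none', 'content', or 'strict'
    box.style.contain = e.target.value;
  });
});

// The button toggles a class that changes font-size, forcing layout and paint work
document.querySelector('#font-toggle').addEventListener('click', () => {
  box.classList.toggle('bigger-text');
});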

My totally unscientific process was to select the contain type, start a performance recording in Chrome’s developer tools, click the button, wait for the font-size change, then stop the recording after another second or so. I did this three times for each containment type to be able to compare multiple recordings. The numbers for this type of comparison are in the low milliseconds each, but there’s enough of a difference to get a feel for the benefits. The numbers could potentially be quite different in a more real-world situation.

But there are a few things to note other than just the raw numbers.

When looking through the recording, I would find the relevant area in the timeline and focus there to select the task that covers the change. Then I would look at the event log of the task to see the details. The logged events were: recalculate style, layout, update layer tree, paint, and composite layers. Adding the times of all those gives us the total time of the task.

DevTools showing set time at 27.9 milliseconds which is the same as the total time to recalculate styles.
The event log with no containment.

One thing to note for the two containment types is that the paint event was logged twice. I’ll get back to that in a moment.

DevTools showing set time at 13.8 milliseconds which is the same as the total time to recalculate styles.

Completing the task at hand

Here are the total times for the three containment types, three runs each:

Containment Run 1 Run 2 Run 3 Average
none 24 ms 33.8 ms 23.3 ms 27.03 ms
content 13.2 ms 9 ms 9.2 ms 10.47 ms
strict 5.6 ms 18.9 ms 8.5 ms 11 ms

The majority of the time was spent in layout. There were spikes here and there throughout the numbers, but remember that these are unscientific anecdotal results. In fact, the second run of strict containment had a much higher result than the other two runs; I just kept it in because such things do happen in the real world. Perhaps the music I was listening to at the time changed songs during that run, who knows. But you can see that the other two runs were much quicker.

So, by these numbers you can start to see that the contain property helps the browser render more efficiently. Now imagine my one small change being multiplied over the many changes made to the DOM and styling of a typical dynamic web page.

Where things get more interesting is in the details of the paint event.

Layout once, paint twice

Stick with me here. I promise it will make sense.

I’m going to use the demo above as the basis for the following descriptions. If you wish to follow along then go to the full version of the demo and open the DevTools. Note that you have to open up the details of the “frame” and not the “main” timeline once you run the performance tool to see what I’m about to describe.

Showing frame details open and main details closed in DevTools.

I’m actually taking screenshots from the “fullpage” version since DevTools works better with that version. That said, the regular “full” version should give roughly the same idea.

The paint event only fired once in the event log for the task that had no containment at all. Typically, the event didn’t take too long, ranging from 0.2 ms to 3.6 ms. The deeper details are where it gets interesting. In those details, it notes that the area of paint was the entire page. In the event log, if you hover on the paint event, DevTools will even highlight the area of the page that was painted. The dimensions in this case will be whatever the size of your browser viewport happens to be. It will also note the layer root of the paint.

Showing DevTools paint calculation of 0.7 milliseconds.
Paint event details

Note that the page area to the left in the image is highlighted, even outside of the purple box. Over to the right, are the dimensions of the paint to the screen. That’s roughly the size of the viewport in this instance. For a future comparison, note the #document as the layer root.

Keep in mind that browsers have the concept of layers for certain elements to help with painting. Layers are usually for elements that may overlap each other due to a new stacking context. An example of this is the way applying position: relative; and z-index: 1; to an element will cause the browser to create that element as a new layer. The contain property has the same effect.
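Restated as code, either of these rules (the selectors are arbitrary) gives the element its own stacking context and layer:

/* Creates a new stacking context, which the browser turns into its own layer */
.positioned-layer {
  position: relative;
  z-index: 1;
}

/* The contain property has the same layer-creating effect */
.contained-layer {
  contain: content;
}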

There is a section in DevTools called “rendering” and it provides various tools to see how the browser renders the page. When selecting the checkbox named “Layer borders” we can see different things based on the containment. When the containment is none then you should see no layers beyond the typical static web page layers. Select content or strict and you can see the purple box get converted to its own layer and the rest of the layers for the page shift accordingly.

Layers with no containment
Layers with containment

It may be hard to notice in the image, but after selecting content containment the purple box becomes its own layer and the page has a shift in layers behind the box. Also notice that in the top image the layer line goes across on top of the box, while in the second image the layer line is below the box.

I mentioned before that both content and strict cause the paint to fire twice. This is because two painting processes are done for two different reasons. In my demo, the first event is for the purple box and the second is for the contents of the purple box.

Typically, the first event will paint the purple box and report the dimensions of that box as part of the event. The box is now its own layer and enjoys the benefits that apply.

The second event is for the contents of the box since they are scrolling elements. As the spec explains, since the stacking context is guaranteed, scrolling elements can be painted into a single GPU layer. The dimensions reported in the second event are taller, matching the height of the scrolling elements, and possibly even narrower to make room for the scrollbar.

First paint event with content containment
Second paint event with content containment

Note the difference in dimensions on the right of both of those images. Also, the layer root for both of those events is main.change instead of the #document seen above. The purple box is a main element, so only that element was painted, as opposed to the whole document. You can see the box being highlighted as opposed to the whole page.

The benefit of this is that, normally, when scrolling elements come into view, they have to be painted. Scrolling elements in containment have already been painted and don’t require it again when coming into view. So we get some scrolling optimizations as well.

Again, this can be seen in the demo.

Back to that Rendering tab. This time, check “Scrolling performance issue” instead. When the containment is set to none, Chrome covers the purple box with an overlay that’s labeled “repaints on scroll.”

DevTools showing “repaints on scroll” with no containment

If you wish to see this happen live, check the “Paint flashing” option.

Please note: if flashing colors on the screen may present an issue for you in some way, please consider not checking the “Paint flashing” option. In the example I just described, not much changes on the page, but if one were to leave that checked and visited other sites, then reactions may be different.

With paint flashing enabled, you should see a paint indicator covering all the text within the purple box whenever you scroll inside it. Now change the containment to content or strict and then scroll again. After the first initial paint flash it should never reappear, but the scrollbar does show indications of painting while scrolling.

Paint flashing enabled and scrolling with no containment
Paint flashing and scrolling with content containment

Also notice that the “repaints on scroll” overlay is gone on both forms of containment. In this case, containment has given us not only some performance boost in painting but in scrolling as well.

An interesting accidental discovery

As I was experimenting with the demo above and finding out how the paint and scrolling performance aspects worked, I came across an interesting issue. In one test, I had a simple box in the center of the page, but with minimal styling. It was essentially an element that scrolls with lots of text content. I was applying content containment to the container element, but I wasn’t seeing the scrolling performance benefits described above.

The container was flagged with the “repaints on scroll” overlay and the paint flashing was the same as no containment applied, even though I knew for a fact that content containment was being applied to the container. So I started comparing my simple test against the more styled version I discussed above.

I eventually saw that if the background-color of the container is transparent, then the containment scroll performance benefits do not happen.

I ran a similar performance test where I would change the font-size of the contents to trigger the re-layout and repaint. Both tests had roughly the same results, the only difference being that the first test had a transparent background-color and the second test had a proper background-color. By the numbers, it looks like the behind-the-scenes calculations are still more performant; only the paint events are different. It appears the element doesn’t become its own layer in the paint calculations with a transparent background-color.

The first test run only had one paint event in the event log. The second test run had the two paint events as I would expect. Without that background color, it seems the browser decides to skip the layer aspect of the containment. I even found that faking transparency by using the same color as the color behind the element works as well. My guess is if the container’s background is transparent then it must rely on whatever is underneath, making it impossible to separate the container to its own paint layer.

I made another version of the test demo that changes the background-color of the container element from transparent to the same color used for the background color of the body. Here are two screenshots showing the differences when using the various options in the Rendering panel in DevTools.

Rendering panel with transparent background-color

You can see the checkboxes that have been selected and the result on the container. Even with content containment applied, the box has “repaints on scroll” as well as the green overlay showing painting while scrolling.

Rendering panel with background-color applied

In the second image, you can see that the same checkboxes are selected and a different result on the container. The “repaints on scroll” overlay is gone and the green overlay for painting is also gone. You can see the paint overlay on the scrollbar to show it was active.

Conclusion: make sure to apply some form of background color to your container when applying containment to get all the benefits.
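A minimal sketch of that takeaway (the selector and color are placeholders):

.scroll-container {
  contain: content;
  /* An opaque background, even one matching the page background, lets the browser
     treat the container as its own paint layer and skip repaints while scrolling */
  background-color: #fff;
}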

Here’s what I used for the test:


This article has covered the basics of the CSS contain property with its values, benefits, and potential performance gains. There are some excellent benefits to applying this property to certain elements in HTML; which elements need this applied is up to you. At least, that’s what I gather since I’m unaware of any specific guidance. The general idea is to apply it to elements that are containers of other elements, especially those with some form of dynamic aspect to them.

Some possible scenarios: grid areas of a CSS grid, elements containing third-party content, and containers that have dynamic content based on user interaction. There shouldn’t be any harm in using the property in these cases, assuming you aren’t trying to contain an element that does, in fact, rely in some way on another element outside that containment.

Browser support is very strong. Safari is the only holdout at this point. You can still use the property regardless because the browser simply skips over that code without error if it doesn’t understand the property or its value.

So, feel free to start containing your stuff!

The post Let’s Take a Deep Dive Into the CSS Contain Property appeared first on CSS-Tricks.


How to Turn a Procreate Drawing into a Web Animation

I recently started drawing on my iPad using the Procreate app with Apple Pencil. I’m enjoying the flexibility of drawing this way. What usually keeps me from painting at home are basic things, like setup, cleaning brushes, proper ventilation, and other factors not really tied to the painting itself. Procreate does a pretty nice job of emulating painting and drawing processes, while adding digital features like undo/redo, layers, and layer effects.

Here’s a Procreate painting I made that I wound up exporting and animating on the web.

See the Pen
zebra page 2
by Sarah Drasner (@sdras)
on CodePen.

You can do this too! There are two basic animation effects we’ll cover here: the parallax effect that takes place on hover (with the ability to turn it off for those with vestibular disorders), and the small drawing effect when the page loads.

Parallax with drawing layers

I mentioned that part of the reason I enjoy drawing on the iPad is the ability to work in layers. When creating layers, I take care to keep certain “themes” on the same layer. For instance, the zebra stripes are on one layer and the dots are on their own layer beneath the stripes.

I’ll extend the drawing beyond the boundaries of where the line from the layer above ends, mainly because you’ll be able to peek around it a bit as we move the drawing around in the parallax effect. If the lines are sharp at any point, this will look unnatural.

Once I’m done creating my layers, I can export things as a Photoshop (PSD) file, thanks to Procreate’s exporting options.

The same drawing opened in Photoshop.

Then I’ll join together a few, so that I’m only working with about 8 layers at most. I use a Photoshop plugin called tinyPNG to export each layer individually. I’ve heard there are better compression tools, but I’ve been pretty happy with this one.

Next, I’ll go into my code editor and create a div to house all the various images that are contained in the layers. I give that div relative positioning while all of the images inside it get absolute positioning. This places the images one on top of the other.

<div id="zebra-ill" role="presentation">
  <img class="zebraimg" src='https://s3-us-west-2.amazonaws.com/s.cdpn.io/28963/zebraexport6.png' />
  <img class="zebraimg" src='https://s3-us-west-2.amazonaws.com/s.cdpn.io/28963/zebraexport5.png' />
  …
</div>
#zebra-ill {
  position: relative;
  min-height: 650px;
  max-width: 500px;
}

.zebraimg {
  position: absolute;
  top: 0;
  left: 0;
  perspective: 600px;
  transform-style: preserve-3d;
  transform: translateZ(0);
  width: 100%;
}

The 100% width on the images confines all of them to the size of the parent div. I do this so that I’m controlling them all at once with the same restrictions, which works well for responsive conditions. The max-width and min-height on the parent allow me to confine the way that the div shrinks and grows, especially when it gets dropped into a CSS Grid layout. It will need to be flexible, but have some constraints as well, and CSS Grid is great for that.

Next, I add a mousemove event listener on the parent div with JavaScript. That lets me capture some information about the coordinates of the mouse using e.clientX and e.clientY.

const zebraIll = document.querySelector('#zebra-ill')

// Hover
zebraIll.addEventListener('mousemove', e => {
  let x = e.clientX;
  let y = e.clientY;
})

Then, I’ll go through each of the drawings and use those coordinates to move the images around. I’ll even apply transform styles connected to those coordinates.

const zebraIll = document.querySelector('#zebra-ill')
const zebraIllImg = document.querySelectorAll('.zebraimg')
const rate = 0.05

// Hover
zebraIll.addEventListener('mousemove', e => {
  let x = e.clientX;
  let y = e.clientY;

  zebraIllImg.forEach((el, index) => {
    el.style.transform =
      `rotateX(${x}deg) rotateY(${y}deg)`
  })
})

See the Pen
zebra page
by Sarah Drasner (@sdras)
on CodePen.

Woah, slow down there partner! That’s way too much movement; we want something a little more subtle. So I’ll need to slow it way down by multiplying it by a low rate, like 0.05. I also want to change it just a bit per layer, so I’ll use the layer’s index to speed up or slow down the movement.

const zebraIll = document.querySelector('#zebra-ill')
const zebraIllImg = document.querySelectorAll('.zebraimg')
const rate = 0.05

// Hover
zebraIll.addEventListener('mousemove', e => {
  let x = e.clientX;
  let y = e.clientY;

  zebraIllImg.forEach((el, index) => {
    let speed = index += 1
    let xPos = speed + rate * x
    let yPos = speed + rate * y

    el.style.transform =
      `rotateX(${xPos - 20}deg) rotateY(${yPos - 20}deg) translateZ(${index * 10}px)`
  })
})

Finally, I can create a checkbox that asks the user if they want to turn off this effect.

<p>
  <input type="checkbox" name="motiona11y" id="motiona11y" />
  <label for="motiona11y">If you have a vestibular disorder, check this to turn off some of the effects</label>
</p>
const zebraIll = document.querySelector('#zebra-ill')
const zebraIllImg = document.querySelectorAll('.zebraimg')
const rate = 0.05
const motioncheck = document.getElementById('motiona11y')
let isChecked = false

// Check to see if someone checked the vestibular disorder part
motioncheck.addEventListener('change', e => {
  isChecked = e.target.checked;
})

// Hover
zebraIll.addEventListener('mousemove', e => {
  if (isChecked) return
  let x = e.clientX;
  let y = e.clientY;

  // ...
})

Now the user has the ability to look at the layered dimensionality of the drawing on hover, but can also turn the effect off if it is bothersome.

Drawing Effect

The ability to make something look like it’s been drawn on to the page has been around for a while and there are a lot of articles on how it’s done. I cover it as well in a course I made for Frontend Masters.

The premise goes like this:

  • Take an SVG path and make it dashed with dashoffset.
  • Make the dash the entire length of the shape.
  • Animate the dashoffset (the space between dashes).

What you get in the end is a kind of “drawn-on” effect.
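In CSS, a minimal sketch of that technique looks something like this (the 1000 is a stand-in for the path’s real length, which getTotalLength() can report):

.drawn-path {
  stroke-dasharray: 1000;    /* one dash as long as the entire path */
  stroke-dashoffset: 1000;   /* offset it so the path starts out invisible */
  animation: draw 2s ease forwards;
}

@keyframes draw {
  to {
    stroke-dashoffset: 0;    /* animating the offset "draws" the line on */
  }
}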

But in this particular drawing you might have noticed that the parts I animated look like they were hand-drawn, which is a little more unique. You see, though that effect will work nicely for more mechanical drawings, the web doesn’t quite yet support the use of tapered lines (lines that vary in thickness, as is typical of a more hand-drawn feel).

For this approach, I brought the file into Illustrator, traced the lines from that part of my drawing, and made those lines tapered by going into the Stroke panel, where I selected “More options” and clicked the tapered option from the dropdown.

Screenshot of the Illustrator Stroke menu.

I duplicated those lines and created fatter, uniform paths underneath. I then took those fat lines and animated them onto the page. Now my drawing shows through the shape:

Here’s what I did:

  • I traced with the Pen tool and used a tapered brush.
  • I duplicated the layer and changed the lines to be uniform and thicker.
  • I took the first layer and created a compound path.
  • I simplified path points.
  • I created a clipping mask.

From there, I can animate everything with drawSVG and GreenSock. Though you don’t need to, you could use CSS for this kind of animation. There are a ton of path points in this case, so it makes sense to use something more powerful. I wrote another post that goes into depth on how to start off creating these kinds of animations. I would recommend you start there if you’re fresh to it.

To use drawSVG, we need to do a few things:

  • Load the plugin script.
  • Register the plugin at the top of the JavaScript file.
  • Make sure that paths are being used, and that there are strokes on those paths.
  • Make sure that those paths are targeted rather than the groups that house them. The parent elements could be targeted instead.

Here’s a very basic example of drawSVG (courtesy of GreenSock):

See the Pen
DrawSVGPlugin Values
by GreenSock (@GreenSock)
on CodePen.

So, in the graphics editor, there is this clipping mask and a group that is clipped by it, with fat uniform lines underneath:

From here, we’ll grab a hold of those paths and use the drawSVG plugin to animate them onto the page.

// Register the plugin
gsap.registerPlugin(DrawSVGPlugin);

const drawLines = () => {
  gsap.set('.cls-15, #yellowunderline, .cls-13', {
    visibility: 'visible'
  })

  const timeline = gsap.timeline({
    defaults: {
      delay: 1,
      ease: 'circ',
      duration: 2
    }
  })
  .add('start')
  .fromTo('.cls-15 path', {
    drawSVG: '0%'
  }, {
    drawSVG: '100%',
    immediateRender: true
  }, 'start')
  .fromTo('#yellowunderline path', {
    drawSVG: '50% 50%'
  }, {
    drawSVG: '100%',
    immediateRender: true
  }, 'start+=1')
  .fromTo('.cls-13', {
    drawSVG: '50% 50%'
  }, {
    drawSVG: '100%',
    immediateRender: true
  }, 'start+=1')
}

window.onload = () => {
  drawLines()
};

And there we have it! An initial illustration for our site that’s created from a layered drawing in the Procreate iPad app. I hope this gets you going making your web projects unique with wonderful hand-drawn illustrations. If you make anything cool, let us know in the comments below!

The post How to Turn a Procreate Drawing into a Web Animation appeared first on CSS-Tricks.


Dip Your Toes Into Hardware With WebMIDI

Did you know there is a well-supported browser API that allows you to interface with interesting and even custom-built hardware using a mature protocol that predates the web? Let me introduce you to MIDI and the WebMIDI API and show you how it presents a unique opportunity for front-end developers to break outside the browser and dabble in the world of hardware programming without leaving the relative comfort of JavaScript and the DOM.

What are MIDI and WebMIDI exactly?

MIDI is a niche protocol designed for musical instruments to communicate with each other. It was standardized in 1983 and is maintained to this day by an organization consisting of music industry companies and representatives. It’s not wildly different than how the W3C dictates and preserves web standards, in some sense.

Photo by Jiroe on Unsplash

The WebMIDI API is the browser-based implementation of this protocol and allows our web applications to “speak” MIDI and communicate with any MIDI-capable hardware that might be connected to a user’s device.

Not a musician? Don’t worry! We’ll discover very quickly that this simple protocol designed for electronic musical instruments can be used to build fun, interactive, and completely non-musical things.

Why would I want to do this?

Great question. The shortest answer: because it’s fun!

If that answer isn’t satisfying enough for you I’ll offer this: Creating something that straddles the line between the physical world and virtual world we spend most of our days building things for is a good exercise in thinking differently. It’s an opportunity for creative tinkering and for considering and creating new user interfaces and experiences to navigate. I truly think this kind of playful exploration helps make us use different parts of our brains and makes us better developers in the long-haul.

What kind of things can I build?

  • Cellular Automata on a MIDI controller
  • Playing Go
  • Whack-a-mole
  • Mixing colors with hand motions

What do I need to get started?

Here are the prerequisites to start experimenting with WebMIDI:

A MIDI controller

This might be the trickiest part. You’ll need to procure a MIDI-capable piece of hardware to experiment with. You might be able to find something cheap on Craigslist, Amazon or AliExpress. Or — if you’re really ambitious and have an Arduino available — you can build your own (see the end of this article for more information about this).

A WebMIDI-capable browser

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome   Opera   Firefox   IE   Edge   Safari
43       30      No        No   76     No

Mobile / Tablet

iOS Safari   Opera Mobile   Opera Mini   Android   Android Chrome   Android Firefox
No           46             No           76        78               No

As of this writing, according to caniuse.com it’s supported by approximately 73% of browsers, though most of the heavy-lifting is done by Chromium. Any Chromium-based browser will support WebMIDI—that includes Electron apps and the newer Chromium-based Microsoft Edge. It’s also supported on Opera and the Samsung Internet Browser. On Firefox it’s still being discussed but hopefully coming sooner than later.

Hello, WebMIDI

Once you’ve procured both of those things, we can start writing some code! Working with the WebMIDI API is not terribly different than working with other browser APIs like the Geolocation or MediaDevices APIs, if you’re familiar with either of those.

The high-level flow looks like this:

  • We detect availability of the WebMIDI API in the browser.
  • If detected, we request permission from the user to access it.
  • Once we’re granted permission, we now have access to additional methods to detect and communicate with any connected MIDI devices.

Let’s see that in action:

if ("requestMIDIAccess" in navigator) {   // The Web MIDI API is available to us! }

Now, assuming we’re in a WebMIDI-capable browser, let’s request access:

navigator.requestMIDIAccess()
  .then((access) => {
    // The user gave us permission. Now we can
    // access the MIDI-capable devices connected
    // to the user's machine.
  })
  .catch((error) => {
    // Permission was not granted. :(
  });

If the user gives us permission, we should now have access to the MIDIAccess interface. This helps us build a list of the devices that we can receive MIDI input from and send MIDI output to.

Let’s do that next. This is the code that goes inside the function we’re passing into then from the previous code snippet:

const inputs = access.inputs;
const outputs = access.outputs;

// Iterate through each connected MIDI input device
inputs.forEach((midiInput) => {
  // Do something with the MIDI input device
});

// Iterate through each connected MIDI output device
outputs.forEach((midioutput) => {
  // Do something with the MIDI output device
});

You might be wondering what the difference is between a MIDI input and output device. Some devices are set up to only send MIDI information to the computer (these will be listed as inputs) and others can receive information from the computer (these will appear as outputs). It’s not uncommon for a device to both send and receive, so you will find it listed under both.
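As a quick sanity check, a small sketch like this (building on the access object from earlier) logs each connected port by name:

// Every MIDIPort exposes a name and manufacturer we can print out
access.inputs.forEach((midiInput) => {
  console.log(`Input: ${midiInput.name} (${midiInput.manufacturer || 'unknown manufacturer'})`);
});

access.outputs.forEach((midiOutput) => {
  console.log(`Output: ${midiOutput.name} (${midiOutput.manufacturer || 'unknown manufacturer'})`);
});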

Now that we have code that can iterate through all the connected MIDI devices, there are basically only two things we’ll want to do:

  • If it’s an input device, we’ll want to listen for any incoming MIDI messages emitting from it.
  • If it’s an output device, we might want to send a MIDI message to it.

The code for setting up an event listener to respond to any incoming MIDI messages from our input devices looks very similar to an event listener you might setup for other DOM events, except in this case, the event we’re listening for is the midimessage event:

input.addEventListener('midimessage', (event) => {
  // The event object will have a data property
  // that contains an array of 3 numbers. For example:
  // [144, 63, 127]
})

If we want to send a MIDI message to an output device, we can do so like this:

output.send([144, 63, 127]);

Here is a CodePen demo with most of this put together for you. It will let you know about all of the MIDI inputs and output devices connected to your system and show you incoming MIDI messages as they happen:

See the Pen
WebMIDI Basic Test
by George Mandis (@georgemandis)
on CodePen.

WebMIDI Test demo screenshot highlighting which MIDI input and output devices were found
WebMIDI Test demo screenshot highlighting MIDI messages received by one of the MIDI input devices. In this case, we’re seeing when a key on the MIDI keyboard is pressed.
WebMIDI Test demo screenshot highlighting MIDI messages received by one of the MIDI input devices. In this case, we’re seeing when a key on the MIDI keyboard is released.

You might be wondering a couple things at this point:

  • When listening for the midimessage event, how do I make heads or tails of that three-number array in event.data?
  • Why did you send an array of three numbers to your MIDI output device and why did you send those specific numbers?

The answer to both of these questions lies in further exploring and understanding how the MIDI protocol works and the problems it was designed to solve.

Anatomy of a MIDI message

When a MIDI controller “speaks” to another MIDI-capable device or computer, they are sending and receiving MIDI messages with one another. The protocol underlying this communication is fairly simple in practice but a little verbose when explained. Still, I’ll try.

Every MIDI message consists of three bytes, each made up of 8 bits (0-255). Represented in binary, a message might look like this:

10010000 | 00111100 | 01111111

There are only two types of MIDI bytes: status and data. Every message will consist of one status byte and two data bytes.

The status byte is intended to communicate what kind of message is being delivered, including things like:

  • Note On
  • Note Off
  • Pitch Bend Change
  • Control/Mode Change
  • Program Change

…and many others.

If you’re coming at this from a non-musical background, these status messages might seem kind of strange, but don’t worry too much about it. The data byte is intended to provide more information and context to the status. To give an example, if I have a MIDI piano plugged into my machine and press a key to play a note, it would send a “Note On” status byte accompanied by data bytes indicating which note I played, and perhaps how hard I pressed it.

A status byte will always begin with the number 1 and data bytes with the number 0.

10010000 | 00111100 | 01111111
^ status   ^ data1    ^ data2

For data bytes, that leaves 7 bits to express the data in that byte. That gives us an integer range of 0-127.

For status bytes, the next 3 bits after the first describe the type of status message, while the remaining 4 bits describe the channel. To break down our binary representation:

10010000
^          status bit
 ^^^       message type
    ^^^^   channel

How this translates into WebMIDI and JavaScript

As you may have guessed from the code samples earlier, with the WebMIDI API, we seldom have to deal with these binary representations directly. When we send and receive these messages in JavaScript we simply use arrays like this:

[144, 63, 127]

If you’re working with existing musical hardware, it’s helpful to have this deeper understanding of how and why the messages are structured the way they are. It’s helpful to know that receiving a 144 in your first byte means a note is being turned on in the first channel and that a 128 would indicate that a note is being turned off.
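To make that concrete, here’s a hedged sketch that decodes the three numbers from a midimessage event using the status/data layout described above (the variable names are mine, and input is one of the input devices from earlier):

input.addEventListener('midimessage', (event) => {
  const [status, data1, data2] = event.data;

  const messageType = status >> 4;       // top 4 bits: 9 means "Note On", 8 means "Note Off"
  const channel = (status & 0x0f) + 1;   // bottom 4 bits: the channel, shown here as 1-16

  if (messageType === 9) {
    console.log(`Note ${data1} on, velocity ${data2}, channel ${channel}`);
  } else if (messageType === 8) {
    console.log(`Note ${data1} off, channel ${channel}`);
  }
});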

However, if we’re building non-musical experiences and creating our own hardware, these numbers can be repurposed to represent whatever you want!

What kind of hardware can I use?

Any MIDI-capable device that can be connected to your computer should also be accessible through the WebMIDI API. Devices that are capable of sending MIDI data to another MIDI-capable device are often called MIDI controllers. A common example would be a simple, piano-style keyboard like this Korg nanoKey2:

But they can vary widely in appearance and modes of interaction. Buttons are certainly common, but you might also find some that incorporate dials or pressure-sensitive pads like the AKAI LPD8:

Others use more abstract and interesting modes of interaction, including mapping motion or breath to MIDI signals. For example, this controller (The Hothand from Source Audio) uses three accelerometers to map hand gestures to MIDI messages:

Some controllers can both send and receive MIDI messages, allowing for you to have a true two-way conversation with the physical world. The Novation Launchpad is a classic example — buttons can be pressed to send messages and messages can also be received to dynamically change colors on the device:

Can I build my own hardware?

It turns out MIDI controllers are not terribly difficult to build, and you can find a lot of home-brewed ones out in the wild. They can get much more elaborate in a hurry. Some can be downright bananas:

Bananas connected by wires to an Adafruit Circuit Playground programmed to function as a MIDI instrument

Building your own MIDI controller will take you a bit outside the world of JavaScript, but it’s still surprisingly accessible if you’re familiar with or interested in the Arduino platform. The Circuit Playground Classic from Adafruit is a great device to get started with and you can find starter code to flash to the device and make it into a multi-faceted MIDI controller here on GitHub.

Summary

The WebMIDI API is a low-barrier-to-entry way for front-end developers to start experimenting with basic hardware and software interactions. The implementation is relatively straightforward compared to some other hardware web APIs (like Bluetooth) and the MIDI standard is well-documented. There are lots of existing MIDI-capable devices out there to experiment or build cool things with, and if you really want to go all-out and start building your own custom MIDI hardware for your project, you can do that too.

Go out there and make something!

The post Dip Your Toes Into Hardware With WebMIDI appeared first on CSS-Tricks.


ImageKit.io: Image Optimization That Plugs Into Your Infrastructure

Images are the most efficient means to showcase a product or service on a website. They make up most of the visual content on our websites.

But the more images a webpage has, the more bandwidth it consumes, affecting the page load speed, a factor that has a significant impact not just on our search ranking, but also on our conversion rates.

Image optimization helps serve the right images and improve the page load time. A faster website has a positive impact on our user experience.

With image optimization becoming a standard practice for websites and apps, finding a service that offers the most competent features with a coherent pricing model, one that integrates seamlessly with the existing infrastructure and business needs, is paramount to any website and its efficiency.

So what are the most common criteria for selecting an image optimization tool?

The importance of image optimization isn’t up for debate, but our choice of tool or service for it may just be. And it is a factor we should consider carefully. So where are the challenges?

  • Inability to integrate with existing infrastructure – Image optimization is very fundamental on modern websites. To implement it, we should not have to make any changes to our existing setup. But unfortunately, a lot of these tools or services can only be used with specific CDNs or are incapable of being integrated with our existing image servers or storage.
  • Dependency on their image storage – Some tools require us to move our images to their system, making us more dependent on them — an added hassle. And nobody wants to spend their time and effort on a virtually unnecessary step like migrating assets from one platform to another (and maybe migrating to some other service in the future, if this one doesn’t work out). Not when it’s not needed. Not if we have hundreds of thousands of images and can never be sure if all the images have been migrated to the new tool or not.

Apart from the above challenges, it is possible that the choice of tool could be expensive or has a complex pricing structure. Or could have only the basic image-related features or not deliver excellent performance across the globe.

Enter ImageKit.io.

It is the only tool that we will need for image optimization and transformation with minimal effort and almost no changes to our existing infrastructure.

It is a complete image CDN with optimization and transformation capabilities for images on websites and apps. What does that mean?

Feed ImageKit.io an original, un-optimized image and fetch it using an ImageKit.io URL, and it shall deliver an optimized, rightly resized image to our user’s devices!

For example:

ImageKit.io resizes the image automatically by simply specifying the size in the image URL.

https://ik.imagekit.io/demo/img/tr:w-300,h-100/default-image.jpg

Here, the width and height of the final image are specified with “w” and “h” parameters in the URL. The output image has dimensions 300×100px, scaled down from an original size of 1000×1000px.

As we know, the right image formats can have a significant impact on our website’s bandwidth consumption.

For instance, the following PNG image,

https://ik.imagekit.io/demo/img/seo-img_E3VIYyKu6.jpg?tr=f-png

is almost 2.5x the size of its JPG variant.

ImageKit.io automatically uses the correct output image format based on image content, image quality, browser support, and user device capabilities. Also, the above image will be converted and delivered as a WebP for all browsers supporting the WebP image format.

Just turn it on, and we’re good to go!

And not just simple resizing and cropping, ImageKit.io allows for more advanced transformations like smart crop, image and text overlays, image trimming, blurring, contrast and sharpness correction, and many more. It also comes with advanced delivery features like Brotli compression for SVG images and image security. ImageKit.io is the solution to all our image optimization needs. And more.

It also comes bundled with a stellar infrastructure — AWS CloudFront CDN, multi-region core processing servers, and a media library with automatic global backups.

Integrations

It isn’t enough to find the most appropriate tool for our website. It should also provide an optimum experience fitting in just right with our existing setup.

ImageKit.io integration with the existing infrastructure

With S3 bucket or other web servers

It is usually a tedious task to integrate any image optimization tool into our infrastructure. For example, for images, we would already have a CMS in place to upload the images. And a server or storage to store all of those images. We would not want to make any significant changes to the existing setup that could potentially disrupt all our team’s processes.

But with ImageKit.io, the whole process from integration to image delivery takes just a few minutes. ImageKit.io allows us to plug our S3 bucket or webserver and start delivering optimized and transformed images instantly. No movement or re-upload of images is required. It takes just a few minutes to get the best optimization results.

We can attach our server or storage using “ADD ORIGIN” in the ImageKit.io dashboard

Such simple integrations with AWS S3 or web servers are not available even with some leading players in the market.

Images fetched through ImageKit.io are automatically optimized and converted to the appropriate format. And we can resize and transform any image by just adding URL parameters.

For example:

Image with a width of 300px, where the height is adjusted automatically to preserve the aspect ratio:

https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-300

or,

https://ik.imagekit.io/your_imagekit_id/tr:w-300/rest-of-the-path.jpg

We are using the https://ik.imagekit.io/your_imagekit_id format for the URLs, but custom domain names are available with ImageKit.io.

Learn more about it here.
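As a small illustration of how those URL parameters can play out in markup, here’s a hedged sketch that uses only the w transform shown above (the breakpoints and alt text are arbitrary):

<img
  src="https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-600"
  srcset="https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-300 300w,
          https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-600 600w,
          https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-1200 1200w"
  sizes="(max-width: 600px) 100vw, 600px"
  alt="A responsive image resized on the fly"
/>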

Media library

ImageKit.io also provides a media library, a feature missing in many prominent image optimization tools. Media library is a highly available file storage for all ImageKit.io users. It comes with a simple user interface to upload, search, and manage files, images, and folders.

ImageKit.io Media Library

With platforms like Magento and WordPress

Most SMBs use WordPress to build their websites. ImageKit.io proves handy and almost effortless to set up for these websites.

Installing a plugin on WordPress and making some minor changes in the settings from the WP plugin is all we need to do to integrate ImageKit.io with our WordPress website for all essential optimizations. Learn more about it here.

Similarly, we can integrate ImageKit.io with our Magento store. The essential format and quality optimizations do not require any code change and can be set up by making some changes in Settings in the Magento Admin Panel. Learn more about it here.

For advanced use cases like granular image quality control, smart cropping, watermarking, text overlays, and DPR support, ImageKit.io offers a much better solution, with more robust features, more control over the optimizations and more complex transformations, than the ones provided natively by WordPress or Magento. And we can implement them by making some changes in our website’s template files.

With other CDNs

Most businesses use CDNs for their websites, usually under contracts, or running processes other than or in addition to image optimizations on these CDNs, or simply because a particular CDN performs better in a specific region.

Even then, it is possible to use ImageKit.io with our choice of CDN. They support integrations with:

  • Akamai
  • Azure
  • Alibaba Cloud CDN
  • CloudFlare (if one is on an Enterprise plan in CloudFlare)
  • CloudFront
  • Fastly
  • Google CDN
  • Zenedge

To ensure correct optimizations and caching on our CDN, their team works closely with their customer’s team to set up the configuration.

Custom domain names

As image optimization has become a norm, services like LetsEncrypt, which make obtaining and deploying an SSL certificate free, have also become standard practice. In such a scenario, enabling a custom domain name should not be charged heavily.

ImageKit.io helps us out here too.

It offers one custom domain name like images.example.com, SSL and HTTP2 enabled, free of charge, with each paid account. Additional domain names are available at a nominal one-time fee of $50. There are no hidden costs after that.

One ImageKit.io account, multiple websites

For image optimization needs for multiple websites or agencies handling hundreds of websites, ImageKit.io would prove to be an ideal solution. We’re free to attach as many “origins” (storages or servers) as we need with ImageKit.io. What that means is that each website has its own storage or image server, which can be mapped to separate URLs within the same ImageKit.io account.

Configure the origin for a new site in the ImageKit.io dashboard and map it against a URL endpoint

The added advantage of using ImageKit.io for multiple websites and apps is their pricing model. When one consolidates the usage across their many business accounts, and if it crosses the 1TB mark, they can reach out to ImageKit.io’s team for a special quote. Learn more about it here.

Multiple server regions

ImageKit.io is a global image CDN. It has a distributed infrastructure, with processing regions around the world in North Virginia (US East), North California (US West), Frankfurt (EU), Mumbai (India), Sydney (Australia), and Singapore, aiming to reduce latency between their servers and our image origins.

Multiple server regions with processing and delivery nodes across the globe

Additionally, ImageKit.io stores the images (transformed as well as original) in their cache after fetching them from our origins. This intermediate caching is done to avoid processing the images again. Doing so not only reduces the load on our origin servers and storage but also reduces data transfer costs from them.

Custom cache time

There are times when a shorter cache time is required. For cases where one or more images are set to change at intervals, while the others remain the same, ImageKit.io offers the option to set up custom cache times in place of the default 180 days preset in ImageKit.io, giving us more control over how our resources are cached.

Custom cache time

This feature is usually not provided by third-party image CDNs or is available to enterprise customers only, but it is available with ImageKit.io on request.

Deliver non-image files through ImageKit.io CDN

ImageKit.io comes integrated with a CDN to store and serve resources. And while the primary use case is to deliver optimized images, it can do the same for the other static assets on our website, like JS, CSS as well as fonts.

There is no point dividing our static resources between two services when one, ImageKit.io, can do the whole job on its own.

It’s worth a mention here that though ImageKit.io provides delivery for non-image assets, it does not process or optimize them.

With ImageKit.io’s pricing model in mind, using their CDN may prove beneficial as it bills only based on one parameter — the total bandwidth consumed.

Cost efficiency

Most image optimization tools charge us for image storage or bill us based on requests or the number of master images or transformations.

ImageKit.io bills its customers on a single parameter, their output bandwidth. With ImageKit.io, we can optimize and transform to our heart’s content, without worrying about the costs they may incur. With our focus no longer on the billing costs, we can now focus on important tasks, like optimizing our images and website.

Usually, when an ImageKit.io account crosses 1TB total bandwidth consumption, that account becomes eligible for special pricing.

One can contact their team for a quote after crossing that threshold.

Final thoughts

Image optimization holds many benefits, but are we doing it right? And are we using the best service in the market for it?

ImageKit.io might prove to be the solution for all image optimization and management needs. But we don’t have to take their word for it. We can know for ourselves by joining the many developers and companies using ImageKit.io to deliver best-in-class image optimizations to their users by signing up for free here.

The post ImageKit.io: Image Optimization That Plugs Into Your Infrastructure appeared first on CSS-Tricks.
