Tag: Automated

Create Your Own Automated Social Images With Resoc

There has been a lot of talk about automated social images lately. GitHub has created its own. A WordPress plugin has been acquired by Jetpack. There is definitely interest! People like Ryan Filler and Zach Leatherman have implemented social images on their websites. They had to code a lot of things on their own. But the landscape is changing and tools are available to smooth the process.

In this tutorial, we are going to create our own automated social images with HTML and CSS, integrate them into an Eleventy blog — mostly by configuration — and deploy our site to Netlify.

If you really, really can’t wait, check the result or browse the project!

What are social images again?

In the <head> section of HTML, we insert a few Open Graph markups:

<meta property="og:title" content="The blue sky strategy" />
<meta property="og:description" content="Less clouds, more blue" />
<meta property="og:image" content="/sky-with-clouds.jpg" />

When we share this page on Facebook, we and our friends see this:

LinkedIn, Twitter, WhatsApp, Slack, Discord, iMessage… All these sites behave pretty much the same way: they provide a visual “card” that accompanies the link, giving it more space and context.

Twitter has its own set of markups with its Twitter Cards, but they are very similar. And Twitter falls back to Open Graph when it can’t find them.
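For reference, a Twitter Card setup uses markup along these lines (the exact tags depend on the card type; summary_large_image displays the large image, and Twitter fills in anything omitted from the og: tags):

```html
<!-- Twitter Card markup; anything missing here falls back to the og: tags -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="The blue sky strategy" />
<meta name="twitter:image" content="/sky-with-clouds.jpg" />
```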

It is natural for our pages to have a title and a description. But in the screenshot above, they are quite small compared to the space and attention the picture of sky and clouds gets — not to mention the size of the clickable area. That’s the power of the social image. It’s easy to understand the impact these images can have when a link is shared.

From Level 0 to Level 3

Not all social images are created equal. These are not official terms, but let’s define numbered “levels” for how impactful these social image cards can be.

Level 0

The most basic social image is no image. The link might get lost in a sea of content, with only a small area and not much visual appeal.

Level 1

A classic technique is to create a site-wide social image. While this solution might seem to offer a good outcome-to-effort ratio, one could argue this is worse than no image at all. Sure, we get some attention, but the reaction might be negative, especially if people see a lot of links to this website that all look the same. It risks feeling repetitive and unnecessary.

Level 2

The next level is standard in blogs and media sites: the social image of a post. Each post has its own featured image, and they differ from one post to another. This practice is totally legitimate for a news site, where the photo complements the page content. The potential drawback here is that it requires effort to find and create artwork for each and every published post.

That might lead to a bit of laziness. We’ve all been exposed to images that are obviously stock photos. It might get attention, but perhaps not the kind of attention you actually want.

Need an image of an intentionally diverse group of people meeting around a table for work? There’s a ton of them out there!

Level 3

The final level: per-page, content-rich, meaningful social images. CSS-Tricks is doing just this. The team’s social images are branded. They share the same layout. They mention the post title, along with the author’s name and profile picture, something the regular title and description could not show. They grab attention and are memorable.

The CSS-Tricks social card incorporates post-related information that’s worth a look.

There is an obvious requirement with this approach: automation. It is out of the question to create unique images for every possible link. Just think of the overhead. We’d need some programmatic solution to help with the heavy lifting.

Let’s start a blog with blog posts that have unique social images

To give ourselves a nice little excuse (and sandbox) to build out unique social images, we’ll put together a quick blog. When I write and publish an article to this blog, I follow a quick two-step process:

  1. Write and publish the article
  2. Post the published URL to my social network accounts

This is when social images must shine. We want to give our blog its best shot at being noticed. But that’s not our only goal. This blog should establish our personal brand. We want our friends, colleagues, and followers to remember us when they see our social posts. We want something that’s repeatable, recognizable, and representative of ourselves.

Creating a blog is a lot of work. Although automated social images are cool, it’s unwise to spend too much time on them. (Chris came to the same conclusion at the end of 2020.) So, in the interest of efficiency, we’re making an Eleventy site. Eleventy is a simple static site generator. Instead of starting from scratch, let’s use one of the starter projects. In fact, let’s pick the first one, eleventy-base-blog.

This is just the base template. We’re only using it to make sure we have posts to share.

Visit the eleventy-base-blog GitHub page and use it as a template:

Use eleventy-base-blog as a template

Let’s create the repository, giving it a name and description. We can make it public or private; it doesn’t matter.

Next, we clone our repository locally, install packages, and run the site:

git clone [your repo URL]
cd my-demo-blog ### Or whatever you named it
npm install
npm run serve

Our site is now running at http://localhost:8080.

Now let’s deploy it. Netlify makes this a super quick (and free!) task. (Oh, and spoiler alert: our social images automation relies on a Netlify Function.)

So, let’s go to Netlify and create an account, that is, if you don’t already have one. Either way, create a new site:

Click the “New site from Git” button to link up the project repo for hosting and deployment.

Go through the process of allowing Netlify to access the blog repository.

Simply leave the default values as they are and click the “Deploy site” button.

Netlify deploys our site:

After a minute or so, the blog is deployed:

The site is deployed — we’re all set!

One image template to rule them all

Our social images are going to be based on an image template. To design this template, we are going to use the technologies we already know and love: HTML and CSS. HTML doesn’t turn itself into images auto-magically, but there are tools for this, the most famous being headless Chrome with Puppeteer.

However, instead of building our social image stack ourselves, we use the Resoc Image Template Development Kit. So, from the project root we can run this in the terminal:

npx itdk init resoc-templates/default -m title-description

This command creates a new image template in the resoc-templates/default directory. It also opens the template viewer in a new browser window.

The viewer provides a browser preview of the template configuration, as well as a UI for changing the parameter values.

We could use this template as-is, but that only gets us to Level 2 on the “impactful” spectrum. What we need to make this go all the way up to Level 3 and match the CSS-Tricks template is:

  • the page title aligned to the right with a bit of negative space on the left.
  • a footer at the bottom that contains a background gradient made from two colors we are going to use throughout the blog
  • the post author’s name and profile picture

If we head back to the browser, we can see in the Parameters panel of the template viewer that the template expects two parameters: a title and description. That’s just the template we chose when we ran -m title-description in the terminal as we set things up. But we can add more parameters by editing resoc-templates/default/resoc.manifest.json. Specifically, we can remove the second parameter to get:

{
  "partials": {
    "content": "./content.html.mustache",
    "styles": "./styles.css.mustache"
  },
  "parameters": [
    {
      "name": "title",
      "type": "text",
      "demoValue": "A picture is worth a thousand words"
    }
  ]
}

The viewer reflects the change in the browser:

Now the description is gone.

It’s time to design the image itself, which we can do in resoc-templates/default/content.html.mustache:

<div class="wrapper">   <main>     <h1>{{ title }}</h1>   </main>   <footer>     <img src="profil-pic.jpg" />     <h2>Philippe Bernard</h2>   </footer> </div>

That’s just regular HTML. Well, except {{ title }}. This is Mustache, the templating framework Resoc uses to inject parameter values into the template. We can even type some text in the “Title” field to see it working:
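If you haven’t used Mustache before, the interpolation itself is easy to picture. Here is a rough, hand-rolled sketch of what {{ title }} substitution does (the real library also handles HTML escaping, sections, partials, and more):

```javascript
// Minimal sketch of Mustache-style {{ name }} interpolation.
// Unknown names are replaced with an empty string; the real
// Mustache library additionally escapes HTML by default.
function renderSketch(template, values) {
  return template.replace(/{{\s*(\w+)\s*}}/g, (match, name) =>
    name in values ? String(values[name]) : ''
  );
}

const html = renderSketch('<h1>{{ title }}</h1>', {
  title: 'A picture is worth a thousand words'
});
// html is '<h1>A picture is worth a thousand words</h1>'
```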

Looking at the previews, notice that we’re missing an image, profil-pic.jpg. Copy your best profile picture to resoc-templates/default/profil-pic.jpg:

The profile picture is now set.

It’s time to write the CSS in resoc-templates/default/styles.css.mustache. The point of this post isn’t how to style the template, but here’s what I ended up using:

@import url('https://fonts.googleapis.com/css2?family=Anton&family=Raleway&display=swap');

.wrapper {
  display: flex;
  flex-direction: column;
}

main {
  flex: 1;
  display: flex;
  flex-direction: column;
  justify-content: center;
  position: relative;
}

h1 {
  text-align: right;
  margin: 2vh 3vw 10vh 20vw;
  background: rgb(11,35,238);
  background: linear-gradient(90deg, rgba(11,35,238,1) 0%, rgba(246,52,12,1) 100%);
  -webkit-text-fill-color: transparent;
  -webkit-background-clip: text;
  font-family: 'Anton';
  font-size: 14vh;
  text-transform: uppercase;
  text-overflow: ellipsis;
  display: -webkit-box;
  -webkit-line-clamp: 3;
  -webkit-box-orient: vertical;
}

h2 {
  color: white;
  margin: 0;
  font-family: 'Raleway';
  font-size: 10vh;
}

footer {
  flex: 0 0;
  min-height: 20vh;
  display: flex;
  align-items: center;
  background: rgb(11,35,238);
  background: linear-gradient(90deg, rgba(11,35,238,1) 0%, rgba(246,52,12,1) 100%);
  padding: 2vh 3vw 2vh 3vw;
}

footer img {
  width: auto;
  height: 100%;
  border-radius: 50%;
  margin-right: 3vw;
}

Most of the sizes rely on vw and vh units to help anticipate the various contexts the template might be rendered in. We are going to follow Facebook’s recommendations, which are 1200×630. Twitter Cards, on the other hand, are sized differently. We could render images at a lower resolution, like 600×315, but let’s go with 1200×630 so we only have to deal with a single size.

The viewer renders the Facebook preview at 1200×630 and scales it down to fit the screen. If the preview fulfills your expectations, so will the actual Open Graph images.

So far, the template matches our needs:

What about the image?

There is one little thing to add before we are done with the template. Some of our blog posts will have images, but not all of them. When a post does have an image, it would be cool to use it to fill the space on the left.

This is a new template parameter, so we need to update resoc-templates/default/resoc.manifest.json once again:

{
  "partials": {
    "content": "./content.html.mustache",
    "styles": "./styles.css.mustache"
  },
  "parameters": [
    {
      "name": "title",
      "type": "text",
      "demoValue": "A picture is worth a thousand words"
    },
    {
      "name": "sideImage",
      "type": "imageUrl",
      "demoValue": "https://resoc.io/assets/img/demo/photos/pexels-photo-371589.jpeg"
    }
  ]
}

Let’s declare an additional div in resoc-templates/default/content.html.mustache:

<div class="wrapper">   <main>     {{#sideImage}}     <div class="sideImage"></div>     {{/sideImage}}     <h1>{{ title }}</h1>   </main>   <footer>     <img src="profil-pic.jpg" />     <h2>Philippe Bernard</h2>   </footer> </div>

The new {{#sideImage}} ... {{/sideImage}} syntax is a Mustache section. It’s only present when the sideImage parameter is defined.
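Section behavior is just as easy to sketch as interpolation. Roughly speaking (again, a simplified stand-in for what the real Mustache library does), the block between the section tags survives only when the named value is truthy:

```javascript
// Sketch of Mustache section behavior: the markup between {{#name}} and
// {{/name}} is kept only when `name` has a truthy value, otherwise dropped.
function renderSection(template, name, values) {
  const re = new RegExp(`{{#${name}}}([\\s\\S]*?){{/${name}}}`, 'g');
  return template.replace(re, (match, inner) => (values[name] ? inner : ''));
}

const tpl = '{{#sideImage}}<div class="sideImage"></div>{{/sideImage}}';
// with { sideImage: '/img.jpg' } the div stays; with {} it disappears
```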

We need a little extra CSS to handle the image. Notice that we’re able to use the Mustache syntax here to insert the background-image value for a specific post. Here’s how I approached it in the resoc-templates/default/styles.css.mustache file:

{{#sideImage}}
.sideImage {
  position: absolute;
  width: 100%;
  height: 100%;
  background-image: url({{{ sideImage }}});
  background-repeat: no-repeat;
  background-size: auto 150vh;
  background-position: -35vw 0vh;
  -webkit-mask-image: linear-gradient(45deg, rgba(0,0,0,0.5), transparent 40%);
}
{{/sideImage}}

Our template looks great!

We commit our template:

git add resoc-templates
git commit -m "Resoc image template"

Before we automate the social images, let’s generate one manually, just as a teaser. The viewer provides a command line to generate the corresponding image for our testing purposes:

Copy it, run it from a terminal and open output-image.jpg:

Social images automation

OK, so we created one image via the command line. What should we do now? Call it as many times as there are pages on our blog? This sounds like a boring task, and there is a deeper issue with this approach: time. Even if creating a single image took only a couple of seconds, multiply that by the number of pages and the build time easily balloons.

The original Eleventy blog template builds almost instantly, but we would have to wait about a minute for something as marginal as social images? That’s not acceptable.

Instead of performing this task at build time, we are going to defer it, lazy style, with a Netlify Function and a Netlify on-demand builder. In fact, we won’t deal directly with a Netlify Function at all — an Eleventy plugin is going to handle it for us.

Let’s install that now. We can add the Resoc Social Image plugin for Eleventy, along with its companion Netlify plugin, with this command:

npm install --save-dev @resoc/eleventy-plugin-social-image @resoc/netlify-plugin-social-image

Why two plugins? The first one is dedicated to Eleventy, while the second one is framework-agnostic (for example, it can be used for Next.js).

Edit .eleventy.js at the root of the project so that we’re importing the plugin:

const pluginResoc = require("@resoc/eleventy-plugin-social-image");

Configure it near the top of .eleventy.js, right after the existing eleventyConfig.addPlugin:

eleventyConfig.addPlugin(pluginResoc, {
  templatesDir: 'resoc-templates',
  patchNetlifyToml: true
});

templatesDir is where we stored our image template. patchNetlifyToml asks the plugin to configure @resoc/netlify-plugin-social-image in netlify.toml for us.

We want all our pages to have automated social images. So, let’s open the master template, _includes/layouts/base.njk, and add this near the top of the file:

{% set socialImageUrl %}
{%- resoc
  template = "default",
  slug = (title or metadata.title) | slug,
  values = {
    title: title or metadata.title,
    sideImage: featuredImage
  }
-%}
{% endset %}

This declares a new variable named socialImageUrl. The content of this variable is provided by the resoc shortcode, which takes three parameters:

  • The template is the subdirectory that contains our template (here, resoc-templates/default).
  • The slug is used to build the social image URL (e.g. /social-images/brand-new-post.jpg). We slug-ify the page title to provide a unique and sharable URL.
  • The values are the content, as defined in resoc-templates/default/resoc.manifest.json. title is obvious, because pages already have a title. sideImage is set to a meta named featuredImage, which we are going to define for illustrated pages.
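That slug-ification step is easy to picture. A function along these lines (a hypothetical sketch, not necessarily the exact filter Eleventy applies) lowercases the title and collapses everything that isn’t alphanumeric into hyphens:

```javascript
// Hypothetical sketch of a slug filter: lowercase, strip punctuation,
// then collapse runs of whitespace and hyphens into single hyphens.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')  // drop punctuation
    .replace(/[\s-]+/g, '-');      // collapse spaces/hyphens
}

// e.g. slugify('Brand New Post!') === 'brand-new-post'
```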

Still in _includes/layouts/base.njk, we can place the cursor inside the <head> and add some new markup to populate all that data:

<meta name="og:title" content="{{ title or metadata.title }}"/> <meta name="og:description" content="{{ description or metadata.description }}"/> <meta name="og:image" content="{{ socialImageUrl }}"/> <meta name="og:image:width" content="1200"/> <meta name="og:image:height" content="630"/>

The title and description markups are similar to the existing <title> and <meta name="description">. We’re using socialImageUrl as-is for the og:image meta. We also provide the social image dimensions to round things out.

Automated social images are ready!

Let’s deploy this

When we deploy the blog again, all pages will show the text-only version of our template. To see the full version, we need to assign an image to an existing page. That requires editing one of the posts — I created four posts and am editing the fourth one, posts/fourthpost.md — so there’s a featuredImage entry after the existing meta:

---
title: This is my fourth post.
description: This is a post on My Blog about touchpoints and circling wagons.
date: 2018-09-30
tags: second tag
layout: layouts/post.njk
featuredImage: https://resoc.io/assets/img/demo/photos/pexels-pixabay-459653.jpg
---

Using an external URL is enough here, but we normally drop images in an img directory with Eleventy and provide the base URL once and for all in _includes/layouts/base.njk.
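If you go the local-image route, the usual Eleventy setup is a passthrough copy, so the image directory lands in the build output as-is. A sketch of what .eleventy.js would contain — the img directory name here is an assumption; the rest of the configuration stays put:

```javascript
// .eleventy.js — sketch, assuming images live in an img/ folder at the project root.
// addPassthroughCopy copies the directory into the build output unchanged,
// so /img/... URLs keep resolving on the deployed site.
module.exports = function (eleventyConfig) {
  eleventyConfig.addPassthroughCopy("img");
  // ...the plugin configuration from earlier stays as-is
};
```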

Build the site again:

npm run build

When running git status, we might notice two modified files in addition to the ones we edited ourselves. In .gitignore, the plugin added resoc-image-data.json. This file stores our social image data used internally by the Netlify plugin, and netlify.toml now contains the Netlify plugin configuration.

Deploy time!

git commit -a -m "Automated social images"
git push

Netlify is notified and deploys the site. Once the latest version is online, share the homepage somewhere (e.g. Slack it to yourself or use the Facebook debugger). Here’s how the social card looks for the homepage, which does not contain an image:

This is our text-only card.

And here’s how it looks for a post that does contain an image:

This card sports an image.



So far, automated social images have mostly been a matter of developers willing to explore and play around with lots of different ideas and approaches, some easy and some tough. We kept things relatively simple.

With a few lines of code, we were able to quickly set up automated social images on a blog based on Eleventy and hosted on Netlify. The part we spent the most time on was the image template, but that’s not a problem. With the viewer and Mustache already integrated, we focused on what we know, love, and value: web design.

Hopefully, tools like the Resoc image template dev kit will help automated social images go from niche hobby to mainstream.

The post Create Your Own Automated Social Images With Resoc appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.



AutomateWoo Brings Automated Communications to Bookings

AutomateWoo is this handy extension for WooCommerce that adds triggers and actions based on your online store’s activity. Someone abandoned their cart? Remind them by email. Someone made a purchase? Ask them to leave a review or follow up to see how they’re liking the product so far.

This sort of automated communication is gold. Automatically reaching out to customers based on their own activity keeps them engaged and encourages more activity, hopefully the kind that makes you more money.

AutomateWoo now integrates with WooCommerce Bookings and it includes a new trigger: booking status changes. So now, when a customer’s appointment changes—say from “pending” to “confirmed”—AutomateWoo can perform an action, like sending an email confirming the customer’s booked appointment. Or asking for feedback after the appointment. Or reminding a customer to book a follow-up appointment.

Or really anything else you can think of based on the status of a booking. It’s like having your own version of Zapier right in WordPress, but without having to manage another platform.

Once a customer’s booking status for a river rafting adventure is confirmed, they’ll get an email not only confirming the appointment but with any additional information they might need to know.

OK, real-life situation.

Sometime in the middle of last year, I was working with a client that takes online appointments, or bookings. The user finds an open spot on a calendar and books that time to come in for service. A lot like making a hair appointment.

Well, what happens when a bunch of appointments need to be canceled? I don’t need to tell you what a big deal that was this time last year. This particular client had the unfortunate task of reaching out to each and every customer and updating the status of over 200 appointments.

That was awful. Fortunately, we found a way to bulk update the appointments. Then we sent a mass email to everyone with a canceled appointment. Not the most elegant solution, but it got the job done. I’ll tell you what, though, it would have been a heckuva lot less work and the communications would have been much more polished if we had this AutomateWoo + WooCommerce Bookings combo. Simply create a trigger for a canceled status change, write the email, and bulk update the booking statuses.

AutomateWoo and WooCommerce Bookings are both paid extensions for WooCommerce. The licenses will run you $349, which includes a year of updates and customer support. The value you get out of it really depends on your overall budget and how much activity happens on your site. I can tell you that would’ve been a stellar deal when we were trying to resolve 200+ canceled appointments. And if it leads to additional bookings, upsells, and fewer cancelations (because, hey, research has shown that 75% of email revenue is generated by triggered campaigns), then it could very well pay for itself.

You can give WooCommerce Bookings a front-end test drive with a live demo. Both extensions also come with a 30-day money-back guarantee, giving you a good amount of time to try them out.

The post AutomateWoo Brings Automated Communications to Bookings appeared first on CSS-Tricks.




Parsing Markdown into an Automated Table of Contents

A table of contents is a list of links that allows you to quickly jump to specific sections of content on the same page. It benefits long-form content because it shows the user a handy overview of what content there is with a convenient way to get there.

This tutorial will show you how to parse long Markdown text to HTML and then generate a list of links from the headings. After that, we will make use of the Intersection Observer API to find out which section is currently active, add a scrolling animation when a link is clicked, and finally, learn how Vue’s <transition-group> allows us to create a nice animated list depending on which section is currently active.

Parsing Markdown

On the web, text content is often delivered in the form of Markdown. If you haven’t used it, there are lots of reasons why Markdown is an excellent choice for text content. We are going to use a markdown parser called marked, but any other parser is also good. 

We will fetch our content from a Markdown file on GitHub. Once we’ve loaded the Markdown file, all we need to do is call the marked(<markdown>, <options>) function to parse the Markdown into HTML.

async function fetchAndParseMarkdown() {
  const url = 'https://gist.githubusercontent.com/lisilinhart/e9dcf5298adff7c2c2a4da9ce2a3db3f/raw/2f1a0d47eba64756c22460b5d2919d45d8118d42/red_panda.md'
  const response = await fetch(url)
  const data = await response.text()
  const htmlFromMarkdown = marked(data, { sanitize: true });
  return htmlFromMarkdown
}

After we fetch and parse our data, we will pass the parsed HTML to our DOM by replacing the content with innerHTML.

async function init() {
  const $main = document.querySelector('#app');
  const htmlContent = await fetchAndParseMarkdown();
  $main.innerHTML = htmlContent
}

Now that we’ve generated the HTML, we need to transform our headings into a clickable list of links. To find the headings, we will use the DOM function querySelectorAll('h1, h2'), which selects all <h1> and <h2> elements within our markdown container. Then we’ll run through the headings and extract the information we need: the text inside the tags, the depth (which is 1 or 2), and the element ID we can use to link to each respective heading.

function generateLinkMarkup($contentElement) {
  const headings = [...$contentElement.querySelectorAll('h1, h2')]
  const parsedHeadings = headings.map(heading => {
    return {
      title: heading.innerText,
      depth: heading.nodeName.replace(/\D/g, ''),
      id: heading.getAttribute('id')
    }
  })
  console.log(parsedHeadings)
}

This snippet results in an array of elements that looks like this:

[
  { title: "The Red Panda", depth: "1", id: "the-red-panda" },
  { title: "About", depth: "2", id: "about" },
  // ...
]

After getting the information we need from the heading elements, we can use ES6 template literals to generate the HTML elements we need for the table of contents.

First, we loop through all the headings and create <li> elements. If we’re working with an <h2> with depth: 2, we will add an additional padding class, .pl-4, to indent them. That way, we can display <h2> elements as indented subheadings within the list of links.

Finally, we join the array of <li> snippets and wrap it inside a <ul> element.

function generateLinkMarkup($contentElement) {
  // ...
  const htmlMarkup = parsedHeadings.map(h => `
  <li class="${h.depth > 1 ? 'pl-4' : ''}">
    <a href="#${h.id}">${h.title}</a>
  </li>
  `)
  const finalMarkup = `<ul>${htmlMarkup.join('')}</ul>`
  return finalMarkup
}

That’s all we need to generate our link list. Now, we will add the generated HTML to the DOM.

async function init() {
  const $main = document.querySelector('#content');
  const $aside = document.querySelector('#aside');
  const htmlContent = await fetchAndParseMarkdown();
  $main.innerHTML = htmlContent
  const linkHtml = generateLinkMarkup($main);
  $aside.innerHTML = linkHtml
}

Adding an Intersection Observer

Next, we need to find out which part of the content we’re currently reading. Intersection Observers are the perfect choice for this. MDN defines Intersection Observer as follows:

The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport.

So, basically, they allow us to observe the intersection of an element with the viewport or with one of its parent elements. To create one, we call new IntersectionObserver(), which creates a new observer instance. Whenever we create a new observer, we need to pass it a callback function that is called when the observer observes an intersection of an element. Travis Almand has a thorough explanation of the Intersection Observer you can read, but what we need for now is a callback function as the first parameter and an options object as the second parameter.

function createObserver() {
  const options = {
    rootMargin: "0px 0px -200px 0px",
    threshold: 1
  }
  const callback = () => { console.log("observed something") }
  return new IntersectionObserver(callback, options)
}

The observer is created, but nothing is being observed at the moment. We will need to observe the heading elements in our Markdown, so let’s loop over them and add them to the observer with the observe() function.

const observer = createObserver()
$headings.map(heading => observer.observe(heading))

Since we want to update our list of links, we will pass it to the observer callback as a $links parameter so we don’t re-read the DOM on every update, for performance reasons. In the handleObserver function, we find out whether a heading is intersecting with the viewport, then obtain its id and pass it to a function called updateLinks which handles updating the class of the links in our table of contents.

function handleObserver(entries, observer, $links) {
  entries.forEach((entry) => {
    const { target, isIntersecting, intersectionRatio } = entry
    if (isIntersecting && intersectionRatio >= 1) {
      const visibleId = `#${target.getAttribute('id')}`
      updateLinks(visibleId, $links)
    }
  })
}

Let’s write the function to update the list of links. We need to loop through all links, remove the .is-active class if it exists, and add it only to the element that’s actually active.

function updateLinks(visibleId, $links) {
  $links.map(link => {
    let href = link.getAttribute('href')
    link.classList.remove('is-active')
    if (href === visibleId) link.classList.add('is-active')
  })
}

The end of our init() function creates an observer, observes all the headings, and updates the links list so the active link is highlighted when the observer notices a change.

async function init() {
  // Parsing Markdown
  const $aside = document.querySelector('#aside');

  // Generating a list of heading links
  const $headings = [...$main.querySelectorAll('h1, h2')];

  // Adding an Intersection Observer
  const $links = [...$aside.querySelectorAll('a')]
  const observer = createObserver($links)
  $headings.map(heading => observer.observe(heading))
}

Scroll to section animation

The next part is to create a scrolling animation so that, when a link in the table of contents is clicked, the user is scrolled to the heading position rather than abruptly jumping there. This is often called smooth scrolling.

Scrolling animations can be harmful if a user prefers reduced motion, so we should only animate this scrolling behavior if the user hasn’t specified otherwise. With window.matchMedia('(prefers-reduced-motion)'), we can read the user preference and adapt our animation accordingly. That means we need a click event listener on each link. Since we need to scroll to the headings, we will also pass our list of $headings and the motionQuery.

const motionQuery = window.matchMedia('(prefers-reduced-motion)');

$links.map(link => {
  link.addEventListener("click",
    (evt) => handleLinkClick(evt, $headings, motionQuery)
  )
})

Let’s write our handleLinkClick function, which is called whenever a link is clicked. First, we need to prevent the default behavior of links, which would be to jump directly to the section. Then we’ll read the href attribute of the clicked link and find the heading with the corresponding id attribute. With a tabindex value of -1 and focus(), we can focus our heading to make the users aware of where they jumped to. Finally, we add the scrolling animation by calling scroll() on our window. 

Here is where our motionQuery comes in. If the user prefers reduced motion, the behavior will be instant; otherwise, it will be smooth. The top option adds a bit of scroll margin to the top of the headings to prevent them from sticking to the very top of the window.

function handleLinkClick(evt, $headings, motionQuery) {
  evt.preventDefault()
  let id = evt.target.getAttribute("href").replace('#', '')
  let section = $headings.find(heading => heading.getAttribute('id') === id)
  section.setAttribute('tabindex', -1)
  section.focus()

  window.scroll({
    behavior: motionQuery.matches ? 'instant' : 'smooth',
    top: section.offsetTop - 20
  })
}

For the last part, we will make use of Vue’s <transition-group>, which is very useful for list transitions. Here is Sarah Drasner’s excellent intro to Vue transitions if you’ve never worked with them before. They are especially great because they provide us with animation lifecycle hooks with easy access to CSS animations.

Vue automatically attaches CSS classes for us when an element is added (v-enter) or removed (v-leave) from a list, as well as classes for when the animation is active (v-enter-active and v-leave-active). This is perfect for our case because we can vary the animation when subheadings are added or removed from our list. To use them, we need to wrap the <li> elements in our table of contents with a <transition-group> element. The name attribute of the <transition-group> defines what the CSS animation classes are called, and the tag attribute should be our parent <ul> element.

<transition-group name="list" tag="ul">
  <li v-for="(item, index) in activeHeadings" v-bind:key="item.id">
    <a :href="item.id">
      {{ item.text }}
    </a>
  </li>
</transition-group>

Now we need to add the actual CSS transitions. Whenever an element is entering or leaving, it should animate from invisible (opacity: 0) and shifted slightly toward the bottom (transform: translateY(10px)).

.list-enter,
.list-leave-to {
  opacity: 0;
  transform: translateY(10px);
}

Then we define which CSS properties we want to animate. For performance reasons, we only animate the transform and opacity properties. CSS allows us to chain transitions with different timings: the transform should take 0.8s and the fading only 0.4s.

.list-leave-active,
.list-move {
  transition: transform 0.8s, opacity 0.4s;
}

Then we want to add a bit of a delay when a new element is added, so the subheadings fade in after the parent heading moved up or down. We can make use of the v-enter-active hook to do that:

.list-enter-active {
  transition: transform 0.8s ease 0.4s, opacity 0.4s ease 0.4s;
}

Finally, we can add absolute positioning to the elements that are leaving to avoid sudden jumps when the other elements are animating:

.list-leave-active {
  position: absolute;
}

Since the scrolling interaction fades elements out and in, it’s advisable to debounce it in case someone scrolls very quickly; by debouncing we can avoid unfinished animations overlapping other animations. You can either write your own debouncing function or simply use lodash’s debounce function. For our example, the simplest way to avoid unfinished animation updates is to wrap the Intersection Observer callback function with a debounce function and pass the debounced function to the observer.

const debouncedFunction = _.debounce(this.handleObserver)
this.observer = new IntersectionObserver(debouncedFunction, options)
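If you’d rather not pull in lodash, a hand-rolled debounce is only a few lines. Here’s a minimal sketch; the debounce helper and the 100ms wait are our own choices, not from the article’s demo:

```javascript
// Minimal trailing-edge debounce: a burst of calls collapses into one
// invocation of `fn`, fired `wait` ms after the last call.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Simulate a burst of scroll-driven observer callbacks.
let calls = 0;
const handleObserver = debounce(() => { calls += 1; }, 100);
handleObserver();
handleObserver();
handleObserver();
// `calls` is still 0 here; the handler runs once, 100ms after the last call.
```

This is the same trailing-edge behavior lodash’s `_.debounce` gives you by default.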

Here’s the final demo.

Again, a table of contents is a great addition to any long-form content. It helps make clear what content is covered and provides quick access to specific content. Using the Intersection Observer and Vue’s list animations on top of it can help to make a table of contents even more interactive and even allow it to serve as an indication of reading progress. But even if you only add a list of links, it will already be a great feature for the user reading your content.

The post Parsing Markdown into an Automated Table of Contents appeared first on CSS-Tricks.




Automated Selenium Testing with Jest and LambdaTest

You know what the best thing about building and running automated browser tests is? It means that the site you’re doing it on really matters. It means you’re trying to take care of that site by making sure it doesn’t break, and that it’s worth the time to put guards in place against breakage. That’s awesome. It means you’re on the right track.

My second favorite thing about automated browser tests is just how much coverage you get for such little code. For example, if you write a script that goes to your homepage, clicks a button, and tests if a change happened, that covers a lot of ground. For one, your website works. It doesn’t error out when it loads. The button is there! The JavaScript ran! If that test passes, a lot of stuff went right. If that fails, you’ve just caught a major problem.

So that’s what we’re talking about here:

  1. Selenium is the tool that automates browsers. Go here! Click this!
  2. Jest is the testing framework that developers love. I expect this to be that, was it? Yes? PASS. No? ERROR.
  3. LambdaTest is the cloud cross-browser testing platform you run it all on.

Are you one of those folks who likes the concept of automated testing but might not know where to start? That’s what we’re going to check out in this post. By stitching together a few resources, we can take the heavy lifting out of cross-browser testing and feel more confident that the code we write isn’t breaking other things.

Serenity Selenium now!

If you’re new to Selenium, you’re in for a treat. It’s an open source suite of automated testing tools that can run tests on different browsers and platforms on virtual machines. And when we say it can run tests, we’re talking about running them all in parallel. We’ll get to that.

It’s able to do that thanks to one of its components, Selenium Grid. The grid is a self-hosted tool that creates a network of testing machines. As in, the browsers and operating systems we want to test automatically. All of those machines provide the environments we want to test and they are able to run simultaneously. So cool.

Jest you wait ’til you see this

Where Selenium is boss at running tests, Jest is the testing framework. Jest tops the charts for developer satisfaction, interest, and awareness. It provides a library that helps you run code, pointing out not only where things fall apart, but the coverage of that code as a way of knowing what code impacts what tests. This is an amazing feature. How many times have you made a change to a codebase and been completely unsure what parts will be affected? That’s what Jest provides: confidence.

Jest is jam-packed with a lot of testing power and its straightforward API makes writing unit tests a relative cinch. It comes from the Facebook team, who developed it as a way to test React applications, but it’s capable of testing more than React. It’s for literally any JavaScript, and as we’ll see, browser tests themselves.

So, let’s make Jest part of our testing stack.

Selenium for machines, Jest for testing code

If we combine the superpowers of Selenium with Jest, we get a pretty slick testing environment. Jest runs the unit tests and Selenium provides and automates the grounds for cross-browser testing. It’s really no more than that!

Let’s hit pause on developing our automated testing stack for a moment to grab Selenium and Jest. They’re going to be pre-requisites for us, so we may as well snag them.

Start by creating a new project and cd-ing into it. If you already have a project, you can cd into that instead.

Once we’re in the project folder, let’s make sure we have Node and npm at the ready.

## Run this or download the package yourself at: https://nodejs.org/
brew install node

## Then we'll install the latest version of npm
npm install npm@latest -g

Okey-dokey, now let’s install Jest. If you happen to be running a React project that was created with create-react-app, then you’re in luck — Jest is already included, so you’re good to go!

For the rest of us mortals, we’re going back to the command line:

## Yarn is also supported
npm install --save-dev jest
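Jest is typically wired up through the test script in package.json so that npm test runs it. A minimal sketch of what that entry looks like (the version number here is just an example; npm will pin whatever it installed):

```json
{
  "scripts": {
    "test": "jest"
  },
  "devDependencies": {
    "jest": "^26.0.0"
  }
}
```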

OK, we have the core dependencies we need to get to work, but there is one more thing to consider…


Yep, scale. If you’re running a large, complex site, then it’s not far-fetched to think that you might need to run thousands of tests. And, while Selenium Grid is a fantastic resource, it is hosted on whatever environment you put it on, meaning you may very well outgrow it and need something more robust.

That’s where LambdaTest comes into play. If you haven’t heard of it, LambdaTest is a cloud-based cross-browser testing tool with 2,000+ real browsers for both manual and Selenium automation testing. Not to mention, it plays well with a lot of other services, from communication tools like Slack and Trello to project management tools like Jira and Asana — and GitHub, Bitbucket, and such. It’s extensible like that.

Here’s an important thing to know: Jest doesn’t support running tests in parallel all by itself, which is really needed when you have a lot of tests and you’re running them on multiple browsers. But on LambdaTest, you can be running concurrent sessions, meaning different Jest scripts can be running on different sessions at the same time. That’s right, it can run multiple tests together, meaning the time to run tests is cut down dramatically compared to running them sequentially.

Integrating LambdaTest Into the Stack

We’ve already installed Jest. Let’s say Selenium is already set up somewhere. The first thing we need to do is sign up for LambdaTest and grab the account credentials. We’ll need to set them up as environment variables in our project.

From the command line:

## Mac/Linux
export LT_USERNAME=<your lambdatest username>
export LT_ACCESS_KEY=<your lambdatest access_key>

## Windows
set LT_USERNAME=<your lambdatest username>
set LT_ACCESS_KEY=<your lambdatest access_key>
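On the Node side, the sample script reads these variables from process.env and falls back to placeholders when they’re unset. A minimal sketch of how the remote grid URL gets assembled from them (the fallback strings are just placeholders):

```javascript
// Read LambdaTest credentials from the environment, as the sample script does.
const username = process.env.LT_USERNAME || 'your-username';
const accessKey = process.env.LT_ACCESS_KEY || 'your-access-key';

// The remote WebDriver endpoint embeds the credentials in the URL.
const gridUrl = 'https://' + username + ':' + accessKey + '@hub.lambdatest.com/wd/hub';
console.log(gridUrl);
```

This is the same URL the sample test script passes to Selenium’s usingServer().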

LambdaTest has a repo that contains a sample of how to set things up from here. You could clone that as a starting point if you’re just interested in testing things out.

Running tests

The LambdaTest docs use this as a sample test script:

const webdriver = require('selenium-webdriver');
const { until } = require('selenium-webdriver');
const { By } = require('selenium-webdriver');
const LambdaTestRestClient = require('@lambdatest/node-rest-client');

const username = process.env.LT_USERNAME || '<your username>';
const accessKey = process.env.LT_ACCESS_KEY || '<your accessKey>';

const AutomationClient = LambdaTestRestClient.AutomationClient({
  username,
  accessKey
});
const capabilities = {
  build: 'jest-LambdaTest-Single',
  browserName: 'chrome',
  version: '72.0',
  platform: 'WIN10',
  video: true,
  network: true,
  console: true,
  visual: true
};

const getElementById = async (driver, id, timeout = 2000) => {
  const el = await driver.wait(until.elementLocated(By.id(id)), timeout);
  return await driver.wait(until.elementIsVisible(el), timeout);
};

const getElementByName = async (driver, name, timeout = 2000) => {
  const el = await driver.wait(until.elementLocated(By.name(name)), timeout);
  return await driver.wait(until.elementIsVisible(el), timeout);
};

const getElementByXpath = async (driver, xpath, timeout = 2000) => {
  const el = await driver.wait(until.elementLocated(By.xpath(xpath)), timeout);
  return await driver.wait(until.elementIsVisible(el), timeout);
};

let sessionId = null;

describe('webdriver', () => {
  let driver;
  beforeAll(async () => {
    driver = new webdriver.Builder()
      .usingServer(
        'https://' + username + ':' + accessKey + '@hub.lambdatest.com/wd/hub'
      )
      .withCapabilities(capabilities)
      .build();
    await driver.getSession().then(function(session) {
      sessionId = session.id_;
    });
    // eslint-disable-next-line no-undef
    await driver.get(`https://lambdatest.github.io/sample-todo-app/`);
  }, 30000);

  afterAll(async () => {
    await driver.quit();
  }, 40000);

  test('test', async () => {
    try {
      const lnk = await getElementByName(driver, 'li1');
      await lnk.click();

      const lnk1 = await getElementByName(driver, 'li2');
      await lnk1.click();

      const inpf = await getElementById(driver, 'sampletodotext');
      await inpf.clear();
      await inpf.sendKeys("Yey, Let's add it to list");

      const btn = await getElementById(driver, 'addbutton');
      await btn.click();

      const output = await getElementByXpath(
        driver,
        '//html/body/div/div/div/ul/li[6]/span'
      );
      const outputVal = await output.getText();
      expect(outputVal).toEqual("Yey, Let's add it to list");
      await updateJob(sessionId, 'passed');
    } catch (err) {
      await webdriverErrorHandler(err, driver);
      throw err;
    }
  }, 35000);
});

async function webdriverErrorHandler(err, driver) {
  console.error('Unhandled exception! ' + err.message);
  if (driver && sessionId) {
    try {
      await driver.quit();
    } catch (_) {}
    await updateJob(sessionId, 'failed');
  }
}

function updateJob(sessionId, status) {
  return new Promise((resolve, reject) => {
    AutomationClient.updateSessionById(
      sessionId,
      { status_ind: status },
      err => {
        if (err) return reject(err);
        return resolve();
      }
    );
  });
}

Does the capabilities object look confusing? It’s actually a lot easier to write this sort of thing using the Selenium Desired Capabilities Generator that the LambdaTest team provides. That sample script defines a set of tests that run on a cloud machine configured with Chrome 72 on Windows 10. You can run the script from the command line, like this:

npm test .single.test.js

The sample repo also has an example that you can use to run the tests on your local machine, like this:

npm test .local.test.js

Great, but what about test results?

Wouldn’t it be great to have a record of all your tests, which ones are running, logs of when they ran, and what their results were? This is where LambdaTest is tough to beat because it has a UI for all of that through their automation dashboard.

The dashboard provides all of those details and more, including analytics that show how many builds ran on a given day, how much time it took to run them, and which ones passed or failed. Pretty nice to have that nearby. LambdaTest even has super handy documentation for the Selenium Automation API that can be used to extract all this test execution data that you can use for any custom reporting framework that you may have.

Test all the things!

That’s the stack: Selenium for virtual machines, Jest for unit tests, and LambdaTest for automation, hosting and reporting. That’s a lot of coverage from only a few moving pieces.

If you’ve ever used online cross-browser tools, it’s kind of like having that… but running on your own local machine. Or a production environment.

LambdaTest is free to try and worth every minute of the trial.

Get Started

The post Automated Selenium Testing with Jest and LambdaTest appeared first on CSS-Tricks.



Automated (and Guided!) Accessibility Audits with axe Pro

It’s important to know there are tools for automated accessibility testing of websites. They are a vital part in helping make sure your website is usable for everyone, which is both a noble goal and damn good for business. Automated tests won’t catch every potential accessibility issue, but they help a great deal, and in my experience, help establish a culture of caring about accessibility. The more seriously you take these tests and actually fix the problems, the more likely it is you’re doing a good job of accessibility overall.

There is one testing tool that stands above the rest, axe (it’s even what Google’s Lighthouse uses), and now it’s gotten a whole lot better with axe Pro.

The axe extension is still the powerhouse, where robust accessibility reports are a click away in DevTools:

axe in Chrome DevTools

But there’s more!

Guided Question-Answer Accessibility Walkthroughs

Not to bury the lede: this is probably the best new feature of axe Pro. The tool is so friendly, it gives accessibility testing powers to just about anybody:

“Axe Pro essentially lets every developer function as an in-house accessibility expert,” says Preety Kumar, CEO of Deque Systems.

Run the tests, and it just asks you stuff in plain English.

Screenshot of axe asking me if the page title is correct on this page
Anybody can answer questions from a guided tour like this. It’s extremely helpful to be hand-held through thinking about these important aspects of your site.

For example, here’s a test walkthrough of it helping me think about header and header levels:

  1. Start the test.
  2. Identify headers that shouldn't be.
  3. Identify elements that should be headers but aren't.

First, you identify the problems, then you can save the information so you can work on the fixes as you can.

Your tests can be saved as online reports

Working in DevTools is nice and the perfect place to be digging into the accessibility issues on your site. But that session in DevTools is short-lived! It’s just one developer working on one computer at one point in time. With the new axe Pro web dashboard, you can save your tests online. This is useful for a variety of reasons:

  • You can save your testing so far and come back to it later to keep going.
  • You can clear out tests and re-do them, to track your progress moving forward.
  • You can have multiple tests on the dashboard to help you test multiple pages and projects.
  • Your team has a home to manage these tests all together.

You can export the data

If you’d prefer to get the results out of axe and somewhere else, you can now export the data as JSON or CSV. Perhaps you want to track the results over time to visualize improvements or port the information to some other tool. The data is yours.

export dialog box for axe test results
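As a tiny illustration of what you could do with a JSON export, here’s a sketch that summarizes violations for a custom report. The shape here is an assumption based on axe-core’s results format (a violations array with id, impact, and nodes); the actual axe Pro export may differ:

```javascript
// Hypothetical axe-core-style results object; a real script would read
// the downloaded export file instead of hard-coding it.
const results = {
  violations: [
    { id: 'image-alt', impact: 'critical', nodes: [{}, {}] },
    { id: 'color-contrast', impact: 'serious', nodes: [{}] }
  ]
};

// Produce one summary line per violated rule.
function summarize(results) {
  return results.violations.map(
    v => `${v.id} (${v.impact}): ${v.nodes.length} element(s)`
  );
}

console.log(summarize(results).join('\n'));
```

From there it’s a short step to charting violation counts over time or feeding them into whatever reporting tool your team already uses.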

I found helpful prompts immediately.

I’d like to think I care about accessibility and work accessibly as I go, but if you’re anything like me, you’ll still find mistakes creeping into your codebases over time. Just like we might have automated tools watching our performance metrics over time, we can and should be regularly testing accessibility, especially with the automation helpfulness that axe Pro provides.

Just in playing with axe Pro for a few hours, I’ve…

  • Found many images that had missing and incorrect alt text on them.
  • Found a few elements that were using header tags that really shouldn’t have been.
  • Fixed some color contrast on some elements that were just barely not AA and light tweaks got them there.

That second one I found very satisfying, since the guided tour helped me find those headings. That’s something an entirely automated tool won’t really catch; it requires looking at things and making judgment calls.

I’m very optimistic this is going to help lots of folks unearth accessibility issues they wouldn’t have caught otherwise.

It’s free for a limited time

It’s kind of a big deal:

Developers can typically identify about half of all critical accessibility blockers through axe.

You might as well try it out!

The post Automated (and Guided!) Accessibility Audits with axe Pro appeared first on CSS-Tricks.

