Tag: Engine

What is Your Page Title on a Google Search Engine Results Page?

Whatever Google wants it to be. I always thought it was exactly what your <title> element was, or, failing that, what the first <h1> on the page is. But recently I noticed some pages on this site showing a SERP title that appeared nowhere at all in the source of the page.

When I first noticed it, I tweeted my basic findings…

The thread has more info than what unfurled here.

This is a known thing. Apparently, they’ve been doing this for a long time (~10 years), but it’s the first time I’ve noticed it. And it’s undergone a recent change:

The article is pretty clear about things:

[…] we think our new system is producing titles that work better for documents overall, to describe what they are about, regardless of the particular query.

Also, while we’ve gone beyond HTML text to create titles for over a decade, our new system is making even more use of such text. In particular, we are making use of text that humans can visually see when they arrive at a web page. We consider the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.

Other text contained in the page might be considered, as might be text within links that point at pages.

The change is in response to people having sucky <title> text. Like it’s too long, too jacked up with SEO garbage (irony!), or just plain non-descriptive.

I’m not entirely sure how much I care just yet.

Part of me thinks, well, google.com isn’t the web. As important as it is, it’s a proprietary product by a private company and they can do whatever they want within the bounds of the law. In this case, it’s clear the intention is to help: to provide titles that are more clear than what the original page has.

Part of me thinks, well, that sucks that, as site owners, we have no control. If Google wanted to change the SERP title for every result for this website to “CSS-Tricks is a stupid website, never visit it,” they could and that’s that.

Part of me connects this kind of work to AMP. AMP was basically saying, “Y’all are absolutely horrible at building performant mobile websites, so we’re going to build a strict set of rules such that you can’t screw it up anymore, and dangle a carrot of better SERP placement if you buy into the rules.” This way of creating page titles is basically saying, “Y’all are absolutely horrible at providing good titles, so we’re going to title your pages for you so you can’t screw it up anymore and we can improve our SERPs.”

Except with AMP, you had to put in the development hours to make it happen. It was opt-in, even if the carrot was un-ignorable by content companies. This doesn’t carry the risk of burning development hours, but it’s also not something we get to opt into.



How to Create a Commenting Engine with Next.js and Sanity

One of the arguments against the Jamstack approach for building websites is that developing features gets complex and often requires a number of other services. Take commenting, for example. To set up commenting for a Jamstack site, you often need a third-party solution such as Disqus, Facebook, or even just a separate database service. That third-party solution usually means your comments live disconnected from their content.

When we use third-party systems, we have to live with the trade-offs of using someone else’s code. We get a plug-and-play solution, but at what cost? Ads displayed to our users? Unnecessary JavaScript that we can’t optimize? The fact that the comments content is owned by someone else? These are definitely things worth considering.

Monolithic services, like WordPress, have solved this by having everything housed under the same application. What if we could house our comments in the same database and CMS as our content, query it in the same way we query our content, and display it with the same framework on the front end?

It would make this particular Jamstack application feel much more cohesive, both for our developers and our editors.

Let’s make our own commenting engine

In this article, we’ll use Next.js and Sanity.io to create a commenting engine that meets those needs. One unified platform for content, editors, commenters, and developers.

Why Next.js?

Next.js is a meta-framework for React, built by the team at Vercel. It has built-in functionality for serverless functions, static site generation, and server-side rendering.

For our work, we’ll mostly be using its built-in “API routes” for serverless functions and its static site generation capabilities. The API routes will simplify the project considerably, but if you’re deploying to something like Netlify, these can be converted to serverless functions or we can use Netlify’s next-on-netlify package.

It’s this intersection of static, server-rendered, and serverless functions that makes Next.js a great solution for a project like this.

Why Sanity?

Sanity.io is a flexible platform for structured content. At its core, it is a data store that encourages developers to think about content as structured data. It often comes paired with an open-source CMS solution called the Sanity Studio.

We’ll be using Sanity to keep the author’s content together with any user-generated content, like comments. In the end, Sanity is a content platform with a strong API and a configurable CMS that allows for the customization we need to tie these things together.

Setting up Sanity and Next.js

We’re not going to start from scratch on this project. We’ll begin by using the simple blog starter created by Vercel to get working with a Next.js and Sanity integration. Since the Vercel starter repository has the front end and Sanity Studio separate, I’ve created a simplified repository that includes both.

We’ll clone this repository, and use it to create our commenting base. Want to see the final code? This “Starter” will get you set up with the repository, Vercel project, and Sanity project all connected.

The starter repo comes in two parts: the front end powered by Next.js, and Sanity Studio. Before we go any further, we need to get these running locally.

To get started, we need to set up our content and our CMS for Next to consume the data. First, we need to install the dependencies required for running the Studio and connecting to the Sanity API.

# Install the Sanity CLI globally
npm install -g @sanity/cli

# Move into the Studio directory and install the Studio's dependencies
cd studio
npm install

Once these finish installing, from within the /studio directory, we can set up a new project with the CLI.

# If you're not logged into Sanity via the CLI already
sanity login

# Run init to set up a new project (or connect an existing project)
sanity init

The init command asks us a few questions to set everything up. Because the Studio code already has some configuration values, the CLI will ask us if we want to reconfigure it. We do.

From there, it will ask us which project to connect to, or if we want to configure a new project.

We’ll configure a new project with a descriptive project name. It will ask us to name the “dataset” we’re creating. This defaults to “production” which is perfectly fine, but can be overridden with whatever name makes sense for your project.

The CLI will modify the file /studio/sanity.json with the project’s ID and dataset name. These values will be important later, so keep this file handy.
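For reference, the api section of that file ends up looking something like this once init finishes (the projectId here is just a placeholder):

{
  "api": {
    "projectId": "abc123xy",
    "dataset": "production"
  }
}

The rest of the file (plugins, parts, and so on) stays as the starter configured it.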

For now, we’re ready to run the Studio locally.

# From within /studio
npm run start

After the Studio compiles, it can be opened in the browser at http://localhost:3333.

At this point, it makes sense to go into the admin and create some test content. To make the front end work properly, we’ll need at least one blog post and one author, but additional content is always nice to get a feel for things. Note that the content will be synced in real-time to the data store even when you’re working from the Studio on localhost. It will become instantly available to query. Don’t forget to push publish so that the content is publicly available.

Once we have some content, it’s time to get our Next.js project running.

Getting set up with Next.js

Most things needed for Next.js are already set up in the repository. The main thing we need to do is connect our Sanity project to Next.js. To do this, there’s an example set of environment variables in /blog-frontend/.env.local.example. Rename that file to remove .example, and then we’ll fill in the environment variables with the proper values.

We need an API token from our Sanity project. To create this value, let’s head over to the Sanity dashboard. In the dashboard, locate the current project and navigate to the Settings → API area. From here, we can create new tokens to use in our project. In many projects, creating a read-only token is all we need. In our project, we’ll be posting data back to Sanity, so we’ll need to create a Read+Write token.

Adding a new read and write token in the Sanity dashboard

When clicking “Add New Token,” we receive a pop-up with the token value. Once it’s closed, we can’t retrieve the token again, so be sure to grab it!

This string goes in our .env.local file as the value for SANITY_API_TOKEN. Since we’re already logged into manage.sanity.io, we can also grab the project ID from the top of the project page and paste it as the value of NEXT_PUBLIC_SANITY_PROJECT_ID. The SANITY_PREVIEW_SECRET is important for when we want to run Next.js in “preview mode”, but for the purposes of this demo, we don’t need to fill that out.
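Put together, the finished /blog-frontend/.env.local ends up looking roughly like this (the values are placeholders; use your own project ID and token, and the preview secret can stay empty for this demo):

NEXT_PUBLIC_SANITY_PROJECT_ID=abc123xy
SANITY_API_TOKEN=skAbC123yourReadWriteToken
SANITY_PREVIEW_SECRET=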

We’re almost ready to run our Next front-end. While we still have our Sanity Dashboard open, we need to make one more change to our Settings → API view. We need to allow our Next.js localhost server to make requests.

In the CORS Origins, we’ll add a new origin and populate it with the current localhost port: http://localhost:3000. We don’t need to be able to send authenticated requests, so we can leave that off. When this goes live, we’ll need to add an additional origin with the production URL so the live site can make requests as well.

Our blog is now ready to run locally!

# From inside /blog-frontend
npm run dev

After running the command above, we now have a blog up and running on our computer with data pulling from the Sanity API. We can visit http://localhost:3000 to view the site.

Creating the schema for comments

To add comments to our database with a view in our Studio, we need to set up our schema for the data.

To add our schema, we’ll add a new file in our /studio/schemas directory named comment.js. This JavaScript file will export an object containing the definition of the overall data structure. It tells the Studio how to display the data, as well as how to structure the data that we will return to our front end.

In the case of a comment, we’ll want what might be considered the “defaults” of the commenting world. We’ll have a field for a user’s name, their email, and a text area for a comment string. Along with those basics, we’ll also need a way of attaching the comment to a specific post. In Sanity’s API, the field type is a “reference” to another type of data.

If we wanted our site to get spammed, we could end there, but it would probably be a good idea to add an approval process. We can do that by adding a boolean field to our comment that will control whether or not to display a comment on our site.

export default {
  name: 'comment',
  type: 'document',
  title: 'Comment',
  fields: [
    {
      name: 'name',
      type: 'string',
    },
    {
      title: 'Approved',
      name: 'approved',
      type: 'boolean',
      description: "Comments won't show on the site without approval"
    },
    {
      name: 'email',
      type: 'string',
    },
    {
      name: 'comment',
      type: 'text',
    },
    {
      name: 'post',
      type: 'reference',
      to: [
        {type: 'post'}
      ]
    }
  ],
}

After we add this document, we also need to add it to our /studio/schemas/schema.js file to register it as a new document type.

import createSchema from 'part:@sanity/base/schema-creator'
import schemaTypes from 'all:part:@sanity/base/schema-type'

import blockContent from './blockContent'
import category from './category'
import post from './post'
import author from './author'
import comment from './comment' // <- Import our new Schema

export default createSchema({
  name: 'default',
  types: schemaTypes.concat([
    post,
    author,
    category,
    comment, // <- Use our new Schema
    blockContent
  ])
})

Once these changes are made, when we look into our Studio again, we’ll see a comment section in our main content list. We can even go in and add our first comment for testing (since we haven’t built any UI for it in the front end yet).

An astute developer will notice that, after adding the comment, the preview in our comments list view is not very helpful. Now that we have data, we can provide a custom preview for that list view.

Adding a CMS preview for comments in the list view

After the fields array, we can specify a preview object. The preview object will tell Sanity’s list views what data to display and in what configuration. We’ll add a property and a method to this object. The select property is an object that we can use to gather data from our schema. In this case, we’ll take the comment’s name, comment, and post.title values. We pass these new variables into our prepare() method and use that to return a title and subtitle for use in list views.

export default {
  // ... Fields information

  preview: {
    select: {
      name: 'name',
      comment: 'comment',
      post: 'post.title'
    },
    prepare({name, comment, post}) {
      return {
        title: `${name} on ${post}`,
        subtitle: comment
      }
    }
  }
}

The title will display large and the subtitle will be smaller and more faded. In this preview, we’ll make the title a string that contains the comment author’s name and the comment’s post, with a subtitle of the comment body itself. You can configure the previews to match your needs.

The data now exists, and our CMS preview is ready, but it’s not yet pulling into our site. We need to modify our data fetch to pull our comments onto each post.

Displaying each post’s comments

In this repository, we have a file dedicated to functions we can use to interact with Sanity’s API. The /blog-frontend/lib/api.js file has specific exported functions for the use cases of various routes in our site. We need to update the getPostAndMorePosts function in this file, which pulls the data for each post. It returns the proper data for posts associated with the current page’s slug, as well as a selection of new posts to display alongside it.

In this function, there are two queries: one to grab the data for the current post and one for the additional posts. The request we need to modify is the first request.

Changing the returned data with a GROQ projection

The query is made in the open-source graph-based querying language GROQ, used by Sanity for pulling data out of the data store. The query comes in three parts:

  • The filter – what set of data to find and send back: *[_type == "post" && slug.current == $slug]
  • An optional pipeline component — a modification to the data returned by the component to its left: | order(_updatedAt desc)
  • An optional projection — the specific data elements to return for the query. In our case, everything between the brackets ({}).

In this example, we have a variable list of fields that most of our queries need, as well as the body data for the blog post. Directly following the body, we want to pull all the comments associated with this post.

In order to do this, we create a named property on the object returned called 'comments' and then run a new query to return the comments that contain the reference to the current post context.

The entire filter looks like this:

*[_type == "comment" && post._ref == ^._id && approved == true]

The filter matches all documents that meet the criteria inside the square brackets ([]). In this case, we’ll find all documents of _type == "comment". We’ll then test whether the comment’s post._ref matches the _id of the current post (the ^ refers back to the enclosing post document). Finally, we check that the comment has approved == true.

Once we have that data, we select the data we want to return using an optional projection. Without the projection, we’d get all the data for each comment. Not important in this example, but a good habit to be in.

curClient.fetch(
  `*[_type == "post" && slug.current == $slug] | order(_updatedAt desc) {
    ${postFields}
    body,
    'comments': *[_type == "comment" && post._ref == ^._id && approved == true]{
      _id,
      name,
      email,
      comment,
      _createdAt
    }
  }`,
  { slug }
)
.then((res) => res?.[0]),

Sanity returns an array of data in the response. This can be helpful in many cases but, for us, we just need the first item in the array, so we’ll limit the response to just the zero position in the index.

Adding a Comment component to our post

Our individual posts are rendered using code found in the /blog-frontend/pages/posts/[slug].js file. The components in this file are already receiving the updated data in our API file. The main Post() function returns our layout. This is where we’ll add our new component.

Comments typically appear after the post’s content, so let’s add this immediately following the closing </article> tag.

// ... The rest of the component
</article>

// The comments list component with comments being passed in
<Comments comments={post?.comments} />

We now need to create our component file. The component files in this project live in the /blog-frontend/components directory. We’ll follow the standard pattern for the components. The main functionality of this component is to take the array passed to it and create an unordered list with proper markup.

Since we already have a <Date /> component, we can use that to format our date properly.

// /blog-frontend/components/comments.js

import Date from './date'

export default function Comments({ comments = [] }) {
  return (
    <>
      <h2 className="mt-10 mb-4 text-4xl lg:text-6xl leading-tight">Comments:</h2>
      <ul>
        {comments?.map(({ _id, _createdAt, name, email, comment }) => (
          <li key={_id} className="mb-5">
            <hr className="mb-5" />
            <h4 className="mb-2 leading-tight"><a href={`mailto:${email}`}>{name}</a> (<Date dateString={_createdAt} />)</h4>
            <p>{comment}</p>
            <hr className="mt-5 mb-5" />
          </li>
        ))}
      </ul>
    </>
  )
}

Back in our /blog-frontend/pages/posts/[slug].js file, we need to import this component at the top, and then we have a comment section displayed for posts that have comments.

import Comments from '../../components/comments'

We now have our manually-entered comment listed. That’s great, but not very interactive. Let’s add a form to the page to allow users to submit a comment to our dataset.

Adding a comment form to a blog post

For our comment form, why reinvent the wheel? We’re already in the React ecosystem with Next.js, so we might as well take advantage of it. We’ll use the react-hook-form package, but any form or form component will do.

First, we need to install our package.

npm install react-hook-form

While that installs, we can go ahead and set up our Form component. In the Post component, we can add a <Form /> component right after our new <Comments /> component.

// ... Rest of the component
<Comments comments={post.comments} />
<Form _id={post._id} />

Note that we’re passing the current post _id value into our new component. This is how we’ll tie our comment to our post.

As we did with our comment component, we need to create a file for this component at /blog-frontend/components/form.js.

import { useState } from 'react'
import { useForm } from 'react-hook-form'

export default function Form({ _id }) {
  // Sets up basic data state
  const [formData, setFormData] = useState()

  // Sets up our form states
  const [isSubmitting, setIsSubmitting] = useState(false)
  const [hasSubmitted, setHasSubmitted] = useState(false)

  // Prepares the functions from react-hook-form
  const { register, handleSubmit, watch, errors } = useForm()

  // Function for handling the form submission
  const onSubmit = async data => {
    // ... Submit handler
  }

  if (isSubmitting) {
    // Returns a "Submitting comment" state if being processed
    return <h3>Submitting comment…</h3>
  }

  if (hasSubmitted) {
    // Returns the data that the user submitted for them to preview after submission
    return (
      <>
        <h3>Thanks for your comment!</h3>
        <ul>
          <li>
            Name: {formData.name} <br />
            Email: {formData.email} <br />
            Comment: {formData.comment}
          </li>
        </ul>
      </>
    )
  }

  return (
    // Sets up the Form markup
  )
}

This code is primarily boilerplate for handling the various states of the form. The form itself will be the markup that we return.

// Sets up the Form markup
<form onSubmit={handleSubmit(onSubmit)} className="w-full max-w-lg">
  <input ref={register} type="hidden" name="_id" value={_id} />

  <label className="block mb-5">
    <span className="text-gray-700">Name</span>
    <input name="name" ref={register({required: true})} className="form-input mt-1 block w-full" placeholder="John Appleseed" />
  </label>

  <label className="block mb-5">
    <span className="text-gray-700">Email</span>
    <input name="email" type="email" ref={register({required: true})} className="form-input mt-1 block w-full" placeholder="your@email.com" />
  </label>

  <label className="block mb-5">
    <span className="text-gray-700">Comment</span>
    <textarea ref={register({required: true})} name="comment" className="form-textarea mt-1 block w-full" rows="8" placeholder="Enter some long form content."></textarea>
  </label>

  {/* errors will return when field validation fails */}
  {errors.exampleRequired && <span>This field is required</span>}

  <input type="submit" className="shadow bg-purple-500 hover:bg-purple-400 focus:shadow-outline focus:outline-none text-white font-bold py-2 px-4 rounded" />
</form>

In this markup, we’ve got a couple of special cases. First, our <form> element has an onSubmit attribute that accepts the handleSubmit() hook. That hook provided by our package takes the name of the function to handle the submission of our form.

The very first input in our comment form is a hidden field that contains the _id of our post. Any required form field will use the ref attribute to register with react-hook-form’s validation. When our form is submitted, we need to do something with the submitted data. That’s what our onSubmit() function is for.

// Function for handling the form submission
const onSubmit = async data => {
  setIsSubmitting(true)
  setFormData(data)

  try {
    await fetch('/api/createComment', {
      method: 'POST',
      body: JSON.stringify(data),
      headers: { 'Content-Type': 'application/json' }
    })
    setIsSubmitting(false)
    setHasSubmitted(true)
  } catch (err) {
    setFormData(err)
  }
}

This function has two primary goals:

  1. Set the form’s state as it moves through the submission process, using the state hooks we created earlier
  2. Submit the data to a serverless function via a fetch() request. Next.js comes with fetch() built in, so we don’t need to install an extra package.

We can take the data submitted from the form — the data argument for our form handler — and submit that to a serverless function that we need to create.

We could post this directly to the Sanity API, but that requires an API key with write access and you should protect that with environment variables outside of your front-end. A serverless function lets you run this logic without exposing the secret token to your visitors.

Submitting the comment to Sanity with a Next.js API route

In order to protect our credentials, we’ll write our form handler as a serverless function. In Next.js, we can use “API routes” to create serverless functions. These live in the api directory inside /blog-frontend/pages, alongside our page routes. We can create a new file here called createComment.js.

To write to the Sanity API, we first need to set up a client that has write permissions. Earlier in this demo, we set up a read+write token and put it in /blog-frontend/.env.local. This environment variable is already in use in a client object from /blog-frontend/lib/sanity.js. There’s a read+write client set up with the name previewClient that uses the token to fetch unpublished changes for preview mode.
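If you’re curious what that looks like, the client setup in /blog-frontend/lib/sanity.js is roughly along these lines (a simplified sketch rather than the starter’s exact code):

import sanityClient from '@sanity/client'

const config = {
  projectId: process.env.NEXT_PUBLIC_SANITY_PROJECT_ID,
  dataset: 'production',
  useCdn: true, // the fast, cached, read-only API
}

// Read-only client used for fetching published content
export const client = sanityClient(config)

// Read+write client that authenticates with our token
export const previewClient = sanityClient({
  ...config,
  useCdn: false,
  token: process.env.SANITY_API_TOKEN,
})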

At the top of our createComment.js file, we can import that object for use in our serverless function. A Next.js API route needs to export its handler as a default function with request and response arguments. Inside our function, we’ll destructure our form data from the request object’s body and use that to create a new document.

Sanity’s JavaScript client has a create() method which accepts a data object. The data object should have a _type that matches the type of document we wish to create along with any data we wish to store. In our example, we’ll pass it the name, email, and comment.

We need to do a little extra work to turn our post’s _id into a reference to the post in Sanity. We’ll define the post property as a reference and give the _id as the _ref property on this object. After we submit it to the API, we can return either a success status or an error status depending on our response from Sanity.

// This Next.js template already is configured to write with this Sanity Client
import {previewClient} from '../../lib/sanity'

export default async function createComment(req, res) {
  // Destructure the pieces of our request
  const { _id, name, email, comment } = JSON.parse(req.body)

  try {
    // Use our Client to create a new document in Sanity with an object
    await previewClient.create({
      _type: 'comment',
      post: {
        _type: 'reference',
        _ref: _id,
      },
      name,
      email,
      comment
    })
  } catch (err) {
    console.error(err)
    return res.status(500).json({message: `Couldn't submit comment`, err})
  }

  return res.status(200).json({ message: 'Comment submitted' })
}

Once this serverless function is in place, we can navigate to our blog post and submit a comment via the form. Since we have an approval process in place, after we submit a comment, we can view it in the Sanity Studio and choose to approve it, deny it, or leave it as pending.

Take the commenting engine further

This gets us the basic functionality of a commenting system and it lives directly with our content. There is a lot of potential when you control both sides of this flow. Here are a few ideas for taking this commenting engine further.



Web Engine Diversity and Ecosystem Health

As front-end developers, our job is working with browsers. Knowing how many we have and how healthy they are is always of great interest. As far as numbers go, we have fewer recently than we have had in the past. It’s only this month that Edge started auto-updating users to the Chromium version, yet another notable milestone in the shrinking number of browsers.

A few years back, Rachel Nabors likened the situation to a biological ecosystem and how diversity means health:

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it’s not likely to be replaced.

A huge consideration in all this is the open-source nature of what we have left. Remember that Microsoft’s browser technologies were not open-source. Brian Kardell:

In important ways, we are a more diverse, efficient and healthier ecosystem with the three multi-os, open-source engines we have left (Blink, Gecko, and WebKit) than when we had had more and were dominated by projects that weren’t that at all.

As a follow-up, Stuart Langridge touches on another kind of diversity:

What’s really important is diversity of influence: who has the ability to make decisions which shape the web in particular ways, and do they make those decisions for good reasons or not so good?

Here’s hoping that the browsers we have left will continue to evolve, perhaps even fork, and find ways to compete on anything except standards. While the current situation isn’t as bad as perhaps some folks were worried about with the loss of Microsoft’s engines (and maybe it’s even a good thing), it would certainly be bad news if we lost even more browsers [nervously glancing at Firefox], both in shrinking numbers and shrinking diversity of influence.


Using Structured Data to Enhance Search Engine Optimization

SEO is often considered the snake oil of the web. How many times have you scrolled past attention-grabbing headlines claiming to know how to improve your SEO? Everyone and their uncle seems to have some “magic” cure to land high in search results and turn impressions into conversions. Sifting through so much noise on the topic can cause us to miss true gems that might be right under our nose.

We’re going to look at one such gem in this article: structured data.

There’s a checklist of SEO must-haves that we know are needed when working on a site. It includes things like a strong <title>, a long list of <meta> tags, and descriptive alt text on images (which is a double win for accessibility). Running a cursory check on any site using Lighthouse will turn up even more tips and best practices to squeeze the most SEO out of the content.

Search engines are getting smarter, however, and starting to move past the algorithmic scraping techniques of yesteryear. Google, Amazon, and Microsoft are all known to be investing a considerable amount in machine learning, and with that, they need clean data to feed their search AI.

That’s where the concept of schemas comes into play. In fact, it’s funding from Google and Microsoft — along with Yahoo and Yandex — that led to the establishment of schema.org, a website and community to push their format — more commonly referred to as structured data — forward so that they and other search engines can help surface content in more useful and engaging ways.

So, what is structured data?

Structured data describes the content of digital documents (i.e. websites, emails, etc). It’s used all over the web and, much like <meta> tags, is an invisible layer of information that search engines use to read the content.

Structured data comes in three flavors: Microdata, RDFa and JSON-LD. Microdata and RDFa are both injected directly into the HTML elements of a document, peppering each relevant element of a page with machine-readable pointers. For example, here’s a product marked up with Microdata attributes, taken straight from the schema.org docs:

<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Kenmore White 17" Microwave</span>
  <img itemprop="image" src="kenmore-microwave-17in.jpg" alt='Kenmore 17" Microwave' />
  <div itemprop="aggregateRating"
    itemscope itemtype="http://schema.org/AggregateRating">
   Rated <span itemprop="ratingValue">3.5</span>/5
   based on <span itemprop="reviewCount">11</span> customer reviews
  </div>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <!--price is 1000, a number, with locale-specific thousands separator
    and decimal mark, and the $ character is marked up with the
    machine-readable code "USD" -->
    <span itemprop="priceCurrency" content="USD">$</span><span
          itemprop="price" content="1000.00">1,000.00</span>
    <link itemprop="availability" href="http://schema.org/InStock" />In stock
  </div>
  Product description:
  <span itemprop="description">0.7 cubic feet countertop microwave.
  Has six preset cooking categories and convenience features like
  Add-A-Minute and Child Lock.</span>
  Customer reviews:
  <div itemprop="review" itemscope itemtype="http://schema.org/Review">
    <span itemprop="name">Not a happy camper</span> -
    by <span itemprop="author">Ellie</span>,
    <meta itemprop="datePublished" content="2011-04-01">April 1, 2011
    <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
      <meta itemprop="worstRating" content="1">
      <span itemprop="ratingValue">1</span>/
      <span itemprop="bestRating">5</span>stars
    </div>
    <span itemprop="description">The lamp burned out and now I have to replace
    it.</span>
  </div>
  <div itemprop="review" itemscope itemtype="http://schema.org/Review">
    <span itemprop="name">Value purchase</span> -
    by <span itemprop="author">Lucas</span>,
    <meta itemprop="datePublished" content="2011-03-25">March 25, 2011
    <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
      <meta itemprop="worstRating" content="1"/>
      <span itemprop="ratingValue">4</span>/
      <span itemprop="bestRating">5</span>stars
    </div>
    <span itemprop="description">Great microwave for the price. It is small and
    fits in my apartment.</span>
  </div>
  <!-- etc. -->
</div>

If that seems like bloated markup, it kinda is. But it’s certainly beneficial if you prefer to consolidate all of your data in one place.
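For completeness, RDFa works the same way with a different set of attributes. Here’s a hand-written sketch (not from the schema.org docs) of a trimmed-down version of that product using RDFa Lite:

<div vocab="http://schema.org/" typeof="Product">
  <span property="name">Kenmore White 17" Microwave</span>
  <img property="image" src="kenmore-microwave-17in.jpg" alt='Kenmore 17" Microwave' />
  <div property="aggregateRating" typeof="AggregateRating">
    Rated <span property="ratingValue">3.5</span>/5
    based on <span property="reviewCount">11</span> customer reviews
  </div>
  <div property="offers" typeof="Offer">
    <span property="priceCurrency" content="USD">$</span><span property="price" content="1000.00">1,000.00</span>
  </div>
</div>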

JSON-LD, on the other hand, usually sits in a <script> tag and describes the same properties in a single block of data. Again, from the docs:

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "3.5",
    "reviewCount": "11"
  },
  "description": "0.7 cubic feet countertop microwave. Has six preset cooking categories and convenience features like Add-A-Minute and Child Lock.",
  "name": "Kenmore White 17\" Microwave",
  "image": "kenmore-microwave-17in.jpg",
  "offers": {
    "@type": "Offer",
    "availability": "http://schema.org/InStock",
    "price": "55.00",
    "priceCurrency": "USD"
  },
  "review": [
    {
      "@type": "Review",
      "author": "Ellie",
      "datePublished": "2011-04-01",
      "description": "The lamp burned out and now I have to replace it.",
      "name": "Not a happy camper",
      "reviewRating": {
        "@type": "Rating",
        "bestRating": "5",
        "ratingValue": "1",
        "worstRating": "1"
      }
    },
    {
      "@type": "Review",
      "author": "Lucas",
      "datePublished": "2011-03-25",
      "description": "Great microwave for the price. It is small and fits in my apartment.",
      "name": "Value purchase",
      "reviewRating": {
        "@type": "Rating",
        "bestRating": "5",
        "ratingValue": "4",
        "worstRating": "1"
      }
    }
  ]
}
</script>

This is my personal preference, as it is treated as a little external instruction manual for your content, much like JavaScript for scripts, and CSS for your styles, all happily self-contained. JSON-LD can become essential for certain types of schema, where the content of the page is different from the content of the structured data (for example, check out the speakable property, currently in beta).

A welcome development for JSON-LD on the web is that Google now allows structured data to be fetched from an external source, rather than forcing inline scripting, which was previously frustratingly impossible. This can be done either by the developer or in Google Tag Manager.
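As a rough sketch of the developer route (assuming your structured data lives in a JSON file at a URL you control; the path below is hypothetical), you can fetch it and inject a JSON-LD script tag at runtime:

// Fetch structured data from an external JSON file and inject it as JSON-LD
fetch('/data/structured-data.json')
  .then((response) => response.json())
  .then((data) => {
    const script = document.createElement('script')
    script.type = 'application/ld+json'
    script.textContent = JSON.stringify(data)
    document.head.appendChild(script)
  })

Google Tag Manager can do the same job with a Custom HTML tag if you’d rather not touch the codebase.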

What structured data means to you

Beyond making life easier for search engine crawlers to read your pages? Two words: rich snippets. Rich snippets are highly visual modules that tend to sit at the top of the search results page, in what is sometimes termed “Position 0” — displayed above the first search result. Here’s a simple search for “blueberry pie” in Google as an example:

Check out those three recipes up top — and that content in the right column — showing up before the list of results using details from structured data.

Even the first result is a rich snippet! As you can see, using structured data is your ticket to get into a rich snippet on a search results page. And, not to spur FOMO or anything, but any site not showing up in a rich snippet is already at risk of dropping into “below the fold” territory. Notice how the second organic result barely makes the cut.

Fear not, dear developers! Adding and testing structured data on a website is a simple and relatively painless process. Once you get the hang of it, you’ll be adding it to every possible location you can imagine, even emails.

It is worth noting that structured data is not the only way to get into rich snippets. Search engines can sometimes determine enough from your HTML to display some snippets, but utilizing it will push the odds in your favor. Plus, using structured data puts the power of how your content is displayed in your hands, rather than letting Google or the like determine it for you.

Types of structured data

Structured data is more than recipes. Here’s a full list of the types of structured data Google supports. (Spoiler alert: it’s almost any kind of content.)

  • Article
  • Book (limited support)
  • Breadcrumb
  • Carousel
  • Course
  • COVID-19 announcements (beta)
  • Critic review (limited support)
  • Dataset
  • Employer aggregate rating
  • Estimated salary
  • Event
  • Fact check
  • FAQ
  • How-to
  • Image license metadata (beta)
  • Job posting
  • Local business
  • Logo
  • Movie
  • Product
  • Q&A
  • Recipe
  • Review snippet
  • Sitelinks searchbox
  • Software app
  • Speakable (beta)
  • Subscription and paywalled content
  • Video

Yep, lots of options here! But with those come lots of opportunity to enhance a site’s content and leverage these search engine features.

Using structured data

The easiest way to find the right structured data for your project is to look through Google’s search catalogue. Advanced users may like to browse what’s on schema.org, but I’ll warn you that it is a scary rabbit hole to crawl through.

Let’s start with a fairly simple example: the Logo data type. It’s simple because all we really need is a website URL and the source URL for an image, along with some basic details to help search engines know they are looking at a logo. Here’s our JSON-LD:

<script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example",
    "url": "http://www.example.com",
    "logo": "http://www.example.com/images/logo.png"
  }
</script>

First off, we have the <script> tag itself, telling search engines that it’s about to consume some JSON-LD.

From there, we have five properties:

  • @context: This is included on all structured data objects, no matter what type it is. It’s what tells search engines that the JSON-LD contains data that is defined by schema.org specifications.
  • @type: This is the reference type for the object. It’s used to identify what type of content we’re working with. In this case, it’s “Organization” which has a whole bunch of sub-properties that follow.
  • name: This is the sub-property that contains the organization’s name.
  • url: This is the sub-property that contains the organization’s web address.
  • logo: This is the sub-property that contains the file path for the organization’s logo image file. For Google to consider this, it must be at least 112⨉112px and in JPG, PNG, or GIF format. Sorry, no SVG at the moment.

A page can have multiple structured data types. That means it’s possible to mix and match content.
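For example, nothing stops a page from describing both the organization and the website it belongs to. You can use separate <script> blocks, or bundle several objects into one block with JSON-LD’s @graph keyword. A minimal sketch (the URLs are placeholders):

<script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@graph": [
      {
        "@type": "Organization",
        "name": "Example",
        "url": "https://www.example.com",
        "logo": "https://www.example.com/images/logo.png"
      },
      {
        "@type": "WebSite",
        "name": "Example",
        "url": "https://www.example.com"
      }
    ]
  }
</script>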

Testing structured data

See, dropping structured data into a page isn’t that tough, right? Once we have it, though, we should probably check to see if it actually works.

Google, Bing, and Yandex (login required) all have testing tools available. Google even has one specifically for validating structured data in email. In most cases, simply drop in the website URL and the tool will spin up a test and show which objects it recognizes, the properties it sees, and any errors or warnings to look into.

Google’s structured data testing tool fetches the markup and displays the information it recognizes.

The next step is to confirm that the structured data is accessible on your live site through Google Search Console. You may need to set up an account and verify your site in order to use a particular search engine’s console, but checking data is — yet again — as simple as dropping in a site URL and using the inspection tools to check that the site is indeed live and sending data when it is accessed by the search engine.

If the structured data is implemented correctly, it will display. In Google’s case, it’s located in the “Enhancements” section with a big ol’ checkmark next to it.

Notice the “Logo” that is detected at the end — it works!

But wait! I did all that and nothing’s happening… what gives?

As with all search engine optimizations, there are no guarantees or timescales when it comes to how or when structured data is used. It might take a good while before rich snippets take hold for your content — days, weeks, or even months! I know, it stinks to be left in the dark like that. It is unfortunately a waiting game.


Hopefully, this gives you a good idea of what structured data is and how it can be used to leverage features that search engines have made to spotlight content that uses it.

There’s absolutely no shortage of advice, tips, and tricks for helping optimize a site for search engines. While so much of it is concerned with what’s contained in the <head> or how content is written, there are practical things that developers can do to make an impact. Structured data is definitely one of those things and worth exploring to get the most value from content.

The world is your oyster with structured data. And, sure, while search engines only support a selection of the schema.org vocabulary, they are constantly evolving and extending that support. Why not start small by adding structured data to an email link in a newsletter? Or perhaps you’re into trying something different, like defining a sitelinks search box (which is very meta but very cool). Or, hey, add a recipe for Pinterest. Blueberry pie, anyone?


Weekly Platform News: Upgrading Navigations to HTTPS, Sale of .org Domains, New Browser Engine

In this week’s roundup: DuckDuckGo gets smarter encryption, a fight over the sale of dot org domains, and a new browser engine is in the works.

Let’s get into the news!

DuckDuckGo upgrades and open-sources its encryption

DuckDuckGo has open-sourced its “Smarter Encryption” technology that enables upgrading from HTTP to HTTPS, and Pinterest (a popular social network) is already using it for outbound traffic — when people navigate from Pinterest to other websites — with great results: Their outbound HTTPS traffic increased from 60% to 80%.

DuckDuckGo uses its crawler to automatically generate and maintain a huge list of websites that support HTTPS, approximately 12 million entries. For comparison, Chromium’s HSTS Preload List contains only about 85 thousand entries.

(via DuckDuckGo)

Nonprofits oppose the sale of the .org domain

A coalition of organizations, including the EFF, Wikimedia, and many others, is urging the Internet Society to stop the sale of the nonprofit organization that operates the .org domain to an investment firm.

Non-governmental organizations (NGOs) all over the world rely on the .org top-level domain. […] We cannot afford to put them into the hands of a private equity firm that has not earned the trust of the NGO community.

In a separate blog post, Mark Surman (CEO of Mozilla Foundation) urges the Internet Society to “step back and provide public answers to questions of interest to the public and the millions of organizations that have made dot org their home online for the last 15 years.”

(via Elliot Harmon)

A new browser engine is in development

Ekioh (a company from Cambridge, U.K.) is developing an entirely new browser engine for their Flow browser, which also includes Mozilla’s SpiderMonkey as its JavaScript engine. The browser can already run web apps such as Gmail (mostly), and the company plans to release it on desktop soon.

(via Flow Browser)

More news…

Read more news in my weekly newsletter for web developers. Pledge as little as $2 per month to get the latest news from me via email every Monday.

More News →


Browser Engine Diversity

We lost Opera when they went Chrome in 2013. Same deal with Edge when it also went Chrome earlier this year. Mike Taylor called these changes a “Decreasingly Diverse Browser Engine World” in a talk I’d like to see.

So all we’ve got left is Chrome-stuff, Firefox-stuff, and Safari-stuff. Chrome and Safari share the same lineage but have diverged enough, evolve separately enough, and are walled away from each other enough that it makes sense to think of them as different from one another.

I know there are fancier words to articulate this. For example, browser engines themselves have names that are distinct and separate from the names of the browsers.

Take Chrome, which is based on the open-source project Chromium, which uses the rendering engine Blink and the JavaScript engine V8.

Firefox uses Gecko as its browser engine, which is turning into Quantum, which has sub-parts like Servo for CSS and rendering.

Safari uses WebKit as a browser engine, which has parts like WebCore and JavaScriptCore.

It’s all kinda complicated and I’m not even sure I quite understand it all. My brain just thinks of it as everything under the umbrella of the main browser name.

The two extremes of looking at this from the perspective of decreasing diversity:

  • This is bad. Decreased diversity may hinder ecosystems from competing and innovating.
  • This is good. Cross-engine problems are a major productivity loss for the world. Getting down to one ecosystem would be even better.

Whichever it is, the ship has sailed. All we can do is look forward.

Random thoughts:

  • Perhaps diversity has just moved scope. Rather than the browser engines themselves representing diversity, maybe forks of the engines we have left can compete against each other. Maybe starting from a strong foundation is a good place to start innovating?
  • If, god forbid, we got down to one browser engine, what happens to the web standards process? The fear would be that the last-engine-standing doesn’t have to worry about interop anymore and they run wild with implementations. But does running wild mean the playing field can never be competitive again?
  • It’s awesome when browsers compete on features that are great for users but don’t affect web standards. Great password managers, user protection features, clever bookmarking ideas, reader modes, clean integrations with payment APIs, free VPNs, etc. That was Opera’s play, and now we see many more in the same vein. Vivaldi is all about customization, Brave doubles down on privacy and security, and Puma is about monetization.

Brian Kardell wrote about some of this stuff recently in his “Beyond Browser Vendors” post. An interesting point is that the remaining browser engines are all open source. That means they can and do take outside contributions, which is exactly how CSS Grid came to exist.

Most of the work on CSS Grid in both WebKit and Chromium (Blink) was done, not by Google or Apple, but by teams at Igalia.

Think about that for a minute: The prioritization of its work was determined in 2 browsers not by a vendor, but by an investment from Bloomberg who had the foresight to fund this largely uncontroversial work.

And now, that idea continues:

This isn’t a unique story, it’s just a really important and highly visible one that’s fun to hold up. In fact, just in the last 6 months engineers as Igalia have worked on CSS Containment, ResizeObserver, BigInt, private fields and methods, responsive image preloading, CSS Text Level 3, bringing MathML to Chromium, normalizing SVG and MathML DOMs and a lot more.

What we may have lost in browser engine diversity we may gain back in the openness of browser engines and outside players stepping up.


How Google PageSpeed Works: Improve Your Score and Search Engine Ranking

This article is from my friend Ben who runs Calibre, a tool for monitoring the performance of websites. We use Calibre here on CSS-Tricks to keep an eye on things. In fact, I just popped over there to take a look and was notified of some little mistakes that slipped by, and I fixed them. Recommended!

In this article, we uncover how PageSpeed calculates its critical speed score.

It’s no secret that speed has become a crucial factor in increasing revenue and lowering abandonment rates. Now that Google uses page speed as a ranking factor, many organizations have become laser-focused on performance.

Last year, Google made two significant changes to their search indexing and ranking algorithms:

From this, we’re able to state two truths:

  • The speed of your site on mobile will affect your overall SEO ranking.
  • If your pages load slowly, it will reduce your ad quality score, and ads will cost more.

Google wrote:

Faster sites don’t just improve user experience; recent data shows that improving site speed also reduces operating costs. Like us, our users place a lot of value in speed — that’s why we’ve decided to take site speed into account in our search rankings.

To understand how these changes affect us from a performance perspective, we need to grasp the underlying technology. PageSpeed 5.0 is a complete overhaul of previous editions. It’s now being powered by Lighthouse and CrUX (Chrome User Experience Report).

This upgrade also brings a new scoring algorithm that makes it far more challenging to receive a high PageSpeed score.

What changed in PageSpeed 5.0?

Before 5.0, PageSpeed ran a series of heuristics against a given page. If the page had large, uncompressed images, PageSpeed would suggest image compression. Cache headers missing? Add them.

These heuristics were coupled with a set of guidelines that would likely result in better performance, if followed, but were merely superficial and didn’t actually analyze the load and render experience that real visitors face.

In PageSpeed 5.0, pages are loaded in a real Chrome browser that is controlled by Lighthouse. Lighthouse records metrics from the browser, applies a scoring model to them, and presents an overall performance score. Guidelines for improvement are suggested based on how specific metrics score.

Like PageSpeed, Lighthouse also has a performance score. In PageSpeed 5.0, the performance score is taken from Lighthouse directly. PageSpeed’s speed score is now the same as Lighthouse’s Performance score.

Calibre scores 97 on Google’s Pagespeed

Now that we know where the PageSpeed score comes from, let’s dive into how it’s calculated, and how we can make meaningful improvements.

What is Google Lighthouse?

Lighthouse is an open source project run by a dedicated team from Google Chrome. Over the past couple of years, it has become the go-to free performance analysis tool.

Lighthouse uses Chrome’s Remote Debugging Protocol to read network request information, measure JavaScript performance, observe accessibility standards and measure user-focused timing metrics like First Contentful Paint, Time to Interactive or Speed Index.

If you’re interested in a high-level overview of Lighthouse architecture, read this guide from the official repository.
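If you’d rather just kick the tires, Lighthouse ships inside Chrome DevTools and as a standalone CLI. Something like the following should work, though the exact flags can vary between releases:

# Install the Lighthouse CLI and audit a single page's performance category
npm install -g lighthouse
lighthouse https://example.com --only-categories=performance --view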

How Lighthouse calculates the Performance Score

During performance tests, Lighthouse records many metrics focused on what a user sees and experiences. There are six metrics used to create the overall performance score. They are:

  • Time to Interactive (TTI)
  • Speed Index
  • First Contentful Paint (FCP)
  • First CPU Idle
  • First Meaningful Paint (FMP)
  • Estimated Input Latency

Lighthouse will apply a 0 – 100 scoring model to each of these metrics. This process works by obtaining mobile 75th and 95th percentiles from HTTP Archive, then applying a log normal function.

Following the algorithm and reference data used to calculate Time to Interactive, we can see that if a page managed to become "interactive" in 2.1 seconds, the Time to Interactive metric score would be 92/100.
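To make that idea concrete, here’s a rough JavaScript sketch of log-normal scoring. This is not Lighthouse’s actual code, and the reference points are made up rather than pulled from HTTP Archive:

// Approximate the standard normal CDF (Abramowitz-Stegun style polynomial)
function normalCdf(z) {
  const t = 1 / (1 + 0.2316419 * Math.abs(z))
  const d = 0.3989422804014327 * Math.exp((-z * z) / 2)
  const p = d * t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))))
  return z > 0 ? 1 - p : p
}

// Score a metric value against a log-normal curve defined by two control points:
// a "median" value that should score 0.5 and a point of diminishing returns
// (podr) that should score roughly 0.9.
function logNormalScore(value, podr, median) {
  const mu = Math.log(median)
  const sigma = (mu - Math.log(podr)) / 1.28155 // 1.28155 ≈ z-value at the 10th percentile
  const z = (Math.log(value) - mu) / sigma
  return 1 - normalCdf(z) // 0 to 1; multiply by 100 for the displayed score
}

// With these made-up reference points, a 2.1-second TTI scores in the 90s.
console.log(Math.round(logNormalScore(2.1, 2.9, 7.3) * 100))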

Once each metric is scored, it’s assigned a weighting which is used as a modifier in calculating the overall performance score. The weightings are as follows:

Metric                      Weighting
Time to Interactive (TTI)   5
Speed Index                 4
First Contentful Paint      3
First CPU Idle              2
First Meaningful Paint      1
Estimated Input Latency     0

These weightings refer to the impact of each metric in regards to mobile user experience.
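In other words, the overall score is essentially a weighted average of the individual metric scores. A quick sketch of that arithmetic, using the weightings above and made-up metric scores:

// Multiply each metric's 0-100 score by its weighting, then divide by the total weight
const metrics = [
  { name: 'Time to Interactive', score: 92, weight: 5 },
  { name: 'Speed Index', score: 90, weight: 4 },
  { name: 'First Contentful Paint', score: 95, weight: 3 },
  { name: 'First CPU Idle', score: 93, weight: 2 },
  { name: 'First Meaningful Paint', score: 96, weight: 1 },
  { name: 'Estimated Input Latency', score: 100, weight: 0 },
]

const totalWeight = metrics.reduce((sum, m) => sum + m.weight, 0)
const overall = metrics.reduce((sum, m) => sum + m.score * m.weight, 0) / totalWeight

console.log(Math.round(overall)) // 92 with these example scores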

In the future, this may also be enhanced by the inclusion of user-observed data from the Chrome User Experience Report dataset.

You may be wondering how the weighting of each metric affects the overall performance score. The Lighthouse team have created a useful Google Spreadsheet calculator explaining this process:

Picture of a spreadsheet that can be used to calculate performance scores

Using the example above, if we change (time to) interactive from 5 seconds to 17 seconds (the global average mobile TTI), our score drops to 56% (aka 56 out of 100).

Whereas, if we change First Contentful Paint to 17 seconds, we’d score 62%.

TTI is the most impactful metric to your performance score. Therefore, to receive a high PageSpeed score, you will need a speedy TTI measurement.

Moving the needle on TTI

At a high level, there are two significant factors that hugely influence TTI:

  • The amount of JavaScript delivered to the page
  • The run time of JavaScript tasks on the main thread

Our Time to Interactive guide explains how TTI works in great detail, but if you’re looking for some quick no-research wins, we’d suggest:

Reducing the amount of JavaScript

Where possible, remove unused JavaScript code or focus on only delivering a script that will be run by the current page. That might mean removing old polyfills or replacing third-party libraries with smaller, more modern alternatives.

It’s important to remember that the cost of JavaScript is not only the time it takes to download it. The browser needs to decompress, parse, compile and eventually execute it, which takes non-trivial time, especially on mobile devices.

Effective measures for reducing the amount of scripts from your pages include:

  • Reviewing and removing polyfills that are no longer required for your audience.
  • Understanding the cost of each third-party JavaScript library. Use webpack-bundle-analyzer or source-map-explorer to visualize how large each library is.
  • Modern JavaScript tooling (like webpack) can break up large JavaScript applications into a series of small bundles that are automatically loaded as a user navigates. This approach is known as code splitting and is extremely effective in improving TTI (see the sketch after this list).
  • Service workers that will cache the bytecode result of a parsed and compiled script. If you’re able to make use of this, visitors will pay a one-time performance cost for parse and compilation. After that, it’ll be mitigated by cache.
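Here’s the code-splitting sketch mentioned above. A dynamic import() tells bundlers like webpack to put the imported module in its own chunk and fetch it only when this code path runs (the module path is hypothetical):

// Load a heavy charting module only when the user actually opens the chart view
async function showChart(container, data) {
  const { renderChart } = await import('./chart.js') // fetched on demand, in its own bundle
  renderChart(container, data)
}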

Monitoring Time to Interactive

To successfully uncover significant differences in user experience, we suggest using a performance monitoring system (like Calibre!) that allows for testing a minimum of two devices: a fast desktop and a low- to mid-range mobile phone.

That way, you’ll have the data for both the best and worst cases of what your customers experience. It’s time to come to terms with the fact that your customers aren’t using the same powerful hardware as you.

In-depth manual profiling

To get the best results in profiling JavaScript performance, test pages using intentionally slow mobile devices. If you have an old phone in a desk drawer, this is a great second-life for it.

An excellent substitute for using a real device is to use Chrome DevTools hardware emulation mode. We’ve written an extensive performance profiling guide to help you get started with runtime performance.

What about the other metrics?

Speed Index, First Contentful Paint and First Meaningful Paint are all browser-paint-based metrics. They’re influenced by similar factors and can often be improved at the same time.

It’s objectively easier to improve these metrics as they are calculated by how quickly a page renders. Following the Lighthouse Performance audit rules closely will result in these metrics improving.

If you aren’t already preloading your fonts or optimizing for critical requests, that is an excellent place to start a performance journey. Our article titled “The Critical Request” explains in great detail how the browser fetches and renders critical resources used to render your pages.
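Preloading a font, for instance, is a one-line hint in the document head (the font path here is a placeholder, and note that font preloads need the crossorigin attribute):

<link rel="preload" href="/fonts/my-font.woff2" as="font" type="font/woff2" crossorigin>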

Tracking your progress and making meaningful improvements

Google’s newly updated Search Console, Lighthouse, and PageSpeed Insights are a great way to get initial visibility into the performance of your pages, but they fall short for teams that need to continuously track and improve the performance of their pages.

Continuous performance monitoring is essential to ensuring speed improvements last, and teams get instantly notified when regressions happen. Manual testing introduces unexpected variability in results and makes testing from different regions as well as on various devices nearly impossible without a dedicated lab environment.

Speed has become a crucial factor for SEO rankings, especially now that nearly 50% of web traffic comes from mobile devices.

To avoid losing your search positioning, ensure you’re using an up-to-date performance suite to track key pages. (Pssst, we built Calibre to be your performance companion. It has Lighthouse built-in. Hundreds of teams from around the globe are using it every day.)
