Tag: Website

Every Website is an Essay

Every website that’s made me oooo and aaahhh lately has been of a special kind; they’re written and designed like essays. There’s an argument, a playfulness in the way that they’re not so much selling me something as they are trying to convince me of the thing. They use words and type and color in a way that makes me sit up and listen.

And I think that framing our work in this way lets us web designers explore exciting new possibilities. Instead of throwing a big carousel on the page and being done with it, thinking about making a website like an essay encourages us to focus on the tough questions. We need an introduction, we need to provide evidence for our statements, we need a conclusion, etc. This way we don’t have to get so caught up in the same old patterns that we’ve tried again and again in our work.

And by treating web design like an essay, we can be weird with the design. We can establish a distinct voice and make it sound like an honest-to-goodness human being wrote it, too.

One example of a website-as-an-essay is the Analogue Pocket site which uses real paragraphs to market their fancy new device.

Another example is the new email app Hey, in which the website is nothing but paragraphs — no screenshots, no fancy product information. It almost feels like a political manifesto hammered onto a giant wooden door.

Apple’s marketing sites are little essays, too. Take this one section from the iPad Pro all about the LiDAR Scanner. It’s not so much trying to sell you an iPad at this point as it is trying to argue the case for LiDAR. And as with all good essays, it answers the who, what, why, when, and how.

Another example is Stripe’s recent beautiful redesign. What I love more than the outrageously gorgeous animated gradients is the argument that the website is making. What is Stripe? How can I trust them? How easy is it to get set up? Who, what, why, when, how.

To be my own devil’s advocate for a bit though, we’re all familiar with this line of reasoning: Why care about the writing so much when people don’t read? Folks skim through a website. They don’t persevere with the text, they don’t engage with the writing, and you only have half a millisecond to hit them with something flashy before they leave. They can’t handle complex words or sentences. They can’t grasp complex ideas. So keep those paragraphs short! Remove all text from the page!

The implication here is that users are dumb. They can’t focus and they don’t care. You have to shout at them. And I kinda sorta hate that.

Instead, I think the opposite is true. They’ve seen the same boring websites for years. Everyone is tired of lifeless, humorless copywriting. They’ve seen all the animations, witnessed all the cool fonts, and in the face of all that stuff, they yawn. They yawn because it supports a bad argument, or more precisely, a bad essay; one that doesn’t charm the reader, or give them a reason to care.

So what if we made our websites more like essays and less like billboards that dot the freeways? What would that look like?



In Defense of a Fussy Website

The other day I was doom-scrolling twitter, and I saw a delightful article titled “The Case for Fussy Breakfasts.” I love food and especially breakfast, and since the pandemic hit I’ve been using my breaks in between meetings (or sometimes on meetings, shh) to make a full bacon, poached egg, vegetable plate, so I really got into the article. This small joy of creating a bit of space for myself for the most important meal of the day has been meaningful to me — while everything else feels out of control, indulging in some ceremony has done a tiny part to offset the intensity of our collective situation.

It caused me to think of this “fussiness” as applied to other inconsequential joys. A walk. A bath. What about programming?

While we’re all laser-focused on shipping the newest feature with the hottest software and the best Lighthouse scores, I’ve been missing a bit of the joy on the web. Apps are currently conveying little care for UX, guidance, richness, and — well, for humans trying to communicate through a computer, we’re certainly bending a lot to… the computer.

I’m getting a little tired of the web being seen as a mere document reader, and though I do love me a healthy Lighthouse score, some of these point matrices seem to live and die more by developer ego and gamification than by actually considering what we can do without incurring much weight. SVGs can be very small while still being impactful. Some effects are tiny bits of CSS. JS animations can be lazy-loaded. You can even dazzle with words, color, and layout if you’re willing to be a bit adventurous, at no weight at all!
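
As a concrete sketch of that lazy-loading idea, here’s one way a purely decorative animation could be deferred until its element scrolls into view. This is only a sketch: the .fancy-flourish selector and the flourish-animation.js module are hypothetical stand-ins.

// A minimal sketch: only fetch the animation code once the decorative
// element is about to be seen. Selector and module path are hypothetical.
const flourish = document.querySelector('.fancy-flourish')

if (flourish && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver(async entries => {
    if (entries.some(entry => entry.isIntersecting)) {
      observer.disconnect()
      // Dynamic import keeps the animation out of the main bundle entirely.
      const { run } = await import('./flourish-animation.js')
      run(flourish)
    }
  })
  observer.observe(flourish)
}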

A few of my favorite developer sites lately have been Josh Comeau, Johnson Ogwuru and Cassie Evans. The small delights and touches, the little a-ha moments, make me STAY. I wander around the site, exploring, learning, feeling actually more connected to each of these humans rather than as if I’m glancing at a PDF of their resume. They flex their muscles, show me the pride they have in building things, and it intrigues me! These small bits are more than the fluff that many portray any “excess” as: they do the job the web is intended for. We are communicating using this tool, the computer, as an extension of ourselves.

Nuance can be challenging. It’s easy as programmers to get stuck in absolutes, and one of these of late has been that if you’re having any bit of fun, any bit of style, that must mean it’s “not useful.” Honestly, I’d make the case that the opposite is true. Emotions attach to the limbic system, making memories easier to recall. If your site is a flat bit of text, how will anyone remember it?

Don’t you want to build the site that teams in companies the world over remember and cite as an inspiration? I’ve been at four different companies where people have mentioned Stripe as a site they would aspire to be like. Stripe took chances. Stripe told stories. Stripe engaged the imagination of developers, spoke directly to us.

I’m sad acknowledging the irony that after thinking about how spot on Stripe was, most of those companies ignored much of what they learned while exploring it. Any creativity, risk, and intention was slowly, piece by piece, chipped away by the drumbeat of “usefulness,” missing the forest for the trees.

When a site is done with care and excitement you can tell. You feel it as you visit, the hum of intention. The craft, the cohesiveness, the attention to detail is obvious. And in turn, you meet them halfway. These are the sites with the low bounce rates, the best engagement metrics, the ones where they get questions like “can I contribute?” No gimmicks needed.

What if you don’t have the time? Of course, we all have to get things over the line. Perhaps a challenge: what small thing can you incorporate that someone might notice? Can you start with a single detail? I didn’t start with a poached egg in my breakfast, one day I made a goofy scrambled one. It went on from there. Can you challenge yourself to learn one small new technique? Can you outsource one graphic? Can you introduce a tiny easter egg? Say something just a little differently from the typical corporate lingo?

If something is meaningful to you, the audience you’ll gather will likely be the folks that find it meaningful, too.


How to Add Lunr Search to your Gatsby Website

The Jamstack way of thinking and building websites is becoming more and more popular.

Have you already tried Gatsby, Nuxt, or Gridsome (to cite only a few)? Chances are that your first contact was a “Wow!” moment — so many things are automatically set up and ready to use. 

There are some challenges, though, one of which is search functionality. If you’re working on any sort of content-driven site, you’ll likely run into search and how to handle it. Can it be done without any external server-side technology? 

Search is not one of those things that come out of the box with Jamstack. Some extra decisions and implementation are required.

Fortunately, we have a bunch of options that might be more or less adapted to a project. We could use Algolia’s powerful search-as-service API. It comes with a free plan that is restricted to non-commercial projects with  a limited capacity. If we were to use WordPress with WPGraphQL as a data source, we could take advantage of WordPress native search functionality and Apollo Client. Raymond Camden recently explored a few Jamstack search options, including pointing a search form directly at Google.

In this article, we will build a search index and add search functionality to a Gatsby website with Lunr, a lightweight JavaScript library providing an extensible and customizable search without the need for external, server-side services. We used it recently to add “Search by Tartan Name” to our Gatsby project tartanify.com. We absolutely wanted persistent search as-you-type functionality, which brought some extra challenges. But that’s what makes it interesting, right? I’ll discuss some of the difficulties we faced and how we dealt with them in the second half of this article.

Getting started

For the sake of simplicity, let’s use the official Gatsby blog starter. Using a generic starter lets us abstract many aspects of building a static website. If you’re following along, make sure to install and run it:

gatsby new gatsby-starter-blog https://github.com/gatsbyjs/gatsby-starter-blog
cd gatsby-starter-blog
gatsby develop

It’s a tiny blog with three posts. We can explore its data layer by opening up http://localhost:8000/___graphql in the browser.

Showing the GraphQL page on the localhost installation in the browser.

Inverted index with Lunr.js 🙃

Lunr uses a record-level inverted index as its data structure. The inverted index stores the mapping for each word found within a website to its location (basically a set of page paths). It’s on us to decide which fields (e.g. title, content, description, etc.) provide the keys (words) for the index.
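
Conceptually (this is only an illustration, not Lunr’s actual serialized format), such an index is simply a mapping from each token to the refs, that is, the page paths, that contain it:

// Purely illustrative shape of a record-level inverted index:
// each token points to the set of page paths (refs) where it appears.
const invertedIndex = {
  gatsby: ['/hello-world/', '/my-second-post/'],
  search: ['/hello-world/'],
  coffee: ['/new-beginnings/'],
}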

For our blog example, I decided to include all titles and the content of each article. Dealing with titles is straightforward since they are composed only of words. Indexing content is a little more complex. My first try was to use the rawMarkdownBody field. Unfortunately, rawMarkdownBody introduces some unwanted keys resulting from the markdown syntax.

Showing an attempt at using markdown syntax for links.

I obtained a “clean” index using the html field in conjunction with the striptags package (which, as the name suggests, strips out the HTML tags). Before we get into the details, let’s look into the Lunr documentation.

Here’s how we create and populate the Lunr index. We will use this snippet in a moment, specifically in our gatsby-node.js file.

const index = lunr(function () {
  this.ref('slug')
  this.field('title')
  this.field('content')
  for (const doc of documents) {
    this.add(doc)
  }
})

 documents is an array of objects, each with a slug, title and content property:

{
  slug: '/post-slug/',
  title: 'Post Title',
  content: 'Post content with all HTML tags stripped out.'
}

We will define a unique document key (the slug) and two fields (the title and content, or the key providers). Finally, we will add all of the documents, one by one.

Let’s get started.

Creating an index in gatsby-node.js 

Let’s start by installing the libraries that we are going to use.

yarn add lunr graphql-type-json striptags

Next, we need to edit the gatsby-node.js file. The code from this file runs once in the process of building a site, and our aim is to add index creation to the tasks that Gatsby executes on build. 

createResolvers is one of the Gatsby APIs controlling the GraphQL data layer. In this particular case, we will use it to create a new root field; let’s call it LunrIndex.

Gatsby’s internal data store and query capabilities are exposed to GraphQL field resolvers on context.nodeModel. With getAllNodes, we can get all nodes of a specified type:

/* gatsby-node.js */
const { GraphQLJSONObject } = require(`graphql-type-json`)
const striptags = require(`striptags`)
const lunr = require(`lunr`)

exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve: (source, args, context, info) => {
          const blogNodes = context.nodeModel.getAllNodes({
            type: `MarkdownRemark`,
          })
          const type = info.schema.getType(`MarkdownRemark`)
          return createIndex(blogNodes, type, cache)
        },
      },
    },
  })
}

Now let’s focus on the createIndex function. That’s where we will use the Lunr snippet we mentioned in the last section. 

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  // Iterate over all posts
  for (const node of blogNodes) {
    const html = await type.getFields().html.resolve(node)
    // Once html is resolved, add a slug-title-content object to the documents array
    documents.push({
      slug: node.fields.slug,
      title: node.frontmatter.title,
      content: striptags(html),
    })
  }
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    this.field(`content`)
    for (const doc of documents) {
      this.add(doc)
    }
  })
  return index.toJSON()
}

Have you noticed that instead of accessing the HTML directly with const html = node.html, we’re using an await expression? That’s because node.html isn’t available yet. The gatsby-transformer-remark plugin (used by our starter to parse Markdown files) does not generate HTML from markdown immediately when creating the MarkdownRemark nodes. Instead, html is generated lazily when the html field resolver is called in a query. The same actually applies to the excerpt that we will need in just a bit.

Let’s look ahead and think about how we are going to display search results. Users expect to obtain a link to the matching post, with its title as the anchor text. Very likely, they wouldn’t mind a short excerpt as well.

Lunr’s search returns an array of objects representing matching documents by the ref property (which is the unique document key slug in our example). This array contains neither the document title nor the content. Therefore, we need to store the post title and excerpt corresponding to each slug somewhere. We can do that within our LunrIndex as below:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  const store = {}
  for (const node of blogNodes) {
    const { slug } = node.fields
    const title = node.frontmatter.title
    const [html, excerpt] = await Promise.all([
      type.getFields().html.resolve(node),
      type.getFields().excerpt.resolve(node, { pruneLength: 40 }),
    ])
    documents.push({
      // unchanged
    })
    store[slug] = {
      title,
      excerpt,
    }
  }
  const index = lunr(function() {
    // unchanged
  })
  return { index: index.toJSON(), store }
}
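
To see how the index and the store cooperate later at query time, here is a tiny, self-contained sketch (the document and store values are made up for illustration). Lunr’s search() only hands back refs, and the store fills in the rest:

const lunr = require('lunr')

// Made-up data, purely to illustrate the index/store relationship.
const store = {
  '/hello-world/': { title: 'Hello World', excerpt: 'This is my first post…' },
}
const index = lunr(function() {
  this.ref('slug')
  this.field('title')
  this.field('content')
  this.add({ slug: '/hello-world/', title: 'Hello World', content: 'My first Gatsby post' })
})

// search() returns objects carrying only ref, score and matchData…
const results = index.search('gatsby').map(({ ref }) => ({
  slug: ref,
  ...store[ref], // …so the title and excerpt come from the store.
}))
console.log(results)
// [ { slug: '/hello-world/', title: 'Hello World', excerpt: 'This is my first post…' } ]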

Our search index changes only if one of the posts is modified or a new post is added. We don’t need to rebuild the index each time we run gatsby develop. To avoid unnecessary builds, let’s take advantage of the cache API:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const cacheKey = `IndexLunr`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  // unchanged
  const json = { index: index.toJSON(), store }
  await cache.set(cacheKey, json)
  return json
}

Enhancing pages with the search form component

We can now move on to the front end of our implementation. Let’s start by building a search form component.

touch src/components/search-form.js 

I opt for a straightforward solution: an input of type="search", coupled with a label and accompanied by a submit button, all wrapped within a form tag with the search landmark role.

We will add two event handlers, handleSubmit on form submit and handleChange on changes to the search input.

/* src/components/search-form.js */
import React, { useState, useRef } from "react"
import { navigate } from "@reach/router"

const SearchForm = ({ initialQuery = "" }) => {
  // Create a piece of state, and initialize it to initialQuery
  // query will hold the current value of the state,
  // and setQuery will let us change it
  const [query, setQuery] = useState(initialQuery)

  // We need to get a reference to the search input element
  const inputEl = useRef(null)

  // On input change use the current value of the input field (e.target.value)
  // to update the state's query value
  const handleChange = e => {
    setQuery(e.target.value)
  }

  // When the form is submitted navigate to /search
  // with a query q parameter equal to the value within the input search
  const handleSubmit = e => {
    e.preventDefault()
    // `inputEl.current` points to the mounted search input element
    const q = inputEl.current.value
    navigate(`/search?q=${q}`)
  }
  return (
    <form role="search" onSubmit={handleSubmit}>
      <label htmlFor="search-input" style={{ display: "block" }}>
        Search for:
      </label>
      <input
        ref={inputEl}
        id="search-input"
        type="search"
        value={query}
        placeholder="e.g. duck"
        onChange={handleChange}
      />
      <button type="submit">Go</button>
    </form>
  )
}
export default SearchForm

Have you noticed that we’re importing navigate from the @reach/router package? That is necessary since neither Gatsby’s <Link/> nor navigate provide in-route navigation with a query parameter. Instead, we can import @reach/router — there’s no need to install it since Gatsby already includes it — and use its navigate function.

Now that we’ve built our component, let’s add it to our home page (as below) and 404 page.

/* src/pages/index.js */
// unchanged
import SearchForm from "../components/search-form"

const BlogIndex = ({ data, location }) => {
  // unchanged
  return (
    <Layout location={location} title={siteTitle}>
      <SEO title="All posts" />
      <Bio />
      <SearchForm />
      // unchanged

Search results page

Our SearchForm component navigates to the /search route when the form is submitted, but for the moment, there is nothing behind this URL. That means we need to add a new page:

touch src/pages/search.js 

I proceeded by copying and adapting the content of the index.js page. One of the essential modifications concerns the page query (see the very bottom of the file). We will replace allMarkdownRemark with the LunrIndex field.

/* src/pages/search.js */
import React from "react"
import { Link, graphql } from "gatsby"
import { Index } from "lunr"
import Layout from "../components/layout"
import SEO from "../components/seo"
import SearchForm from "../components/search-form"

// We can access the results of the page GraphQL query via the data props
const SearchPage = ({ data, location }) => {
  const siteTitle = data.site.siteMetadata.title

  // We can read what follows the ?q= here
  // URLSearchParams provides a native way to get URL params
  // location.search.slice(1) gets rid of the "?"
  const params = new URLSearchParams(location.search.slice(1))
  const q = params.get("q") || ""

  // LunrIndex is available via page query
  const { store } = data.LunrIndex
  // Lunr in action here
  const index = Index.load(data.LunrIndex.index)
  let results = []
  try {
    // Search is a lunr method
    results = index.search(q).map(({ ref }) => {
      // Map search results to an array of {slug, title, excerpt} objects
      return {
        slug: ref,
        ...store[ref],
      }
    })
  } catch (error) {
    console.log(error)
  }
  return (
    // We will take care of this part in a moment
  )
}
export default SearchPage

export const pageQuery = graphql`
  query {
    site {
      siteMetadata {
        title
      }
    }
    LunrIndex
  }
`

Now that we know how to retrieve the query value and the matching posts, let’s display the content of the page. Notice that on the search page we pass the query value to the <SearchForm /> component via the initialQuery prop. When the user arrives at the search results page, their search query should remain in the input field.

return (
  <Layout location={location} title={siteTitle}>
    <SEO title="Search results" />
    {q ? <h1>Search results</h1> : <h1>What are you looking for?</h1>}
    <SearchForm initialQuery={q} />
    {results.length ? (
      results.map(result => {
        return (
          <article key={result.slug}>
            <h2>
              <Link to={result.slug}>
                {result.title || result.slug}
              </Link>
            </h2>
            <p>{result.excerpt}</p>
          </article>
        )
      })
    ) : (
      <p>Nothing found.</p>
    )}
  </Layout>
)

You can find the complete code in this gatsby-starter-blog fork and the live demo deployed on Netlify.

Instant search widget

Finding the most “logical” and user-friendly way of implementing search may be a challenge in and of itself. Let’s now switch to the real-life example of tartanify.com — a Gatsby-powered website gathering 5,000+ tartan patterns. Since tartans are often associated with clans or organizations, the possibility to search a tartan by name seems to make sense. 

We built tartanify.com as a side project where we feel absolutely free to experiment with things. We didn’t want a classic search results page but an instant search “widget.” Often, a given search keyword corresponds with a number of results — for example, “Ramsay” comes in six variations.  We imagined the search widget would be persistent, meaning it should stay in place when a user navigates from one matching tartan to another.

Let me show you how we made it work with Lunr.  The first step of building the index is very similar to the gatsby-starter-blog example, only simpler:

/* gatsby-node.js */
exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve(source, args, context) {
          const siteNodes = context.nodeModel.getAllNodes({
            type: `TartansCsv`,
          })
          return createIndex(siteNodes, cache)
        },
      },
    },
  })
}

const createIndex = async (nodes, cache) => {
  const cacheKey = `LunrIndex`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  const store = {}
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    for (const node of nodes) {
      const { slug } = node.fields
      const doc = {
        slug,
        title: node.fields.Unique_Name,
      }
      store[slug] = {
        title: doc.title,
      }
      this.add(doc)
    }
  })
  const json = { index: index.toJSON(), store }
  await cache.set(cacheKey, json)
  return json
}

We opted for instant search, which means that search is triggered by any change in the search input instead of a form submission.

/* src/components/searchwidget.js */
import React, { useState } from "react"
import lunr, { Index } from "lunr"
import { graphql, useStaticQuery } from "gatsby"
import SearchResults from "./searchresults"

const SearchWidget = () => {
  const [value, setValue] = useState("")
  // results is now a state variable
  const [results, setResults] = useState([])

  // Since it's not a page component, useStaticQuery for querying data
  // https://www.gatsbyjs.org/docs/use-static-query/
  const { LunrIndex } = useStaticQuery(graphql`
    query {
      LunrIndex
    }
  `)
  const index = Index.load(LunrIndex.index)
  const { store } = LunrIndex
  const handleChange = e => {
    const query = e.target.value
    setValue(query)
    try {
      const search = index.search(query).map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
      setResults(search)
    } catch (error) {
      console.log(error)
    }
  }
  return (
    <div className="search-wrapper">
      {/* You can use a form tag as well, as long as we prevent the default submit behavior */}
      <div role="search">
        <label htmlFor="search-input" className="visually-hidden">
          Search Tartans by Name
        </label>
        <input
          id="search-input"
          type="search"
          value={value}
          onChange={handleChange}
          placeholder="Search Tartans by Name"
        />
      </div>
      <SearchResults results={results} />
    </div>
  )
}
export default SearchWidget

The SearchResults component is structured like this:

/* src/components/searchresults.js */
import React from "react"
import { Link } from "gatsby"

const SearchResults = ({ results }) => (
  <div>
    {results.length ? (
      <>
        <h2>{results.length} tartan(s) matched your query</h2>
        <ul>
          {results.map(result => (
            <li key={result.slug}>
              <Link to={`/tartan/${result.slug}`}>{result.title}</Link>
            </li>
          ))}
        </ul>
      </>
    ) : (
      <p>Sorry, no matches found.</p>
    )}
  </div>
)
export default SearchResults

Making it persistent

Where should we use this component? We could add it to the Layout component. The problem is that our search form will get unmounted on page changes that way. If a user wants to browse all tartans associated with the “Ramsay” clan, they will have to retype their query several times. That’s not ideal.

Thomas Weibenfalk has written a great article on keeping state between pages with local state in Gatsby.js. We will use the same technique, where the wrapPageElement browser API sets persistent UI elements around pages. 

Let’s add the following code to the gatsby-browser.js. You might need to add this file to the root of your project.

/* gatsby-browser.js */
import React from "react"
import SearchWrapper from "./src/components/searchwrapper"

export const wrapPageElement = ({ element, props }) => (
  <SearchWrapper {...props}>{element}</SearchWrapper>
)

Now let’s add a new component file:

touch src/components/searchwrapper.js

Instead of adding the SearchWidget component to the Layout, we will add it to the SearchWrapper, and the magic happens. ✨

/* src/components/searchwrapper.js */
import React from "react"
import SearchWidget from "./searchwidget"

const SearchWrapper = ({ children }) => (
  <>
    {children}
    <SearchWidget />
  </>
)
export default SearchWrapper

Creating a custom search query

At this point, I started to try different keywords but very quickly realized that Lunr’s default search query might not be the best solution when used for instant search.

Why? Imagine that we are looking for tartans associated with the name MacCallum. While typing “MacCallum” letter-by-letter, this is the evolution of the results:

  • m – 2 matches (Lyon, Jeffrey M, Lyon, Jeffrey M (Hunting))
  • ma – no matches
  • mac – 1 match (Brighton Mac Dermotte)
  • macc – no matches
  • macca – no matches
  • maccal – 1 match (MacCall)
  • maccall – 1 match (MacCall)
  • maccallu – no matches
  • maccallum – 3 matches (MacCallum, MacCallum #2, MacCallum of Berwick)

Users will probably type the full name and hit the button if we make a button available. But with instant search, a user is likely to abandon early because they may expect the results to only narrow down as more letters are added to the keyword query.

 That’s not the only problem. Here’s what we get with “Callum”:

  • c – 3 unrelated matches
  • ca – no matches
  • cal – no matches
  • call – no matches
  • callu – no matches
  • callum – one match 

You can see the trouble if someone gives up halfway into typing the full query.

Fortunately, Lunr supports more complex queries, including fuzzy matches, wildcards and boolean logic (e.g. AND, OR, NOT) for multiple terms. All of these are available via a special query syntax, for example:

index.search("+*callum mac*")
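
For reference, a few more things that same query string syntax can express, as documented by Lunr (the terms here are just for illustration):

index.search('mac callum')    // OR by default: matches documents containing either term
index.search('+mac +callum')  // "+" marks a term as required (an AND-style query)
index.search('callum~1')      // fuzzy match with an edit distance of 1
index.search('-hunting mac')  // "-" excludes documents containing a term
index.search('title:callum')  // restrict a term to a specific field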

We could also reach for the index query method to handle it programmatically.

The first solution is not satisfying since it requires more effort from the user. I used the index.query method instead:

/* src/components/searchwidget.js */
const search = index
  .query(function(q) {
    // full term matching
    q.term(el)
    // OR (default)
    // trailing or leading wildcard
    q.term(el, {
      wildcard:
        lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
    })
  })
  .map(({ ref }) => {
    return {
      slug: ref,
      ...store[ref],
    }
  })

Why use full term matching with wildcard matching? That’s necessary for all keywords that “benefit” from the stemming process. For example, the stem of “different” is “differ.”  As a consequence, queries with wildcards — such as differe*, differen* or  different* — all result in no matches, while the full term queries differe, differen and different return matches.
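
Here is a tiny throwaway index that demonstrates the behavior (the document text is made up for illustration):

const lunr = require('lunr')

// One document containing the word "different"; the default pipeline
// indexes it under its stem, "differ".
const index = lunr(function() {
  this.ref('id')
  this.field('text')
  this.add({ id: '1', text: 'a different tartan' })
})

index.search('different')   // 1 match: the query term gets stemmed to "differ" as well
index.search('different*')  // no match: wildcard terms skip stemming, and no
                            // indexed token starts with "different"
index.search('differ*')     // 1 match: the wildcard does match the stored stem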

Fuzzy matches can be used as well. In our case, they are allowed only for terms longer than five characters:

q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
q.term(el, {
  wildcard:
    lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
})

The handleChange function also “cleans up” user inputs and ignores single-character terms:

/* src/components/searchwidget.js */
const handleChange = e => {
  const query = e.target.value || ""
  setValue(query)
  if (!query.length) {
    setResults([])
  }
  const keywords = query
    .trim() // remove trailing and leading spaces
    .replace(/\*/g, "") // remove user's wildcards
    .toLowerCase()
    .split(/\s+/) // split by whitespaces
  // do nothing if the last typed keyword is shorter than 2
  if (keywords[keywords.length - 1].length < 2) {
    return
  }
  try {
    const search = index
      .query(function(q) {
        keywords
          // filter out keywords shorter than 2
          .filter(el => el.length > 1)
          // loop over keywords
          .forEach(el => {
            q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
            q.term(el, {
              wildcard:
                lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
            })
          })
      })
      .map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
    setResults(search)
  } catch (error) {
    console.log(error)
  }
}

Let’s check it in action:

  • m – pending
  • ma – 861 matches
  • mac – 600 matches
  • macc – 35 matches
  • macca – 12 matches
  • maccal – 9 matches
  • maccall – 9 matches
  • maccallu – 3 matches
  • maccallum – 3 matches

Searching for “Callum” works as well, resulting in four matches: Callum, MacCallum, MacCallum #2, and MacCallum of Berwick.

There is one more problem, though: multi-terms queries. Say, you’re looking for “Loch Ness.” There are two tartans associated with  that term, but with the default OR logic, you get a grand total of 96 results. (There are plenty of other lakes in Scotland.)

I wound up deciding that an AND search would work better for this project. Unfortunately, Lunr does not support nested queries, and what we actually need is (keyword1 OR *keyword1*) AND (keyword2 OR *keyword2*).

To overcome this, I ended up moving the terms loop outside the query method and intersecting the results per term. (By intersecting, I mean finding all slugs that appear in all of the per-single-keyword results.)

/* src/components/searchwidget.js */
try {
  // andSearch stores the intersection of all per-term results
  let andSearch = []
  keywords
    .filter(el => el.length > 1)
    // loop over keywords
    .forEach((el, i) => {
      // per-single-keyword results
      const keywordSearch = index
        .query(function(q) {
          q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
          q.term(el, {
            wildcard:
              lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
          })
        })
        .map(({ ref }) => {
          return {
            slug: ref,
            ...store[ref],
          }
        })
      // intersect current keywordSearch with andSearch
      andSearch =
        i > 0
          ? andSearch.filter(x => keywordSearch.some(el => el.slug === x.slug))
          : keywordSearch
    })
  setResults(andSearch)
} catch (error) {
  console.log(error)
}

The source code for tartanify.com is published on GitHub. You can see the complete implementation of the Lunr search there.

Final thoughts

Search is often a non-negotiable feature for finding content on a site. How important the search functionality actually is may vary from one project to another. Nevertheless, there is no reason to abandon it under the pretext that it does not tally with the static character of Jamstack websites. There are many possibilities. We’ve just discussed one of them.

And, paradoxically in this specific example, the result was a better all-around user experience, thanks to the fact that implementing search was not an obvious task but instead required a lot of deliberation. We may not have been able to say the same with an over-the-counter solution.


Let a website be a worry stone

Ethan Marcotte just redesigned his website and wrote about how the process was a distraction from the difficult things that are going on right now. Adding new features to your blog or your portfolio, tidying up performance issues, and improving things bit by bit can certainly relieve a lot of stress. Also? It’s fun!

What about adding a dark mode to our websites? Or playing around with Next.js? How about finally updating to that static site generator we’ve always wanted to experiment with? Or perhaps we could make the background color of our website animate slowly over time, like Robin’s? Or what about rolling up our sleeves and making a buck wild animation like the one on Sarah’s homepage?

Not so long ago, I felt a bout of intense anxiety hit me out of nowhere and I wound up updating my own website — it was nothing short of relaxing, like going to the spa for a day. I suddenly realized that I could just throw all that stress at my website and do something half-useful in the process. One evening I sat down to focus on my Lighthouse score, the next day was all about fonts, and after that I made a bunch of commits to update the layouts on my site.

This isn’t about being productive right now — it’s barely possible to focus on anything for me with the state of things. And also I’m certainly not trying to guilt trip anyone into cranking out more websites. We all need to take a breath and take each day at a time.

But! If treating your website like a worry stone can help, then I think it’s time to roll up our sleeves and make something weird for ourselves.


Emergency Website Kit

Here’s an outstanding idea from Max Böck. He’s created a boilerplate project for building websites that fit within a single HTTP request. This is extremely important for websites that contain critical information for public safety. As Max writes:

In cases of emergency, many organizations need a quick way to publish critical information. But existing (CMS) websites are often unable to handle sudden spikes in traffic.

What’s so special about this boilerplate? Well, it does smart stuff like:

  • generates a static site using Eleventy,
  • uses minimal markup with inlined CSS,
  • aims to transmit everything in the first connection roundtrip (~14KB),
  • progressively enables offline-support with Service Workers,
  • uses Netlify CMS for easy content editing, and
  • provides one-click deployment via Netlify to get off the ground quickly

The example website that Max built with this boilerplate is shockingly fast and I would go one step further to argue that all websites should feel as fast as this, not just websites that are useful in an emergency.


15 Things to Improve Your Website Accessibility

This is a really great list from Bruce. There is a lot of directly actionable stuff here. Send it around to your team and make it something that you all go through together.

Here’s a little one that prodded me to finally fix…

Most screen readers allow the user to quickly see a list of links on a page [..] However, if every link has text saying “Click here” or “Read more”, with nothing else to distinguish them, this is useless. The easiest way to solve this is simply to write unique link text, but if that isn’t possible, you can over-ride the link text for assistive technology by using a unique aria-label attribute on each link.

I had links like that right here on CSS-Tricks. Some of them are automatically created by WordPress itself, not something I hand-coded into a template. When you show the_excerpt of a post, you get a “read more” link automatically, and aside from getting your hands dirty with some filters, you don’t have that much control over it.

DevTools showing the DOM of a "read more" link with no context.

Fortunately, I already use a cool plugin called Advanced Excerpt. I poked into the settings to see if I could do something about injecting the post title in there somehow. Lookie lookie:

A setting for Advanced Excerpt that does screen reader links.

That screen-reader-text class is exactly what I already used for that kind of stuff, so it was a one-click fix!

Much nicer DOM now for those links:


Considerations When Choosing Fonts for a Multilingual Website

As a front-end developer working for clients all over the world, I’ve always struggled to deal with multilingual websites — especially cases where both right-to-left (RTL) and left-to-right (LTR) are used. That said, I’ve learned a few things along the way and am going to share a few tips in this post.

Let’s work in Arabic and English, not just because Arabic is my native language, but because it’s a classic example of RTL in use.

Adding RTL support to a site

Before this though, we’ll want to add support for an RTL language on our site. There are two ways we can go about this, both of which aren’t exactly ideal.

Not ideal: Use a specific font for each direction

The first way we could go about this is to rely on the direction (dir) attribute on any given element (which is usually the <html> element so it sets the direction globally):

/* For LTR, use Roboto */
[dir=ltr] body {
  font-family: "Roboto", sans-serif;
}

/* For RTL, use Amiri */
[dir=rtl] body {
  font-family: "Amiri", sans-serif;
}

PostCSS-RTL makes it even easier to generate styles for each direction, but the issue with this method is that each element still ends up with a single font, which is not ideal if you have multiple languages in one paragraph.

Here’s why. You’ll find that multi-lingual paragraphs mess up the UI because the Arabic glyphs are given a default font that doesn’t align with the element.

It might be worse in some browsers than others.

Also not ideal: Use one single font that supports both languages

The second option could be using fonts that offer support for both directions. However, in my personal experience, using just one font for both languages limits creativity and freedom to use a different font for each direction. It might not be that bad, depending on the design requirements. But I’ve definitely worked on projects where it makes a difference.

OK, so what then?

We need a simpler solution. According to MDN:

Font selection does not simply stop at the first font in the list that is on the user’s system. Rather, font selection is done one character at a time, so that if an available font does not have a glyph for a needed character, the latter fonts are tried.

Meaning, we can still use the font-family property, but with a fallback for cases where the first font doesn’t have a glyph for a character. This actually solves both of the issues above, two birds with one stone style!

body {
  font-family: 'Roboto', 'Amiri', 'Galada', sans-serif;
}

Why does this work?

Just like the way flexbox and CSS grid make CSS layouts a lot more flexible, the font matching algorithm makes it even easier to work with content in different languages. Here’s what the W3C says about matching characters to fonts:

When text contains characters such as combining marks, ideally the base character should be rendered using the same font as the mark, this assures proper placement of the mark. For this reason, the font matching algorithm for clusters is more specialized than the general case of matching a single character by itself. For sequences containing variation selectors, which indicate the precise glyph to be used for a given character, user agents always attempt system font fallback to find the appropriate glyph before using the default glyph of the base character.

(Emphasis mine)

And how are fonts matched? The spec outlines the steps the algorithm takes, which I’ll paraphrase here.

  • The browser looks at a cluster of text and tries to match it to the list of fonts that are declared in CSS.
  • If it finds a font that supports all of the characters, great! That’s what gets used.
  • If the browser doesn’t find a font that supports all of the characters, it re-reads the list of fonts to find one that supports the unmatched characters and applies it to those specific characters. 
  • If the browser doesn’t find a font in the list that matches either all of the characters in the cluster or the individual characters, then it reaches for the default system font and checks that it supports all of the characters.
  • If the default system font matches, again, great! That’s what gets used.
  • If the system font doesn’t work, that’s where the browser renders a broken glyph.

Let’s talk performance

The sequence we just looked at could be taxing on a site’s performance. Imagine the browser having to loop through every defined fallback, match specific characters to glyphs, and download font files based on what it finds. That can add up to a lot of work, not to mention FOUT and other rendering weirdness.

The goal is to let the font matching algorithm decide which font to apply to each text instead of relying on one font for both languages or adding extra CSS to handle different directions. If a font is never applied to anything (say a particular page is in RTL and happens to not have any LTR text on it, or vice versa) the font further down the stack that isn’t used is never downloaded.

Making that happen requires selecting good multilingual fonts. Good multilingual fonts are ones that have glyphs for as many characters as you anticipate using on a page. And if you are unable to find one that supports them all, using one that supports most of them and then falling back to another font that does is an efficient way to go. If that happens to be the default system font, that’s just as great because it’s one less downloaded font file.


The good thing about letting the font-family property decide the font for each glyph (instead of making extra CSS selectors for each direction) is that the behavior is already there as we outlined earlier — we simply need to make use of it. 


Do This to Improve Image Loading on Your Website

Jen Simmons explains how to improve image loading by simply using width and height attributes. The issue is that there’s a lot of jank when an image is first loaded because an img will naturally have a height of 0 before the image asset has been successfully downloaded by the browser. The browser then needs to repaint the page, which pushes all the content around. I’ve definitely seen this problem a lot on big news websites.

Anyway, Jen is recommending that we should add height and width attributes to images like so:

<img src="dog.png" height="400" width="1000" alt="A cool dog" />

This is because Firefox will now take those values into consideration and remove all the jank before the image has loaded. That means content will always stay in the same position, even if the image hasn’t loaded yet. In the past, I’ve worked on a bunch of projects where I’ve placed images lower down the page simply because I want to prevent this sort of jank. I reckon this fixes that problem quite nicely.


Free Website Builder + Free CRM + Free Live Chat = Bitrix24

(This is a sponsored post.)

You may know Bitrix24 as the world’s most popular free CRM and sales management system, used by over 6 million businesses. But the free website builder available inside Bitrix24 is worthy of your attention, too.

Why do I need another free website/landing page builder?

There are many ways to create free websites — Wix, Squarespace, WordPress, etc. And if you need a blog — Medium, Tumblr and others are at your disposal. Bitrix24 is geared toward businesses that need websites to generate leads, sell online, issue invoices or accept payments. And there’s a world of difference between regular website builders and the ones that are designed with specific business needs in mind.

What does a good business website builder do? First, it creates websites that engage visitors so that they start interacting. This is done with the help of tools like website live chat, a contact form or a call back request widget. Second, it comes with a landing page designer, because business websites are all about conversion rates, and increasing conversion rates requires endless tweaking and repeated testing. Third, integration between a website and a CRM system is crucial. It’s difficult to attract traffic to websites, and advertising is expensive. So, it makes sense that every prospect from the website is logged into the CRM automatically and that you sell your goods and services to clients not only once but on a regular basis. This is why Bitrix24 comes with email and SMS marketing and an advertising ROI calculator.

Another critical requirement for many business websites is the ability to accept payments online and function as an ecommerce store, with order processing and inventory management. Bitrix24 does that too. Importantly, unlike other ecommerce platforms, Bitrix24 doesn’t charge any transaction fees or come with sales volume limits.

What else does Bitrix24 offer free of charge?

The only practical limit of the free plan is 12 users inside the account. You can use your own domain free of charge, the bandwidth is free and unlimited and there’s only a technical limit on the number of free pages allowed (around 100) in order to prevent misusing Bitrix24 for SEO-spam pages. In addition to offering free cloud service, Bitrix24 has on-premise editions with open source code access that can be purchased. This means that you can migrate your cloud Bitrix24 account to your own server at any moment, if necessary.

To register your free Bitrix24 account, simply click here. And if you have a public Facebook or Twitter profile and share this post, you’ll be automatically entered into a contest, in which the winner gets a 24-month subscription to the Bitrix24 Professional plan ($3,336 value).
