
Roll Your Own Comments With Gatsby and FaunaDB

If you haven’t used Gatsby before, have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before, you’re in for a treat. If you’re looking to turn your static sites into full-blown Jamstack applications, this is the back-end solution for you!

This tutorial will only focus on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with input fields that allow users to comment on your posts and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there’s a very generous free tier, perfect for those small projects that need a back end. There’s also native support for GraphQL queries, and it has some really powerful indexing features!

…and to quote the creators:

No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you

You can see the finished comments app here.

Get Started

To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments

or:

git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git

Then install all the dependencies:

npm install

Also cd into functions/apollo-graphql and install the dependencies for the Netlify function:

npm install

This is a separate package and has its own dependencies; you’ll be using this later.

We also need to install the Netlify CLI as you’ll also use this later:

npm install netlify-cli -g

Now let’s add three new files that aren’t part of the repo.

At the root of your project create a .env, .env.development and .env.production file.

Add the following to .env:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =

Add the following to .env.development:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =

Add the following to .env.production:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =

You’ll come back to these later, but in case you’re wondering:

  • GATSBY_FAUNA_DB is the FaunaDB secret key for your database
  • GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
  • GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
  • GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you
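If you want to fail fast when one of these values is missing, a tiny guard like the one below can help. This helper is my own addition, not part of the repo; Gatsby exposes variables prefixed with GATSBY_ via process.env at build time.

```javascript
// Hypothetical helper: throw early if a required env var is missing.
const requireEnv = (names, env = process.env) => {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return names.map((name) => env[name]);
};

// Example usage with a fake env object standing in for process.env:
const [db, collection] = requireEnv(
  ['GATSBY_FAUNA_DB', 'GATSBY_FAUNA_COLLECTION'],
  { GATSBY_FAUNA_DB: 'secret', GATSBY_FAUNA_COLLECTION: 'demo-blog-comments' }
);
```

Calling it once at the top of a serverless function means a misconfigured deploy fails with a clear message instead of a confusing FaunaDB auth error.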

If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.

FaunaDB

So let’s get cracking, but before we write any operations head over to https://fauna.com/ and sign up!

Database and Collection

  • Create a new database by clicking NEW DATABASE
  • Name the database: I’ve called the demo database fauna-gatsby-comments
  • Create a new Collection by clicking NEW COLLECTION
  • Name the collection: I’ve called the demo collection demo-blog-comments

Server Key

Now you’ll need to set up a server key. Go to SECURITY.

  • Create a new key by clicking NEW KEY
  • Select the database you want the key to apply to, fauna-gatsby-comments for example
  • Set the Role as Admin
  • Name the server key: I’ve called the demo key demo-blog-server-key

Environment Variables Pt. 1

Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.

You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.

Adding these values to .env is just so you can test your development FaunaDB operations, which you’ll do next.
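For context, the repo includes boop.js, a small scratch file for running these operations from the command line. The dispatch behind a command like node boop createComment can be sketched roughly like this (a simplified approximation with stubbed operations, not the repo’s exact code):

```javascript
// Simplified sketch of a CLI dispatcher like boop.js might use.
// The real file wires these up to FaunaDB; here the operations are stubs.
const operations = {
  createComment: async () => ({ commentId: '123' }),
  deleteCommentById: async () => ({ commentId: '123' }),
};

const run = async (name) => {
  const operation = operations[name];
  if (!operation) {
    throw new Error(`Unknown operation: ${name}`);
  }
  return operation();
};

// `node boop createComment` would effectively call:
// run(process.argv[2])
```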

Let’s start by creating a comment so head back to boop.js:

// boop.js

...

// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},

...

The breakdown of this function is as follows:

  • q is the instance of faunadb.query
  • Create is the FaunaDB method to create an entry within a collection
  • Collection is the area in the database where the data is stored. Create takes the collection as the first argument and a data object as the second.

The second argument is the shape of the data you need to drive the applications comment system.

For now you’re going to hard-code slug, name and comment, but in the final app these values are captured by the input form on the posts page and passed in via args.

The breakdown for the shape is as follows:

  • isApproved is the status of the comment and by default it’s false until we approve it in the Admin page
  • slug is the path to the post where the comment was written
  • date is the time stamp the comment was written
  • name is the name the user entered in the comments form
  • comment is the comment the user entered in the comments form
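Pulled out on its own, the data object passed to Create can be built like this. This is a standalone sketch of the shape above, not a function from the repo:

```javascript
// Build the comment document shape described above.
// `now` is injectable so the function is easy to test.
const buildCommentData = ({ slug, name, comment }, now = new Date()) => ({
  isApproved: false, // comments always start unapproved
  slug,
  date: now.toString(),
  name,
  comment,
});

const data = buildCommentData({
  slug: '/posts/some-post',
  name: 'some name',
  comment: 'some comment',
});
```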

When you (or a user) create a comment, you’re not really interested in dealing with the response, because as far as the user is concerned all they’ll see is either a success or error message.

After a user has posted a comment it will go into your Admin queue until you approve it, but if you did want to return something you could surface it in the UI by returning a value from the createComment function.

Create a comment

If you’ve hard-coded a slug, name and comment you can now run the following in your CLI:

node boop createComment

If everything worked correctly you should see a log in your terminal of the new comment.

{
  "ref": {
    "@ref": {
      "id": "263413122555970050",
      "collection": {
        "@ref": {
          "id": "demo-blog-comments",
          "collection": {
            "@ref": {
              "id": "collections"
            }
          }
        }
      }
    }
  },
  "ts": 1587469179600000,
  "data": {
    "isApproved": false,
    "slug": "/posts/some-post",
    "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)",
    "name": "some name",
    "comment": "some comment"
  }
}
{ commentId: '263413122555970050' }
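The only part of that response the app really needs is the document id. In live code the FaunaDB client gives you results.ref.id directly; the "@ref" wrapping only appears in the serialized form logged above. A small helper that handles both shapes (my own illustration, not repo code):

```javascript
// Pull the document id out of a FaunaDB create/update response.
// Handles both the live client object ({ ref: { id } }) and the
// JSON-serialized form ({ ref: { "@ref": { id } } }) shown in the log.
const getCommentId = (results) =>
  results.ref && results.ref['@ref'] ? results.ref['@ref'].id : results.ref.id;

const serialized = { ref: { '@ref': { id: '263413122555970050' } } };
const live = { ref: { id: '263413122555970050' } };
```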

If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.

node boop createComment

Do this a few times so you end up with at least three new comments stored in the database, you’ll use these in a moment.

Delete comment by id

Now that you can create comments you’ll also need to be able to delete a comment.

By adding the commentId of one of the comments you created above you can delete it from the database. The commentId is the id in the ref.@ref object.

Again you’re not really concerned with the return value here but if you wanted to surface this in the UI you could do so by returning something from the deleteCommentById function.

// boop.js

...

// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";
  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is the method used to run operations against FaunaDB
  • q is the instance of faunadb.query
  • Delete is the FaunaDB delete method to delete entries from a collection
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop deleteCommentById

If you head back over to COLLECTIONS in FaunaDB you should see that the entry no longer exists in the collection.

Indexes

Next you’re going to create an INDEX in FaunaDB.

An INDEX allows you to query the database with a specific term and define a specific data shape to return.

When working with GraphQL and/or TypeScript this is really powerful because you can use FaunaDB indexes to return only the data you need and in a predictable shape. This makes typing responses in GraphQL and/or TypeScript a dream… I’ve worked on a number of applications that just return a massive object of useless values which will inevitably cause bugs in your app. Blurg!

  • Go to INDEXES and click NEW INDEX
  • Name the index: I’ve called this one get-all-comments
  • Set the source collection to the name of the collection you setup earlier

As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.

You can do this by adding “values”, but be careful to enter the values exactly as they appear below because (on the FaunaDB free tier) you can’t amend them after you’ve created them. If there’s a mistake you’ll have to delete the index and start again… bummer!

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

Get all comments

// boop.js

...

// GET ALL COMMENTS
getAllComments: async () => {
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-all-comments")))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”.
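Because the index returns plain tuples in the order the values were defined, the mapping step can be tested in isolation. Here’s a standalone version of the map inside getAllComments, run against a fake row:

```javascript
// Convert the tuple rows returned by the index into named objects.
// Row order matches the index values: ref, isApproved, slug, date, name, comment.
const mapCommentRows = (rows) =>
  rows.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));

const mapped = mapCommentRows([
  [{ id: '1' }, false, '/posts/a', 'some date', 'some name', 'hi'],
]);
```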

If you run the following you should see the list of all the comments you created earlier:

node boop getAllComments

Get comments by slug

You’re going to take a similar approach as above but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that when you get-comments-by-slug you’ll need to tell FaunaDB about this specific term and you can do this by adding data.slug to the Terms field.

  • Go to INDEX and click NEW INDEX
  • Name the index, I’ve called this one get-comments-by-slug
  • Set the source collection to the name of the collection you setup earlier
  • Add data.slug in the terms field

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

// boop.js

...

// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”. You can create this Index in the same way you did above, but be sure to add a value for Terms, and again, enter the values with care.

If you run the following you should see the list of all the comments you created earlier but for a specific slug:

node boop getCommentsBySlug

Approve comment by id

When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.

You’ll now need to create a function to do this but you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:

// boop.js

...

// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'
  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    isApproved: results.data.isApproved,
  };
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is the method used to run operations against FaunaDB
  • q is the instance of faunadb.query
  • Update is the FaunaDB method to update an entry
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop approveCommentById

If you run getCommentsBySlug again you should now see that the isApproved status of the entry whose commentId you hard-coded has changed to true.

node boop getCommentsBySlug

These are all the operations required to manage the data from the app.

In the repo, if you have a look at apollo-graphql.js, which can be found in functions/apollo-graphql, you’ll see all of the above operations. As mentioned before, the hard-coded values are replaced by args; these are the values passed in from various parts of the app.

Netlify

Assuming you’ve completed the Netlify sign up process or already have an account with Netlify you can now push the demo app to your GitHub account.

To do this you’ll need to have initialized git locally, added a remote and pushed the demo repo upstream before proceeding.

You should now be able to link the repo up to Netlify’s Continuous Deployment.

If you click the “New site from Git” button on the Netlify dashboard you can authorize access to your GitHub account and select the gatsby-fauna-comments repo to enable Netlify’s Continuous Deployment. You’ll need to have deployed at least once so that you have a public URL for your app.

The URL will look something like this: https://ecstatic-lewin-b1bd17.netlify.app. Feel free to rename it, but make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2

In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You’ll also need to add the same to Netlify’s Environment variables.

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name

While you’re here you’ll also need to amend the Sensitive variable policy: select Deploy without restrictions.

Netlify Identity Widget

I mentioned before that when a comment is created the isApproved value is set to false; this prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become admin you’ll need to create an identity.

You can achieve this by using the Netlify Identity Widget.

If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won’t see any users in here just yet, so let’s use the app to connect the dots. But before you do that, make sure you click Enable Identity.

Before you continue, I just want to point out that you’ll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you’ll be using some “special” Netlify methods in the app, and starting the server with netlify dev is required to spin up the various processes you’ll be using.

  • Spin up the app using netlify dev
  • Navigate to http://localhost:8888/admin/
  • Click the Sign Up button in the header

You will also need to point the Netlify Identity widget at your newly deployed app URL. This is the URL I mentioned you’d need to make a note of earlier; if you’ve not renamed your app it’ll look something like this: https://ecstatic-lewin-b1bd17.netlify.app/. There will be a prompt in the pop-up window to Set site’s URL.

You can now complete the necessary sign up steps.

After sign up you’ll get an email asking you to confirm your identity. Once that’s completed, refresh the Identity page in Netlify and you should see yourself as a user.

It’s now login time, but before you do this find Identity.js in src/components and temporarily un-comment the console.log() on line 14. This will log the Netlify Identity user object to the console.

  • Restart your local server
  • Spin up the app again using netlify dev
  • Click the Login button in the header

If this all works you should be able to see a console log for netlifyIdentity.currentUser. Find the id key and copy the value.

Set this as the value for GATSBY_ADMIN_ID in both .env.production and .env.development.

You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.

GATSBY_ADMIN_ID = your Netlify Identity user id
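To give an idea of how the app can use this value, the Admin page can compare the logged-in user’s id against it. This is a hedged sketch of that check, not the repo’s exact code:

```javascript
// Hypothetical admin check: compare the Netlify Identity user id
// against the GATSBY_ADMIN_ID environment variable.
const isAdmin = (user, adminId = process.env.GATSBY_ADMIN_ID) =>
  Boolean(user && adminId && user.id === adminId);

// Example with hard-coded ids instead of a real Identity user object:
const admin = isAdmin({ id: 'abc-123' }, 'abc-123');
const visitor = isAdmin({ id: 'xyz-789' }, 'abc-123');
```

Note that because GATSBY_-prefixed variables are bundled into client-side code, this gates the UI only; the serverless function is where real enforcement would belong.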

…and finally

  • Restart your local server
  • Spin up the app again using netlify dev

Now you should be able to login as “Admin”… hooray!

Navigate to http://localhost:8888/admin/ and Login.

It’s important to note that you’ll be using localhost:8888 for development now, and NOT localhost:8000, which is more common with Gatsby development.

Before you test this in the deployed environment make sure you go back to Netlify’s Environment variables and add your Netlify Identity user id to the Environment variables!

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_ADMIN_ID = your Netlify Identity user id

If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.

Naturally only approved comments will be displayed on any given post and deleted ones are gone forever.

If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.


By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.

Visit Paulie’s Blog at: www.paulie.dev

The post Roll Your Own Comments With Gatsby and FaunaDB appeared first on CSS-Tricks.


How to Make Taxonomy Pages With Gatsby and Sanity.io

In this tutorial, we’ll cover how to make taxonomy pages in Gatsby using structured content from Sanity.io. You will learn how to use Gatsby’s Node creation APIs to add fields to your content types in Gatsby’s GraphQL API. Specifically, we’re going to create category pages for Sanity’s blog starter.

That being said, there is nothing Sanity-specific about what we’re covering here. You’re able to do this regardless of which content source you may have. We’re just reaching for Sanity.io for the sake of demonstration.

Get up and running with the blog

If you want to follow this tutorial with your own Gatsby project, go ahead and skip to the section for creating a new page template in Gatsby. If not, head over to sanity.io/create and launch the Gatsby blog starter. It will put the code for Sanity Studio and the Gatsby front-end in your GitHub account and set up the deployment for both on Netlify. All the configuration, including example content, will be in place so that you can dive right into learning how to create taxonomy pages.

Once the project is initiated, make sure to clone the new GitHub repository locally and install the dependencies:

git clone git@github.com:username/your-repository-name.git
cd your-repository-name
npm i

If you want to run both Sanity Studio (the CMS) and the Gatsby front-end locally, you can do so by running the command npm run dev in a terminal from the project root. You can also cd into the web folder and just run Gatsby with the same command.

You should also install the Sanity CLI and log in to your account from the terminal: npm i -g @sanity/cli && sanity login. This will give you tooling and useful commands to interact with Sanity projects. You can add the --help flag to get more information on its functionality and commands.

We will be doing some customization to the gatsby-node.js file. To see the result of the changes, restart Gatsby’s development server. This is done in most systems by hitting CTRL + C in the terminal and running npm run dev again.

Getting familiar with the content model

Look into the /studio/schemas/documents folder. There are schema files for our main content types: author, category, site settings, and posts. Each of the files exports a JavaScript object that defines the fields and properties of these content types. Inside of post.js is the field definition for categories:

{
  name: 'categories',
  type: 'array',
  title: 'Categories',
  of: [
    {
      type: 'reference',
      to: {
        type: 'category'
      }
    }
  ]
},

This will create an array field with reference objects to category documents. Inside of the blog’s studio it will look like this:

An array field with references to category documents in the blog studio

Adding slugs to the category type

Head over to /studio/schemas/documents/category.js. There is a simple content model for a category that consists of a title and a description. Now that we’re creating dedicated pages for categories, it would be handy to have a slug field as well. We can define that in the schema like this:

// studio/schemas/documents/category.js
export default {
  name: 'category',
  type: 'document',
  title: 'Category',
  fields: [
    {
      name: 'title',
      type: 'string',
      title: 'Title'
    },
    {
      name: 'slug',
      type: 'slug',
      title: 'Slug',
      options: {
        // add a button to generate slug from the title field
        source: 'title'
      }
    },
    {
      name: 'description',
      type: 'text',
      title: 'Description'
    }
  ]
}

Now that we have changed the content model, we need to update the GraphQL schema definition as well. Do this by executing npm run graphql-deploy (alternatively: sanity graphql deploy) in the studio folder. You will get warnings about breaking changes, but since we are only adding a field, you can proceed without worry. If you want the field to be accessible in your studio on Netlify, check the changes into git (with git add . && git commit -m "add slug field") and push them to your GitHub repository (git push origin master).

Now we should go through the categories and generate slugs for them. Remember to hit the publish button to make the changes accessible for Gatsby! And if you were running Gatsby’s development server, you’ll need to restart that too.

Quick sidenote on how the Sanity source plugin works

When starting Gatsby in development or building the website, the source plugin will first fetch the GraphQL schema definitions from Sanity’s deployed GraphQL API. The source plugin uses this to tell Gatsby which fields should be available, to prevent it from breaking if the content for certain fields happens to disappear. Then it will hit the project’s export endpoint, which streams all the accessible documents to Gatsby’s in-memory datastore.

In other words, the whole site is built with two requests. Running the development server will also set up a listener that pushes whatever changes come from Sanity to Gatsby in real time, without doing additional API queries. If we give the source plugin a token with permission to read drafts, we’ll see the changes instantly. This can also be experienced with Gatsby Preview.

Adding a category page template in Gatsby

Now that we have the GraphQL schema definition and some content ready, we can dive into creating category page templates in Gatsby. We need to do two things:

  • Tell Gatsby to create pages for the category nodes (that is Gatsby’s term for “documents”).
  • Give Gatsby a template file to generate the HTML with the page data.

Begin by opening the /web/gatsby-node.js file. There’s already code here that creates the blog post pages. We’ll largely leverage this exact code, but for categories. Let’s take it step by step:

Between the createBlogPostPages function and the line that starts with exports.createPages, we can add the following code. I’ve put in comments here to explain what’s going on:

// web/gatsby-node.js

// ...

async function createCategoryPages (graphql, actions) {
  // Get Gatsby's method for creating new pages
  const {createPage} = actions
  // Query Gatsby's GraphQL API for all the categories that come from Sanity
  // You can query this API on http://localhost:8000/___graphql
  const result = await graphql(`{
    allSanityCategory {
      nodes {
        slug {
          current
        }
        id
      }
    }
  }
  `)
  // If there are any errors in the query, cancel the build and tell us
  if (result.errors) throw result.errors

  // Let's gracefully handle if allSanityCategory is null
  const categoryNodes = (result.data.allSanityCategory || {}).nodes || []

  categoryNodes
    // Loop through the category nodes, but don't return anything
    .forEach((node) => {
      // Destructure the id and slug fields for each category
      const {id, slug = {}} = node
      // If there isn't a slug, we want to do nothing
      if (!slug) return

      // Make the URL with the current slug
      const path = `/categories/${slug.current}`

      // Create the page using the URL path and the template file, and pass down the id
      // that we can use to query for the right category in the template file
      createPage({
        path,
        component: require.resolve('./src/templates/category.js'),
        context: {id}
      })
    })
}

Last, this function is needed at the bottom of the file:

// /web/gatsby-node.js

// ...

exports.createPages = async ({graphql, actions}) => {
  await createBlogPostPages(graphql, actions)
  await createCategoryPages(graphql, actions) // <= add the function here
}

Now that we have the machinery to create the category page node in place, we need to add a template for how it actually should look in the browser. We’ll base it on the existing blog post template to get some consistent styling, but keep it fairly simple in the process.

// /web/src/templates/category.js
import React from 'react'
import {graphql} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'

export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
    }
  }
`
const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  const {title, description} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

We are using the ID that was passed into the context in gatsby-node.js to query the category content. Then we use it to query the title and description fields that are on the category type. Make sure to restart with npm run dev after saving these changes, and head over to localhost:8000/categories/structured-content in the browser. The page should look something like this:

A barebones category page with a site title, Archive link, page title, dummy content and a copyright in the footer.

Cool stuff! But it would be even cooler if we could actually see which posts belong to this category because, well, that’s kinda the point of having categories in the first place, right? Ideally, we should be able to query for a “posts” field on the category object.

Before we learn how to do that, we need to take a step back to understand how Sanity’s references work.

Querying Sanity’s references

Even though we’re only defining the references in one type, Sanity’s datastore will index them “bi-directionally.” That means creating a reference to the “Structured content” category document from a post lets Sanity know that the category has these incoming references, and it will keep you from deleting the category as long as the reference exists (references can be set as “weak” to override this behavior). If we use GROQ, we can query categories and join the posts that reference them like this (see the query and result in action on groq.dev):

*[_type == "category"]{
  _id,
  _type,
  title,
  "posts": *[_type == "post" && references(^._id)]{
    title,
    slug
  }
}
// alternative: *[_type == "post" && ^._id in categories[]._ref]{

This outputs a data structure that lets us make a simple category post template:

[
  {
    "_id": "39d2ca7f-4862-4ab2-b902-0bf10f1d4c34",
    "_type": "category",
    "title": "Structured content",
    "posts": [
      {
        "title": "Exploration powered by structured content",
        "slug": {
          "_type": "slug",
          "current": "exploration-powered-by-structured-content"
        }
      },
      {
        "title": "My brand new blog powered by Sanity.io",
        "slug": {
          "_type": "slug",
          "current": "my-brand-new-blog-powered-by-sanity-io"
        }
      }
    ]
  },
  // ... more entries
]
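To make the join direction concrete, the same lookup can be expressed in plain JavaScript over in-memory documents. This is illustrative only; Sanity’s datastore does this indexing for you:

```javascript
// Join posts onto categories by their reference ids, mimicking
// what `references(^._id)` does in the GROQ query above.
const joinPosts = (categories, posts) =>
  categories.map((category) => ({
    ...category,
    posts: posts.filter((post) =>
      (post.categories || []).some((ref) => ref._ref === category._id)
    ),
  }));

// Minimal fake documents shaped like Sanity's reference arrays:
const categories = [{ _id: 'c1', title: 'Structured content' }];
const posts = [
  { title: 'Post A', categories: [{ _ref: 'c1' }] },
  { title: 'Post B', categories: [{ _ref: 'c2' }] },
];
const joined = joinPosts(categories, posts);
```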

That’s fine for GROQ, what about GraphQL?

Here’s the kicker: as of yet, this kind of query isn’t possible with Gatsby’s GraphQL API out of the box. But fear not! Gatsby has a powerful API for changing its GraphQL schema that lets us add fields.

Using createResolvers to edit Gatsby’s GraphQL API

Gatsby holds all the content in memory when it builds your site and exposes some APIs that let us tap into how it processes this information. Among these are the Node APIs. It’s probably good to clarify that a “node” in Gatsby is not to be confused with Node.js. The creators of Gatsby have borrowed “edges and nodes” from graph theory, where “edges” are the connections between the “nodes”, which are the “points” where the actual content is located. Since an edge is a connection between nodes, it can have a “next” and “previous” property.

The edges with next and previous, and the node with fields in GraphQL’s API explorer
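As a quick illustration of that terminology, here’s how an ordered list of nodes can be turned into edges with next and previous pointers. This is a sketch of the shape Gatsby exposes, not its internals:

```javascript
// Build edge objects with next/previous links from an ordered list of nodes.
const toEdges = (nodes) =>
  nodes.map((node, index) => ({
    node,
    previous: index > 0 ? nodes[index - 1] : null,
    next: index < nodes.length - 1 ? nodes[index + 1] : null,
  }));

const edges = toEdges([{ id: 'a' }, { id: 'b' }, { id: 'c' }]);
```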

The Node APIs are used by plugins first and foremost, but they can be used to customize how our GraphQL API should work as well. One of these APIs is called createResolvers. It’s fairly new and it lets us tap into how a type’s nodes are created so we can make queries that add data to them.

Let’s use it to add the following logic:

  • When the nodes are created, check for those with the SanityCategory type.
  • If a node matches this type, create a new field called posts and set it to the SanityPost type.
  • Then run a query that filters for posts listing a category whose ID matches the current category’s ID.
  • If there are matching IDs, add the content of the post nodes to this field.

Add the following code to the /web/gatsby-node.js file, either below or above the code that’s already in there:

// /web/gatsby-node.js
// Notice the capitalized type names
exports.createResolvers = ({createResolvers}) => {
  const resolvers = {
    SanityCategory: {
      posts: {
        type: ['SanityPost'],
        resolve (source, args, context, info) {
          return context.nodeModel.runQuery({
            type: 'SanityPost',
            query: {
              filter: {
                categories: {
                  elemMatch: {
                    _id: {
                      eq: source._id
                    }
                  }
                }
              }
            }
          })
        }
      }
    }
  }
  createResolvers(resolvers)
}

Now, let’s restart Gatsby’s development server. We should be able to find a new field for posts inside of the sanityCategory and allSanityCategory types.

A GraphQL query for categories with the category title and the titles of the belonging posts

Adding the list of posts to the category template

Now that we have the data we need, we can return to our category page template (/web/src/templates/category.js) and add a list with links to the posts belonging to the category.

// /web/src/templates/category.js
import React from 'react'
import {graphql, Link} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'
// Import a function to build the blog URL
import {getBlogUrl} from '../lib/helpers'

// Add “posts” to the GraphQL query
export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
      posts {
        _id
        title
        publishedAt
        slug {
          current
        }
      }
    }
  }
`
const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  // Destructure the new posts property from props
  const {title, description, posts} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
          {/*
            If there are any posts, add the heading,
            with the list of links to the posts
          */}
          {posts && (
            <React.Fragment>
              <h2>Posts</h2>
              <ul>
                {posts.map(post => (
                  <li key={post._id}>
                    <Link to={getBlogUrl(post.publishedAt, post.slug)}>{post.title}</Link>
                  </li>
                ))}
              </ul>
            </React.Fragment>
          )}
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

This code will produce this simple category page with a list of linked posts – just like we wanted!

The category page with the category title and description, as well as a list of its posts

Go make taxonomy pages!

We just completed the process of creating new page types with custom page templates in Gatsby. We covered one of Gatsby’s Node APIs called createResolvers and used it to add a new posts field to the category nodes.

This should give you what you need to make other types of taxonomy pages! Do you have multiple authors on your blog? Well, you can use the same logic to create author pages. The interesting thing with the GraphQL filter is that you can use it to go beyond the explicit relationship made with references. It can also be used to match other fields using regular expressions or string comparisons. It’s fairly flexible!

The post How to Make Taxonomy Pages With Gatsby and Sanity.io appeared first on CSS-Tricks.


Enable Gatsby Incremental Builds on Netlify

The concept of an “incremental build” is that, when using some kind of generator that builds all the files that make up a website, rather than rebuilding 100% of those files every single time, it only changes the files that need to be changed since the last build. Seems like an obviously good idea, but in practice I’m sure it’s extremely tricky. How do you know exactly which files will change and which won’t before building?

I don’t have the answer to that, but Gatsby has it figured out. Faster local builds are half the joy; the other half is that deployment also becomes faster, as the files that need to move around are far fewer.

I’d say incremental builds are a pretty damn big deal. I like seeing these hurdles get cleared in Jamstack-land. I’m linking to the Netlify blog post here as getting it going on Netlify requires you to enable their “build plugins” feature, which is also a real ahead-of-the-game feature, allowing you to run code during different parts of CI/CD with a really clean syntax.

Direct Link to Article — Permalink

The post Enable Gatsby Incremental Builds on Netlify appeared first on CSS-Tricks.


How to Add Lunr Search to your Gatsby Website

The Jamstack way of thinking and building websites is becoming more and more popular.

Have you already tried Gatsby, Nuxt, or Gridsome (to cite only a few)? Chances are that your first contact was a “Wow!” moment — so many things are automatically set up and ready to use. 

There are some challenges, though, one of which is search functionality. If you’re working on any sort of content-driven site, you’ll likely run into search and how to handle it. Can it be done without any external server-side technology? 

Search is not one of those things that come out of the box with Jamstack. Some extra decisions and implementation are required.

Fortunately, we have a bunch of options that might be more or less adapted to a project. We could use Algolia’s powerful search-as-a-service API. It comes with a free plan that is restricted to non-commercial projects with a limited capacity. If we were to use WordPress with WPGraphQL as a data source, we could take advantage of WordPress native search functionality and Apollo Client. Raymond Camden recently explored a few Jamstack search options, including pointing a search form directly at Google.

In this article, we will build a search index and add search functionality to a Gatsby website with Lunr, a lightweight JavaScript library providing an extensible and customizable search without the need for external, server-side services. We used it recently to add “Search by Tartan Name” to our Gatsby project tartanify.com. We absolutely wanted persistent search as-you-type functionality, which brought some extra challenges. But that’s what makes it interesting, right? I’ll discuss some of the difficulties we faced and how we dealt with them in the second half of this article.

Getting started

For the sake of simplicity, let’s use the official Gatsby blog starter. Using a generic starter lets us abstract many aspects of building a static website. If you’re following along, make sure to install and run it:

gatsby new gatsby-starter-blog https://github.com/gatsbyjs/gatsby-starter-blog
cd gatsby-starter-blog
gatsby develop

It’s a tiny blog with three posts. We can view the site at http://localhost:8000 and explore its data layer at http://localhost:8000/___graphql in the browser.

Showing the GraphQL page on the localhost installation in the browser.

Inverting index with Lunr.js 🙃

Lunr uses a record-level inverted index as its data structure. The inverted index stores the mapping for each word found within a website to its location (basically a set of page paths). It’s on us to decide which fields (e.g. title, content, description, etc.) provide the keys (words) for the index.
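A toy version of such an index, mapping each word to the set of page paths containing it, might look like this sketch (the document contents here are made up for illustration; Lunr's real index also stores term positions and scoring data):

```javascript
// Toy record-level inverted index: word -> Set of page paths.
const documents = {
  "/post-one/": "gatsby makes fast sites",
  "/post-two/": "search with lunr is fast",
}

const invertedIndex = {}
for (const [path, text] of Object.entries(documents)) {
  for (const word of text.split(/\s+/)) {
    // Create the Set on first sight of the word, then record the path
    ;(invertedIndex[word] = invertedIndex[word] || new Set()).add(path)
  }
}

// "fast" appears in both documents, "lunr" in only one:
// invertedIndex["fast"] contains "/post-one/" and "/post-two/"
// invertedIndex["lunr"] contains "/post-two/"
```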

For our blog example, I decided to include all titles and the content of each article. Dealing with titles is straightforward since they are composed uniquely of words. Indexing content is a little more complex. My first try was to use the rawMarkdownBody field. Unfortunately, rawMarkdownBody introduces some unwanted keys resulting from the markdown syntax.

Showing an attempt at using markdown syntax for links.
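The unwanted keys go away if we index rendered HTML with the tags stripped out. The core of that idea can be sketched in a few lines (this is only an illustration; the real striptags package handles malformed markup and many edge cases a naive regex does not):

```javascript
// Minimal sketch of tag stripping: drop anything between "<" and ">",
// keeping only the text content.
const stripTagsSketch = html => html.replace(/<[^>]*>/g, "")

stripTagsSketch('<p>Hello <a href="/">world</a></p>')
// "Hello world"
```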

I obtained a “clean” index using the html field in conjunction with the striptags package (which, as the name suggests, strips out the HTML tags). Before we get into the details, let’s look into the Lunr documentation.

Here’s how we create and populate the Lunr index. We will use this snippet in a moment, specifically in our gatsby-node.js file.

const index = lunr(function () {
  this.ref('slug')
  this.field('title')
  this.field('content')
  for (const doc of documents) {
    this.add(doc)
  }
})

documents is an array of objects, each with a slug, title and content property:

{
  slug: '/post-slug/',
  title: 'Post Title',
  content: 'Post content with all HTML tags stripped out.'
}

We will define a unique document key (the slug) and two fields (the title and content, or the key providers). Finally, we will add all of the documents, one by one.

Let’s get started.

Creating an index in gatsby-node.js 

Let’s start by installing the libraries that we are going to use.

yarn add lunr graphql-type-json striptags

Next, we need to edit the gatsby-node.js file. The code from this file runs once in the process of building a site, and our aim is to add index creation to the tasks that Gatsby executes on build. 

createResolvers is one of the Gatsby APIs controlling the GraphQL data layer. In this particular case, we will use it to create a new root field; let’s call it LunrIndex.

Gatsby’s internal data store and query capabilities are exposed to GraphQL field resolvers on context.nodeModel. With getAllNodes, we can get all nodes of a specified type:

/* gatsby-node.js */
const { GraphQLJSONObject } = require(`graphql-type-json`)
const striptags = require(`striptags`)
const lunr = require(`lunr`)

exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve: (source, args, context, info) => {
          const blogNodes = context.nodeModel.getAllNodes({
            type: `MarkdownRemark`,
          })
          const type = info.schema.getType(`MarkdownRemark`)
          return createIndex(blogNodes, type, cache)
        },
      },
    },
  })
}

Now let’s focus on the createIndex function. That’s where we will use the Lunr snippet we mentioned in the last section. 

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  // Iterate over all posts
  for (const node of blogNodes) {
    const html = await type.getFields().html.resolve(node)
    // Once html is resolved, add a slug-title-content object to the documents array
    documents.push({
      slug: node.fields.slug,
      title: node.frontmatter.title,
      content: striptags(html),
    })
  }
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    this.field(`content`)
    for (const doc of documents) {
      this.add(doc)
    }
  })
  return index.toJSON()
}

Have you noticed that instead of accessing the html field directly with const html = node.html, we’re using an await expression? That’s because node.html isn’t available yet. The gatsby-transformer-remark plugin (used by our starter to parse Markdown files) does not generate HTML from markdown immediately when creating the MarkdownRemark nodes. Instead, html is generated lazily when the html field resolver is called in a query. The same actually applies to the excerpt that we will need in just a bit.

Let’s look ahead and think about how we are going to display search results. Users expect to obtain a link to the matching post, with its title as the anchor text. Very likely, they wouldn’t mind a short excerpt as well.

Lunr’s search returns an array of objects representing matching documents by their ref property (which is the unique document key, slug, in our example). This array contains neither the document title nor the content. Therefore, we need to store the post title and excerpt corresponding to each slug somewhere. We can do that within our LunrIndex as below:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  const store = {}
  for (const node of blogNodes) {
    const {slug} = node.fields
    const title = node.frontmatter.title
    const [html, excerpt] = await Promise.all([
      type.getFields().html.resolve(node),
      type.getFields().excerpt.resolve(node, { pruneLength: 40 }),
    ])
    documents.push({
      // unchanged
    })
    store[slug] = {
      title,
      excerpt,
    }
  }
  const index = lunr(function() {
    // unchanged
  })
  return { index: index.toJSON(), store }
}

Our search index changes only if one of the posts is modified or a new post is added. We don’t need to rebuild the index each time we run gatsby develop. To avoid unnecessary builds, let’s take advantage of the cache API:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const cacheKey = `IndexLunr`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  // unchanged
  const json = { index: index.toJSON(), store }
  await cache.set(cacheKey, json)
  return json
}

Enhancing pages with the search form component

We can now move on to the front end of our implementation. Let’s start by building a search form component.

touch src/components/search-form.js 

I opt for a straightforward solution: an input of type="search", coupled with a label and accompanied by a submit button, all wrapped within a form tag with the search landmark role.

We will add two event handlers, handleSubmit on form submit and handleChange on changes to the search input.

/* src/components/search-form.js */
import React, { useState, useRef } from "react"
import { navigate } from "@reach/router"
const SearchForm = ({ initialQuery = "" }) => {
  // Create a piece of state, and initialize it to initialQuery
  // query will hold the current value of the state,
  // and setQuery will let us change it
  const [query, setQuery] = useState(initialQuery)

  // We need to get a reference to the search input element
  const inputEl = useRef(null)

  // On input change use the current value of the input field (e.target.value)
  // to update the state's query value
  const handleChange = e => {
    setQuery(e.target.value)
  }

  // When the form is submitted navigate to /search
  // with a query q parameter equal to the value within the input search
  const handleSubmit = e => {
    e.preventDefault()
    // `inputEl.current` points to the mounted search input element
    const q = inputEl.current.value
    navigate(`/search?q=${q}`)
  }
  return (
    <form role="search" onSubmit={handleSubmit}>
      <label htmlFor="search-input" style={{ display: "block" }}>
        Search for:
      </label>
      <input
        ref={inputEl}
        id="search-input"
        type="search"
        value={query}
        placeholder="e.g. duck"
        onChange={handleChange}
      />
      <button type="submit">Go</button>
    </form>
  )
}
export default SearchForm

Have you noticed that we’re importing navigate from the @reach/router package? That is necessary since neither Gatsby’s <Link/> nor navigate provide in-route navigation with a query parameter. Instead, we can import @reach/router — there’s no need to install it since Gatsby already includes it — and use its navigate function.

Now that we’ve built our component, let’s add it to our home page (as below) and 404 page.

/* src/pages/index.js */
// unchanged
import SearchForm from "../components/search-form"
const BlogIndex = ({ data, location }) => {
  // unchanged
  return (
    <Layout location={location} title={siteTitle}>
      <SEO title="All posts" />
      <Bio />
      <SearchForm />
      {/* unchanged */}

Search results page

Our SearchForm component navigates to the /search route when the form is submitted, but for the moment, there is nothing behind this URL. That means we need to add a new page:

touch src/pages/search.js 

I proceeded by copying and adapting the content of the index.js page. One of the essential modifications concerns the page query (see the very bottom of the file). We will replace allMarkdownRemark with the LunrIndex field.

/* src/pages/search.js */
import React from "react"
import { Link, graphql } from "gatsby"
import { Index } from "lunr"
import Layout from "../components/layout"
import SEO from "../components/seo"
import SearchForm from "../components/search-form"

// We can access the results of the page GraphQL query via the data props
const SearchPage = ({ data, location }) => {
  const siteTitle = data.site.siteMetadata.title

  // We can read what follows the ?q= here
  // URLSearchParams provides a native way to get URL params
  // location.search.slice(1) gets rid of the "?"
  const params = new URLSearchParams(location.search.slice(1))
  const q = params.get("q") || ""

  // LunrIndex is available via page query
  const { store } = data.LunrIndex
  // Lunr in action here
  const index = Index.load(data.LunrIndex.index)
  let results = []
  try {
    // Search is a lunr method
    results = index.search(q).map(({ ref }) => {
      // Map search results to an array of {slug, title, excerpt} objects
      return {
        slug: ref,
        ...store[ref],
      }
    })
  } catch (error) {
    console.log(error)
  }
  return null // We will take care of this part in a moment
}
export default SearchPage
export const pageQuery = graphql`
  query {
    site {
      siteMetadata {
        title
      }
    }
    LunrIndex
  }
`

Now that we know how to retrieve the query value and the matching posts, let’s display the content of the page. Notice that on the search page we pass the query value to the <SearchForm /> component via the initialQuery prop. When the user arrives at the search results page, their search query should remain in the input field.

return (
  <Layout location={location} title={siteTitle}>
    <SEO title="Search results" />
    {q ? <h1>Search results</h1> : <h1>What are you looking for?</h1>}
    <SearchForm initialQuery={q} />
    {results.length ? (
      results.map(result => {
        return (
          <article key={result.slug}>
            <h2>
              <Link to={result.slug}>
                {result.title || result.slug}
              </Link>
            </h2>
            <p>{result.excerpt}</p>
          </article>
        )
      })
    ) : (
      <p>Nothing found.</p>
    )}
  </Layout>
)

You can find the complete code in this gatsby-starter-blog fork and the live demo deployed on Netlify.

Instant search widget

Finding the most “logical” and user-friendly way of implementing search may be a challenge in and of itself. Let’s now switch to the real-life example of tartanify.com — a Gatsby-powered website gathering 5,000+ tartan patterns. Since tartans are often associated with clans or organizations, the ability to search a tartan by name seems to make sense.

We built tartanify.com as a side project where we feel absolutely free to experiment with things. We didn’t want a classic search results page but an instant search “widget.” Often, a given search keyword corresponds with a number of results — for example, “Ramsay” comes in six variations.  We imagined the search widget would be persistent, meaning it should stay in place when a user navigates from one matching tartan to another.

Let me show you how we made it work with Lunr.  The first step of building the index is very similar to the gatsby-starter-blog example, only simpler:

/* gatsby-node.js */
exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve(source, args, context) {
          const siteNodes = context.nodeModel.getAllNodes({
            type: `TartansCsv`,
          })
          return createIndex(siteNodes, cache)
        },
      },
    },
  })
}
const createIndex = async (nodes, cache) => {
  const cacheKey = `LunrIndex`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  const store = {}
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    for (const node of nodes) {
      const { slug } = node.fields
      const doc = {
        slug,
        title: node.fields.Unique_Name,
      }
      store[slug] = {
        title: doc.title,
      }
      this.add(doc)
    }
  })
  const json = { index: index.toJSON(), store }
  await cache.set(cacheKey, json)
  return json
}

We opted for instant search, which means that search is triggered by any change in the search input instead of a form submission.

/* src/components/searchwidget.js */
import React, { useState } from "react"
import lunr, { Index } from "lunr"
import { graphql, useStaticQuery } from "gatsby"
import SearchResults from "./searchresults"

const SearchWidget = () => {
  const [value, setValue] = useState("")
  // results is now a state variable
  const [results, setResults] = useState([])

  // Since it's not a page component, useStaticQuery for querying data
  // https://www.gatsbyjs.org/docs/use-static-query/
  const { LunrIndex } = useStaticQuery(graphql`
    query {
      LunrIndex
    }
  `)
  const index = Index.load(LunrIndex.index)
  const { store } = LunrIndex
  const handleChange = e => {
    const query = e.target.value
    setValue(query)
    try {
      const search = index.search(query).map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
      setResults(search)
    } catch (error) {
      console.log(error)
    }
  }
  return (
    <div className="search-wrapper">
      {/* You can use a form tag as well, as long as we prevent the default submit behavior */}
      <div role="search">
        <label htmlFor="search-input" className="visually-hidden">
          Search Tartans by Name
        </label>
        <input
          id="search-input"
          type="search"
          value={value}
          onChange={handleChange}
          placeholder="Search Tartans by Name"
        />
      </div>
      <SearchResults results={results} />
    </div>
  )
}
export default SearchWidget

The SearchResults are structured like this:

/* src/components/searchresults.js */
import React from "react"
import { Link } from "gatsby"
const SearchResults = ({ results }) => (
  <div>
    {results.length ? (
      <>
        <h2>{results.length} tartan(s) matched your query</h2>
        <ul>
          {results.map(result => (
            <li key={result.slug}>
              <Link to={`/tartan/${result.slug}`}>{result.title}</Link>
            </li>
          ))}
        </ul>
      </>
    ) : (
      <p>Sorry, no matches found.</p>
    )}
  </div>
)
export default SearchResults

Making it persistent

Where should we use this component? We could add it to the Layout component. The problem is that our search form would get unmounted on page changes that way. If a user wants to browse all tartans associated with the “Ramsay” clan, they would have to retype their query several times. That’s not ideal.

Thomas Weibenfalk has written a great article on keeping state between pages with local state in Gatsby.js. We will use the same technique, where the wrapPageElement browser API sets persistent UI elements around pages. 

Let’s add the following code to the gatsby-browser.js. You might need to add this file to the root of your project.

/* gatsby-browser.js */
import React from "react"
import SearchWrapper from "./src/components/searchwrapper"
export const wrapPageElement = ({ element, props }) => (
  <SearchWrapper {...props}>{element}</SearchWrapper>
)

Now let’s add a new component file:

touch src/components/searchwrapper.js

Instead of adding the SearchWidget component to the Layout, we will add it to the SearchWrapper, and the magic happens. ✨

/* src/components/searchwrapper.js */
import React from "react"
import SearchWidget from "./searchwidget"

const SearchWrapper = ({ children }) => (
  <>
    {children}
    <SearchWidget />
  </>
)
export default SearchWrapper

Creating a custom search query

At this point, I started to try different keywords but very quickly realized that Lunr’s default search query might not be the best solution when used for instant search.

Why? Imagine that we are looking for tartans associated with the name MacCallum. While typing “MacCallum” letter-by-letter, this is the evolution of the results:

  • m – 2 matches (Lyon, Jeffrey M, Lyon, Jeffrey M (Hunting))
  • ma – no matches
  • mac – 1 match (Brighton Mac Dermotte)
  • macc – no matches
  • macca – no matches
  • maccal – 1 match (MacCall)
  • maccall – 1 match (MacCall)
  • maccallu – no matches
  • maccallum – 3 matches (MacCallum, MacCallum #2, MacCallum of Berwick)

Users would probably type the full name and hit the button if we made one available. But with instant search, a user is likely to abandon early because they expect the results to only narrow down as more letters are added to the query.

That’s not the only problem. Here’s what we get with “Callum”:

  • c – 3 unrelated matches
  • ca – no matches
  • cal – no matches
  • call – no matches
  • callu – no matches
  • callum – one match 

You can see the trouble if someone gives up halfway into typing the full query.

Fortunately, Lunr supports more complex queries, including fuzzy matches, wildcards and boolean logic (e.g. AND, OR, NOT) for multiple terms. All of these are available via a special query syntax, for example:

index.search("+*callum mac*")

We could also reach for the index’s query method to handle it programmatically.

The first solution is not satisfying since it requires more effort from the user. I used the index.query method instead:

/* src/components/searchwidget.js */
const search = index
  .query(function(q) {
    // full term matching
    q.term(el)
    // OR (default)
    // trailing or leading wildcard
    q.term(el, {
      wildcard:
        lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
    })
  })
  .map(({ ref }) => {
    return {
      slug: ref,
      ...store[ref],
    }
  })

Why use full term matching with wildcard matching? That’s necessary for all keywords that “benefit” from the stemming process. For example, the stem of “different” is “differ.” As a consequence, queries with wildcards — such as differe*, differen* or different* — all result in no matches, while the full term queries differe, differen and different return matches.
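This mismatch can be sketched with a toy example (this is an illustration, not lunr's actual implementation; the crude stemmer below is made up). The index stores the stem "differ"; a wildcard pattern is matched literally against stored terms, while a plain term is stemmed before lookup:

```javascript
// Terms stored in the index are stems, e.g. "different" -> "differ".
const storedTerms = ["differ"]

// Crude stand-in for a stemmer, just enough for this example.
const toyStem = word => word.replace(/(ent|ence|en|e)$/, "")

// Wildcard queries compare the literal prefix against stored terms...
const wildcardHit = pattern =>
  storedTerms.some(t => t.startsWith(pattern.replace(/\*$/, "")))

// ...while plain terms are stemmed first, then looked up exactly.
const plainHit = term => storedTerms.includes(toyStem(term))

wildcardHit("different*") // false: "differ" does not start with "different"
plainHit("different")     // true: "different" stems to "differ"
```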

Fuzzy matches can be used as well. In our case, they are allowed uniquely for terms of five or more characters:

q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
q.term(el, {
  wildcard:
    lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
})

The handleChange function also “cleans up” user inputs and ignores single-character terms:

/* src/components/searchwidget.js */
const handleChange = e => {
  const query = e.target.value || ""
  setValue(query)
  if (!query.length) {
    setResults([])
  }
  const keywords = query
    .trim() // remove trailing and leading spaces
    .replace(/\*/g, "") // remove user's wildcards
    .toLowerCase()
    .split(/\s+/) // split by whitespaces
  // do nothing if the last typed keyword is shorter than 2
  if (keywords[keywords.length - 1].length < 2) {
    return
  }
  try {
    const search = index
      .query(function(q) {
        keywords
          // filter out keywords shorter than 2
          .filter(el => el.length > 1)
          // loop over keywords
          .forEach(el => {
            q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
            q.term(el, {
              wildcard:
                lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
            })
          })
      })
      .map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
    setResults(search)
  } catch (error) {
    console.log(error)
  }
}

Let’s check it in action:

  • m – pending
  • ma – 861 matches
  • mac – 600 matches
  • macc – 35 matches
  • macca – 12 matches
  • maccal – 9 matches
  • maccall – 9 matches
  • maccallu – 3 matches
  • maccallum – 3 matches

Searching for “Callum” works as well, resulting in four matches: Callum, MacCallum, MacCallum #2, and MacCallum of Berwick.

There is one more problem, though: multi-term queries. Say you’re looking for “Loch Ness.” There are two tartans associated with that term, but with the default OR logic, you get a grand total of 96 results. (There are plenty of other lakes in Scotland.)

I wound up deciding that an AND search would work better for this project. Unfortunately, Lunr does not support nested queries, and what we actually need is (keyword1 OR *keyword1*) AND (keyword2 OR *keyword2*).

To overcome this, I ended up moving the terms loop outside the query method and intersecting the results per term. (By intersecting, I mean finding all slugs that appear in all of the per-single-keyword results.)
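The intersection step in isolation can be sketched like this, with hypothetical per-keyword result arrays (the slugs are made up): keep only the entries whose slug shows up in every keyword's result set.

```javascript
// Intersect two per-keyword result arrays by their slug property.
const intersectBySlug = (a, b) =>
  a.filter(x => b.some(y => y.slug === x.slug))

const lochResults = [{ slug: "loch-ness" }, { slug: "loch-lomond" }]
const nessResults = [{ slug: "loch-ness" }]

intersectBySlug(lochResults, nessResults)
// [{ slug: "loch-ness" }]
```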

/* src/components/searchwidget.js */
try {
  // andSearch stores the intersection of all per-term results
  let andSearch = []
  keywords
    .filter(el => el.length > 1)
    // loop over keywords
    .forEach((el, i) => {
      // per-single-keyword results
      const keywordSearch = index
        .query(function(q) {
          q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
          q.term(el, {
            wildcard:
              lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
          })
        })
        .map(({ ref }) => {
          return {
            slug: ref,
            ...store[ref],
          }
        })
      // intersect current keywordSearch with andSearch
      andSearch =
        i > 0
          ? andSearch.filter(x => keywordSearch.some(el => el.slug === x.slug))
          : keywordSearch
    })
  setResults(andSearch)
} catch (error) {
  console.log(error)
}

The source code for tartanify.com is published on GitHub. You can see the complete implementation of the Lunr search there.

Final thoughts

Search is often a non-negotiable feature for finding content on a site. How important the search functionality actually is may vary from one project to another. Nevertheless, there is no reason to abandon it under the pretext that it does not tally with the static character of Jamstack websites. There are many possibilities. We’ve just discussed one of them.

And, paradoxically in this specific example, the result was a better all-around user experience, thanks to the fact that implementing search was not an obvious task but instead required a lot of deliberation. We may not have been able to say the same with an over-the-counter solution.

The post How to Add Lunr Search to your Gatsby Website appeared first on CSS-Tricks.


Gatsby and WordPress

Gatsby and WordPress is an interesting combo to watch. On one hand, it makes perfect sense. Gatsby can suck up data from anywhere, and with WordPress having a native REST API, it makes for a good pairing. Of course Gatsby has a first-class plugin for sourcing data from WordPress that even supports data from popular plugins like Advanced Custom Fields.

On the other hand, Gatsby is such a part of the JAMstack world that combining it with something as non-JAMstack-y as WordPress feels funny.

Here are some random thoughts and observations I have about this pairing.

  • Markus says this combination allowed him to “find joy again” in WordPress development.
  • A world in which you get to build a WordPress site but get to host it on Netlify, with all their fancy developer features (e.g. build previews), is certainly appealing.
  • Scott Bolinger has a five-minute tour of his own site, with the twist that some of the pages are statically built and other parts are dynamically loaded.
  • There is a GraphQL plugin for WordPress, which I suppose would be an alternate way to yank data in a Gatsby-friendly way. Jason Bahl, the wp-graphql guy, literally works for Gatsby now and has “Development sponsored by Gatsby” as the plugin’s Twitter bio. It’s unclear if this will be the default future way to integrate Gatsby and WordPress. I sort of suspect not, just because the REST API requires no additional plugin and the GraphQL plugin takes a little work to install. Anecdotally, just installing it and activating it triggers a fatal error on my site, so I’ll need to work with my host on that at some point because I’d love to have it installed.
  • We see big tutorial series on the subject, like Tim Smith’s How To Build a Blog with WordPress and Gatsby.js.
  • Getting a WordPress site on static hosting seems like a big opportunity that is barely being tapped. Gatsby is just an early player here and is focused on re-building your site the React way. But there are other tools, like WP2Static, that claim to export a static version of your WordPress site as-is, then upload the output to a static host. Ashley Williams and Kristian Freeman get into that in this video (starting about 20 minutes in) and host the result on a Cloudflare Workers site.
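As a rough sketch of what sourcing from the WordPress REST API means (my own illustration, not the plugin's internals), WordPress exposes content at a well-known route that a source plugin can build endpoints for and page through:

```javascript
// Sketch: WordPress exposes content at /wp-json/wp/v2/<type> by default.
// A source plugin builds endpoints like this and fetches each page.
function buildEndpoint(siteUrl, type = 'posts', page = 1) {
  // Strip any trailing slash so the joined URL never contains "//"
  return `${siteUrl.replace(/\/+$/, '')}/wp-json/wp/v2/${type}?page=${page}`;
}

console.log(buildEndpoint('https://example.com'));
// → https://example.com/wp-json/wp/v2/posts?page=1
```

The `buildEndpoint` helper name is hypothetical; the `/wp-json/wp/v2/` routes are WordPress's standard REST API base.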

The post Gatsby and WordPress appeared first on CSS-Tricks.


How to Get the Current Page URL in Gatsby

This seemingly simple task had me scratching my head for a few hours while I was working on my website. As it turns out, getting the current page URL in Gatsby is not as straightforward as you may think, but also not so complicated to understand.

Let’s look at a few methods of making it happen. But first, you might be wondering why on earth we’d even want to do something like this.

Why you might need the current URL

So before we get into the how, let’s first answer the bigger question: Why would you want to get the URL of the current page? I can offer a few use cases.

Meta tags

The first obvious thing that you’d want the current URL for is meta tags in the document head:

<link rel="canonical" href="https://css-tricks.com/how-to-the-get-current-page-url-in-gatsby/" />
<meta property="og:url" content="https://css-tricks.com/how-to-the-get-current-page-url-in-gatsby/" />

Social Sharing

I’ve seen it on multiple websites where a link to the current page is displayed next to sharing buttons. Something like this (found on Creative Market):

Styling

This one is less obvious but I’ve used it a few times with styled-components. You can render different styles based on certain conditions. One of those conditions can be a page path (i.e. part of the URL after the name of the site). Here’s a quick example:

import React from 'react';
import styled from 'styled-components';

const Layout = ({ path, children }) => (
  <StyledLayout path={path}>
    {children}
  </StyledLayout>
);

const StyledLayout = styled.main`
  background-color: ${({ path }) => (path === '/' ? '#fff' : '#000')};
`;

export default Layout;

Here, I’ve created a styled Layout component that, based on the path, has a different background color.

This list of examples only illustrates the idea and is by no means comprehensive. I’m sure there are more cases where you might want to get the current page URL. So how do we get it?

Understand build time vs. runtime

Not so fast! Before we get to the actual methods and code snippets, I’d like to make one last stop and briefly explain a few core concepts of Gatsby.

The first thing that we need to understand is that Gatsby, among many other things, is a static site generator. That means it creates static files (that are usually HTML and JavaScript). There is no server and no database on the production website. All pieces of information (including the current page URL) must be pulled from other sources or generated during build time or runtime before inserting it into the markup.

That leads us to the second important concept we need to understand: Build time vs. runtime. I encourage you to read the official Gatsby documentation about it, but here’s my interpretation.

Runtime is when one of the static pages is opened in the browser. In that case, the page has access to all the wonderful browser APIs, including the Window API that, among many other things, contains the current page URL.

One thing that is easy to confuse, especially when starting out with Gatsby, is that running gatsby develop in the terminal in development mode spins up the browser for you. That means all references to the window object work and don’t trigger any errors.

Build time happens when you are done developing and tell Gatsby to generate final optimized assets using the gatsby build command. During build time, the browser doesn’t exist. This means you can’t use the window object.

Here comes the a-ha! moment. If builds are isolated from the browser, and there is no server or database where we can get the URL, how is Gatsby supposed to know what domain name is being used? That’s the thing — it can’t! You can get the slug or path of the page, but you simply can’t tell what the base URL is. You have to specify it.

This is a very basic concept, but if you are coming in fresh with years of WordPress experience, it can take some time for this info to sink in. You know that Gatsby is serverless and all, but moments like this make you realize: there is no server.

Now that we have that sorted out, let’s jump to the actual methods for getting the URL of the current page.

Method 1: Use the href property of the window.location object

This first method is not specific to Gatsby and can be used in pretty much any JavaScript application in the browser. See, browser is the key word here.

Let’s say you are building one of those sharing components with an input field that must contain the URL of the current page. Here’s how you might do that:

import React from 'react';

const Foo = () => {
  const url = typeof window !== 'undefined' ? window.location.href : '';

  return (
    <input type="text" readOnly="readonly" value={url} />
  );
};

export default Foo;

If the window object exists, we get the href property of the location object that is a child of the window. If not, we give the url variable an empty string value.

If we do it without the check and write it like this:

const url = window.location.href;

…the build will fail with an error that looks something like this:

failed Building static HTML for pages - 2.431s
ERROR #95312

"window" is not available during server-side rendering.

As I mentioned earlier, this happens because the browser doesn’t exist during the build time. That’s a huge disadvantage of this method. You can’t use it if you need the URL to be present on the static version of the page.

But there is a big advantage as well! You can access the window object from a component that is nested deep inside other components. In other words, you don’t have to drill the URL prop from parent components.
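That check can be factored into a small helper so deeply nested components don't each repeat it. This is just a sketch, and the helper name is my own:

```javascript
// Returns the current page URL in the browser, or an empty string
// during server-side rendering / build time, where window is undefined.
function getCurrentUrl() {
  return typeof window !== 'undefined' ? window.location.href : '';
}
```

Any component can call `getCurrentUrl()` at render time; during `gatsby build` it simply yields an empty string instead of crashing.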

Method 2: Get the href property of location data from props

Every page and template component in Gatsby has a location prop that contains information about the current page. However, unlike window.location, this prop is present on all pages.

Quoting Gatsby docs:

The great thing is you can expect the location prop to be available to you on every page.

But there may be a catch here. If you are new to Gatsby, you’ll log that prop to the console and notice that it looks pretty much identical to window.location (though it’s not the same thing) and also appears to contain the href attribute. So is href always available? Well, it is not: the href property is only there during runtime.

The worst thing about this is that using location.href directly without first checking if it exists won’t trigger an error during build time.

All this means that we can rely on the location prop to be on every page, but can’t expect it to have the href property during build time. Be aware of that, and don’t use this method for critical cases where you need the URL to be in the markup on the static version of the page.

So let’s rewrite the previous example using this method:

import React from 'react';

const Page = ({ location }) => {
  const url = location.href ? location.href : '';

  return (
    <input type="text" readOnly="readonly" value={url} />
  );
};

export default Page;

This has to be a top-level page or template component. You can’t just import it anywhere and expect it to work; the location prop will be undefined.

As you can see, this method is pretty similar to the previous one. Use it for cases where the URL is needed only during runtime.

But what if you need to have a full URL in the markup of a static page? Let’s move on to the third method.

Method 3: Generate the current page URL with the pathname property from location data

As we discussed at the start of this post, if you need to include the full URL to the static pages, you have to specify the base URL for the website somewhere and somehow get it during build time. I’ll show you how to do that.

As an example, I’ll create a <link rel="canonical"> tag in the header. It is important to have the full page URL in it before the page hits the browser. Otherwise, search engines and site scrapers will see an empty href attribute, which is unacceptable.

Here’s the plan:

  1. Add the siteURL property to siteMetadata in gatsby-config.js.
  2. Create a static query hook to retrieve siteMetadata in any component.
  3. Use that hook to get siteURL.
  4. Combine it with the path of the page and add it to the markup.

Let’s break each step down.

Add the siteURL property to siteMetadata in gatsby-config.js

Gatsby has a configuration file called gatsby-config.js that can be used to store global information about the site inside siteMetadata object. That works for us, so we’ll add siteURL to that object:

module.exports = {
  siteMetadata: {
    title: 'Dmitry Mayorov',
    description: 'Dmitry is a front-end developer who builds cool sites.',
    author: '@dmtrmrv',
    siteURL: 'https://dmtrmrv.com',
  },
};

Create a static query hook to retrieve siteMetadata in any component

Next, we need a way to use siteMetadata in our components. Luckily, Gatsby has a StaticQuery API that allows us to do just that. You can use the useStaticQuery hook directly inside your components, but I prefer to create a separate file for each static query I use on the website. This makes the code easier to read.

To do that, create a file called use-site-metadata.js inside a hooks folder inside the src folder of your site and copy and paste the following code to it.

import { useStaticQuery, graphql } from 'gatsby';

const useSiteMetadata = () => {
  const { site } = useStaticQuery(
    graphql`
      query {
        site {
          siteMetadata {
            title
            description
            author
            siteURL
          }
        }
      }
    `,
  );
  return site.siteMetadata;
};

export default useSiteMetadata;

Make sure to check that all properties — like title, description, author, and any other properties you have in the siteMetadata object — appear in the GraphQL query.

Use that hook to get siteURL

Here’s the fun part: We get the site URL and use it inside the component.

import React from 'react';
import Helmet from 'react-helmet';
import useSiteMetadata from '../hooks/use-site-metadata';

const Page = ({ location }) => {
  const { siteURL } = useSiteMetadata();
  return (
    <Helmet>
      <link rel="canonical" href={`${siteURL}${location.pathname}`} />
    </Helmet>
  );
};

export default Page;

Let’s break it down.

On Line 3, we import the useSiteMetadata hook we created into the component.

import useSiteMetadata from '../hooks/use-site-metadata';

Then, on Line 6, we destructure the data that comes from it, creating the siteURL variable. Now we have the site URL that is available for us during build and runtime. Sweet!

const { siteURL } = useSiteMetadata();

Combine the site URL with the path of the page and add it to the markup

Now, remember the location prop from the second method? The great thing about it is that it contains the pathname property during both build and runtime. See where it’s going? All we have to do is combine the two:

`${siteURL}${location.pathname}`

This is probably the most robust solution that will work in the browsers and during production builds. I personally use this method the most.
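One detail to watch: if siteURL is ever configured with a trailing slash, the naive concatenation produces a double slash. A small helper (my own addition, not from the article) guards against that:

```javascript
// Joins the configured site URL and a Gatsby pathname, stripping any
// trailing slashes from the site URL so the result never contains "//".
function buildCanonicalUrl(siteURL, pathname) {
  return `${siteURL.replace(/\/+$/, '')}${pathname}`;
}

console.log(buildCanonicalUrl('https://dmtrmrv.com/', '/about/'));
// → https://dmtrmrv.com/about/
```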

I’m using React Helmet in this example. If you haven’t heard of it, it’s a tool for rendering the head section in React applications. Darrell Hoffman wrote up a nice explanation of it here on CSS-Tricks.

Method 4: Generate the current page URL on the server side

What?! Did you just say server? Isn’t Gatsby a static site generator? Yes, I did say server. But it’s not a server in the traditional sense.

As we already know, Gatsby generates (i.e. server renders) static pages during build time. That’s where the name comes from. What’s great about that is that we can hook into that process using multiple APIs that Gatsby already provides.

The API that interests us the most is called onRenderBody. Most of the time, it is used to inject custom scripts and styles to the page. But what’s exciting about this (and other server-side APIs) is that it has a pathname parameter. This means we can generate the current page URL “on the server.”

I wouldn’t personally use this method to add meta tags to the head section because the third method we looked at is more suitable for that. But for the sake of example, let me show you how you could add the canonical link to the site using onRenderBody.

To use any server-side API, you need to write the code in a file called gatsby-ssr.js that is located in the root folder of your site. To add the link to the head section, you would write something like this:

const React = require('react');
const config = require('./gatsby-config');

exports.onRenderBody = ({ pathname, setHeadComponents }) => {
  setHeadComponents([
    <link rel="canonical" href={`${config.siteMetadata.siteURL}${pathname}`} />,
  ]);
};

Let’s break this code bit by bit.

We require React on Line 1. It is necessary to make the JSX syntax work. Then, on Line 2, we pull data from the gatsby-config.js file into a config variable.

Next, we call the setHeadComponents method inside onRenderBody and pass it an array of components to add to the site header. In our case, it’s just one link tag. And for the href attribute of the link itself, we combine the siteURL and the pathname:

`${config.siteMetadata.siteURL}${pathname}`

Like I said earlier, this is probably not the go-to method for adding tags to the head section, but it is good to know that Gatsby has server-side APIs that make it possible to generate a URL for any given page during the server rendering stage.

If you want to learn more about server-side rendering with Gatsby, I encourage you to read their official documentation.

That’s it!

As you can see, getting the URL of the current page in Gatsby is not very complicated, especially once you understand the core concepts and know the tools that are available to use. If you know other methods, please let me know in the comments!


The post How to Get the Current Page URL in Gatsby appeared first on CSS-Tricks.


Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby

Gatsby does a great job processing and handling images. For example, it helps you save time with image optimization because you don’t have to manually optimize each image on your own.

With plugins and some configuration, you can even setup image preloading and a technique called blur-up for your images using Gatsby. This helps with a smoother user experience that is faster and more appealing.

I found the combination of gatsby-source-filesystem, GraphQL, Sharp plugins and gatsby-image quite tedious to organize and un-intuitive, especially considering it is fairly common functionality. Adding to the friction is that gatsby-image works quite differently from a regular <img> tag and implementing general use cases for sites could end up complex as you configure the whole system.

Medium uses the blur-up technique for images.

If you haven’t done it already, you should go through the gatsby-image docs. It is the React component that Gatsby uses to process and place responsive, lazy-loaded images. Additionally, it holds the image position which prevents page jumps as they load and you can even create blur-up previews for each image.

For responsive images you’d generally use an <img> tag with a bunch of appropriately sized images in a srcset attribute, along with a sizes attribute that informs the layout situation the image will be used in.

<img srcset="img-320w.jpg 320w,
             img-480w.jpg 480w,
             img-800w.jpg 800w"
     sizes="(max-width: 320px) 280px,
            (max-width: 480px) 440px,
            800px"
     src="img-800w.jpg">

You can read up more on how this works in the Mozilla docs. This is one of the benefits of using gatsby-image in the first place: it does all the resizing and compressing automatically while doing the job of setting up srcset attributes in an <img /> tag.
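To make concrete the kind of string gatsby-image assembles for you, here is a simplified sketch (my own, not the plugin's actual code):

```javascript
// Builds a srcset attribute value from a base file name and a list of
// widths, mirroring the hand-written example above.
function buildSrcSet(baseName, widths) {
  return widths.map(w => `${baseName}-${w}w.jpg ${w}w`).join(', ');
}

console.log(buildSrcSet('img', [320, 480, 800]));
// → img-320w.jpg 320w, img-480w.jpg 480w, img-800w.jpg 800w
```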

Directory structure for images

Projects can easily grow in size and complexity. Even a single page site can contain a whole bunch of image assets, ranging from icons to full-on gallery slides. It helps to organize images in some order rather than piling all of them up in a single directory on the server. This helps us set up processing more intuitively and create a separation of concerns.

While attempting to organize files, another thing to consider is that Gatsby uses a custom webpack configuration to process, minify, and export all of the files in a project. The generated output is placed in a /public folder. The overall structure gatsby-starter-default uses looks like this:

/
|-- /.cache
|-- /plugins
|-- /public
|-- /src
    |-- /pages
    |-- /components
    |-- /images
    |-- html.js
|-- /static (not present by default)
|-- gatsby-config.js
|-- gatsby-node.js
|-- gatsby-ssr.js
|-- gatsby-browser.js

Read more about how the Gatsby project structure works here.

Let’s start with the common image files that we could encounter and would need to organize.

For instance:

  • icons
  • logos
  • favicon
  • decorative images (generally vector or PNG files)
  • Image gallery (like team head shots on an About page or something)

How do we group these assets? Considering our goal of efficiency and the Gatsby project structure mentioned above, the best approach would be to split them into two groups: one group that requires no processing and directly imported into the project; and another group for images that require processing and optimization.

Your definitions may differ, but that grouping might look something like this:

Static, no processing required:

  • icons and logos that require no processing
  • pre-optimized images
  • favicons
  • other vector files (like decorative artwork)

Processing required:

  • non-vector artwork (e.g. PNG and JPG files)
  • gallery images
  • any other image that can be processed, which are basically common image formats other than vectors

Now that we have things organized in some form of order, we can move onto managing each of these groups.

The “static” group

Gatsby provides a very simple process for dealing with the static group: add all the files to a folder named static at the root of the project. The bundler automatically copies the contents to the public folder where the final build can directly access the files.

Say you have a file named logo.svg that requires no processing. Place it in the static folder and use it in a component file like this:

import React from "react"

// Tell webpack this JS file requires this image
import logo from "../../static/logo.svg"

function Header() {
  // This can be directly used as image src
  return <img src={logo} alt="Logo" />
}

export default Header

Yes, it’s as simple as that — much like importing a component or variable and then directly using it. Gatsby has detailed documentation on importing assets directly into files you could refer to for further understanding.

Special case: Favicon

The plugin gatsby-plugin-manifest not only adds a manifest.json file to the project but also generates favicons for all required sizes and links them up in the site.

With minimal configuration, we get favicons: no more manual resizing, and no more adding individual links in the HTML head. Place favicon.svg (or .png or whatever format you’re using) in the static folder and tweak the gatsby-config.js file with settings for gatsby-plugin-manifest:

{
  resolve: `gatsby-plugin-manifest`,
  options: {
    name: `Absurd`,
    icon: `static/favicon.svg`,
  },
},

The “processed” group

Ideally, what we’d like is gatsby-image to work like an img tag where we specify the src and it does all the processing under the hood. Unfortunately, it’s not that straightforward. Gatsby requires you to configure gatsby-source-filesystem for the files, then use GraphQL to query and process them using Gatsby Sharp plugins (e.g. gatsby-transformer-sharp, gatsby-plugin-sharp) with gatsby-image. The result is a responsive, lazy-loaded image.

Rather than walking you through how to set up image processing in Gatsby (which is already well documented in the Gatsby docs), I’ll show you a couple of approaches to optimize this process for a couple of common use cases. I assume you have a basic knowledge of how image processing in Gatsby works — but if not, I highly recommend you first go through the docs.

Use case: An image gallery

Let’s take the common case of profile images on an About page. The arrangement is basically an array of data with title, description and image as a grid or collection in a particular section.

The data array would be something like:

const TEAM = [
  {
    name: 'Josh Peck',
    image: 'josh.jpg',
    role: 'Founder',
  },
  {
    name: 'Lisa Haydon',
    image: 'lisa.jpg',
    role: 'Art Director',
  },
  {
    name: 'Ashlyn Harris',
    image: 'ashlyn.jpg',
    role: 'Frontend Engineer',
  },
];

Now let’s place all the images (josh.jpg, lisa.jpg and so on) in src/images/team. You can create a folder in images based on what group it is. Since we’re dealing with team members on an About page, we’ve gone with images/team. The next step is to query these images and link them up with the data.

To make these files available in the Gatsby system for processing, we use gatsby-source-filesystem. The configuration in gatsby-config.js for this particular folder would look like:

{
  resolve: `gatsby-source-filesystem`,
  options: {
    name: `team`,
    path: `${__dirname}/src/images/team`,
  },
},
`gatsby-transformer-sharp`,
`gatsby-plugin-sharp`,

To query for an array of files from this particular folder, we can use sourceInstanceName. It takes the value of the name specified in gatsby-config.js:

{
  allFile(filter: { sourceInstanceName: { eq: "team" } }) {
    edges {
      node {
        relativePath
        childImageSharp {
          fluid(maxWidth: 300, maxHeight: 400) {
            ...GatsbyImageSharpFluid
          }
        }
      }
    }
  }
}

This returns an array:

// Sharp-processed image data is removed for readability
{
  "data": {
    "allFile": {
      "edges": [
        {
          "node": {
            "relativePath": "josh.jpg"
          }
        },
        {
          "node": {
            "relativePath": "ashlyn.jpg"
          }
        },
        {
          "node": {
            "relativePath": "lisa.jpg"
          }
        }
      ]
    }
  }
}

As you can see, we’re using relativePath to associate the images we need to the item in the data array. Some quick JavaScript could help here:

// Img is gatsby-image
// TEAM is the data array

TEAM.map(({ name, image, role }) => {
  // Finds associated image from the array of images
  const img = data.allFile.edges.find(
    ({ node }) => node.relativePath === image
  ).node;

  return (
    <div>
      <Img fluid={img.childImageSharp.fluid} alt={name} />
      <Title>{name}</Title>
      <Subtitle>{role}</Subtitle>
    </div>
  );
})

That’s the closest we’re getting to using src similar to what we do for <img> tags.
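As a variation on that lookup (my own suggestion, not from the original), the repeated find call can be replaced by building a Map once, which keeps each render pass linear even for large teams:

```javascript
// Builds a one-time lookup from relativePath to its image node, using
// the same edge shape the allFile query returns.
function buildImageMap(edges) {
  const map = new Map();
  edges.forEach(({ node }) => map.set(node.relativePath, node));
  return map;
}

// Usage with the query result shown earlier:
const imageMap = buildImageMap([
  { node: { relativePath: 'josh.jpg' } },
  { node: { relativePath: 'lisa.jpg' } },
]);
console.log(imageMap.get('josh.jpg').relativePath); // → josh.jpg
```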

Use case: Artwork

Although artwork may be created using the same type of file, the files are usually spread throughout the project in different sections (e.g. pages and components), with each usually coming in different dimensions.

It’s pretty clear that querying the whole array, as we did previously, won’t work. However, we can still organize all the images in a single folder. That means we can still use sourceInstanceName to specify which folder we are querying the image from.

Similar to our previous use case, let’s create a folder called src/images/art and configure gatsby-source-filesystem. While querying, rather than getting the whole array, here we will query for the particular image we need in the size and specification as per our requirements:

art_team: file(
  sourceInstanceName: { eq: "art" }
  name: { eq: "team_work" }
) {
  childImageSharp {
    fluid(maxWidth: 1600) {
      ...GatsbyImageSharpFluid
    }
  }
}

This can be directly used in the component:

<Img fluid={data.art_team.childImageSharp.fluid} />

Further, this can be repeated for each component or section that requires an image from this group.

Special case: Inlining SVGs

Gatsby automatically encodes smaller images into a base64 format and places the data inline, reducing the number of requests to boost performance. That’s great in general, but might actually be a detriment to SVG files. Instead, we can manually wrangle SVGs to get the same performance benefits, or in the case we might want to make things more interactive, incorporate animations.
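To illustrate what "encodes smaller images into a base64 format" amounts to, here is a simplified sketch of the idea (not Gatsby's implementation):

```javascript
// An inlined image is just its bytes serialized into a data URI that
// lives directly in the markup, so no extra network request is made.
function toDataUri(buffer, mimeType) {
  return `data:${mimeType};base64,${buffer.toString('base64')}`;
}

console.log(toDataUri(Buffer.from('<svg/>'), 'image/svg+xml'));
// → data:image/svg+xml;base64,PHN2Zy8+
```

The trade-off is that the encoded bytes are slightly larger than the original and can't be cached separately, which is why only small images are inlined.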

I found gatsby-plugin-svgr to be the most convenient solution here. It allows us to import all SVG files as React components:

import { ReactComponent as GithubIcon } from './github.svg';

Since we’re technically processing SVG files instead of raster images, it’d make sense to move the SVG file out of static folder and place it in the folder of the component that’s using it.

Conclusion

After working with Gatsby on a couple of projects, these are a few of the ways I overcame hurdles when working with images to get that nice blur-up effect. I figured they might come handy for you, particularly for the common use cases we looked at.

All the conventions used here came from the gatsby-absurd starter project I set up on GitHub. Here’s the result:

It’s a good idea to check that out if you’d like to see examples of it used in a project. Take a look at Team.js to see how multiple images are queried from the same group. Other sections — such as About.js and Header.js — illustrate how design graphics (the group of images shared across different sections) are queried. Footer.js and Navbar.js have examples for handling icons.

The post Ways to Organize and Prepare Images for a Blur-Up Effect Using Gatsby appeared first on CSS-Tricks.


Using GraphQL Playground with Gatsby

I’m assuming most of you have already heard about Gatsby, and at least loosely know that it’s basically a static site generator for React sites. It generally runs like this:

  1. Data Sources → Pull data from anywhere.
  2. Build → Generate your website with React and GraphQL.
  3. Deploy → Send the site to any static site host.

What this typically means is that you can get your data from any recognizable data source — CMS, markdown, file systems and databases, to name a few — manage the data through GraphQL to build your website, and finally deploy your website to any static web host (such as Netlify or Zeit).

The Gatsby homepage illustration of the Gatsby workflow.

In this article, we are concerned with the build process, which is powered by GraphQL. This is the part where your data is managed. Unlike traditional REST APIs where you often need to send anonymous data to test your endpoints, GraphQL consolidates your APIs into a self-documenting IDE. Using this IDE, you can perform GraphQL operations such as queries, mutations, and subscriptions, as well as view your GraphQL schema, and documentation.

GraphQL has an IDE built right into it, but what if we want something a little more powerful? That’s where GraphQL Playground comes in and we’re going to walk through the steps to switch from the default over to GraphQL Playground so that it works with Gatsby.

GraphiQL and GraphQL Playground

GraphiQL is GraphQL’s default IDE for exploring GraphQL operations, but you could switch to something else, like GraphQL Playground. Both have their advantages. For example, GraphQL Playground is essentially a wrapper over GraphiQL but includes additional features such as:

  • Interactive, multi-column schema documentation
  • Automatic schema reloading
  • Support for GraphQL Subscriptions
  • Query history
  • Configuration of HTTP headers
  • Tabs
  • Extensibility (themes, etc.)

Choosing either GraphQL Playground or GraphiQL most likely depends on whether you need to use those additional features. There’s no strict rule that will make you write better GraphQL operations, or build a better website or app.

This post isn’t meant to sway you toward one versus the other. We’re looking at GraphQL Playground in this post specifically because it’s not the default IDE and there may be use cases where you need the additional features it provides and need to set things up to work with Gatsby. So, let’s dig in and set up a new Gatsby project from scratch. We’ll integrate GraphQL Playground and configure it for the project.

Setting up a Gatsby Project

To set up a new Gatsby project, we first need to install the gatsby-cli. This will give us Gatsby-specific commands we can use in the Terminal.

npm install -g gatsby-cli

Now, let’s set up a new site. I’ve decided to call this example “gatsby-playground” but you can name it whatever you’d like.

gatsby new gatsby-playground

Let’s navigate to the directory where it was installed.

cd gatsby-playground

And, finally, flip on our development server.

gatsby develop

Head over to http://localhost:8000 in the browser for the opening page of the project. Your Gatsby GraphQL operations are going to be located at http://localhost:8000/___graphql.

The GraphiQL interface.

At this point, I think it’s worth calling out that there is a desktop app for GraphQL Playground. You could just access your Gatsby GraphQL operations with the URL Endpoint localhost:8000/___graphql without going any further with this article. But, we want to get our hands dirty and have some fun under the hood!

GraphQL Playground running Gatsby GraphQL Operations.

Gatsby’s Environmental Variables

Still around? Cool. Moving on.

Since we’re not going to be relying on the desktop app, we’ll need to do a little bit of Environmental Variable setup.

Environmental Variables are variables used specifically to customize the behavior of a website in different environments. These environments could be when the website is in active development, or perhaps when it is live in production and available for the world to see. We can have as many environments as we want, and define different Environmental Variables for each of the environments.

Learn more about Environmental Variables in Gatsby.

Gatsby supports two environments: development and production. To set a development environmental variable, we need to have a .env.development file at the root of the project. Same sort of deal for production, but it’s .env.production.

To switch IDEs, we’ll need to set an environment variable, and we’ll do it in a cross-platform-compatible way. Let’s create a .env.development file at the root of the project. Here, we set a key/value pair for our variables. The key will be GATSBY_GRAPHQL_IDE, and the value will be playground, like so:

GATSBY_GRAPHQL_IDE=playground

Accessing Environment Variables in JavaScript

In Gatsby, our Environmental Variables are only available at build time, or when Node.js is running (what we’ll call run time). Because the variables are baked in at build time, it is important that we restart our server or rebuild our website every time we modify any of them.
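As an illustration, this is the shape of reading a build-time variable in Node. (The function name here is hypothetical; GATSBY_GRAPHQL_IDE itself is read by Gatsby internally, so you don’t need this code for the Playground swap.)

```javascript
// A sketch of reading a build-time environment variable in Node.
// GATSBY_GRAPHQL_IDE is consumed by Gatsby itself, but any variable
// can be read from your own build-time code this way.
function graphqlIde() {
  // Fall back to the default GraphiQL IDE when the variable is unset.
  return process.env.GATSBY_GRAPHQL_IDE || "graphiql";
}

console.log(`Active GraphQL IDE: ${graphqlIde()}`);
```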

To load our environmental variables into our project, we need to install a package:

yarn add env-cmd --dev // npm install --save-dev env-cmd

With that, we will change the develop script in package.json as the final step, to this instead:

"develop": "env-cmd --file .env.development --fallback gatsby develop"

The develop script instructs the env-cmd package to load environmental variables from a custom environmental variable file (.env.development in this case) and, if it can’t find that file, to fall back to .env. So, if you see the need to, create a .env file at the root of your project with the same content as .env.development.
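Put together, the scripts section of package.json might look something like this (the build script is an assumption here, shown only to illustrate the production counterpart):

```json
{
  "scripts": {
    "develop": "env-cmd --file .env.development --fallback gatsby develop",
    "build": "env-cmd --file .env.production --fallback gatsby build"
  }
}
```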

And that’s it! But, hey, remember to restart the server since we change the variable.

yarn start // npm run develop

If you refresh http://localhost:8000/___graphql in the browser, you should now see GraphQL Playground. Cool? Cool!

GraphQL Playground with Gatsby.

And that’s how we get GraphQL Playground to work with Gatsby!

So that’s how we get from GraphQL’s default GraphiQL IDE to GraphQL Playground. Like we covered earlier, the decision of whether or not to make the switch at all comes down to whether the additional features offered in GraphQL Playground are required for your project. Again, we’re basically working with a GraphiQL wrapper that piles on more features.

Resources

Here are some additional articles around the web to get you started with Gatsby and more familiar with GraphiQL, GraphQL Playground, and Environment Variables.

The post Using GraphQL Playground with Gatsby appeared first on CSS-Tricks.


The Power of Serverless v2.0! (Now an Open-Source Gatsby Site Hosted on Netlify)

I created a website called The Power of Serverless for Front-End Developers over at thepowerofserverless.info a little while back while I was learning about that whole idea. I know a little more now but still have an endless amount to learn. Still, I felt like it was time to revamp that site a bit.

For one thing, just like our little conferences website, the new site is a subdomain of this very site:

https://serverless.css-tricks.com/

Why? What’s this site all about?

The whole idea behind the serverless buzzword is a pretty big deal. Rather than maintaining your own servers, which you likely rent from some other company anyway, you architect your app such that everything runs on commoditized servers you access on-demand instead.

Hosting becomes static, which is rife with advantages. Just look at Netlify, who offer blazing-fast static hosting and innovate around the developer experience. The bits you still need a back end for run in cloud functions that are cheap and efficient.
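To make that concrete, a cloud function can be as small as this: a sketch in the exported-handler shape Netlify (and AWS Lambda) use, where the greeting logic is invented purely for illustration.

```javascript
// A minimal sketch of a serverless (cloud) function: an exported async
// handler that receives an event and returns an HTTP-style response.
const handler = async (event) => {
  // Query-string parameters arrive on the event object.
  const params = event.queryStringParameters || {};
  const name = params.name || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};

exports.handler = handler;
```

Deployed behind a URL, a request like /.netlify/functions/hello?name=Chris would respond with a small JSON payload, no server maintenance required.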

This is a big deal for front-end developers. We’ve already seen a massive growth in what we are capable of doing on the front end, thanks to the expanding power of JavaScript. Now a JavaScript developer can be building entire websites from end-to-end with the JAMstack concept.

But you still need to know how to string it all together. Who do you use to process forms? Where do you store the data? What can I use for user authentication? What content management systems are available in this world? That’s what this site is all about! I’d like the site to be able to explain the concept and offer resources, but more importantly, be a directory to the slew of services out there that make up this new serverless world.

The site also features a section containing ideas that might help you figure out how you might use serverless technology. Perhaps you’ll even take a spin making a serverless site.

Design by Kylie Timpani and illustration by Geri Coady

Kylie Timpani (yes, the same Kylie who worked on the v17 design of this site!) did all the visual design for this project.

Geri Coady did all the illustration work.

If anything looks off or weird, blame my poor implementation of their work. I’m still making my way through checklists of improvements as we speak. Sometimes you just gotta launch things and improve as you go.

Everything is on GitHub and contributions are welcome

It’s all right here.

I’d appreciate any help cleaning up copy, adding services, making it more accessible… really anything you think would improve the site. Feel free to link up your own work, although I tend to find that contributions are stronger when you are propping up someone else rather than yourself. It’s ultimately my call whether your pull request is accepted. That might be subjective sometimes.

Before doing anything dramatic, probably best to talk it out by emailing me or opening an issue. There’s already a handful of issues in there.

I suspect companies that exist in this space will be interested in being represented in here somewhere, and I’m cool with that. Go for it. Perhaps we can open up some kind of sponsorship opportunities as well.

Creating with components: A good idea

I went with Gatsby for this project. A little site like this (a couple of pages of static content) deserves to be rendered entirely server-side. Gatsby does that, even though you work entirely in React, which is generally thought of as a client-side technology. Next.js and react-static are similar in spirit.

I purposely wanted to work in JavaScript because I feel like JavaScript has been doing the best job around the idea of architecting sites in components. Sure, you could sling some partials and pass local variables with Rails partials or Nunjucks includes, but it’s a far cry from the versatility you get in a framework like React, Vue or Angular, which are designed entirely to help build components for the front end.

The fact that these JavaScript frameworks are getting first-class server-side rendering stories is big. Plus, after the site’s initial render, the site “hydrates” and you end up getting that SPA feel anyway… fantastic. Yet another thing that shows how a JavaScript-focused front-end developer is getting more and more powerful.

As an aside: I don’t have much experience with more complicated content data structures and JAMstack sites. I suspect once you’ve gone past this “little simple cards of data” structure, you might be beyond what front-matter Markdown files are best suited toward and need to get into a more full-fledged CMS situation, hopefully with a GraphQL endpoint to get whatever you need. Ripe space, for sure.

The post The Power of Serverless v2.0! (Now an Open-Source Gatsby Site Hosted on Netlify) appeared first on CSS-Tricks.
