Tag: Netlify

How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • There’s automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payments and checkout, or Mailgun for email. We can also write custom APIs targeted at a specific use case. We will see examples of such custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • A client side built with a static site generator called GatsbyJS.
  • Finally, a deployment to a CDN configured and managed by Netlify.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with a traditional RDBMS, the major difference is that Fauna is a relational NoSQL system that offers the capabilities of a legacy RDBMS while remaining very flexible, without compromising scalability and performance.

Fauna supports multiple APIs for data access:

  • GraphQL: An open-source data query and manipulation language. If you are new to GraphQL, you can find more details here: https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language-specific drivers which make it flexible to use with languages like JavaScript, Java, Go, etc. Find more details on FQL here.

In this article, we will use GraphQL for the shopnote application.

First things first, sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our purposes.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will define the GraphQL schema and import it into the database. A GraphQL schema defines the structure of the data: the data types and the relationships between them. With the schema, we can also specify what kinds of queries are allowed.

At this stage, let us create our project folder. Create a project folder somewhere on your hard drive with the name, shopnote. Create a file with the name, shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, description, update time, and a list of Items. Each Item has properties like name, urgent, checked, and the shopnote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it as participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship: one ShopNote can have multiple Items, while each Item can be related to at most one ShopNote.

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote and allShopNotes, each of which is powered by FQL.
  • For specific GraphQL directives: custom indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also create the documents automatically. We will see that in a while.

Fauna supports a schema-free, object-relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each data record is inserted into a document. This forms a hierarchy which can be visualized as:

Here the data record can be an array, an object, or any other supported type. With the Fauna data model we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the documents getting created as well. We will be able to view and query the records and utilize the power of indexes. In a while, the data model structure in your Fauna dashboard may appear like this:

A point to note here: each document is identified by a unique ref attribute. There is also a ts field which returns the timestamp of the most recent modification to the document. The data record itself lives in the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries through Netlify functions.
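For instance, fetching a document with FQL in the Fauna shell returns a response with this general shape (the ref and ts values below are made up for illustration):

Get(Ref(Collection("ShopNote"), "274700303123218965"))

{
  ref: Ref(Collection("ShopNote"), "274700303123218965"),
  ts: 1603459248920000,
  data: {
    name: "My Shopping List",
    description: "This is my today's list to buy from Tom's shop"
  }
}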

With all this understanding, let us start using our shopnotes database, which is now created successfully and ready for use.

Let us try some queries

Even though we have imported the schema and underlying things are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation query to the left panel of the GraphQL playground screen and execute.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name,
        urgent
      }
    }
  }
}

Note that, as Fauna already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once successfully executed, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the details we passed while creating it. We have seen that ShopNote has a one-to-many relation with Item, and you can see the item data nested within the shopnote response. In this case, one shopnote has three items. This is really powerful: once the schema and relation are defined, documents are created automatically with that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items from a shopnote, and for updating and deleting a shopnote itself. These queries will be used later when we create the serverless functions.
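For example, marking an item as done could use the auto-generated updateItem mutation. Here is a sketch of what that might look like; the document id is hypothetical, and the data input generally mirrors the Item type:

mutation {
  updateItem(id: "280050178359624199", data: {
    name: "Milk - 2 ltrs",
    checked: true
  }) {
    _id
    name
    checked
  }
}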

If you are interested in running other queries in the GraphQL editor, you can find them here.

Create a Server Secret Key

Next, we need to create a secure server key to make sure that access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with everything we had to do for Fauna’s setup. Let us move to the next phase to create serverless functions and APIs that access data from the Fauna data store. At this stage, the directory structure may look like this:

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify Functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it; simple functions automatically become APIs.
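To give a feel for how little code that takes, here is a minimal sketch of a Netlify function (hello.js is a hypothetical file name); dropping a file like this into the designated folder exposes it as an endpoint:

// hello.js - a minimal Netlify serverless function
exports.handler = async () => {
  return {
    statusCode: 200,
    body: "Hello from a serverless function!"
  };
};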

First, create an account with Netlify. This is free, and just like FaunaDB’s free tier, Netlify’s is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies:

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id,
            name,
            checked,
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to log in and authorize access to your Netlify account. Click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This tells Netlify about the location of the functions we have written so that it is known at build time.

Netlify automatically provides APIs for the functions. The URL to access such an API has the form /.netlify/functions/get-shopnotes, which may not be very user-friendly. We have written a redirect to make it /api/get-shopnotes.

Ok, we are done. Now in command prompt type,

netlify dev

By default the app will run on localhost:8888, where the serverless functions are accessible as APIs.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

//create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  // Destructure all the fields the mutation needs from the payload
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please pay attention to the parameter type ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the mutation as well.

We have destructured the payload to get the required information. Once we have it, we simply call the query method to create a ShopNote.
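For reference, a request body for this function might look like the following (the values are just illustrative); note how the items field follows the ShopNoteItemsRelation shape with a nested create:

{
  "name": "Weekend list",
  "description": "Things to pick up on Saturday",
  "updatedAt": "2020-11-21T10:00:00Z",
  "items": {
    "create": [
      { "name": "Bread - 1 loaf", "urgent": false, "checked": false }
    ]
  }
}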

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is the screenshot from Postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember, ShopNote and Item are related? So to create an item, we have to mandatorily tell which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

//add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties: the name, whether it is urgent, the checked value, and the note the item should be part of. Let’s see how this API can be called using Postman.

As you see, we are passing the id of the note while creating an item for it.
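A request body for this call might look something like the sketch below (the note id is hypothetical); the connect keyword attaches the new item to an existing ShopNote document:

{
  "name": "Eggs - 1 dozen",
  "urgent": true,
  "checked": false,
  "note": { "connect": "280050178359624199" }
}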

We won’t cover the rest of the API capabilities in this article, like updating and deleting shop notes and items. If you are interested, you can look into those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions using Netlify Functions, and tested those functions/routes.

Congratulations, you did it. Next, let us build some user interfaces to show the shop notes and add items to it. To do that, we will use Gatsby.js (aka, Gatsby) which is a super cool, React-based static site generator.

The following section requires basic knowledge of ReactJS. If you are new to it, you can learn it from here. If you are familiar with any other user interface library, like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react, and react-dom:

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called, gatsby-config.js. Please create a file named, gatsby-config.js at the root of the project folder with the following content,

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
}

Let’s run it. We would generally use the command gatsby develop to run the app locally. As we have to run the client-side application along with the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, heading, and the create button for creating a shopnote.
  • Shopnotes: This component contains the list of shop notes (Note components).
  • Note: An individual note. Each note contains one or more items.
  • Item: An individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required for the user interfaces to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies,

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

// This assumes axios is imported at the top of index.js:
// import axios from 'axios';
useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we are checking if the data is loaded. If so, render the Shopnotes component by passing the data we have received using the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file here. The index.js file creates the initial route (/) for the user interface. It uses other components, like Shopnotes, Note, and Item, to make the UI fully operational. We will not go to great lengths to understand each of these UI components. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content from this CSS file into it. Then add these imports at the top of index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. To keep this article from getting too long, we won’t walk through each of the actions on items and notes; you can find all the code in the GitHub repo. At this stage, the directory structure may look like this:

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!
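As a starting point, the button’s click handler could post to the create-shopnote API from earlier. Here is a rough sketch; the field values are placeholders for whatever your form collects:

// A rough sketch of calling the create-shopnote API from the UI
axios.post("/api/create-shopnote", {
  name: "New note",
  description: "Created from the UI",
  updatedAt: new Date().toISOString(),
  items: { create: [] }
}).then(result => {
  console.log(result.data.shopnote);
});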

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already, so please log in and click on the button, New site from Git.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details like the build command and publish directory as shown in the image below. Then click on the button to provide the advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields, then click on deploy.

You should see the build successful in a couple of minutes and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70–80% of the features that once required a custom back end can now be handled either on the front end or by existing APIs and services.
  • Fauna provides the data API for client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack, giving a fast end-user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.



Netlify Background Functions

As quickly as I can:

  • AWS Lambda is great: it allows you to run server-side code without really running a server. This is what “serverless” largely means.
  • Netlify Functions run on AWS Lambda and make it way easier to use. For example, you just chuck some scripts into a folder and they deploy when you push to your main branch. Plus you get logs.
  • Netlify Functions used to be limited to a 10-second execution time, even though Lambdas can run for 15 minutes.
  • Now, you can run 15-minute functions on Netlify as well, by appending -background to the filename, like my-function-background.js. (You can write them in Go also.) See the sketch after this list.
  • This means you can do long-ish running tasks, like spin up a headless browser and scrape some data, process images to build into a PDF and email it, sync data across systems with batch API requests… or anything else that takes a lot longer than 10 seconds to do.
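A minimal sketch of what that looks like (the file name is hypothetical; the -background suffix is what marks it as a background function). Background functions are invoked asynchronously, so the caller gets an immediate response while the work continues for up to 15 minutes:

// functions/scrape-data-background.js
exports.handler = async (event) => {
  // ...do the long-running work here...
};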


Create an FAQ Slack app with Netlify functions and FaunaDB

Sometimes, when you’re looking for a quick answer, it’s really useful to have an FAQ system in place, rather than waiting for someone to respond to a question. Wouldn’t it be great if Slack could just answer these FAQs for us? In this tutorial, we’re going to be making just that: a slash command for Slack that will answer user FAQs. We’ll be storing our answers in FaunaDB, using FQL to search the database, and utilising a Netlify function to provide a serverless endpoint to connect Slack and FaunaDB.

Prerequisites

This tutorial assumes you have the following requirements:

  • GitHub account, used to log in to Netlify and Fauna, as well as for storing our code
  • Slack workspace with permission to create and install new apps
  • Node.js v12

Create npm package

To get started, create a new folder and initialise an npm package using your package manager of choice by running npm init -y from inside the folder. After the package has been created, we have a few npm packages to install.

Run this to install all the packages we will need for this tutorial:

npm install express body-parser faunadb encoding serverless-http netlify-lambda

These packages are explained below, but if you are already familiar with them, feel free to skip ahead.

Encoding has been installed due to a plugin error occurring in @netlify/plugin-functions-core at the time of writing and may not be needed when you follow this tutorial.

Packages

Express is a web application framework that will allow us to simplify writing multiple endpoints for our function. Netlify functions require handlers for each endpoint, but Express combined with serverless-http will allow us to write all the endpoints in one place.

Body-parser is an Express middleware which will take care of the application/x-www-form-urlencoded data Slack will be sending to our function.

Faunadb is an npm module that allows us to interact with the database through the FaunaDB JavaScript driver. It allows us to pass queries from our function to the database in order to get the answers.

Serverless-http is a module that wraps Express applications to the format expected by Netlify functions, meaning we won’t have to rewrite our code when we shift from local development to Netlify.

Netlify-lambda is a tool which will allow us to build and serve our functions locally, in the same way they will be built and deployed on Netlify. This means we can develop locally before pushing our code to Netlify, increasing the speed of our workflow.

Create a function

With our npm packages installed, it’s time to begin work on the function. We’ll be using serverless to wrap an express app, which will allow us to deploy it to Netlify later. To get started, create a file called netlify.toml, and add the following into it:

[build]
  functions = "functions"

We will use a .gitignore file to prevent our node_modules and functions folders from being added to git later. Create a file called .gitignore, and add the following:

functions/
node_modules/

We will also need a folder called src, and a file inside it called server.js. Your final file structure should look like:

With this in place, create a basic express app by inserting the code below into server.js:

const express = require("express");
const bodyParser = require("body-parser");
const fauna = require("faunadb");
const serverless = require("serverless-http");

const app = express();

module.exports.handler = serverless(app);

Check out the final line; it looks a little different to a regular express app. Rather than listening on a port, we’re passing our app into serverless and using this as our handler, so that Netlify can invoke our function.

Let’s set up our body parser to use application/x-www-form-urlencoded data, as well as putting a router in place. Add the following to server.js after defining app: 

const router = express.Router();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use("/.netlify/functions/server", router);

Notice that the router is using /.netlify/functions/server as an endpoint. This is so that Netlify will be able to correctly deploy the function later in the tutorial. This means we will need to add this to any base URLs, in order to invoke the function.

Create a test route

With a basic app in place, let’s create a test route to check everything is working. Insert the following code to create a simple GET route that returns a JSON object:

router.get("/test", (req, res) => {
  res.json({ hello: "world" });
});

With this route in place, let’s spin up our function on localhost, and check that we get a response. We’ll be using netlify-lambda to serve our app, so that we can imitate a Netlify function locally on port 9000. In our package.json, add the following lines into the scripts section:

"start": "./node_modules/.bin/netlify-lambda serve src",    "build": "./node_modules/.bin/netlify-lambda build src"

With this in place, after saving the file, we can run npm start to begin netlify-lambda on port 9000.

The build command will be used when we deploy to Netlify later.

Once it is up and running, we can visit http://localhost:9000/.netlify/functions/server/test to check our function is working as expected.

The great thing about netlify-lambda is it will listen for changes to our code, and automatically recompile whenever we update something, so we can leave it running for the duration of this tutorial.

Start ngrok URL

Now that we have a test route working on our local machine, let’s make it available online. To do this, we’ll be using ngrok, an npm package that provides a public URL for our function. If you don’t have ngrok installed already, first run npm install -g ngrok to install it globally on your machine. Then run ngrok http 9000, which will automatically direct traffic to our function running on port 9000.

After starting ngrok, you should see a forwarding URL in the terminal, which we can visit to confirm our server is available online. Copy this base URL to your browser, and follow it with /.netlify/functions/server/test. You should see the same result as when we made our calls on localhost, which means we can now use this URL as an endpoint for Slack!

Each time you restart ngrok, it creates a new URL, so if you need to stop it at any point, you will need to update your URL endpoint in Slack.

Setting up Slack

Now that we have a function in place, it’s time to move to Slack to create the app and slash command. We will have to deploy this app to our workspace, as well as making a few updates to our code to connect our function. For a more in depth set of instructions on how to create a new slash command, you can follow the official Slack documentation. For a streamlined set of instructions, follow along below:

Create a new Slack app

First off, let’s create our new Slack app for these FAQs. Visit https://api.slack.com/apps and select Create New App to begin. Give your app a name (I used Fauna FAQ), and select a development workspace for the app.

Create a slash command

After creating the app, we need to add a slash command to it so that we can interact with the app. Select slash commands from the menu after the app has been created, then create a new command. Fill in the form with the name of your command (I used /faq) and provide the URL from ngrok. Don’t forget to add /.netlify/functions/server/ to the end!

Install app to workspace

Once you have created your slash command, click on basic information in the sidebar on the left to return to the app’s main page. From here, select the dropdown “Install app to your workspace” and click the button to install it.

Once you have allowed access, the app will be installed, and you’ll be able to start using the slash command in your workspace.

Update the function

With our new app in place, we’ll need to create a new endpoint for Slack to send the requests to. For this, we’ll use the root endpoint for simplicity. The endpoint will need to be able to take a post request with application/x-www-form-urlencoded data, then return a 200 status response with a message. To do this, let’s create a new post route at the root by adding the following code to server.js:

router.post("/", async (req, res) => {

});

Now that we have our endpoint, we can also extract and view the text that has been sent by Slack by adding the following lines before we set the status:

const text = req.body.text;
console.log(`Input text: ${text}`);

For now, we’ll just pass this text into the response and send it back instantly, to ensure the slack app and function are communicating.

res.status(200);
res.send(text);
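Putting those pieces together, the root route so far looks like this:

router.post("/", async (req, res) => {
  const text = req.body.text;
  console.log(`Input text: ${text}`);

  res.status(200);
  res.send(text);
});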

Now, when you type /faq <somequestion> in a Slack channel, you should get the same message back from the slash command.

Formatting the response

Rather than just sending back plaintext, we can make use of Slack’s Block Kit to use specialised UI elements to improve the look of our answers. If you want to create a more complex layout, Slack provides a Block Kit builder to visually design your layout.

For now, we’re going to keep things simple, and just provide a response where each answer is separated by a divider. Add the following function to your server.js file after the post route:

const format = (answers) => {
  if (answers.length == 0) {
    answers = ["No answers found"];
  }

  let formatted = {
    blocks: [],
  };

  for (answer of answers) {
    formatted["blocks"].push({
      type: "divider",
    });
    formatted["blocks"].push({
      type: "section",
      text: {
        type: "mrkdwn",
        text: answer,
      },
    });
  }

  return formatted;
};

With this in place, we now need to pass our answers into this function, to format the answers before returning them to Slack. Update the following in the root post route:

// Wrap the text in an array, since format() expects a list of answers
let answers = [text];
const formattedAnswers = format(answers);

Now when we enter the same command to the slash app, we should get back the same message, but this time in a formatted version!

Setting up Fauna

With our slack app in place, and a function to connect to it, we now need to start working on the database to store our answers. If you’ve never set up a database with FaunaDB before, there is some great documentation on how to quickly get started. A brief step-by-step overview for the database and collection is included below:

Create database

First, we’ll need to create a new database. After logging into the Fauna dashboard online, click New Database. Give your new database a name you’ll remember (I used “slack-faq”) and save the database.

Create collection

With this database in place, we now need a collection. Click the “New Collection” button that should appear on your dashboard, and give your collection a name (I used “faq”). The history days and TTL values can be left as their defaults, but you should ensure you don’t add a value to the TTL field, as we don’t want our documents to be removed automatically after a certain time.

Add question / answer documents

Now we have a database and collection in place, we can start adding some documents to it. Each document should follow the structure:

{
  question: "a question string",
  answer: "an answer string",
  qTokens: [
    "first token",
    "second token",
    "third token"
  ]
}

The qToken values should be key terms in the question, as we will use them for a tokenized search when we can’t match a question exactly. You can add as many qTokens as you like for each question; the more relevant the tokens are, the more accurate the results will be. For example, if our question is “where are the bathrooms”, we should include the qTokens “bathroom”, “bathrooms”, “toilet”, “toilets” and any other terms people may search for when trying to find information about a bathroom.

The questions I used to develop a proof of concept are as follows:

{
  question: "where is the lobby",
  answer: "On the third floor",
  qTokens: ["lobby", "reception"],
}, {
  question: "when is payday",
  answer: "On the first Monday of each month",
  qTokens: ["payday", "pay", "paid"],
}, {
  question: "when is lunch",
  answer: "Lunch break is *12 - 1pm*",
  qTokens: ["lunch", "break", "eat"],
}, {
  question: "where are the bathrooms",
  answer: "Next to the elevators on each floor",
  qTokens: ["toilet", "bathroom", "toilets", "bathrooms"],
}, {
  question: "when are my breaks",
  answer: "You can take a break whenever you want",
  qTokens: ["break", "breaks"],
}

Feel free to take this time to add as many documents as you like, and as many qTokens as you think each question needs, then we’ll move on to the next step.

Creating Indexes

With these questions in place, we will create two indexes to allow us to search the database. First, create an index called “answers_by_question”, selecting question as the term and answer as the value. This will allow us to search all answers by their associated question.

Then, create an index called “answers_by_qTokens”, selecting qTokens as the term and answer as the value. We will use this index to allow us to search through the qTokens of all items in the database.
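If you prefer the shell to the dashboard, a sketch of the equivalent FQL for the first index might look like this (assuming the collection is named “faq”):

CreateIndex({
  name: "answers_by_question",
  source: Collection("faq"),
  terms: [{ field: ["data", "question"] }],
  values: [{ field: ["data", "answer"] }]
})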

Searching the database

To run a search in our database, we will do two things. First, we’ll run a search for an exact match to the question, so we can provide a single answer to the user. Second, if this search doesn’t find a result, we’ll do a search on the qTokens each answer has, returning any results that provide a match. We’ll use Fauna’s online shell to demonstrate and explain these queries, before using them in our function.

Exact Match

Before searching the tokens, we’ll test whether we can match the input question exactly, as this will allow for the best answer to what the user has asked. To search our questions, we will match against the “answers_by_question” index, then paginate our answers. Copy the following code into the online Fauna shell to see this in action:

q.Paginate(q.Match(q.Index("answers_by_question"), "where is the lobby"))

If you have a question matching the “where is the lobby” example above, you should see the expected answer of “On the third floor” as a result.

Searching the tokens

For cases where there is no exact match on the database, we will have to use our qTokens to find any relevant answers. For this, we will match against the “answers_by_qTokens” index we created and again paginate our answers. Copy the following into the online shell to see how this works:

q.Paginate(q.Match(q.Index("answers_by_qTokens"), "break"))

If you have any questions with the qToken “break” from the example questions, you should see all answers returned as a result.

Connect function to Fauna

We have our searches figured out, but currently we can only run them from the online shell. To use these in our function, there is some configuration required, as well as an update to our function’s code.

Function configuration

To connect to Fauna from our function, we will need to create a server key. From your database’s dashboard, select security in the left hand sidebar, and create a new key. Give your new key a name you will recognise, and ensure that the dropdown has Server selected, not Admin. Finally, once the key has been created, add the following code to server.js before the test route, replacing the <secretKey> value with the secret provided by Fauna.

const q = fauna.query;
const client = new fauna.Client({
  secret: "<secretKey>",
});

It would be preferred to store this key in an environment variable in Netlify, rather than directly in the code, but that is beyond the scope of this tutorial. If you would like to use environment variables, this Netlify post explains how to do so.

Update function code

To include our new search queries in the function, copy the following code into server.js after the post route:

const searchText = async (text) => {
  console.log("Beginning searchText");
  const answer = await client.query(
    q.Paginate(q.Match(q.Index("answers_by_question"), text))
  );
  console.log(`searchText response: ${answer.data}`);
  return answer.data;
};

const getTokenResponse = async (text) => {
  console.log("Beginning getTokenResponse");
  let answers = [];
  const questionTokens = text.split(/[ ]+/);
  console.log(`Tokens: ${questionTokens}`);
  for (token of questionTokens) {
    const tokenResponse = await client.query(
      // Match on each individual token rather than the full text
      q.Paginate(q.Match(q.Index("answers_by_qTokens"), token))
    );
    answers = [...answers, ...tokenResponse.data];
  }
  console.log(`Token answers: ${answers}`);
  return answers;
};

These functions replicate the same functionality as the queries we previously ran in the online Fauna shell, but now we can utilise them from our function.
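With these in place, the root route can swap its echo logic for the searches. Here is a sketch of the wiring, trying the exact match first and falling back to the token search:

router.post("/", async (req, res) => {
  const text = req.body.text;

  // Try for an exact question match first
  let answers = await searchText(text);

  // Fall back to a token search if nothing matched
  if (answers.length === 0) {
    answers = await getTokenResponse(text);
  }

  res.status(200);
  res.json(format(answers));
});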

Deploy to Netlify

Now the function is searching the database, the only thing left to do is put it on the cloud, rather than a local machine. To do this, we’ll be making use of a Netlify function deployed from a GitHub repository.

First things first, add a new repo on GitHub and push your code to it. Once the code is there, go to Netlify and either sign up or log in using your GitHub profile. From the home page of Netlify, select “New site from git” to deploy a new site, using the repo you’ve just created on GitHub.

If you have never deployed a site in Netlify before, this post explains the process to deploy from git.

While you are creating the new site, ensure that your build command is set to npm run build so that Netlify builds the function before deployment. The publish directory can be left blank, as we are only deploying a function rather than any pages.

Netlify will now build and deploy your repo, generating a unique URL for the site deployment. We can use this base URL to access the test endpoint of our function from earlier, to ensure things are working.

The last thing to do is update the Slack endpoint to our new URL! Navigate to your app, then select ‘slash commands’ in the left sidebar. Click on the pencil icon to edit the slash command and paste in the new URL for the function. Finally, you can use your new slash command in any authorised Slack channels!

Conclusion

There you have it, an entirely serverless, functional slack slash command. We have used FaunaDB to store our answers and connected to it through a Netlify function. Also, by using Express, we have the flexibility to add further endpoints to the function for adding new questions, or anything else you can think up to further extend this project! Hopefully now, instead of waiting around for someone to answer your questions, you can just use /faq and get the answer instantly!


Matthew Williams is a software engineer from Melbourne, Australia who believes the future of technology is serverless. If you’re interested in more from him, check out his Medium articles, or his GitHub repos.



Netlify Edge Handlers

Some very cool news from Netlify: Edge Handlers are in Early Access (request it here). I think these couple of lines of code do a great job in explaining what an Edge Handler is:

export function onRequest(event) {
  console.log(`Incoming request for ${event.request.url}`);
  event.replaceResponse(() => fetch("https://www.netlify.com/"));
}

So that’s a little bitty bit of JavaScript that runs at “the edge” (at the CDN level) for every request through your site. In the case above, I’m totally replacing the response with an Ajax request for another URL. Weird! But cool. This has incredible power. I can replace the response with a slightly manipulated response. Say, change the headers. Or check who the logged-in user is, make a request for data on their behalf, and inject that data into the response. 🤯

So you might think of Jamstack as either pre-render or get data client-side. This is opening up a new door: build your response dynamically at the edge.

What’s nice about the Netlify approach is that the code that runs these sits right alongside the rest of your code in the repo itself, just like functions.



Comparing Data in Google and Netlify Analytics

Jim Nielsen:

the datasets weren’t even close for me.

Google Analytics works by putting a client-side bit of JavaScript on your site. Netlify Analytics works by parsing server logs server-side. They are not exactly apples to apples, feature-wise. Google Analytics is, I think it’s fair to say, far more robust. You can do things like track custom events which might be very important analytics data to a site. But they both have the basics. They both want to tell you how many pageviews your homepage got, for instance.

There are two huge things that affect these numbers:

  • Client-side JavaScript is blockable and tons of people use content blockers, particularly for third-party scripts from Google. Server-side logs are not blockable.
  • Netlify doesn’t filter things out of that log, meaning bots are counted in addition to regular people visiting.

So I’d say: Netlify probably has more accurate numbers, but a bit inflated from the bots.

Also worth noting, you can do server-side Google Analytics. I’ve never seen anyone actually do it but it seems like a good idea.

One bit of advice from Jim:

Never assume too much from a single set of data. In other words, don’t draw all your data-driven insights from one basket.




Queue Jumping in Netlify

Cutting to the chase: if you’re on a Business or Enterprise team on Netlify, you can click a build to make it run next in a queue. For example, if you have a really time-sensitive thing (e.g. a bug fix going to production), it can jump ahead of some random development branch building. Now I’ll elaborate.

Part of the rocketjuice of Netlify is that it runs your builds for you. Say you have a Jekyll site. The build command is probably jekyll build. You tell Netlify that’s the command you want it to run, and if successful, deploy it.

You can set the build command from a configuration file in the repo, or here in the UI for settings.

That build command is totally up to you. It could be npm run build and that calls the build command in your package.json which kicks off your custom scripts. Plus, with build plugins, you have a ton of control over the process (e.g. I got it to run Sass easily). That’s CI/CD!

Assuming you are linking up a Git repo, it’s not just pushing to your main branch where these builds run; it’s on any branch. That’s great for a bunch of reasons. For one, your build is probably running tests too, so it’s keeping you honest. For another, Netlify gives each push a permalink to a deployed version of that exact set of code. That’s tremendously useful. It’s like staging on steroids. Anybody who needs it can get a preview of the site.

On certain projects, you might have a whole team of developers working on a bunch of branches, committing code, and running builds. So Netlify might be awful busy doing all that work. Your build might get stuck behind other people’s stuff. Maybe it absolutely doesn’t matter. Or maybe you have an important meeting in 2 minutes and you really need this deploy preview for everyone to see.

Phil prioritizing some kind of musical coffee over the conference site build.

Now if you’re on a team (on a Business or Enterprise account), you can choose to hop the queue and have yours run next. People will be able to see it was you who did it so, ya know, ya gotta have a little courtesy.



Netlify Does Cache Invalidation For You

This is one of my favorite Netlify features. Say you’re working on a site and you change an asset like a CSS, JavaScript, or image file. Ya know, like, do our job. On Netlify, you don’t have to think about how that’s going to play out with deployment, browsers, and cache. Netlify just handles it for you.

Netlify calls this Instant Cache Invalidation, part of the “rocketjuice” of Netlify.

On all the sites I work on that aren’t on Netlify, I do have to think about it (ugh). If you look at this very website’s source, you’ll see a link to a stylesheet something like this:

<link href="https://css-tricks.com/wp-content/themes/CSS-Tricks-17/style.css?cache_bust=1594590986788" rel="stylesheet">

See that ?cache_bust= stuff at the end of the stylesheet URL? Those are just gibberish characters I put into that URL manually (based on a Date() call) so that when I push a change to the file, it breaks both the CDN and people’s own browser cache and they get the new file. If I didn’t do that, the changes I push won’t be seen until all the cache expires or is manually removed by users, which is… bad. I might be fixing a bug! Or releasing a new feature! It’s extra bad because that CSS might go along with some HTML which doesn’t cache as aggressively and could lead to a mismatch of HTML and expected CSS.

I work on some sites where I change that cache-busting string by hand because I’m too lazy to automate it. Usually, I do automate it though. I recently shared my Gulpfile which I hand-wrote, and part of which deals with this cache-busting. It is work to write, work to maintain, and work to use during development. You can even read the comments on that post and see other people’s strategies for doing the same thing that are different than how I do it. Errrrrrybody be cache-busting.
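What that automation boils down to is small; a sketch of generating the query string from the current time:

// e.g. in a build script: stamp the stylesheet URL with the current time
const cacheBust = Date.now(); // e.g. 1594590986788
const href = `style.css?cache_bust=${cacheBust}`;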

Not on Netlify.

Again, you change an asset, push it up, Netlify knows it’s changed and does all the cache busting for you. So your stylesheet can be linked up like:

<link href="dont-even-worry-about-it.css" rel="stylesheet" />



Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
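In code, consuming a REST API like that is typically a plain HTTP call. For example, with fetch (the URL is the fictional one from above):

// Fetch the full list of books from the fictional REST endpoint
fetch("https://api.example.com/rest/books")
  .then(response => response.json())
  .then(books => console.log(books));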

That’s the basic idea of two concepts we’ll be looking at here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/graphql/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123,
      "title" : "The Greatest CSS Tricks Vol. I",
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: Node.js and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify Functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of the full production environment, serverless functions included, right on our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.
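For contrast, here’s roughly what that single-purpose alternative looks like: a bare Netlify function with no Express at all. The file name and response are my own inventions, not part of this tutorial’s code:

// functions/hello.js (hypothetical): a standalone Netlify function.
// Nothing boots up per request except this handler itself.
exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello World" }),
  };
};

Each function like that deploys and scales on its own, which is why it’s the recommended pattern for APIs built for scale.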

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, then running npm init in it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. body-parser. This is a small middleware layer that translates the requests we receive to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI (which includes Netlify Dev) as a global dependency:

npm i -g netlify-cli

File structure

There are a few files required for our API to work correctly. The first is netlify.toml, which should be created at the project’s root directory. This is a configuration file that tells Netlify how to handle our project. Here’s what we need in the file to define our build command, the directory to publish, and where our serverless functions are located:

[build]
  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http handler:

const app = express();

module.exports.handler = serverless(app);

These lines initialize Express and wrap it with serverless-http. Assigning the result to module.exports.handler lets Netlify know which function to invoke: our wrapped Express app.

Now let’s configure Express itself:

app.use(bodyParser.json());

app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using body-parser and hand it to express-graphql. The graphiql: true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API — say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
Filling in the message field, the schema becomes:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object, and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
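To make that idea concrete, here’s a sketch of what a data-backed field could look like. None of this is part of our hello-world app: the books array is a stand-in for a real database, and all of the names are my own.

const {
  GraphQLObjectType,
  GraphQLString,
  GraphQLInt,
  GraphQLList
} = require("graphql");

// A stand-in for a real data source, like a database.
const books = [
  { id: 123, title: "The Greatest CSS Tricks Vol. I", author: "Chris Coyier" },
];

// A type describing the shape of each book.
const BookType = new GraphQLObjectType({
  name: "Book",
  fields: () => ({
    id: { type: GraphQLInt },
    title: { type: GraphQLString },
    author: { type: GraphQLString },
  }),
});

// A field like this could then sit alongside message in the root query:
//   books: {
//     type: new GraphQLList(BookType),
//     resolve: () => books, // a real app might await a database call here
//   },

For this tutorial, though, the single message field is all we need.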

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);
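Putting the snippets together, here’s the whole of api.js (the same code as above, with the serverless wrapper moved to the end so everything is configured before it’s exported):

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");
const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

const app = express();

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

module.exports.handler = serverless(app);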

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in the terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and serve it locally. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.”

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, just at a different path than you might expect: /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{   "data": {     "message": "Hello World"   } }

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember, and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200 part makes this a rewrite rather than a redirect, so visitors keep the clean /api URL, and the ! forces the rule to apply even if a file exists at that path.

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in digging deeper into GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep back-end background.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource for going one step deeper into GraphQL. However, it’s also focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.

The post Building Serverless GraphQL API in Node with Express and Netlify appeared first on CSS-Tricks.


Making My Netlify Build Run Sass

Let’s say you wanted to build a site with Eleventy as the generator. Popular choice these days! Eleventy doesn’t have some particularly blessed way of preprocessing your CSS, if that’s something you want to do. There are a variety of ways to do it and perhaps that freedom is part of the spirit of Eleventy.

I’ve seen people set up Gulp for this, which is cool, I still use and like Gulp for some stuff. I’ve seen someone use templating to return preprocessed CSS, which seems weird, but hey, whatever works. I’ve even seen someone extend the Eleventy config itself to run the processing.

So far, the thing that has made the most sense to me is to use npm scripts to do the Sass processing: do the CSS first, then the HTML, with npm-run-all. So, you’d set up something like this in your package.json:

  "scripts": {     "build": "npm-run-all build:css build:html",     "build:css": "node-sass src/site/_includes/css/main.scss > src/site/css/main.css",     "build:html": "eleventy",     "watch": "npm-run-all --parallel watch:css watch:html",     "watch:css": "node-sass --watch src/site/_includes/css/main.scss > src/site/css/main.css",     "watch:html": "eleventy --serve --port=8181",     "start": "npm run watch"   },

I think that’s fairly nice. Since Eleventy doesn’t have a blessed CSS processing route anyway, it feels OK to have it de-coupled from Eleventy processing.

But I see Netlify has come along nicely with their build plugins. As Sarah put it:

What the Build Plugin does is give you access to key points in time during that process, for instance, onPreBuild, onPostBuild, onSuccess, and so forth. You can execute some logic at those specific points in time.

There is something really intuitive and nice about that structure. A lot of build plugins are created by the community or by Netlify themselves. You just click them on via the UI or reference them in your config. But Sass isn’t a built-in plugin (as I write), which I would assume is because people are pretty opinionated about what/where/how their CSS is processed, so it makes sense to just let people do it themselves. So let’s do that.

In our project, we’d create a directory for our plugins, and then a folder for this particular plugin we want to write:

project-root/
  src/
  whatever/
  plugins/
    sass/
      index.js
      manifest.yml

That index.js file is where we write our code, and we’ll specifically want to use the onPreBuild hook here, because we’d want our Sass to be done preprocessing before the build process runs Eleventy and Eleventy moves things around.

module.exports = {
  onPreBuild: async ({ utils: { run } }) => {
    await run.command(
      "node-sass src/site/_includes/css/main.scss src/site/css/main.css"
    );
  },
};

Here’s a looksie into all the relevant files together:
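The two supporting files are tiny. Here’s a minimal sketch, assuming the folder layout from above; the plugin name in manifest.yml is a placeholder of my own, and the [[plugins]] entry in netlify.toml is what tells Netlify to load the local plugin:

# plugins/sass/manifest.yml
name: netlify-plugin-sass

# netlify.toml (added alongside the existing config)
[[plugins]]
  package = "./plugins/sass"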

Now, if I run netlify build from the command line, it will run the same build process that Netlify itself does, and it will hook into my plugin and run it!

One little thing I noticed is that I tried to have my config in the (newer) netlify.yml format, but the plugins didn’t work, and I had to re-do the config as netlify.toml.

So we de-coupled ourselves from Eleventy with this particular processing, and coupled ourselves to Netlify. Just something to be aware of. I’m down with that as this way of configuring a build is so nice and I see so much potential in it.

I prefer the more explicit and broken-up configuration of this style. Just look at how much cleaner the package.json gets: the specific build:css and build:html commands (and their watch counterparts) come out of the scripts area entirely.

I still have this idea…

…of building a site that is a dog-fooded example of all the stuff you could/should do during a build process. I’ve started the site (and its repo), but it’s not doing too much yet. I think it would be cool to wire up everything on that list (and more?) via Build Plugins.

If you wanna contribute, feel free to let me know. Maybe email me or open an issue to talk about what you’d want to do. You’re free to do a Pull Request too, but PRs without any prior communication are a little tricky sometimes as it’s harder to ensure our visions are aligned before you put in a bunch of effort.

The post Making My Netlify Build Run Sass appeared first on CSS-Tricks.


A/B Testing Instant.Page With Netlify and Speedcurve

Instant.Page does one special thing to make sites faster: it preloads the next page when it’s pretty sure you’re going to click a link (after you hover over one for 65ms or on mousedown on desktop, or on touchstart on mobile), so when you do complete the click (probably a few hundred milliseconds later), it loads that much faster.

It’s one thing to understand that approach, buy into it, integrate it, and consider it a perf win. I have it installed here!

It’s another thing to actually get the data on your own site. Leave it to Tim Kadlec to get clever and A/B test it. Tim was able to do a 50/50 A/B split with performance-neutral Netlify split testing. Half loaded Instant.Page, the other half didn’t. And the same halves told SpeedCurve which half they were in, so performance charts could be built to compare.

Tim says it mostly looks good, but his site probably isn’t the best test:

It’s also worth noting that even if the results do look good, just because it does or doesn’t make an impact on my site doesn’t mean it won’t have a different impact elsewhere. My site has a short session length, typically, and very lightweight pages: putting this on a larger commercial site would inevitably yield much different results.

I’d love to see someone do this on a beefier site. I’m in the “how could it not be faster?!” camp, but with zero data.


The post A/B Testing Instant.Page With Netlify and Speedcurve appeared first on CSS-Tricks.
