Tag: Netlify

Accessing Your Data With Netlify Functions and React

(This is a sponsored post.)

Static site generators are popular for their speed, security, and user experience. However, sometimes your application needs data that is not available when the site is built. React is a library for building user interfaces that helps you retrieve and store dynamic data in your client application. 

Fauna is a flexible, serverless database delivered as an API that completely eliminates operational overhead such as capacity planning, data replication, and scheduled maintenance. Fauna allows you to model your data as documents, making it a natural fit for web applications written with React. Although you can access Fauna directly via a JavaScript driver, this requires a custom implementation for each client that connects to your database. By placing your Fauna database behind an API, you can enable any authorized client to connect, regardless of the programming language.

Netlify Functions allow you to build scalable, dynamic applications by deploying server-side code that works as API endpoints. In this tutorial, you build a serverless application using React, Netlify Functions, and Fauna. You learn the basics of storing and retrieving your data with Fauna. You create and deploy Netlify Functions to access your data in Fauna securely. Finally, you deploy your React application to Netlify.

Getting started with Fauna

Fauna is a distributed, strongly consistent OLTP NoSQL serverless database that is ACID-compliant and offers a multi-model interface. Fauna also supports document, relational, graph, and temporal data sets from a single query. First, we will start by creating a database in the Fauna console by selecting the Database tab and clicking on the Create Database button.

Next, you will need to create a Collection. For this, you will need to select a database, and under the Collections tab, click on Create Collection.

Fauna uses a particular structure when persisting data. Each document consists of attributes like the example below.

{
  "ref": Ref(Collection("avengers"), "299221087899615749"),
  "ts": 1623215668240000,
  "data": {
    "id": "db7bd11d-29c5-4877-b30d-dfc4dfb2b90e",
    "name": "Captain America",
    "power": "High Strength",
    "description": "Shield"
  }
}

Notice that Fauna keeps a ref attribute, which is a unique identifier for each document. The ts attribute is a timestamp recording when the document was created, and the data attribute holds the document’s contents.

Why creating an index is important

Next, let’s create two indexes for our avengers collection. These will be quite valuable in the latter part of the project. You can create an index from the Indexes tab or from the Shell tab, which provides a console for executing scripts. Fauna supports two querying techniques: FQL (Fauna Query Language) and GraphQL. FQL operates on Fauna’s schema, which includes documents, collections, indexes, sets, and databases.

Let’s create the indexes from the shell.
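The exact shell commands aren’t preserved in this copy of the post, but a CreateIndex call for the id field would look roughly like this — a sketch, where the index name avenger_by_id is an assumption based on the GetAvenger function used later in the post:

```
CreateIndex({
  name: "avenger_by_id",
  source: Collection("avengers"),
  terms: [{ field: ["data", "id"] }]
})
```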

This command creates an index on the collection keyed by the id field inside the data object; the index returns the ref of the matching document. Next, let’s create another index for the name attribute and name it avenger_by_name.
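As a sketch again (the exact command isn’t preserved here, only the index name avenger_by_name is given in the post), the second index is created the same way, with the name field as the term:

```
CreateIndex({
  name: "avenger_by_name",
  source: Collection("avengers"),
  terms: [{ field: ["data", "name"] }]
})
```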

Creating a server key

To create a server key, we need to navigate to the Security tab and click on the New Key button. This section will prompt you to create a key for a selected database and to choose a role for the key.

Getting started with Netlify functions and React

In this section, we’ll see how to create Netlify Functions with React. We will use create-react-app to create the React app.

npx create-react-app avengers-faunadb

After creating the React app, let’s install some dependencies, including the Fauna and Netlify ones.

yarn add axios bootstrap node-sass uuid faunadb react-netlify-identity react-netlify-identity-widget

Now let’s create our first Netlify function. To do that, we first need to install the Netlify CLI globally.

npm install netlify-cli -g

Now that the CLI is installed, let’s create a .env file on our project root with the following fields.

FAUNADB_SERVER_SECRET=<FaunaDB secret key>
REACT_APP_NETLIFY=<Netlify app url>

Next, let’s see how to start creating Netlify functions. For this, we need to create a directory in our project root called functions and a file called netlify.toml, which is responsible for maintaining the configuration for our Netlify project. This file defines our function directory, build directory, and commands to execute.

[build]
command = "npm run build"
functions = "functions/"
publish = "build"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
  force = true

We do some additional configuration in the Netlify configuration file, like the redirects section in this example. Notice that we are changing the default path of Netlify functions from /.netlify/functions/* to /api/*. This configuration mainly improves the look and feel of the API URL. So to trigger or call our function, we can use the path:

https://domain.com/api/getAvengers

 …instead of:

https://domain.com/.netlify/functions/getAvengers

Next, let’s create our Netlify functions in the functions directory. But, first, let’s make a connection file for Fauna called util/connection.js, which returns a Fauna connection object.

const faunadb = require('faunadb');
const q = faunadb.query;

const clientQuery = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET,
});

module.exports = { clientQuery, q };

Next, let’s create a few helper functions for building responses and parsing request data, since we will need to do both on several occasions throughout the application. This file will be util/helper.js.

const responseObj = (statusCode, data) => {
  return {
    statusCode: statusCode,
    headers: {
      /* Required for CORS support to work */
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Headers": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    },
    body: JSON.stringify(data)
  };
};

const requestObj = (data) => {
  return JSON.parse(data);
}

module.exports = { responseObj: responseObj, requestObj: requestObj }

Notice that the above helper functions handle the CORS headers as well as the stringifying and parsing of JSON data. Let’s create our first function, GetAvengers, which will return all the documents in the collection.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  try {
    let avengers = await clientQuery.query(
      q.Map(
        q.Paginate(q.Documents(q.Collection('avengers'))),
        q.Lambda(x => q.Get(x))
      )
    )
    return responseObj(200, avengers)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

In the above code example, you can see that we have used several FQL commands like Map, Paginate, and Lambda. Map iterates through an array and takes two arguments: an array and a Lambda. We passed Paginate as the first parameter, which resolves the set of document references and returns a page of results (an array). Next, we used a Lambda, an anonymous function quite similar to an anonymous arrow function in ES6.

Next, let’s create our function AddAvenger, responsible for creating/inserting data into the collection.

const { requestObj, responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  let data = requestObj(event.body);

  try {
    let avenger = await clientQuery.query(
      q.Create(
        q.Collection('avengers'),
        {
          data: {
            id: data.id,
            name: data.name,
            power: data.power,
            description: data.description
          }
        }
      )
    );

    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

To save data to a particular collection, we pass it in the data: {} object, as in the above code example, then hand it to the Create function along with the collection we want to write to. So, let’s run our code and see how it works with the netlify dev command.

Let’s trigger the GetAvengers function in the browser via the URL http://localhost:8888/api/GetAvengers.
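Searching by name relies on a user-defined function (UDF) in Fauna called GetAvengerByName. Its definition isn’t shown in this copy of the post; here is a minimal sketch of what it could look like, created from the Fauna shell (the body is an assumption — only the function name and the avenger_by_name index are given in the post):

```
CreateFunction({
  name: "GetAvengerByName",
  body: Query(
    Lambda(
      ["name"],
      Map(
        Paginate(Match(Index("avenger_by_name"), Var("name"))),
        Lambda("ref", Get(Var("ref")))
      )
    )
  )
})
```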

Next, let’s fetch an avenger by the name property, searching through the avenger_by_name index. Assuming a Fauna UDF named GetAvengerByName has been created for that query, we can invoke it from a Netlify function. For that, let’s create a function called SearchAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  const {
    queryStringParameters: { name },
  } = event;

  try {
    let avenger = await clientQuery.query(
      q.Call(q.Function("GetAvengerByName"), [name])
    );
    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

Notice that the Call function takes two arguments: the first is a reference to the FQL function we created, and the second is the data we need to pass to it.

Calling the Netlify function through React

Now that several functions are available, let’s consume them through React. Since the functions are REST APIs, let’s consume them via Axios, and for state management, let’s use React’s Context API. Let’s start with the application context, AppContext.js.

import { createContext, useReducer } from "react";
import AppReducer from "./AppReducer"

const initialState = {
    isEditing: false,
    avenger: { name: '', description: '', power: '' },
    avengers: [],
    user: null,
    isLoggedIn: false
};

export const AppContext = createContext(initialState);

export const AppContextProvider = ({ children }) => {
    const [state, dispatch] = useReducer(AppReducer, initialState);

    const login = (data) => { dispatch({ type: 'LOGIN', payload: data }) }
    const logout = (data) => { dispatch({ type: 'LOGOUT', payload: data }) }
    const getAvenger = (data) => { dispatch({ type: 'GET_AVENGER', payload: data }) }
    const updateAvenger = (data) => { dispatch({ type: 'UPDATE_AVENGER', payload: data }) }
    const clearAvenger = (data) => { dispatch({ type: 'CLEAR_AVENGER', payload: data }) }
    const selectAvenger = (data) => { dispatch({ type: 'SELECT_AVENGER', payload: data }) }
    const getAvengers = (data) => { dispatch({ type: 'GET_AVENGERS', payload: data }) }
    const createAvenger = (data) => { dispatch({ type: 'CREATE_AVENGER', payload: data }) }
    const deleteAvengers = (data) => { dispatch({ type: 'DELETE_AVENGER', payload: data }) }

    return <AppContext.Provider value={{
        ...state,
        login,
        logout,
        selectAvenger,
        updateAvenger,
        clearAvenger,
        getAvenger,
        getAvengers,
        createAvenger,
        deleteAvengers
    }}>{children}</AppContext.Provider>
}

export default AppContextProvider;

Let’s create the reducers for this context in the AppReducer.js file, which will contain a case for each operation in the application context.

const updateItem = (avengers, data) => {
    let avenger = avengers.find((avenger) => avenger.id === data.id);
    let updatedAvenger = { ...avenger, ...data };
    let avengerIndex = avengers.findIndex((avenger) => avenger.id === data.id);
    return [
        ...avengers.slice(0, avengerIndex),
        updatedAvenger,
        ...avengers.slice(++avengerIndex),
    ];
}

const deleteItem = (avengers, id) => {
    return avengers.filter((avenger) => avenger.data.id !== id)
}

const AppReducer = (state, action) => {
    switch (action.type) {
        case 'SELECT_AVENGER':
            return {
                ...state,
                isEditing: true,
                avenger: action.payload
            }
        case 'CLEAR_AVENGER':
            return {
                ...state,
                isEditing: false,
                avenger: { name: '', description: '', power: '' }
            }
        case 'UPDATE_AVENGER':
            return {
                ...state,
                isEditing: false,
                avengers: updateItem(state.avengers, action.payload)
            }
        case 'GET_AVENGER':
            return {
                ...state,
                avenger: action.payload.data
            }
        case 'GET_AVENGERS':
            return {
                ...state,
                avengers: Array.isArray(action.payload && action.payload.data) ? action.payload.data : [{ ...action.payload }]
            };
        case 'CREATE_AVENGER':
            return {
                ...state,
                avengers: [{ data: action.payload }, ...state.avengers]
            };
        case 'DELETE_AVENGER':
            return {
                ...state,
                avengers: deleteItem(state.avengers, action.payload)
            };
        case 'LOGIN':
            return {
                ...state,
                user: action.payload,
                isLoggedIn: true
            };
        case 'LOGOUT':
            return {
                ...state,
                user: null,
                isLoggedIn: false
            };
        default:
            return state
    }
}

export default AppReducer;
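To make the GET_AVENGERS branch concrete, here it is extracted as a standalone sketch. Fauna’s Paginate responses arrive as { data: [...] }, which is why the reducer unwraps that array; the sample payloads below are assumptions about the response shape:

```javascript
// Mirrors the GET_AVENGERS normalization in AppReducer: unwrap the
// array from a Paginate-style payload, or wrap a lone object so the
// UI can always map over an array.
function normalizeAvengers(payload) {
  return Array.isArray(payload && payload.data)
    ? payload.data
    : [{ ...payload }];
}

// Paginate-style payload: the inner array is used directly
const paged = { data: [{ data: { name: 'Captain America' } }] };
console.log(normalizeAvengers(paged).length); // 1

// A single document gets wrapped in an array
const single = { name: 'Thor' };
console.log(normalizeAvengers(single)[0].name); // Thor
```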

Since the application context is now available, we can fetch data from the Netlify functions that we have created and persist them in our application context. So let’s see how to call one of these functions.

const { avengers, getAvengers } = useContext(AppContext);

const GetAvengers = async () => {
  let { data } = await axios.get('/api/GetAvengers');
  getAvengers(data)
}

To get the data into the application context, we import the getAvengers function from the context and pass it the data fetched by the GET call. This function calls the reducer, which keeps the data in the context; we can then access it through the avengers attribute. Next, let’s see how we can save data to the avengers collection.

const { createAvenger } = useContext(AppContext);

const CreateAvenger = async (e) => {
  e.preventDefault();
  let new_avenger = { id: uuid(), ...newAvenger }
  await axios.post('/api/AddAvenger', new_avenger);
  clear();
  createAvenger(new_avenger)
}

The above newAvenger object is the state object that holds the form data. Notice that we pass a new id of type uuid to each of our documents. When the data is saved in Fauna, we use the createAvenger function from the application context to store it in our context as well. Similarly, we can invoke all the Netlify functions for the remaining CRUD operations via Axios.

How to deploy the application to Netlify

Now that we have a working application, we can deploy this app to Netlify. There are several ways that we can deploy this application:

  1. Connecting and deploying the application through GitHub
  2. Deploying the application through the Netlify CLI

Using the CLI will prompt you to enter specific details and selections, and the CLI will handle the rest. But in this example, we will deploy the application through GitHub. First, let’s log in to the Netlify dashboard and click the New Site from Git button. Next, it will prompt you to select the repo you want to deploy and the configuration for your site, like the build command, build folder, etc.

How to authenticate and authorize functions by Netlify Identity

Netlify Identity provides a full suite of authentication functionality for your application, which helps us manage authenticated users throughout it. Netlify Identity can be integrated into the application easily, without any other third-party services or libraries. To enable it, we need to log in to our Netlify dashboard, and under our deployed site, move to the Identity tab and enable the Identity feature.

Enabling Identity provides a link to your Netlify Identity endpoint. Copy that URL and add it to your application’s .env file as REACT_APP_NETLIFY. Next, we need to add Netlify Identity to our React application through the react-netlify-identity-widget and the Netlify functions. But first, let’s pass the REACT_APP_NETLIFY property to the Identity context provider component in the index.js file.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import "react-netlify-identity-widget/styles.css"
import 'bootstrap/dist/css/bootstrap.css';
import App from './App';
import { IdentityContextProvider } from "react-netlify-identity-widget"
const url = process.env.REACT_APP_NETLIFY;

ReactDOM.render(
  <IdentityContextProvider url={url}>
    <App />
  </IdentityContextProvider>,
  document.getElementById('root')
);

This component is the navigation bar used throughout the application. Because it sits on top of all the other components, it’s the ideal place to handle authentication. The react-netlify-identity-widget adds another component that handles user sign-in and sign-up.

Next, let’s use the Identity in our Netlify functions. Identity will introduce some minor modifications to our functions, like the below function GetAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    if (context.clientContext.user) {
        const {
            queryStringParameters: { id },
        } = event;
        try {
            const avenger = await clientQuery.query(
                q.Get(
                    q.Match(q.Index('avenger_by_id'), id)
                )
            );
            return responseObj(200, avenger)
        } catch (error) {
            console.log(error)
            return responseObj(500, error);
        }
    } else {
        return responseObj(401, 'Unauthorized');
    }
};

The context of each request includes a property called clientContext, which holds the authenticated user’s details. In the above example, we use a simple if condition to check for it.

To get the clientContext in each of our requests, we need to pass the user token through the Authorization Headers. 

const { user } = useIdentityContext();

const GetAvenger = async (id) => {
  let { data } = await axios.get('/api/GetAvenger/?id=' + id, user && {
    headers: {
      Authorization: `Bearer ${user.token.access_token}`
    }
  });
  getAvenger(data)
}

This user token will be available in the user context once logged in to the application through the netlify identity widget.

As you can see, Netlify functions and Fauna look to be a promising duo for building serverless applications. You can follow this GitHub repo for the complete code and refer to this URL for the working demo.

Conclusion

In conclusion, Fauna and Netlify make a promising duo for building serverless applications. Netlify also provides the flexibility to extend its functionality through plugins. The pay-as-you-go pricing plan makes it easy for developers to get started with Fauna. Fauna is extremely fast and auto-scales, so developers have more time to focus on their development. Fauna can handle the complex database operations you would find in relational, document, graph, and temporal databases. Fauna drivers support all the major platforms and languages, including Android, C#, Go, Java, JavaScript, Python, Ruby, Scala, and Swift. With all these excellent features, Fauna looks to be one of the best serverless databases. For more information, go through the Fauna documentation.


The post Accessing Your Data With Netlify Functions and React appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks


Hack the “Deploy to Netlify” Button Using Environment Variables to Make a Customizable Site Generator

If you’re anything like me, you like being lazy and taking shortcuts. The “Deploy to Netlify” button allows me to take this lovely feature of my personality and be productive with it.

Deploy to Netlify

Clicking the button above lets me (or you!) instantly clone my Next.js starter project and automatically deploy it to Netlify. Wow! So easy! I’m so happy!

Now, as I was perusing the docs for the button the other night, as one does, I noticed that you can pre-fill environment variables to the sites you deploy with the button. Which got me thinking… what kind of sites could I customize with that?

Ah, the famed “link in bio” you see all over social media when folks want you to see all of their relevant links in life. You can sign up for the various services that’ll make one of these sites for you, but what if you could make one yourself without having to sign up for yet another service?

But we are also lazy and like shortcuts. Sounds like we can solve all of these problems with the “Deploy to Netlify” (DTN) button and environment variables.

How would we build something like this?

In order to make our DTN button work, we need to make two projects that work together:

  • A template project (This is the repo that will be cloned and customized based on the environment variables passed in.)
  • A generator project (This is the project that will create the environment variables that should be passed to the button.)

I decided to be a little spicy with my examples, and so I made both projects with Vite, but the template project uses React and the generator project uses Vue.

I’ll do a high-level overview of how I built these two projects, and if you’d like to just see all the code, you can skip to the end of this post to see the final repositories!

The Template project

To start my template project, I’ll pull in Vite and React.

npm init @vitejs/app

After running this command, you can follow the prompts with whatever frameworks you’d like!

Now after doing the whole npm install thing, you’ll want to add a .env.local file and add in the environment variables you want to include. I want to have a name for the person who owns the site, their profile picture, and then all of their relevant links.

VITE_NAME=Cassidy Williams
VITE_PROFILE_PIC=https://github.com/cassidoo.png
VITE_GITHUB_LINK=https://github.com/cassidoo
VITE_TWITTER_LINK=https://twitter.com/cassidoo

You can set this up however you’d like, because this is just test data we’ll build off of! As you build out your own application, you can pull in your environment variables at any time with import.meta.env. Vite only exposes variables prefixed with VITE_ to client code, so as you play around with variables, make sure you prepend that to their names.

Ultimately, I made a rather large parsing function that I passed to my components to render into the template:

function getPageContent() {
  // Pull in all variables that start with VITE_ and turn it into an array
  let envVars = Object.entries(import.meta.env).filter((key) => key[0].startsWith('VITE_'))

  // Get the name and profile picture, since those are structured differently from the links
  const name = envVars.find((val) => val[0] === 'VITE_NAME')[1].replace(/_/g, ' ')
  const profilePic = envVars.find((val) => val[0] === 'VITE_PROFILE_PIC')[1]

  // ...

  // Pull all of the links, and properly format the names to be all lowercase and normalized
  let links = envVars.map((k) => {
    return [deEnvify(k[0]), k[1]]
  })

  // This object is what is ultimately sent to React to be rendered
  return { name, profilePic, links }
}

function deEnvify(str) {
  return str.replace('VITE_', '').replace('_LINK', '').toLowerCase().split('_').join(' ')
}
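As a quick sanity check of the deEnvify helper, here it is run on its own in plain Node, with the same logic copied verbatim:

```javascript
// Same logic as deEnvify above: strip the VITE_ prefix and _LINK
// suffix, lowercase, and turn remaining underscores into spaces.
function deEnvify(str) {
  return str.replace('VITE_', '').replace('_LINK', '').toLowerCase().split('_').join(' ')
}

console.log(deEnvify('VITE_GITHUB_LINK'));       // github
console.log(deEnvify('VITE_MY_COOL_SITE_LINK')); // my cool site
```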

I can now pull in these variables into a React function that renders the components I need:

// ...
  return (
    <div>
      <img alt={vars.name} src={vars.profilePic} />
      <p>{vars.name}</p>
      {vars.links.map((l, index) => {
        return <Link key={`link${index}`} name={l[0]} href={l[1]} />
      })}
    </div>
  )
// ...

And voilà! With a little CSS, we have a “link in bio” site!

Now let’s turn this into something that doesn’t rely on hard-coded variables. Generator time!

The Generator project

I’m going to start a new Vite site, just like I did before, but I’ll be using Vue for this one, for funzies.

Now in this project, I need to generate the environment variables we talked about above. So we’ll need an input for the name, an input for the profile picture, and then a set of inputs for each link that a person might want to make.

In my App.vue template, I’ll have these separated out like so:

<template>
  <div>
    <p>
      <span>Your name:</span>
      <input type="text" v-model="name" />
    </p>
    <p>
      <span>Your profile picture:</span>
      <input type="text" v-model="propic" />
    </p>
  </div>

  <List v-model:list="list" />

  <GenerateButton :name="name" :propic="propic" :list="list" />
</template>

In that List component, we’ll have dual inputs that gather all of the links our users might want to add:

<template>
  <div class="list">
    Add a link: <br />
    <input type="text" v-model="newItem.name" />
    <input type="text" v-model="newItem.url" @keyup.enter="addItem" />
    <button @click="addItem">+</button>

    <ListItem
      v-for="(item, index) in list"
      :key="index"
      :item="item"
      @delete="removeItem(index)"
    />
  </div>
</template>

So in this component, there are two inputs that add to an object called newItem; the ListItem component then lists out all of the links that have been created already, and each one can delete itself.

Now, we can take all of these values we’ve gotten from our users, and populate the GenerateButton component with them to make our DTN button work!

The template in GenerateButton is just an <a> tag with the link. The power in this one comes from the methods in the <script>.

// ...
methods: {
  convertLink(str) {
    // Convert each string passed in to use the VITE_WHATEVER_LINK syntax that our template expects
    return `VITE_${str.replace(/ /g, '_').toUpperCase()}_LINK`
  },
  convertListOfLinks() {
    let linkString = ''

    // Pass each link given by the user to our helper function
    this.list.forEach((l) => {
      linkString += `${this.convertLink(l.name)}=${l.url}&`
    })

    return linkString
  },
  // This function pushes all of our strings together into one giant link that will be put into our button that will deploy everything!
  siteLink() {
    return (
      // This is the base URL we need of our template repo, and the Netlify deploy trigger
      'https://app.netlify.com/start/deploy?repository=https://github.com/cassidoo/link-in-bio-template#' +
      'VITE_NAME=' +
      // Replacing spaces with underscores in the name so that the URL doesn't turn that into %20
      this.name.replace(/ /g, '_') +
      '&' +
      'VITE_PROFILE_PIC=' +
      this.propic +
      '&' +
      // Pulls all the links from our helper function above
      this.convertListOfLinks()
    )
  },
},
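Running the helpers by hand, outside Vue and with a single sample link, shows the shape of the URL the button ends up pointing at (VITE_PROFILE_PIC is omitted here for brevity):

```javascript
// Standalone copies of the GenerateButton methods, to show the URL they build.
function convertLink(str) {
  // e.g. "my cool site" -> "VITE_MY_COOL_SITE_LINK"
  return `VITE_${str.replace(/ /g, '_').toUpperCase()}_LINK`;
}

function convertListOfLinks(list) {
  let linkString = '';
  list.forEach((l) => {
    linkString += `${convertLink(l.name)}=${l.url}&`;
  });
  return linkString;
}

const name = 'Cassidy Williams';
const list = [{ name: 'github', url: 'https://github.com/cassidoo' }];

const siteLink =
  'https://app.netlify.com/start/deploy?repository=https://github.com/cassidoo/link-in-bio-template#' +
  'VITE_NAME=' + name.replace(/ /g, '_') + '&' +
  convertListOfLinks(list);

console.log(siteLink);
```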

Believe it or not, that’s it. You can add whatever styles you like or change up what variables are passed (like themes, toggles, etc.) to make this truly customizable!

Put it all together

Once these projects are deployed, they can work together in beautiful harmony!

This is the kind of project that can really illustrate the power of customization when you have access to user-generated environment variables. It may be a small one, but when you think about generating, say, resume websites, e-commerce themes, “/uses” websites, marketing sites… the possibilities are endless for turning this into a really cool boilerplate method.



Why Netlify?

I think it’s fair to think of Netlify as a CDN-backed static file host. But it would also be silly to think that’s all it is. That’s why I think it’s smart for them to have pages like this, comparing Netlify to GitHub Pages. GitHub Pages is a lot closer to only being a static file host. That’s still nice, but Netlify just brings so much more to the table.

Need to add a functional form to the site? Netlify does that.

Need to roll back to a previous version without any git-fu? Netlify does that.

Need to make sure you’re caching assets the best you can and breaking that cache for new versions? Netlify does that.

Need a preview of a pull request before you merge it? Netlify does that.

Need to set up redirects and rewrite rules so that your SPA behaves correctly? Netlify does that.

Need to run some server-side code? Netlify does that.

Need to do some A/B testing? Netlify does that.

That’s not all, just a random smattering of Netlify’s many features that take it to another level of hosting, with a developer experience beyond a static file host.

This same kind of thing came up on ShopTalk the other week. Why pick Netlify when you can toss files in an S3 bucket with CloudFront in front of it? It’s a fair question, as maybe the outcome isn’t that different. But there are 100 other things to think about that, once you do, make Netlify seem like a no-brainer.



Next.js on Netlify

(This is a sponsored post.)

If you want to put Next.js on Netlify, here’s a 5-minute tutorial¹. One of the many strengths of Next.js is that it can do server-side rendering (SSR) with a Node server behind it. But Netlify does static hosting, not Node hosting, right? Well, Netlify has functions, and those functions can handle the SSR. But you don’t even really need to know that; you can just use the plugin.

Need a little bit more hand-holding than that? You got it: Cassidy is doing a free webinar about all this next Thursday (March 4th, 2021) at 9am Pacific. That way you can watch live and ask questions and stuff. Netlify has a bunch of webinars they have now smartly archived on a new resources site.

  1. I’ve also mentioned this before if it sounds familiar, the fact that it supports the best of the entire rendering spectrum is very good.




Netlify

High five to Netlify for the ❥ sponsorship. Netlify is a great place to host your static (or not-so-static!) website because of the great speed, DX, pricing, and feature set. I’ve thought of Netlify a bunch of times just in the past week or so, because either they release something cool, or someone else is writing about it.



Netlify Edge Handlers

Netlify Edge Handlers are in Early Access (you can request it), but they are super cool and I think they are worth wrapping your brain around now. I think they change the nature of what Jamstack is and can be.

You know about CDNs. They are global. They host assets geographically close to people so that websites are faster. Netlify does this for everything. The more you can put on a CDN, the better. Jamstack promotes the concept that assets, as well as pre-rendered content, should be on a global CDN. Speed is a major benefit of that.

The mental math with Jamstack and CDNs has traditionally gone like this: I’m making tradeoffs. I’m doing more at build time, rather than at render time, because I want to be on that global CDN for the speed. But in doing so, I’m losing a bit of the dynamic power of using a server. Or, I’m still doing dynamic things, but I’m doing them at render time on the client because I have to.

That math is changing. What Edge Handlers are saying is: you don’t have to make that trade off. You can do dynamic server-y things and stay on the global CDN. Here’s an example.

  1. You have an area of your site at /blog and you’d like it to return recent blog posts which are in a cloud database somewhere. This Edge Handler only needs to run at /blog, so you configure the Edge Handler only to run at that URL.
  2. You write the code to fetch those posts in a JavaScript file and put it at: /edge-handlers/getBlogPosts.js.
  3. Now, when you build and deploy, that code will run — only at that URL — and do its job.

So what kind of JavaScript are you writing? It’s pretty focused. I’d think 95% of the time you’re outright replacing the original response. Like, maybe the HTML for /blog on your site is literally this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Test a Netlify Edge Function</title>
</head>
<body>
  <div id="blog-posts"></div>
</body>
</html>

With an Edge Handler, it’s not particularly difficult to get that original response, make the cloud data call, and replace the guts with blog posts.

export function onRequest(event) {
  event.replaceResponse(async () => {
    // Get the original response HTML
    const originalRequest = await fetch(event.request);
    const originalBody = await originalRequest.text();

    // Get the data
    const cloudRequest = await fetch(
      `https://css-tricks.com/wp-json/wp/v2/posts`
    );
    const data = await cloudRequest.json();

    // Replace the empty div with content
    // Maybe you could use Cheerio or something for more robustness
    const manipulatedResponse = originalBody.replace(
      `<div id="blog-posts"></div>`,
      `
        <h2>
          <a href="${data[0].link}">${data[0].title.rendered}</a>
        </h2>
        ${data[0].excerpt.rendered}
      `
    );

    let response = new Response(manipulatedResponse, {
      headers: {
        "content-type": "text/html",
      },
      status: 200,
    });

    return response;
  });
}

(I’m hitting this site’s REST API as an example cloud data store.)

It’s a lot like a client-side fetch, except instead of manipulating the DOM after a request for some data, this is happening before the response even makes it to the browser for the very first time. It’s code running on the CDN itself (“the edge”).

So, this must be slower than pre-rendered CDN content then, because it needs to make an additional network request before responding, right? Well, there is some overhead, but it’s faster than you probably think. The network request is happening on the network itself, so smokin’ fast computers on smokin’ fast networks. Likely, it’ll be a few milliseconds. They are only allowed 50ms of execution time anyway.

I was able to get this all up and running on my account that was granted access. It’s very nice that you can test them locally with:

netlify dev --trafficMesh

…which worked great both in development and deployed.

Anything you console.log() you’ll be able to set in the Netlify dashboard as well:

Here’s a repo with my functioning edge handler.


The post Netlify Edge Handlers appeared first on CSS-Tricks.


Netlify & Next.js

Cassidy Williams has been doing a Blogvent (blogging every day for a month) over on the Netlify Blog. A lot of the blog posts are about Next.js. There is a lot to like about Next.js. I just pulled one of Cassidy’s starters for fun. It’s very nice that it has React Fast Refresh built in. I like how on any given “Page” you can import and use a <Head> component to control stuff that would be in a <head>. This was my first tiny little play with Next, so excuse my basicness.

But the most compelling thing about Next.js, to me, is how easily it supports the entire rendering spectrum. It encourages you to do static-file rendering by default (smart), then if you need to do server-side rendering (SSR), you just update any given Page component to have this:

export async function getServerSideProps() {
  // Fetch data from external API
  const res = await fetch(`https://.../data`)
  const data = await res.json()

  // Pass data to the page via props
  return { props: { data } }
}

The assumption is that you’re doing SSR because you need to hit a server for data in order to render the page, but would prefer to do that server-side so that the page can render quickly and without JavaScript if needed (great for SEO). That assumes a Node server is sitting there ready to do that work. On Netlify, that means a function (Node Lambda), but you barely even have to think about it, because you just put this in your netlify.toml file:

[[plugins]]
  package = "@netlify/plugin-nextjs"

Now you’ve got static where you need it, server-rendered where you need it, but you aren’t giving up on client-side rendering either, which is nice and fast after the site is all booted up. I think it shoots some JSON around or something, framework magic.

I set up a quick SSR route off my homepage to have a play, and I can clearly see that my homepage (static) and my /cool route (SSR) both return static HTML on load.

I even had to prettify this source, as you get HTML minification out of the box.

I admit I like working in React, and Next.js is a darn nice framework to do it with because of the balance of simplicity and power. It’s great it runs on Netlify so easily.


The post Netlify & Next.js appeared first on CSS-Tricks.


How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • Automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted at a specific use case. We will see examples of such custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • A client side built with Gatsby, a React-based static site generator.
  • Finally, deployment of the app to a CDN configured and managed by Netlify.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with a traditional RDBMS, the major difference with Fauna is that it is a relational NoSQL system that offers all the capabilities of a legacy RDBMS. It is very flexible without compromising scalability or performance.

Fauna supports multiple APIs for data access:

  • GraphQL: an open source data query and manipulation language. If you are new to GraphQL, you can find more details at https://graphql.org/
  • Fauna Query Language (FQL): an API for querying Fauna. FQL has language-specific drivers, which makes it flexible to use with languages like JavaScript, Java, Go, etc. Find more details of FQL here.

In this article we will use GraphQL for the ShopNote application.

First things first: sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our usage.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will define the GraphQL schema and import it into the database. A GraphQL schema defines the structure of the data: the data types and the relationships between them. With a schema we can also specify what kinds of queries are allowed.

At this stage, let us create our project folder. Create a folder somewhere on your hard drive named shopnote. Inside it, create a file named shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, a description, an update time, and a list of Items. Each Item has properties like name, urgent, checked, and the ShopNote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it as participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship: one ShopNote can have multiple Items, while each Item can be related to at most one ShopNote.

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote and allShopNotes, each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also create the documents automatically. We will see that in a while.

Fauna supports a schema-free object relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each of the data records are inserted into the document. This forms a hierarchy which can be visualized as:

Here the data records can be arrays, objects, or any other supported types. With the Fauna data model we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the documents getting created as well. We will be able to view and query the records and utilize the power of indexes. In a while, you may see the data model structure appearing in your Fauna dashboard like this:

A point to note here: each document is identified by a unique ref attribute. There is also a ts field which holds the timestamp of the most recent modification to the document. The data record itself lives in the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries through Netlify Functions.
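To make the ref/ts/data layout concrete, here is a rough sketch of what a single document looks like. All values are made up, and the ref is simplified compared to the real Ref object FQL returns.

```javascript
// Rough sketch of a Fauna document's shape (hypothetical values).
// In real FQL responses, `ref` is a Ref object, `ts` is a timestamp in
// microseconds, and the record itself sits under `data`.
const exampleDocument = {
  ref: { collection: "ShopNote", id: "281384051222840839" }, // simplified
  ts: 1605544037980000,
  data: {
    name: "My Shopping List",
    description: "This is my today's list to buy from Tom's shop",
  },
};

console.log(exampleDocument.data.name); // "My Shopping List"
```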

With all this understanding, let us start using our shopnotes database, which has been created successfully and is ready for use.

Let us try some queries

Even though we have imported the schema and the underlying pieces are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation into the left panel of the GraphQL playground screen and execute it.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name,
        urgent
      }
    }
  }
}

Note that, as Fauna has already created the GraphQL mutation classes in the background, we can use them directly, like createShopNote. Once the mutation executes successfully, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items in a shopnote, and for updating and deleting a shopnote itself. These queries will be used later when we create the serverless functions.

If you are interested in running other queries in the GraphQL editor, you can find them here.

Create a Server Secret Key

Next, we need to create a secure server key to make sure access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with everything we had to do for Fauna’s setup. Let us move on to the next phase: creating serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this:

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify Functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it; simple functions automatically become APIs.

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies:

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls, and dotenv will help load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder named functions at the root of the project folder. We are going to keep all serverless functions in it.

Now create a subfolder called utils under the functions folder, and a file called query.js inside utils. All the serverless functions will need some common code to query the data store; that common code lives in query.js.

First we import the axios library and load the .env file. Next, we export an async function that takes a query and variables. Inside the async function, we make the call using axios, passing the secret key in the Authorization header. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file named get-shopnotes.js under the functions folder. It performs a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id,
            name,
            checked,
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one-time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called netlify.toml at the root of your project folder and add this content to it:

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This tells Netlify the location of the functions we have written so that it is known at build time.

Netlify automatically provides APIs for the functions. The URL to access such an API has the form /.netlify/functions/get-shopnotes, which may not be very user-friendly. We have written a redirect to make it /api/get-shopnotes instead.
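The splat redirect is just a path rewrite. As a purely illustrative sketch (this is not Netlify code, just the equivalent mapping), :splat substitutes whatever the * matched into the target path:

```javascript
// Illustrative only: what the netlify.toml redirect rule effectively does.
// "*" captures the rest of the path and ":splat" substitutes it into the target.
function toInternalPath(apiPath) {
  return apiPath.replace(/^\/api\//, "/.netlify/functions/");
}

console.log(toInternalPath("/api/get-shopnotes"));
// "/.netlify/functions/get-shopnotes"
```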

Ok, we are done. Now in command prompt type,

netlify dev

By default the app runs on localhost:8888, where we can access the serverless function as an API.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations! You have your first serverless function up and running.

Let us now write the next serverless function, one that creates a ShopNote. This is going to be simple. Create a file named create-shopnote.js under the functions folder. We need to write a mutation, passing the required parameters.

//create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Pay attention to the parameter type ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the query as well.

We have destructured the payload to get the required information. Once we have it, we simply call the query method to create a ShopNote.
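For reference, a request body for this function might look like the following sketch. All the values are made up; the nested items.create shape mirrors the nested create we used in the playground mutation earlier.

```javascript
// Hypothetical JSON body for POST /api/create-shopnote (values are made up).
const createShopNotePayload = {
  name: "Weekend groceries",
  description: "Saturday market run",
  updatedAt: new Date().toISOString(),
  items: {
    create: [
      { name: "Eggs - 12 pk", urgent: true, checked: false },
      { name: "Bread - 1 loaf", urgent: false, checked: false },
    ],
  },
};

// This is what you would send as the request body:
const body = JSON.stringify(createShopNotePayload);
console.log(JSON.parse(body).items.create.length); // 2
```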

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is a screenshot from Postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember that ShopNote and Item are related? So to create an item, we must say which ShopNote it is going to be part of. Here is our next serverless function, which adds an item to an existing ShopNote.

//add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item’s properties: the name, whether it is urgent, the checked value, and the note the item should be part of. Let’s see how this API can be called using Postman:

As you can see, we pass the id of the note while creating an item for it.
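Spelling out the shape of that request: in Fauna’s GraphQL API, a relation field can take a connect key with the id of an existing document to link to. A sketch of the body (the id value and item details are made up):

```javascript
// Hypothetical JSON body for POST /api/add-item. The `connect` key links the
// new item to an existing ShopNote document by its _id (made-up value here).
const addItemPayload = {
  name: "Coffee - 500g",
  urgent: false,
  checked: false,
  note: { connect: "281384051222840839" },
};

console.log(JSON.stringify(addItemPayload));
```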

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. If you are interested, you can look at those functions in the GitHub repository.

However, after creating the rest of the APIs, you should have a directory structure like this:

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it! Next, let us build some user interfaces to show the shop notes and add items to them. To do that, we will use Gatsby.js (aka Gatsby), which is a super cool, React-based static site generator.

The following section requires basic knowledge of React. If you are new to it, you can learn it here. If you are familiar with another user interface technology, like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either by using one of the starter projects or by initializing it manually. We will build things from scratch to understand them better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a develop script.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called gatsby-config.js. Create a file named gatsby-config.js at the root of the project folder with the following content:

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named src at the root of the project folder, a subfolder named pages under src, and a file named index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
}

Let’s run it. We would normally use the command gatsby develop to run the app locally, but as we have to run the client-side application alongside the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: a header component consisting of the logo, the heading, and the create button for creating a shopnote.
  • Shopnotes: this component contains the list of shop notes (Note components).
  • Note: an individual note. Each note contains one or more items.
  • Item: an individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies to make the user interface functional and look better. Open a command prompt at the root of the project folder and install these dependencies:

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we check whether the data has loaded. If so, we render the Shopnotes component, passing the data we received from the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file here. The index.js file creates the initial route (/) for the user interface. It uses the other components, Shopnotes, Note, and Item, to make the UI fully operational. We will not go to great lengths to explain each of these UI components. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content of this CSS file into it. Then import both stylesheets at the top of index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. To keep this article from getting too lengthy, we won’t get into an explanation of each of the actions on items and notes; you can find all the code in the GitHub repo. At this stage, the directory structure may look like this:

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have already created the API. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!

Let’s Deploy

All good so far. But there is one issue: we are running the app locally. While productive, that’s not ideal for public access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to your Git repository, say, shopnote. You already have an account with Netlify, so log in and click on the New site from Git button.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details, like the build command and publish directory, as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields. Click on deploy.

You should see the build succeed in a couple of minutes, and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70% to 80% of the features that once required a custom back end can now be done either on the front end or with existing APIs and services.
  • Fauna provides the data API for the client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify Functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack, giving a fast end-user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.


The post How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna appeared first on CSS-Tricks.


Netlify Background Functions

As quickly as I can:

  • AWS Lambda is great: it allows you to run server-side code without really running a server. This is what “serverless” largely means.
  • Netlify Functions run on AWS Lambda and make them way easier to use. For example, you just chuck some scripts into a folder and they deploy when you push to your main branch. Plus, you get logs.
  • Netlify Functions used to be limited to a 10-second execution time, even though Lambdas can run for 15 minutes.
  • Now, you can run 15-minute functions on Netlify too, by appending -background to the filename, like my-function-background.js. (You can also write them in Go.)
  • This means you can do long-ish running tasks, like spin up a headless browser and scrape some data, process images to build into a PDF and email it, sync data across systems with batch API requests… or anything else that takes a lot longer than 10 seconds to do.

The post Netlify Background Functions appeared first on CSS-Tricks.


Create an FAQ Slack app with Netlify functions and FaunaDB

Sometimes, when you’re looking for a quick answer, it’s really useful to have an FAQ system in place, rather than waiting for someone to respond to a question. Wouldn’t it be great if Slack could just answer these FAQs for us? In this tutorial, we’re going to be making just that: a slash command for Slack that will answer user FAQs. We’ll be storing our answers in FaunaDB, using FQL to search the database, and utilising a Netlify function to provide a serverless endpoint to connect Slack and FaunaDB.

Prerequisites

This tutorial assumes you have the following requirements:

  • GitHub account, used to log in to Netlify and Fauna, as well as to store our code
  • Slack workspace with permission to create and install new apps
  • Node.js v12

Create npm package

To get started, create a new folder and initialise an npm package by running npm init -y from inside it. After the package has been created, we have a few npm packages to install.

Run this to install all the packages we will need for this tutorial:

npm install express body-parser faunadb encoding serverless-http netlify-lambda

These packages are explained below, but if you are already familiar with them, feel free to skip ahead.

Encoding has been installed due to a plugin error occurring in @netlify/plugin-functions-core at the time of writing and may not be needed when you follow this tutorial.

Packages

Express is a web application framework that will allow us to simplify writing multiple endpoints for our function. Netlify functions require handlers for each endpoint, but express combined with serverless-http will allow us to write the endpoints all in one place.

Body-parser is an express middleware which will take care of the application/x-www-form-urlencoded data Slack will be sending to our function.

Faunadb is an npm module that allows us to interact with the database through the FaunaDB JavaScript driver. It allows us to pass queries from our function to the database, in order to get the answers.

Serverless-http is a module that wraps Express applications to the format expected by Netlify functions, meaning we won’t have to rewrite our code when we shift from local development to Netlify.

Netlify-lambda is a tool which will allow us to build and serve our functions locally, in the same way they will be built and deployed on Netlify. This means we can develop locally before pushing our code to Netlify, increasing the speed of our workflow.

Create a function

With our npm packages installed, it’s time to begin work on the function. We’ll be using serverless to wrap an express app, which will allow us to deploy it to Netlify later. To get started, create a file called netlify.toml, and add the following into it:

[build]
  functions = "functions"

We will use a .gitignore file, to prevent our node_modules and functions folders from being added to git later. Create a file called .gitignore, and add the following:

functions/

node_modules/

We will also need a folder called src, and a file inside it called server.js. Your final file structure should look like this:

.
├── .gitignore
├── netlify.toml
├── package.json
└── src
    └── server.js

With this in place, create a basic express app by inserting the code below into server.js:

const express = require("express");
const bodyParser = require("body-parser");
const fauna = require("faunadb");
const serverless = require("serverless-http");

const app = express();

module.exports.handler = serverless(app);

Check out the final line; it looks a little different to a regular express app. Rather than listening on a port, we’re passing our app into serverless and using this as our handler, so that Netlify can invoke our function.

Let’s set up our body parser to use application/x-www-form-urlencoded data, as well as putting a router in place. Add the following to server.js after defining app: 

const router = express.Router();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use("/.netlify/functions/server", router);

Notice that the router is using /.netlify/functions/server as an endpoint. This is so that Netlify will be able to correctly deploy the function later in the tutorial. It also means we will need to append this path to any base URL in order to invoke the function.

Create a test route

With a basic app in place, let’s create a test route to check everything is working. Insert the following code to create a simple GET route that returns a JSON object:

router.get("/test", (req, res) => {
  res.json({ hello: "world" });
});

With this route in place, let’s spin up our function on localhost, and check that we get a response. We’ll be using netlify-lambda to serve our app, so that we can imitate a Netlify function locally on port 9000. In our package.json, add the following lines into the scripts section:

"start": "./node_modules/.bin/netlify-lambda serve src",    "build": "./node_modules/.bin/netlify-lambda build src"

With this in place, after saving the file, we can run npm start to begin netlify-lambda on port 9000.

The build command will be used when we deploy to Netlify later.

Once it is up and running, we can visit http://localhost:9000/.netlify/functions/server/test to check our function is working as expected.

The great thing about netlify-lambda is it will listen for changes to our code, and automatically recompile whenever we update something, so we can leave it running for the duration of this tutorial.

Start ngrok URL

Now we have a test route working on our local machine, let’s make it available online. To do this, we’ll be using ngrok, an npm package that provides a public URL for our function. If you don’t have ngrok installed already, first run npm install -g ngrok to globally install it on your machine. Then run ngrok http 9000, which will automatically direct traffic to our function running on port 9000.

After starting ngrok, you should see a forwarding URL in the terminal, which we can visit to confirm our server is available online. Copy this base URL to your browser, and follow it with /.netlify/functions/server/test. You should see the same result as when we made our calls on localhost, which means we can now use this URL as an endpoint for Slack!

Each time you restart ngrok, it creates a new URL, so if you need to stop it at any point, you will need to update your URL endpoint in Slack.

Setting up Slack

Now that we have a function in place, it’s time to move to Slack to create the app and slash command. We will have to deploy this app to our workspace, as well as making a few updates to our code to connect our function. For a more in-depth set of instructions on how to create a new slash command, you can follow the official Slack documentation. For a streamlined set of instructions, follow along below:

Create a new Slack app

First off, let’s create our new Slack app for these FAQs. Visit https://api.slack.com/apps and select Create New App to begin. Give your app a name (I used Fauna FAQ), and select a development workspace for the app.

Create a slash command

After creating the app, we need to add a slash command to it, so that we can interact with the app. Select slash commands from the menu after the app has been created, then create a new command. Fill in the following form with the name of your command (I used /faq) as well as providing the URL from ngrok. Don’t forget to add /.netlify/functions/server/ to the end!

Install app to workspace

Once you have created your slash command, click on basic information in the sidebar on the left to return to the app’s main page. From here, select the dropdown “Install app to your workspace” and click the button to install it.

Once you have allowed access, the app will be installed, and you’ll be able to start using the slash command in your workspace.

Update the function

With our new app in place, we’ll need to create a new endpoint for Slack to send the requests to. For this, we’ll use the root endpoint for simplicity. The endpoint will need to be able to take a post request with application/x-www-form-urlencoded data, then return a 200 status response with a message. To do this, let’s create a new post route at the root by adding the following code to server.js:

router.post("/", async (req, res) => {

});

Now that we have our endpoint, we can also extract and view the text that has been sent by Slack by adding the following line before we set the status:

const text = req.body.text;
console.log(`Input text: ${text}`);

For now, we’ll just pass this text into the response and send it back instantly, to ensure the Slack app and function are communicating.

res.status(200);
res.send(text);

Now, when you type /faq <somequestion> in a Slack channel, you should get the same message back from the slash command.

Formatting the response

Rather than just sending back plaintext, we can make use of Slack’s Block Kit to use specialised UI elements to improve the look of our answers. If you want to create a more complex layout, Slack provides a Block Kit builder to visually design your layout.

For now, we’re going to keep things simple, and just provide a response where each answer is separated by a divider. Add the following function to your server.js file after the post route:

const format = (answers) => {
  if (answers.length == 0) {
    answers = ["No answers found"];
  }

  let formatted = {
    blocks: [],
  };

  for (const answer of answers) {
    formatted["blocks"].push({
      type: "divider",
    });
    formatted["blocks"].push({
      type: "section",
      text: {
        type: "mrkdwn",
        text: answer,
      },
    });
  }

  return formatted;
};

With this in place, we now need to pass our answers into this function, to format the answers before returning them to Slack. Update the following in the root post route:

let answers = [text];
const formattedAnswers = format(answers);

Now when we enter the same command to the slash app, we should get back the same message, but this time in a formatted version!

Setting up Fauna

With our slack app in place, and a function to connect to it, we now need to start working on the database to store our answers. If you’ve never set up a database with FaunaDB before, there is some great documentation on how to quickly get started. A brief step-by-step overview for the database and collection is included below:

Create database

First, we’ll need to create a new database. After logging into the Fauna dashboard online, click New Database. Give your new database a name you’ll remember (I used “slack-faq”) and save the database.

Create collection

With this database in place, we now need a collection. Click the “New Collection” button that should appear on your dashboard, and give your collection a name (I used “faq”). The history days and TTL values can be left as their defaults, but you should ensure you don’t add a value to the TTL field, as we don’t want our documents to be removed automatically after a certain time.

Add question / answer documents

Now we have a database and collection in place, we can start adding some documents to it. Each document should follow the structure:

{
  question: "a question string",
  answer: "an answer string",
  qTokens: [
    "first token",
    "second token",
    "third token"
  ]
}

The qToken values should be key terms in the question, as we will use them for a tokenized search when we can’t match a question exactly. You can add as many qTokens as you like for each question. The more relevant the tokens are, the more accurate results will be. For example, if our question is “where are the bathrooms”, we should include the qTokens “bathroom”, “bathrooms”, “toilet”, “toilets” and any other terms you may think people will search for when trying to find information about a bathroom.
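To make the idea concrete, here is a small standalone sketch of the token matching, with a plain array standing in for the Fauna collection (the real search goes through an index, as covered in the "Searching the database" section):

```javascript
// Standalone sketch of tokenized matching; a plain array stands in for the
// Fauna collection here, purely to illustrate how qTokens catch variations.
const docs = [
  {
    answer: "Next to the elevators on each floor",
    qTokens: ["toilet", "bathroom", "toilets", "bathrooms"],
  },
  {
    answer: "Lunch break is *12 - 1pm*",
    qTokens: ["lunch", "break", "eat"],
  },
];

function tokenSearch(question) {
  const tokens = question.toLowerCase().split(/[ ]+/);
  // Keep any document whose qTokens share at least one token with the question
  return docs
    .filter((doc) => tokens.some((t) => doc.qTokens.includes(t)))
    .map((doc) => doc.answer);
}

console.log(tokenSearch("where is the bathroom"));
// → ["Next to the elevators on each floor"]
```

The more synonyms you store per question, the more phrasings this kind of lookup will catch.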

The questions I used to develop a proof of concept are as follows:

{
  question: "where is the lobby",
  answer: "On the third floor",
  qTokens: ["lobby", "reception"],
},
{
  question: "when is payday",
  answer: "On the first Monday of each month",
  qTokens: ["payday", "pay", "paid"],
},
{
  question: "when is lunch",
  answer: "Lunch break is *12 - 1pm*",
  qTokens: ["lunch", "break", "eat"],
},
{
  question: "where are the bathrooms",
  answer: "Next to the elevators on each floor",
  qTokens: ["toilet", "bathroom", "toilets", "bathrooms"],
},
{
  question: "when are my breaks",
  answer: "You can take a break whenever you want",
  qTokens: ["break", "breaks"],
}

Feel free to take this time to add as many documents as you like, and as many qTokens as you think each question needs, then we’ll move on to the next step.

Creating Indexes

With these questions in place, we will create two indexes to allow us to search the database. First, create an index called “answers_by_question”, selecting question as the term and answer as the value. This will allow us to search all answers by their associated question.

Then, create an index called “answers_by_qTokens”, selecting qTokens as the term and answer as the value. We will use this index to allow us to search through the qTokens of all items in the database.
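If you prefer the shell to the dashboard forms, the equivalent FQL for creating both indexes looks roughly like this (assuming your collection is named "faq"; indexing the qTokens array field gives one index entry per token):

```
q.CreateIndex({
  name: "answers_by_question",
  source: q.Collection("faq"),
  terms: [{ field: ["data", "question"] }],
  values: [{ field: ["data", "answer"] }],
})

q.CreateIndex({
  name: "answers_by_qTokens",
  source: q.Collection("faq"),
  terms: [{ field: ["data", "qTokens"] }],
  values: [{ field: ["data", "answer"] }],
})
```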

Searching the database

To run a search in our database, we will do two things. First, we’ll run a search for an exact match to the question, so we can provide a single answer to the user. Second, if this search doesn’t find a result, we’ll do a search on the qTokens each answer has, returning any results that provide a match. We’ll use Fauna’s online shell to demonstrate and explain these queries, before using them in our function.

Exact Match

Before searching the tokens, we’ll test whether we can match the input question exactly, as this will allow for the best answer to what the user has asked. To search our questions, we will match against the “answers_by_question” index, then paginate our answers. Copy the following code into the online Fauna shell to see this in action:

q.Paginate(q.Match(q.Index("answers_by_question"), "where is the lobby"))

If you have a question matching the “where is the lobby” example above, you should see the expected answer of “On the third floor” as a result.

Searching the tokens

For cases where there is no exact match on the database, we will have to use our qTokens to find any relevant answers. For this, we will match against the “answers_by_qTokens” index we created and again paginate our answers. Copy the following into the online shell to see how this works:

q.Paginate(q.Match(q.Index("answers_by_qTokens"), "break"))

If you have any questions with the qToken “break” from the example questions, you should see all answers returned as a result.

Connect function to Fauna

We have our searches figured out, but currently we can only run them from the online shell. To use these in our function, there is some configuration required, as well as an update to our function’s code.

Function configuration

To connect to Fauna from our function, we will need to create a server key. From your database’s dashboard, select security in the left hand sidebar, and create a new key. Give your new key a name you will recognise, and ensure that the dropdown has Server selected, not Admin. Finally, once the key has been created, add the following code to server.js before the test route, replacing the <secretKey> value with the secret provided by Fauna.

const q = fauna.query;
const client = new fauna.Client({
  secret: "<secretKey>",
});

It would be better to store this key in an environment variable in Netlify, rather than directly in the code, but that is beyond the scope of this tutorial. If you would like to use environment variables, this Netlify post explains how to do so.
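As a sketch of that approach — assuming you named the variable FAUNA_SECRET in Netlify's environment settings — a small helper makes a missing value fail loudly instead of silently:

```javascript
// Assumed variable name: FAUNA_SECRET, set in Netlify's environment settings.
// Taking env as a parameter keeps the helper easy to test.
function getFaunaSecret(env = process.env) {
  if (!env.FAUNA_SECRET) {
    throw new Error("FAUNA_SECRET environment variable is not set");
  }
  return env.FAUNA_SECRET;
}

// The client would then be constructed with:
// const client = new fauna.Client({ secret: getFaunaSecret() });
```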

Update function code

To include our new search queries in the function, copy the following code into server.js after the post route:

const searchText = async (text) => {
  console.log("Beginning searchText");
  const answer = await client.query(
    q.Paginate(q.Match(q.Index("answers_by_question"), text))
  );
  console.log(`searchText response: ${answer.data}`);
  return answer.data;
};

const getTokenResponse = async (text) => {
  console.log("Beginning getTokenResponse");
  let answers = [];
  const questionTokens = text.split(/[ ]+/);
  console.log(`Tokens: ${questionTokens}`);
  for (const token of questionTokens) {
    const tokenResponse = await client.query(
      q.Paginate(q.Match(q.Index("answers_by_qTokens"), token))
    );
    answers = [...answers, ...tokenResponse.data];
  }
  console.log(`Token answers: ${answers}`);
  return answers;
};

These functions replicate the same functionality as the queries we previously ran in the online Fauna shell, but now we can utilise them from our function.
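One way to wire the two searches together inside the post route — exact match first, token search only when it comes up empty — can be sketched with the queries stubbed out. This wiring is my suggestion rather than something spelled out in the tutorial:

```javascript
// Exact match first, token search as fallback. The two query functions are
// passed in as parameters so the control flow can be seen (and tested)
// without a live Fauna connection.
async function findAnswers(text, exactSearch, tokenSearch) {
  const exact = await exactSearch(text);
  if (exact.length > 0) {
    return exact; // a single, precise answer
  }
  return tokenSearch(text); // possibly several related answers
}
```

Inside the route, this would be called as `const answers = await findAnswers(text, searchText, getTokenResponse);` before the answers are passed to format.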

Deploy to Netlify

Now that the function is searching the database, the only thing left to do is put it in the cloud, rather than on a local machine. To do this, we’ll be making use of a Netlify function deployed from a GitHub repository.

First things first, add a new repo on GitHub and push your code to it. Once the code is there, go to Netlify and either sign up or log in using your GitHub profile. From the Netlify home page, select “New site from git” to deploy a new site, using the repo you’ve just created on GitHub.

If you have never deployed a site in Netlify before, this post explains the process to deploy from git.

Ensure while you are creating the new site, that your build command is set to npm run build, to have Netlify build the function before deployment. The publish directory can be left blank, as we are only deploying a function, rather than any pages.

Netlify will now build and deploy your repo, generating a unique URL for the site deployment. We can use this base URL to access the test endpoint of our function from earlier, to ensure things are working.

The last thing to do is update the Slack endpoint to our new URL! Navigate to your app, then select ‘slash commands’ in the left sidebar. Click on the pencil icon to edit the slash command and paste in the new URL for the function. Finally, you can use your new slash command in any authorised Slack channels!

Conclusion

There you have it: an entirely serverless, functional Slack slash command. We have used FaunaDB to store our answers and connected to it through a Netlify function. Also, by using Express, we have the flexibility to add further endpoints to the function for adding new questions, or anything else you can think up to further extend this project! Hopefully now, instead of waiting around for someone to answer your questions, you can just use /faq and get the answer instantly!


Matthew Williams is a software engineer from Melbourne, Australia who believes the future of technology is serverless. If you’re interested in more from him, check out his Medium articles, or his GitHub repos.


The post Create an FAQ Slack app with Netlify functions and FaunaDB appeared first on CSS-Tricks.
