Tag: FaunaDB

Create an FAQ Slack app with Netlify functions and FaunaDB

Sometimes, when you’re looking for a quick answer, it’s really useful to have an FAQ system in place, rather than waiting for someone to respond to a question. Wouldn’t it be great if Slack could just answer these FAQs for us? In this tutorial, we’re going to be making just that: a slash command for Slack that will answer user FAQs. We’ll be storing our answers in FaunaDB, using FQL to search the database, and utilising a Netlify function to provide a serverless endpoint to connect Slack and FaunaDB.

Prerequisites

This tutorial assumes you have the following requirements:

  • GitHub account, used to log in to Netlify and Fauna, as well as storing our code
  • Slack workspace with permission to create and install new apps
  • Node.js v12

Create npm package

To get started, create a new folder and initialise an npm package by running npm init -y from inside the folder (or the equivalent command for your package manager of choice). After the package has been created, we have a few npm packages to install.

Run this to install all the packages we will need for this tutorial:

npm install express body-parser faunadb encoding serverless-http netlify-lambda

These packages are explained below, but if you are already familiar with them, feel free to skip ahead.

Encoding has been installed due to a plugin error occurring in @netlify/plugin-functions-core at the time of writing and may not be needed when you follow this tutorial.

Packages

Express is a web application framework that will allow us to simplify writing multiple endpoints for our function. Netlify functions require handlers for each endpoint, but express combined with serverless-http will allow us to write the endpoints all in one place.

Body-parser is an express middleware which will take care of the application/x-www-form-urlencoded data Slack will be sending to our function.

Faunadb is an npm module that allows us to interact with the database through the FaunaDB JavaScript driver. It allows us to pass queries from our function to the database in order to get the answers.

Serverless-http is a module that wraps Express applications to the format expected by Netlify functions, meaning we won’t have to rewrite our code when we shift from local development to Netlify.

Netlify-lambda is a tool which will allow us to build and serve our functions locally, in the same way they will be built and deployed on Netlify. This means we can develop locally before pushing our code to Netlify, increasing the speed of our workflow.

Create a function

With our npm packages installed, it’s time to begin work on the function. We’ll be using serverless-http to wrap an Express app, which will allow us to deploy it to Netlify later. To get started, create a file called netlify.toml, and add the following into it:

[build]
  functions = "functions"

We will use a .gitignore file, to prevent our node_modules and functions folders from being added to git later. Create a file called .gitignore, and add the following:

functions/

node_modules/

We will also need a folder called src, and a file inside it called server.js. Your final file structure should look like:
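The original screenshot isn’t reproduced here, but based on the files created so far the structure should look roughly like this:

.
├── .gitignore
├── netlify.toml
├── package.json
├── node_modules/
└── src
    └── server.js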

With this in place, create a basic express app by inserting the code below into server.js:

const express = require("express");
const bodyParser = require("body-parser");
const fauna = require("faunadb");
const serverless = require("serverless-http");

const app = express();

module.exports.handler = serverless(app);

Check out the final line; it looks a little different to a regular express app. Rather than listening on a port, we’re passing our app into serverless and using this as our handler, so that Netlify can invoke our function.

Let’s set up our body parser to use application/x-www-form-urlencoded data, as well as putting a router in place. Add the following to server.js after defining app: 

const router = express.Router();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use("/.netlify/functions/server", router);

Notice that the router is using /.netlify/functions/server as an endpoint. This is so that Netlify will be able to correctly deploy the function later in the tutorial. This means we will need to add this to any base URLs, in order to invoke the function.

Create a test route

With a basic app in place, let’s create a test route to check everything is working. Insert the following code to create a simple GET route that returns a JSON object:

router.get("/test", (req, res) => {  res.json({ hello: "world" }); });

With this route in place, let’s spin up our function on localhost, and check that we get a response. We’ll be using netlify-lambda to serve our app, so that we can imitate a Netlify function locally on port 9000. In our package.json, add the following lines into the scripts section:

"start": "./node_modules/.bin/netlify-lambda serve src",    "build": "./node_modules/.bin/netlify-lambda build src"

With this in place, after saving the file, we can run npm start to begin netlify-lambda on port 9000.

The build command will be used when we deploy to Netlify later.

Once it is up and running, we can visit http://localhost:9000/.netlify/functions/server/test to check our function is working as expected.

The great thing about netlify-lambda is it will listen for changes to our code, and automatically recompile whenever we update something, so we can leave it running for the duration of this tutorial.

Start ngrok URL

Now we have a test route working on our local machine, let’s make it available online. To do this, we’ll be using ngrok, an npm package that provides a public URL for our function. If you don’t have ngrok installed already, first run npm install -g ngrok to globally install it on your machine. Then run ngrok http 9000 which will automatically direct traffic to our function running on port 9000.

After starting ngrok, you should see a forwarding URL in the terminal, which we can visit to confirm our server is available online. Copy this base URL to your browser, and follow it with /.netlify/functions/server/test. You should see the same result as when we made our calls on localhost, which means we can now use this URL as an endpoint for Slack!

Each time you restart ngrok, it creates a new URL, so if you need to stop it at any point, you will need to update your URL endpoint in Slack.

Setting up Slack

Now that we have a function in place, it’s time to move to Slack to create the app and slash command. We will have to deploy this app to our workspace, as well as making a few updates to our code to connect our function. For a more in-depth set of instructions on how to create a new slash command, you can follow the official Slack documentation. For a streamlined set of instructions, follow along below:

Create a new Slack app

First off, let’s create our new Slack app for these FAQs. Visit https://api.slack.com/apps and select Create New App to begin. Give your app a name (I used Fauna FAQ), and select a development workspace for the app.

Create a slash command

After creating the app, we need to add a slash command to it, so that we can interact with the app. Select slash commands from the menu after the app has been created, then create a new command. Fill in the following form with the name of your command (I used /faq) as well as providing the URL from ngrok. Don’t forget to add /.netlify/functions/server/ to the end!

Install app to workspace

Once you have created your slash command, click on basic information in the sidebar on the left to return to the app’s main page. From here, select the dropdown “Install app to your workspace” and click the button to install it.

Once you have allowed access, the app will be installed, and you’ll be able to start using the slash command in your workspace.

Update the function

With our new app in place, we’ll need to create a new endpoint for Slack to send the requests to. For this, we’ll use the root endpoint for simplicity. The endpoint will need to be able to take a post request with application/x-www-form-urlencoded data, then return a 200 status response with a message. To do this, let’s create a new post route at the root by adding the following code to server.js:

router.post("/", async (req, res) => {   });

Now that we have our endpoint, we can also extract and view the text that has been sent by Slack by adding the following lines before we set the status:

const text = req.body.text;
console.log(`Input text: ${text}`);

For now, we’ll just pass this text into the response and send it back instantly, to ensure the Slack app and function are communicating.

res.status(200);
res.send(text);

Now, when you type /faq <somequestion> in a Slack channel, you should get back the same message from the Slack slash command.

Formatting the response

Rather than just sending back plaintext, we can make use of Slack’s Block Kit to use specialised UI elements to improve the look of our answers. If you want to create a more complex layout, Slack provides a Block Kit builder to visually design your layout.

For now, we’re going to keep things simple, and just provide a response where each answer is separated by a divider. Add the following function to your server.js file after the post route:

const format = (answers) => {
  if (answers.length == 0) {
    answers = ["No answers found"];
  }

  let formatted = {
    blocks: [],
  };

  for (const answer of answers) {
    formatted["blocks"].push({
      type: "divider",
    });
    formatted["blocks"].push({
      type: "section",
      text: {
        type: "mrkdwn",
        text: answer,
      },
    });
  }

  return formatted;
};

With this in place, we now need to pass our answers into this function, to format the answers before returning them to Slack. Update the following in the root post route:

let answers = [text];
const formattedAnswers = format(answers);
res.status(200);
res.send(formattedAnswers);

Now when we enter the same command to the slash app, we should get back the same message, but this time in a formatted version!
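For reference, with the format function above, echoing a single string such as “where is the lobby” produces a Block Kit payload shaped roughly like this (the exact text depends on what you typed):

{
  "blocks": [
    { "type": "divider" },
    {
      "type": "section",
      "text": { "type": "mrkdwn", "text": "where is the lobby" }
    }
  ]
}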

Setting up Fauna

With our slack app in place, and a function to connect to it, we now need to start working on the database to store our answers. If you’ve never set up a database with FaunaDB before, there is some great documentation on how to quickly get started. A brief step-by-step overview for the database and collection is included below:

Create database

First, we’ll need to create a new database. After logging into the Fauna dashboard online, click New Database. Give your new database a name you’ll remember (I used “slack-faq”) and save the database.

Create collection

With this database in place, we now need a collection. Click the “New Collection” button that should appear on your dashboard, and give your collection a name (I used “faq”). The history days and TTL values can be left as their defaults, but you should ensure you don’t add a value to the TTL field, as we don’t want our documents to be removed automatically after a certain time.

Add question / answer documents

Now we have a database and collection in place, we can start adding some documents to it. Each document should follow the structure:

{
  question: "a question string",
  answer: "an answer string",
  qTokens: [
    "first token",
    "second token",
    "third token"
  ]
}

The qToken values should be key terms in the question, as we will use them for a tokenized search when we can’t match a question exactly. You can add as many qTokens as you like for each question. The more relevant the tokens are, the more accurate the results will be. For example, if our question is “where are the bathrooms”, we should include the qTokens “bathroom”, “bathrooms”, “toilet”, “toilets” and any other terms you think people might search for when trying to find information about a bathroom.

The questions I used to develop a proof of concept are as follows:

{
  question: "where is the lobby",
  answer: "On the third floor",
  qTokens: ["lobby", "reception"],
},
{
  question: "when is payday",
  answer: "On the first Monday of each month",
  qTokens: ["payday", "pay", "paid"],
},
{
  question: "when is lunch",
  answer: "Lunch break is *12 - 1pm*",
  qTokens: ["lunch", "break", "eat"],
},
{
  question: "where are the bathrooms",
  answer: "Next to the elevators on each floor",
  qTokens: ["toilet", "bathroom", "toilets", "bathrooms"],
},
{
  question: "when are my breaks",
  answer: "You can take a break whenever you want",
  qTokens: ["break", "breaks"],
}

Feel free to take this time to add as many documents as you like, and as many qTokens as you think each question needs, then we’ll move on to the next step.

Creating Indexes

With these questions in place, we will create two indexes to allow us to search the database. First, create an index called “answers_by_question”, selecting question as the term and answer as the value. This will allow us to search all answers by their associated question.

Then, create an index called “answers_by_qTokens”, selecting qTokens as the term and answer as the value. We will use this index to allow us to search through the qTokens of all items in the database.
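If you prefer the shell over the dashboard UI, a rough FQL equivalent of these two indexes is sketched below (run each statement separately, and adjust the collection name if you didn’t call yours “faq”):

CreateIndex({
  name: "answers_by_question",
  source: Collection("faq"),
  terms: [{ field: ["data", "question"] }],
  values: [{ field: ["data", "answer"] }]
})

CreateIndex({
  name: "answers_by_qTokens",
  source: Collection("faq"),
  // Indexing an array field creates one index entry per token
  terms: [{ field: ["data", "qTokens"] }],
  values: [{ field: ["data", "answer"] }]
})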

Searching the database

To run a search in our database, we will do two things. First, we’ll run a search for an exact match to the question, so we can provide a single answer to the user. Second, if this search doesn’t find a result, we’ll do a search on the qTokens each answer has, returning any results that provide a match. We’ll use Fauna’s online shell to demonstrate and explain these queries, before using them in our function.

Exact Match

Before searching the tokens, we’ll test whether we can match the input question exactly, as this will allow for the best answer to what the user has asked. To search our questions, we will match against the “answers_by_question” index, then paginate our answers. Copy the following code into the online Fauna shell to see this in action:

q.Paginate(q.Match(q.Index("answers_by_question"), "where is the lobby"))

If you have a question matching the “where is the lobby” example above, you should see the expected answer of “On the third floor” as a result.
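Since the index stores only the answer as its value, the shell result should be a simple page of strings, roughly:

{ data: ["On the third floor"] }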

Searching the tokens

For cases where there is no exact match on the database, we will have to use our qTokens to find any relevant answers. For this, we will match against the “answers_by_qTokens” index we created and again paginate our answers. Copy the following into the online shell to see how this works:

q.Paginate(q.Match(q.Index("answers_by_qTokens"), "break"))

If you have any questions with the qToken “break” from the example questions, you should see all answers returned as a result.

Connect function to Fauna

We have our searches figured out, but currently we can only run them from the online shell. To use these in our function, there is some configuration required, as well as an update to our function’s code.

Function configuration

To connect to Fauna from our function, we will need to create a server key. From your database’s dashboard, select security in the left hand sidebar, and create a new key. Give your new key a name you will recognise, and ensure that the dropdown has Server selected, not Admin. Finally, once the key has been created, add the following code to server.js before the test route, replacing the <secretKey> value with the secret provided by Fauna.

const q = fauna.query;
const client = new fauna.Client({
  secret: "<secretKey>",
});

It would be preferred to store this key in an environment variable in Netlify, rather than directly in the code, but that is beyond the scope of this tutorial. If you would like to use environment variables, this Netlify post explains how to do so.
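As a sketch of that approach (assuming you name the variable FAUNADB_SECRET in Netlify’s environment settings — the name is an assumption, not one used elsewhere in this tutorial), the client setup would become:

const q = fauna.query;
const client = new fauna.Client({
  // Read the secret from the environment instead of hard-coding it
  secret: process.env.FAUNADB_SECRET,
});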

Update function code

To include our new search queries in the function, copy the following code into server.js after the post route:

const searchText = async (text) => {
  console.log("Beginning searchText");
  const answer = await client.query(
    q.Paginate(q.Match(q.Index("answers_by_question"), text))
  );
  console.log(`searchText response: ${answer.data}`);
  return answer.data;
};

const getTokenResponse = async (text) => {
  console.log("Beginning getTokenResponse");
  let answers = [];
  const questionTokens = text.split(/[ ]+/);
  console.log(`Tokens: ${questionTokens}`);
  for (const token of questionTokens) {
    const tokenResponse = await client.query(
      q.Paginate(q.Match(q.Index("answers_by_qTokens"), token))
    );
    answers = [...answers, ...tokenResponse.data];
  }
  console.log(`Token answers: ${answers}`);
  return answers;
};

These functions replicate the same functionality as the queries we previously ran in the online Fauna shell, but now we can utilise them from our function.
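The article doesn’t show the final wiring here, so the following is only a sketch of how the root post route from earlier might be updated to use them: try an exact match first, fall back to the token search, then format the result.

router.post("/", async (req, res) => {
  const text = req.body.text;

  // Try an exact match on the question first
  let answers = await searchText(text);

  // Fall back to the tokenised qTokens search if nothing matched
  if (answers.length === 0) {
    answers = await getTokenResponse(text);
  }

  res.status(200);
  res.send(format(answers));
});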

Deploy to Netlify

Now the function is searching the database, the only thing left to do is put it on the cloud, rather than a local machine. To do this, we’ll be making use of a Netlify function deployed from a GitHub repository.

First things first, add a new repo on GitHub, and push your code to it. Once the code is there, go to Netlify and either sign up or log in using your GitHub profile. From the home page of Netlify, select “New site from git” to deploy a new site, using the repo you’ve just created on GitHub.

If you have never deployed a site in Netlify before, this post explains the process to deploy from git.

Ensure while you are creating the new site, that your build command is set to npm run build, to have Netlify build the function before deployment. The publish directory can be left blank, as we are only deploying a function, rather than any pages.

Netlify will now build and deploy your repo, generating a unique URL for the site deployment. We can use this base URL to access the test endpoint of our function from earlier, to ensure things are working.

The last thing to do is update the Slack endpoint to our new URL! Navigate to your app, then select ‘slash commands’ in the left sidebar. Click on the pencil icon to edit the slash command and paste in the new URL for the function. Finally, you can use your new slash command in any authorised Slack channels!

Conclusion

There you have it: an entirely serverless, functional Slack slash command. We have used FaunaDB to store our answers and connected to it through a Netlify function. Also, by using Express, we have the flexibility to add further endpoints to the function for adding new questions, or anything else you can think up to further extend this project! Hopefully now, instead of waiting around for someone to answer your questions, you can just use /faq and get the answer instantly!


Matthew Williams is a software engineer from Melbourne, Australia who believes the future of technology is serverless. If you’re interested in more from him, check out his Medium articles, or his GitHub repos.



A Complete Walkthrough of GraphQL APIs with React and FaunaDB

As a web developer, there is an interesting bit of back and forth that always comes along with setting up a new application. Even using a full stack web framework like Ruby on Rails can be non-trivial to set up and deploy, especially if it’s your first time doing so in a while.

Personally I have always enjoyed being able to dig in and write the actual bit of application logic more so than setting up the apps themselves. Lately I have become a big fan of React applications together with a GraphQL API and storing state with the Apollo library.

Setting up a React application has become very easy in the past few years, but setting up a backend with a GraphQL API? Not so much. So when I was working on a project recently, I decided to look for an easier way to integrate a GraphQL API and was delighted to find FaunaDB.

FaunaDB is a NoSQL database as a service that makes provisioning a GraphQL API an incredibly simple process, and even comes with a free tier. Frankly I was surprised and really impressed with how quickly I was able to go from zero to a working API.

The service also touts its production readiness, with a focus on making scalability much easier than if you were managing your own backend. Although I haven’t explored its more advanced features yet, if it’s anything like my first impression then the prospects and implications of using FaunaDB are quite exciting. For now, I can confirm that for many of my projects, it provides an excellent solution for managing state together with a React application.

While working on my project, I did run into a few configuration issues when making all of the frameworks work together which I think could’ve been addressed with a guide that focuses on walking through standing up an application in its entirety. So in this article, I’m going to do a thorough walkthrough of setting up a small to-do React application on Heroku, then persisting data to that application with FaunaDB using the Apollo library. You can find the full source code here.

Our Application

For this walkthrough, we’re building a to-do list where a user can take the following actions:

  • Add a new item
  • Mark an item as complete
  • Remove an item

From a technical perspective, we’re going to accomplish this by doing the following:

  • Creating a React application
  • Deploying the application to Heroku
  • Provisioning a new FaunaDB database
  • Declaring a GraphQL API schema
  • Provisioning a new database key
  • Configuring Apollo in our React application to interact with our API
  • Writing application logic and consuming our API to persist information

Here’s a preview of what the final result will look like:

Creating the React Application

First we’ll create a boilerplate React application and make sure it runs. Assuming you have create-react-app installed, the commands to create a new application are:

create-react-app fauna-todo
cd fauna-todo
yarn start

After which you should be able to head to http://localhost:3000 and see the generated homepage for the application.

Deploying to Heroku

As I mentioned above, deploying React applications has become awesomely easy over the last few years. I’m using Heroku here since it’s been my go-to platform as a service for a while now, but you could just as easily use another service like Netlify (though of course the configuration will be slightly different). Assuming you have a Heroku account and the Heroku CLI installed, then this article shows that you only need a few lines of code to create and deploy a React application.

git init
heroku create -b https://github.com/mars/create-react-app-buildpack.git
git push heroku master

And your app is deployed! To view it you can run:

heroku open

Provisioning a FaunaDB Database

Now that we have a React app up and running, let’s add persistence to the application using FaunaDB. Head to fauna.com to create a free account. After you have an account, click “New Database” on the dashboard, and enter in a name of your choosing:

Creating an API via GraphQL Schema in FaunaDB

In this example, we’re going to declare a GraphQL schema then use that file to automatically generate our API within FaunaDB. As a first stab at the schema for our to-do application, let’s suppose that there is simply a collection of “Items” with “name” as its sole field. Since I plan to build upon this schema later and like being able to see the schema itself at a glance, I’m going to create a schema.graphql file and add it to the top level of my React application. Here is the content for this file:

type Item {
  name: String
}

type Query {
  allItems: [Item!]
}

If you’re unfamiliar with the concept of defining a GraphQL schema, think of it as a manifest for declaring what kinds of objects and queries are available within your API. In this case, we’re saying that there is going to be an Item type with a name string field and that we are going to have an explicit query allItems to look up all item records. You can read more about schemas in this Apollo article and types in this graphql.org article. FaunaDB also provides a reference document for declaring and importing a schema file.

We can now upload this schema.graphql file and use it to generate a GraphQL API. Head to the FaunaDB dashboard again and click on “GraphQL” then upload your newly created schema file here:

Congratulations! You have created a fully functional GraphQL API. This page turns into a “GraphQL Playground” which lets you interact with your API. Click on the “Docs” tab in the sidebar to see the available queries and mutations.

Note that in addition to our allItems query, FaunaDB has generated the following queries/mutations automatically on our behalf:

  • findItemByID
  • createItem
  • updateItem
  • deleteItem

All of these were derived by declaring the Item type in our schema file. Pretty cool right? Let’s give these queries and mutations a spin to familiarize ourselves with them. We can execute queries and mutations directly in the “GraphQL Playground.” Let’s first run a query for items. Enter this query into the left pane of the playground:

query MyItemQuery {
  allItems {
    data {
      name
    }
  }
}

Then click on the play button to run it:

The result is listed on the right pane, and unsurprisingly returns no results since we haven’t created any items yet. Fortunately createItem was one of the mutations that was automatically generated from the schema and we can use that to populate a sample item. Let’s run this mutation:

mutation MyItemCreation {
  createItem(data: { name: "My first todo item" }) {
    name
  }
}

You can see the result of the mutation in the right pane. It seems like our item was created successfully, but just to double check we can re-run our first query and see the result:

If you add our first query back to the left pane in the playground, the play button gives you a choice as to which operation you’d like to perform. Finally, note in step 3 of the screenshot above that our item was indeed created successfully.

In addition to running the query above, we can also look in the “Collections” tab of FaunaDB to view the collection directly:

Provisioning a New Database Key

Now that we have the database itself configured, we need a way for our React application to access it.

For the sake of simplicity in this application, this will be done with a secret key that we can add as an environment variable to our React application. We aren’t going to have authentication for individual users. Instead we’ll generate an application key which has permission to create, read, update, and delete items.

Authentication and authorization are substantial topics on their own — if you would like to learn more on how FaunaDB handles them as a follow up exercise to this guide, you can read all about the topic here.

The application key we generate has an associated set of permissions that are grouped together in a “role.” Let’s begin by first defining a role that has permission to perform CRUD operations on items, as well as perform the allItems query. Start by going to the “Security” tab, then clicking on “Manage Roles”:

There are two built-in roles, admin and server. We could in theory use these roles for our key, but this is a bad idea: anyone with access to that key could perform database-level operations such as creating new collections or even destroying the database itself. So instead, let’s create a new role by clicking on the “New Custom Role” button:

You can name the role whatever you’d like; here we’re using the name ItemEditor and giving the role permission to read, write, create, and delete items, as well as permission to read the allItems index.

Save this role, then head to the “Security” tab and create a new key:

When creating a key, make sure to select “ItemEditor” for the role and whatever name you please:

Next you’ll be presented with your secret key which you’ll need to copy:

In order for React to load the key’s value as an environment variable, create a new file called .env.local which lives at the root level of your React application. In this file, add an entry for the generated key:

REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: Since it’s not good practice to store secrets directly in source control in plain text, make sure that you also have a .gitignore file in your project’s root directory that contains .env.local so that your secrets won’t be added to your git repo and shared with others.

It’s critical that this variable’s name starts with “REACT_APP_”, otherwise it won’t be recognized when the application is started. By adding the value to the .env.local file, it will be loaded for the application when running locally. You’ll have to explicitly stop and restart your application with yarn start in order for these changes to take effect.

If you’re interested in reading more about how environment variables are loaded in apps created via create-react-app, there is a full explanation here. We’ll cover adding this secret as an environment variable in Heroku later on in this article.

Connecting to FaunaDB in React with Apollo

In order for our React application to interact with our GraphQL API, we need some sort of GraphQL client library. Fortunately for us, the Apollo client provides an elegant interface for making API requests as well as caching and interacting with the results.

To install the relevant Apollo packages we’ll need, run:

yarn add @apollo/client graphql @apollo/react-hooks

Now in your src directory of your application, add a new file named client.js with the following content:

import { ApolloClient, InMemoryCache } from "@apollo/client";

export const client = new ApolloClient({
  uri: "https://graphql.fauna.com/graphql",
  headers: {
    authorization: `Bearer ${process.env.REACT_APP_FAUNA_SECRET}`,
  },
  cache: new InMemoryCache(),
});

What we’re doing here is configuring Apollo to make requests to our FaunaDB database. Specifically, the uri makes the request to Fauna itself, then the authorization header indicates that we’re connecting to the specific database instance for the provided key that we generated earlier.

There are 2 important implications from this snippet of code:

  1. The authorization header contains the key with the “ItemEditor” role, and is currently hard coded to use the same header regardless of which user is looking at our application. If you were to update this application to show a different to-do list for each user, you would need to login for each user and generate a token which could instead be passed in this header. Again, the FaunaDB documentation covers this concept if you care to learn more about it.
  2. As with any time you add a layer of caching to a system (as we are doing here with Apollo), you introduce the potential to have stale data. FaunaDB’s operations are strongly consistent, and you can configure Apollo’s fetchPolicy to minimize the potential for stale data. In order to prevent stale reads to our cache, we’ll use a combination of refetch queries and specifying response fields in our mutations.

Next we’ll replace the contents of the home page’s component. Head to App.js and replace its content with:

import React from "react"; import { ApolloProvider } from "@apollo/client"; import { client } from "./client"; function App() {  return (    <ApolloProvider client={client}>      <div style={{ padding: "5px" }}>        <h3>My Todo Items</h3>        <div>items to get loaded here</div>      </div>    </ApolloProvider>  ); }

Note: For this sample application I’m focusing on functionality over presentation, so you’ll see some inline styles. While I certainly wouldn’t recommend this for a production-grade application, I think it does at least demonstrate any added styling in the most straightforward manner within a small demo.

Visit http://localhost:3000 again and you’ll see:

This contains the hard-coded values we set in our JSX above. What we would really like to see, however, is the to-do item we created in our database. In the src directory, let’s create a component called ItemList which lists out any items in our database:

import React from "react"; import gql from "graphql-tag"; import { useQuery } from "@apollo/react-hooks"; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; export function ItemList() {  const { data, loading } = useQuery(ITEMS_QUERY); if (loading) {    return "Loading...";  } return (    <ul>      {data.allItems.data.map((item) => {        return <li key={item._id}>{item.name}</li>;      })}    </ul>  ); }

Then update App.js to render this new component (you can see the full commit in this example’s source code for this step in its entirety). Previewing your app again, you’ll see that your to-do item has loaded.
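For reference, that App.js update might look roughly like the sketch below; it assumes client.js and ItemList.js both live in src, and is not the exact code from the commit:

import React from "react";
import { ApolloProvider } from "@apollo/client";
import { client } from "./client";
import { ItemList } from "./ItemList";

function App() {
  return (
    <ApolloProvider client={client}>
      <div style={{ padding: "5px" }}>
        <h3>My Todo Items</h3>
        {/* Render the list component instead of the hard-coded placeholder */}
        <ItemList />
      </div>
    </ApolloProvider>
  );
}

export default App;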

Now is a good time to commit your progress in git. And since we’re using Heroku, deploying is a snap:

git push heroku master
heroku open

When you run heroku open though, you’ll see that the page is blank. If we inspect the network traffic and request to FaunaDB, we’ll see an error about how the database secret is missing:

Which makes sense since we haven’t configured this value in Heroku yet. Let’s set it by going to the Heroku dashboard, selecting your application, then clicking on the “Settings” tab. In there you should add the REACT_APP_FAUNA_SECRET key and value used in the .env.local file earlier. Reusing this key is done for demonstration purposes. In a “real” application, you would probably have a separate database and separate keys for each environment.

If you would prefer to use the command line rather than Heroku’s web interface, you can use the following command and replace the secret with your key instead:

heroku config:set REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: as noted in the Heroku docs, you need to trigger a deploy in order for this environment variable to apply in your app:

git commit --allow-empty -m 'Add REACT_APP_FAUNA_SECRET env var'
git push heroku master
heroku open

After running this last command, your Heroku-hosted app should appear and load the items from your database.

Adding New To-Do Items

We now have all of the pieces in place for accessing our FaunaDB database both locally and in a hosted Heroku environment. Now adding items is as simple as calling the mutation we used in the GraphQL Playground earlier. Here is the code for an AddItem component, which uses a bare-bones HTML form to call the createItem mutation:

import React from "react"; import gql from "graphql-tag"; import { useMutation } from "@apollo/react-hooks"; const CREATE_ITEM = gql`  mutation CreateItem($ data: ItemInput!) {    createItem(data: $ data) {      _id    }  } `; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; export function AddItem() {  const [showForm, setShowForm] = React.useState(false);  const [newItemName, setNewItemName] = React.useState(""); const [createItem, { loading }] = useMutation(CREATE_ITEM, {    refetchQueries: [{ query: ITEMS_QUERY }],    onCompleted: () => {      setNewItemName("");      setShowForm(false);    },  }); if (showForm) {    return (      <form        onSubmit={(e) => {          e.preventDefault();          createItem({ variables: { data: { name: newItemName } } });        }}      >        <label>          <input            disabled={loading}            type="text"            value={newItemName}            onChange={(e) => setNewItemName(e.target.value)}            style={{ marginRight: "5px" }}          />        </label>        <input disabled={loading} type="submit" value="Add" />      </form>    );  } return <button onClick={() => setShowForm(true)}>Add Item</button>; }

After adding a reference to AddItem in our App component, we can verify that adding items works as expected. Again, you can see the full commit in the demo app for a recap of this step.

Deleting New To-Do Items

Similar to how we called the automatically generated createItem mutation to add new items, we can call the generated deleteItem mutation to remove items from our list. Here’s what our updated ItemList component looks like after adding this mutation:

import React from "react"; import gql from "graphql-tag"; import { useMutation, useQuery } from "@apollo/react-hooks"; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; const DELETE_ITEM = gql`  mutation DeleteItem($ id: ID!) {    deleteItem(id: $ id) {      _id    }  } `; export function ItemList() {  const { data, loading } = useQuery(ITEMS_QUERY); const [deleteItem, { loading: deleteLoading }] = useMutation(DELETE_ITEM, {    refetchQueries: [{ query: ITEMS_QUERY }],  }); if (loading) {    return <div>Loading...</div>;  } return (    <ul>      {data.allItems.data.map((item) => {        return (          <li key={item._id}>            {item.name}{" "}            <button              disabled={deleteLoading}              onClick={(e) => {                e.preventDefault();                deleteItem({ variables: { id: item._id } });              }}            >              Remove            </button>          </li>        );      })}    </ul>  ); }

Reloading our app and adding another item should result in a page that looks like this:

If you click on the “Remove” button for any item, the DELETE_ITEM mutation is fired and the entire list of items is refetched upon completion, as specified by the refetchQueries option.

One thing you may have noticed is that in our ITEMS_QUERY, we’re specifying _id as one of the fields we’d like in the result set from our query. This _id field is automatically generated by FaunaDB as a unique identifier for each document in a collection, and should be used when updating or deleting a record.

Marking Items as Complete

This wouldn’t be a fully functional to-do list without the ability to mark items as complete! So let’s add that now. By the time we’re done, we expect the app to look like this:

The first step we need to take is updating our Item schema within FaunaDB since right now the only information we store about an item is its name. Heading to our schema.graphql file, we can add a new field to track the completion state for an item:

type Item {
  name: String
  isComplete: Boolean
}

type Query {
  allItems: [Item!]
}

Now head to the GraphQL tab in the FaunaDB console and click on the “Update Schema” link to upload the newly updated schema file.

Note: there is also an “Override Schema” option, which can be used to rewrite your database’s schema from scratch if you’d like. One consideration to make when choosing to override the schema completely is that the data is deleted from your database. This may be fine during early development, but a test or production environment would require a proper database migration instead.

Since the changes we’re making here are additive, there won’t be any conflict with the existing schema so we can keep our existing data.

You can view the mutation itself and its expected schema in the GraphQL Playground in FaunaDB:

This tells us that we can call the updateItem mutation with a data parameter of type ItemInput. As with our other requests, we could test this mutation in the playground if we wanted. For now, we can add it directly to the application. In ItemList.js, let’s add this mutation with this code as outlined in the example repository; a sketch of the relevant pieces follows.
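This is not the exact code from the example repository, only an illustration of the shape of the UPDATE_ITEM mutation and a checkbox that toggles isComplete. The component name CompleteToggle is a made-up helper for this sketch, and it assumes isComplete has also been added to the fields requested by ITEMS_QUERY:

import React from "react";
import gql from "graphql-tag";
import { useMutation } from "@apollo/react-hooks";

const UPDATE_ITEM = gql`
  mutation UpdateItem($id: ID!, $data: ItemInput!) {
    updateItem(id: $id, data: $data) {
      _id
      isComplete
    }
  }
`;

// A small checkbox that could be rendered next to each item inside ItemList.
export function CompleteToggle({ item }) {
  const [updateItem, { loading }] = useMutation(UPDATE_ITEM);

  return (
    <input
      type="checkbox"
      disabled={loading}
      checked={!!item.isComplete}
      onChange={() =>
        updateItem({
          variables: {
            id: item._id,
            // Send the existing name along with the toggled completion state
            data: { name: item.name, isComplete: !item.isComplete },
          },
        })
      }
    />
  );
}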

The references to UPDATE_ITEM are the most relevant changes we’ve made. It’s interesting to note as well that we don’t need the refetchQueries parameter for this mutation. When the update mutation returns, Apollo updates the corresponding item in the cache based on its identifier field (_id in this case) so our React component re-renders appropriately as the cache updates.

We now have all of the functionality for an initial version of a to-do application. As a final step, push your branch one more time to Heroku:

git push heroku master
heroku open

Conclusion

Take a moment to pat yourself on the back! You’ve created a brand-new React application, added persistence at a database level with FaunaDB, and can do deployments available to the entire world with the push of a branch to Heroku.

Now that we’ve covered some of the concepts behind provisioning and interacting with FaunaDB, setting up any similar project in the future is an amazingly fast process. Being able to provision a GraphQL-accessible database in minutes is a dream for me when it comes to spinning up a new project. Not only that, but this is a production grade database that you don’t have to worry about configuring or scaling — and instead get to focus on writing the rest of your application instead of playing the role of a database administrator.



Roll Your Own Comments With Gatsby and FaunaDB

If you haven’t used Gatsby before, have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before you’re in for a treat. If you’re looking to make your static sites full-blown Jamstack applications, this is the back-end solution for you!

This tutorial will only focus on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with input fields that allow users to comment on your posts and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there’s a very generous free tier, perfect for those small projects that need a back end; there’s native support for GraphQL queries; and it has some really powerful indexing features!

…and to quote the creators:

No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you

You can see the finished comments app here.

Get Started

To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments

or:

git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git

Then install all the dependencies:

npm install

Also cd into functions/apollo-graphql and install the dependencies for the Netlify function:

npm install

This is a separate package and has its own dependencies, you’ll be using this later.

We also need to install the Netlify CLI as you’ll also use this later:

npm install netlify-cli -g

Now let’s add three new files that aren’t part of the repo.

At the root of your project create a .env, .env.development and .env.production file.

Add the following to .env:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =

Add the following to .env.development:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =

Add the following to .env.production:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =

You’ll come back to these later, but in case you’re wondering:

  • GATSBY_FAUNA_DB is the FaunaDB secret key for your database
  • GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
  • GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
  • GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you

If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.

FaunaDB

So let’s get cracking, but before we write any operations head over to https://fauna.com/ and sign up!

Database and Collection

  • Create a new database by clicking NEW DATABASE
  • Name the database: I’ve called the demo database fauna-gatsby-comments
  • Create a new Collection by clicking NEW COLLECTION
  • Name the collection: I’ve called the demo collection demo-blog-comments

Server Key

Now you’ll need to set up a server key. Go to SECURITY

  • Create a new key by clicking NEW KEY
  • Select the database you want the key to apply to, fauna-gatsby-comments for example
  • Set the Role as Admin
  • Name the server key: I’ve called the demo key demo-blog-server-key

Environment Variables Pt. 1

Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.

You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.

Adding these values to .env is just so you can test your development FaunaDB operations, which you’ll do next.
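The repo you cloned already contains boop.js, a small script used to run these operations from the CLI. It isn’t reproduced in this excerpt, so the following is only a rough sketch of its scaffolding, not the repo’s actual file; it assumes dotenv is used to load the .env values and that the operation name is taken from the command line:

// boop.js (sketch)
require("dotenv").config();
const faunadb = require("faunadb");
const q = faunadb.query;

const COLLECTION_NAME = process.env.GATSBY_FAUNA_COLLECTION;

const client = new faunadb.Client({
  secret: process.env.GATSBY_FAUNA_DB,
});

const operations = {
  // createComment, deleteCommentById, getAllComments, etc. go here,
  // as shown in the snippets throughout this section.
};

// Run the operation named on the command line, e.g. `node boop createComment`
operations[process.argv[2]]();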

Let’s start by creating a comment so head back to boop.js:

// boop.js
...
// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},
...

The breakdown of this function is as follows:

  • q is the instance of faunadb.query
  • Create is the FaunaDB method to create an entry within a collection
  • Collection is the area in the database where the data is stored; Create takes the name of the collection as the first argument and a data object as the second.

The second argument is the shape of the data you need to drive the application’s comment system.

For now you’re going to hard-code slug, name and comment, but in the final app these values are captured by the input form on the posts page and passed in via args

The breakdown for the shape is as follows;

  • isApproved is the status of the comment and by default it’s false until we approve it in the Admin page
  • slug is the path to the post where the comment was written
  • date is the time stamp the comment was written
  • name is the name the user entered in the comments form
  • comment is the comment the user entered in the comments form

When you (or a user) creates a comment you’re not really interested in dealing with the response because as far as the user is concerned all they’ll see is either a success or error message.

After a user has posted a comment it will go into your Admin queue until you approve it, but if you did want to return something you could surface this in the UI by returning something from the createComment function.

Create a comment

If you’ve hard-coded a slug, name and comment you can now run the following in your CLI

node boop createComment

If everything worked correctly you should see a log in your terminal of the new comment.

{
  "ref": {
    "@ref": {
      "id": "263413122555970050",
      "collection": {
        "@ref": {
          "id": "demo-blog-comments",
          "collection": {
            "@ref": {
              "id": "collections"
            }
          }
        }
      }
    }
  },
  "ts": 1587469179600000,
  "data": {
    "isApproved": false,
    "slug": "/posts/some-post",
    "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)",
    "name": "some name",
    "comment": "some comment"
  }
}
{ commentId: '263413122555970050' }

If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.

node boop createComment

Do this a few times so you end up with at least three new comments stored in the database, you’ll use these in a moment.

Delete comment by id

Now that you can create comments you’ll also need to be able to delete a comment.

By adding the commentId of one of the comments you created above you can delete it from the database. The commentId is the id in the ref.@ref object

Again you’re not really concerned with the return value here but if you wanted to surface this in the UI you could do so by returning something from the deleteCommentById function.

// boop.js
...
// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";
  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Delete is the FaunaDB delete method to delete entries from a collection
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop deleteCommentById

If you head back over to COLLECTIONS in FaunaDB you should see that the entry no longer exists in the collection.

Indexes

Next you’re going to create an INDEX in FaunaDB.

An INDEX allows you to query the database with a specific term and define a specific data shape to return.

When working with GraphQL and / or TypeScript this is really powerful because you can use FaunaDB indexes to return only the data you need and in a predictable shape. This makes data typing responses in GraphQL and / or TypeScript a dream… I’ve worked on a number of applications that just return a massive object of useless values which will inevitably cause bugs in your app. blurg!

  • Go to INDEXES and click NEW INDEX
  • Name the index: I’ve called this one get-all-comments
  • Set the source collection to the name of the collection you setup earlier

As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.

You can do this by adding “values”, but be careful to enter the values exactly as they appear below because (on the FaunaDB free tier) you can’t amend these after you’ve created them, so if there’s a mistake you’ll have to delete the index and start again… bummer!

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

Get all comments

// boop.js
...
// GET ALL COMMENTS
getAllComments: async () => {
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-all-comments")))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array matching the shape you defined in the Index “values”.
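Purely for illustration, the raw Paginate result logged by the function should look roughly like this before the map runs, with one array per comment ordered the same way as the index values (the ref object is abbreviated here):

{
  "data": [
    [
      { "@ref": { "id": "263413122555970050", "collection": { "...": "..." } } },
      false,
      "/posts/some-post",
      "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)",
      "some name",
      "some comment"
    ]
  ]
}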

If you run the following you should see the list of all the comments you created earlier:

node boop getAllComments

Get comments by slug

You’re going to take a similar approach as above but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that when you get-comments-by-slug you’ll need to tell FaunaDB about this specific term and you can do this by adding data.slug to the Terms field.

  • Go to INDEX and click NEW INDEX
  • Name the index, I’ve called this one get-comments-by-slug
  • Set the source collection to the name of the collection you setup earlier
  • Add data.slug in the terms field

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

// boop.js
...
// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array matching the shape you defined in the Index “values”. You can create this shape in the same way you did above, and be sure to add a value for terms. Again, enter these with care.

If you run the following you should see the list of all the comments you created earlier but for a specific slug:

node boop getCommentsBySlug

Approve comment by id

When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.

You’ll now need to create a function to do this but you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:

// boop.js
...
// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'
  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    isApproved: results.data.isApproved,
  };
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Update is the FaunaDB method to update an entry
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop approveCommentById

If you run getCommentsBySlug again, you should now see that the isApproved status of the entry whose commentId you hard-coded has changed to true.

node boop getCommentsBySlug

These are all the operations required to manage the data from the app.

In the repo, if you have a look at apollo-graphql.js, which can be found in functions/apollo-graphql, you’ll see all of the above operations. As mentioned before, the hard-coded values are replaced by args; these are the values passed in from various parts of the app.
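The actual resolvers live in apollo-graphql.js in the repo; purely as an illustrative sketch (the argument names and resolver signature here are assumptions, not the repo’s exact code), a createComment operation receiving its values via args might look like this:

// Sketch of a resolver receiving its values via args instead of hard-coding them
createComment: async (root, args) => {
  const { slug, name, comment } = args;
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug,
        date: new Date().toString(),
        name,
        comment,
      },
    })
  );
  return {
    commentId: results.ref.id,
  };
},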

Netlify

Assuming you’ve completed the Netlify sign up process or already have an account with Netlify you can now push the demo app to your GitHub account.

To do this you’ll need to have initialized git locally, added a remote and pushed the demo repo upstream before proceeding.

You should now be able to link the repo up to Netlify’s Continuous Deployment.

If you click the “New site from Git” button on the Netlify dashboard you can authorize access to your GitHub account and select the fauna-gatsby-comments repo to enable Netlify’s Continuous Deployment. You’ll need to have deployed at least once so that we have a public URL for your app.

The URL will look something like this https://ecstatic-lewin-b1bd17.netlify.app but feel free to rename it and make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2

In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You’ll also need to add the same to Netlify’s Environment variables.

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name

While you’re here you’ll also need to amend the Sensitive variable policy, select Deploy without restrictions

Netlify Identity Widget

I mentioned before that when a comment is created the isApproved value is set to false; this prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become admin you’ll need to create an identity.

You can achieve this by using the Netlify Identity Widget.

If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won’t see any users in here just yet, so let’s use the app to connect the dots. Before you do that, make sure you click Enable Identity.

Before you continue, I just want to point out that you’ll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you’ll be using some “special” Netlify methods in the app, and starting the server using netlify dev is required to spin up various processes you’ll be using.

  • Spin up the app using netlify dev
  • Navigate to http://localhost:8888/admin/
  • Click the Sign Up button in the header

You will also need to point the Netlify Identity widget at your newly deployed app URL. This is the URL I mentioned you’d need to make a note of earlier; if you’ve not renamed your app, it’ll look something like this: https://ecstatic-lewin-b1bd17.netlify.app/. There will be a prompt in the pop-up window to Set site’s URL.

You can now complete the necessary sign up steps.

After sign up you’ll get an email asking you to confirm your identity. Once that’s completed, refresh the Identity page in Netlify and you should see yourself as a user.

It’s now login time, but before you do this, find Identity.js in src/components and temporarily un-comment the console.log() on line 14. This will log the Netlify Identity user object to the console.
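For reference, the un-commented line is just a plain console.log of the current user object, something along these lines (the exact code in the repo may differ slightly):

// Illustrative only: logs the Netlify Identity user object so you can grab its id
console.log('netlifyIdentity.currentUser: ', netlifyIdentity.currentUser())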

  • Restart your local server
  • Spin up the app again using netlify dev
  • Click the Login button in the header

If this all works you should be able to see a console log for netlifyIdentity.currentUser: find the id key and copy the value.

Set this as the value for GATSBY_ADMIN_ID in both .env.production and .env.development.

You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.

GATSBY_ADMIN_ID = your Netlify Identity user id
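For context, that id is what the app can compare against the currently logged-in Netlify Identity user to decide whether to show the admin-only UI. A minimal sketch of that check (not the repo’s exact code) looks like this:

// Illustrative sketch: gate admin-only UI on the Netlify Identity user id
import netlifyIdentity from "netlify-identity-widget"

const user = netlifyIdentity.currentUser()
const isAdmin = user && user.id === process.env.GATSBY_ADMIN_ID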

…and finally

  • Restart your local server
  • Spin up the app again using netlify dev

Now you should be able to log in as “Admin”… hooray!

Navigate to http://localhost:8888/admin/ and Login.

It’s important to note that you’ll be using localhost:8888 for development now, and NOT localhost:8000, which is more common with Gatsby development.

Before you test this in the deployed environment make sure you go back to Netlify’s Environment variables and add your Netlify Identity user id to the Environment variables!

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_ADMIN_ID = your Netlify Identity user id

If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.

Naturally only approved comments will be displayed on any given post and deleted ones are gone forever.

If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.


By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.

Visit Paulie’s Blog at: www.paulie.dev

The post Roll Your Own Comments With Gatsby and FaunaDB appeared first on CSS-Tricks.


Instant GraphQL Backend with Fine-grained Security Using FaunaDB

GraphQL is becoming popular and developers are constantly looking for frameworks that make it easy to set up a fast, secure and scalable GraphQL API. In this article, we will learn how to create a scalable and fast GraphQL API with authentication and fine-grained data-access control (authorization). As an example, we’ll build an API with register and login functionality. The API will be about users and confidential files so we’ll define advanced authorization rules that specify whether a logged-in user can access certain files. 

By using FaunaDB’s native GraphQL and security layer, we receive all the necessary tools to set up such an API in minutes. FaunaDB has a free tier so you can easily follow along by creating an account at https://dashboard.fauna.com/. Since FaunaDB automatically provides the necessary indexes and translates each GraphQL query to one FaunaDB query, your API is also as fast as it can be (no n+1 problems!).

Setting up the API is simple: drop in a schema and we are ready to start. So let’s get started!  

The use-case: users and confidential files

We need an example use-case that demonstrates how security and GraphQL API features can work together. In this example, there are users and files. Some files can be accessed by all users, and some are only meant to be accessed by managers. The following GraphQL schema will define our model:

type User {   username: String! @unique   role: UserRole! }  enum UserRole {   MANAGER   EMPLOYEE }  type File {   content: String!   confidential: Boolean! }  input CreateUserInput {   username: String!   password: String!   role: UserRole! }  input LoginUserInput {   username: String!   password: String! }  type Query {   allFiles: [File!]! }  type Mutation {   createUser(input: CreateUserInput): User! @resolver(name: "create_user")   loginUser(input: LoginUserInput): String! @resolver(name: "login_user") }

When looking at the schema, you might notice that the createUser and loginUser Mutation fields have been annotated with a special directive named @resolver. This is a directive provided by the FaunaDB GraphQL API, which allows us to define a custom behavior for a given Query or Mutation field. Since we’ll be using FaunaDB’s built-in authentication mechanisms, we will need to define this logic in FaunaDB after we import the schema. 

Importing the schema

First, let’s import the example schema into a new database. Log into the FaunaDB Cloud Console with your credentials. If you don’t have an account yet, you can sign up for free in a few seconds.

Once logged in, click the “New Database” button from the home page:

Choose a name for the new database, and click the “Save” button: 

Next, we will import the GraphQL schema listed above into the database we just created. To do so, create a file named schema.gql containing the schema definition. Then, select the GRAPHQL tab from the left sidebar, click the “Import Schema” button, and select the newly-created file: 

The import process creates all of the necessary database elements, including collections and indexes, for backing up all of the types defined in the schema. It automatically creates everything your GraphQL API needs to run efficiently. 

You now have a fully functional GraphQL API which you can start testing out in the GraphQL playground. But we do not have data yet. More specifically, we would like to create some users to start testing our GraphQL API. However, since users will be part of our authentication, they are special: they have credentials and can be impersonated. Let’s see how we can create some users with secure credentials!

Custom resolvers for authentication

Remember the createUser and loginUser mutation fields that have been annotated with a special directive named @resolver. createUser is exactly what we need to start creating users; however, the schema did not really define how a user needs to be created. Instead, it was tagged with a @resolver tag.

By tagging a specific mutation with a custom resolver such as @resolver(name: "create_user") we are informing FaunaDB that this mutation is not implemented yet but will be implemented by a User-defined function (UDF). Since our GraphQL schema does not know how to express this, the import process will only create a function template which we still have to fill in.

A UDF is a custom FaunaDB function, similar to a stored procedure, that enables users to define a tailor-made operation in Fauna’s Query Language (FQL). This function is then used as the resolver of the annotated field. 

We will need a custom resolver since we will take advantage of the built-in authentication capabilities which can not be expressed in standard GraphQL. FaunaDB allows you to set a password on any database entity. This password can then be used to impersonate this database entity with the Login function which returns a token with certain permissions. The permissions that this token holds depend on the access rules that we will write.

Let’s continue to implement the UDF for the createUser field resolver so that we can create some test users. First, select the Shell tab from the left sidebar:

As explained before, a template UDF has already been created during the import process. When called, this template UDF prints an error message stating that it needs to be updated with a proper implementation. In order to update it with the intended behavior, we are going to use FQL’s Update function.

So, let’s copy the following FQL query into the web-based shell, and click the “Run Query” button:

Update(Function("create_user"), {   "body": Query(     Lambda(["input"],       Create(Collection("User"), {         data: {           username: Select("username", Var("input")),           role: Select("role", Var("input")),         },         credentials: {           password: Select("password", Var("input"))         }       })       )   ) });

Your screen should look similar to:

The create_user UDF will be in charge of properly creating a User document along with a password value. The password is stored in the document within a special object named credentials that is encrypted and cannot be retrieved back by any FQL function. As a result, the password is securely saved in the database making it impossible to read from either the FQL or the GraphQL APIs. The password will be used later for authenticating a User through a dedicated FQL function named Login, as explained next.
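As an aside (this is not a step in the tutorial), once the UDF has a proper body you can also invoke it directly from the shell with FQL’s Call function. Note that this creates a real User document, just like the GraphQL mutation we’ll use in a moment:

// Illustrative only: invoking the create_user UDF directly from the FQL shell
Call(Function("create_user"), {
  username: "some.user",
  password: "some-password",
  role: "EMPLOYEE"
})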

Now, let’s add the proper implementation for the UDF backing up the loginUser field resolver through the following FQL query:

Update(Function("login_user"), {   "body": Query(     Lambda(["input"],       Select(         "secret",         Login(           Match(Index("unique_User_username"), Select("username", Var("input"))),            { password: Select("password", Var("input")) }         )       )     )   ) });

Copy the query listed above and paste it into the shell’s command panel, and click the “Run Query” button:

The login_user UDF will attempt to authenticate a User with the given username and password credentials. As mentioned before, it does so via the Login function. The Login function verifies that the given password matches the one stored along with the User document being authenticated. Note that the password stored in the database is not output at any point during the login process. Finally, in case the credentials are valid, the login_user UDF returns an authorization token called a secret which can be used in subsequent requests for validating the User’s identity.

With the resolvers in place, we will continue with creating some sample data. This will let us try out our use case and help us better understand how the access rules are defined later on.

Creating sample data

First, we are going to create a manager user. Select the GraphQL tab from the left sidebar, copy the following mutation into the GraphQL Playground, and click the “Play” button:

mutation CreateManagerUser {   createUser(input: {     username: "bill.lumbergh"     password: "123456"     role: MANAGER   }) {     username     role   } }

Your screen should look like this:

Next, let’s create an employee user by running the following mutation through the GraphQL Playground editor:

mutation CreateEmployeeUser {   createUser(input: {     username: "peter.gibbons"     password: "abcdef"     role: EMPLOYEE   }) {     username     role   } }

You should see the following response:

Now, let’s create a confidential file by running the following mutation:

mutation CreateConfidentialFile {   createFile(data: {     content: "This is a confidential file!"     confidential: true   }) {     content     confidential   } }

As a response, you should get the following:

And lastly, create a public file with the following mutation:

mutation CreatePublicFile {   createFile(data: {     content: "This is a public file!"     confidential: false   }) {     content     confidential   } }

If successful, it should prompt the following response:

Now that all the sample data is in place, we need access rules since this article is about securing a GraphQL API. The access rules determine how the sample data we just created can be accessed, since by default a user can only access his own user entity. In this case, we are going to implement the following access rules: 

  1. Allow employee users to read public files only.
  2. Allow manager users to read both public files and, only during weekdays, confidential files.

As you might have already noticed, these access rules are highly specific. We will see however that the ABAC system is powerful enough to express very complex rules without getting in the way of the design of your GraphQL API.

Such access rules are not part of the GraphQL specification so we will define the access rules in the Fauna Query Language (FQL), and then verify that they are working as expected by executing some queries from the GraphQL API. 

But what is this “ABAC” system that we just mentioned? What does it stand for, and what can it do?

What is ABAC?

ABAC stands for Attribute-Based Access Control. As its name indicates, it’s an authorization model that establishes access policies based on attributes. In simple words, it means that you can write security rules that involve any of the attributes of your data. If our data contains users we could use the role, department, and clearance level to grant or refuse access to specific data. Or we could use the current time, day of the week, or location of the user to decide whether he can access a specific resource. 

In essence, ABAC allows the definition of fine-grained access control policies based on environmental properties and your data. Now that we know what it can do, let’s define some access rules to give you concrete examples.

Defining the access rules

In FaunaDB, access rules are defined in the form of roles. A role consists of the following data:

  • name —  the name that identifies the role
  • privileges — specific actions that can be executed on specific resources 
  • membership — specific identities that should have the specified privileges

Roles are created through the CreateRole FQL function, as shown in the following example snippet:

CreateRole({   name: "role_name",   membership: [     // ...   ],   privileges: [     // ...   ] })

You can see two important concepts in this role: membership and privileges. Membership defines who receives the privileges of the role, and privileges defines what these permissions are. Let’s write a simple example rule to start with: “Any user can read all files.”

Since the rule applies on all users, we would define the membership like this: 

membership: {   resource: Collection("User") }

Simple right? We then continue to define the “Can read all files” privilege for all of these users.

privileges: [   {     resource: Collection("File"),     actions: { read: true }   } ]

The direct effect of this is that any token that you receive by logging in with a user via our loginUser GraphQL mutation can now access all files. 

This is the simplest rule that we can write, but in our example we want to limit access to some confidential files. To do that, we can replace the {read: true} syntax with a function. Since we have defined that the resource of the privilege is the “File” collection, this function will take each file that would be accessed by a query as the first parameter. You can then write rules such as: “A user can only access a file if it is not confidential”. In FaunaDB’s FQL, such a function is written by using Query(Lambda('x', … <logic that uses Var('x')>)).

Below is the privilege that would only provide read access to non-confidential files: 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      // Read and establish rule based on action attribute
      read: Query(
        // Read and establish rule based on resource attribute
        Lambda("fileRef",
          Not(Select(["data", "confidential"], Get(Var("fileRef"))))
        )
      )
    }
  }
]

This directly uses properties of the “File” resource we are trying to access. Since it’s just a function, we could also take into account environmental properties like the current time. For example, let’s write a rule that only allows access on weekdays. 

privileges: [     {       resource: Collection("File"),       actions: {         read: Query(           Lambda("fileRef",             Let(               {                 dayOfWeek: DayOfWeek(Now())               },               And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))               )           )         )       }     } ]

As mentioned in our rules, confidential files should only be accessible by managers. Managers are also users, so we need a rule that applies to a specific segment of our collection of users. Luckily, we can also define the membership as a function; for example, the following Lambda only considers users who have the MANAGER role to be part of the role membership. 

membership: {
  resource: Collection("User"),
  predicate: Query(
    // Read and establish rule based on user attribute
    Lambda("userRef",
      Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
    )
  )
}

In sum, FaunaDB roles are very flexible entities that allow defining access rules based on all of the system elements’ attributes, with different levels of granularity. The place where the rules are defined — privileges or membership — determines their granularity and the attributes that are available, and will differ with each particular use case.

Now that we have covered the basics of how roles work, let’s continue by creating the access rules for our example use case!

In order to keep things neat and tidy, we’re going to create two roles: one for each of the access rules. This will allow us to extend the roles with further rules in an organized way if required later. Nonetheless, be aware that all of the rules could also have been defined together within just one role if needed.

Let’s implement the first rule: 

“Allow employee users to read public files only.”

In order to create a role meeting these conditions, we are going to use the following query:

CreateRole({
  name: "employee_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has EMPLOYEE role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "EMPLOYEE")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve the
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
              },
              // Resource attribute based rule:
              // It grants access to public files only.
              Not(Select(["data", "confidential"], Var("file")))
            )
          )
        )
      }
    }
  ]
})

Select the Shell tab from the left sidebar, copy the above query into the command panel, and click the “Run Query” button:

Next, let’s implement the second access rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

In this case, we are going to use the following query:

CreateRole({
  name: "manager_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has MANAGER role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
                dayOfWeek: DayOfWeek(Now())
              },
              Or(
                // Resource attribute based rule:
                // It grants access to public files.
                Not(Select(["data", "confidential"], Var("file"))),
                // Resource and environmental attribute based rule:
                // It grants access to confidential files only on weekdays.
                And(
                  Select(["data", "confidential"], Var("file")),
                  And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))
                )
              )
            )
          )
        )
      }
    }
  ]
})

Copy the query into the command panel, and click the “Run Query” button:

At this point, we have created all of the necessary elements for implementing and trying out our example use case! Let’s continue with verifying that the access rules we just created are working as expected…

Putting everything in action

Let’s start by checking the first rule: 

“Allow employee users to read public files only.”

The first thing we need to do is log in as an employee user so that we can verify which files can be read on its behalf. In order to do so, execute the following mutation from the GraphQL Playground console:

mutation LoginEmployeeUser {   loginUser(input: {     username: "peter.gibbons"     password: "abcdef"   }) }

As a response, you should get a secret access token. The secret represents that the user has been authenticated successfully:

At this point, it’s important to remember that the access rules we defined earlier are not directly associated with the secret that is generated as a result of the login process. Unlike other authorization models, the secret token itself does not contain any authorization information on its own, but it’s just an authentication representation of a given document.

As explained before, access rules are stored in roles, and roles are associated with documents through their membership configuration. After authentication, the secret token can be used in subsequent requests to prove the caller’s identity and determine which roles are associated with it. This means that access rules are effectively verified in every subsequent request and not only during authentication. This model enables us to modify access rules dynamically without requiring users to authenticate again.
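As an aside, the same secret also works outside the Playground. For example (illustrative only, and not needed for the rest of the tutorial), you could hand it to the FaunaDB JavaScript driver and run FQL queries scoped to that user’s roles:

// Illustrative only: using the secret returned by loginUser with the JavaScript driver
const faunadb = require("faunadb")
const q = faunadb.query

const userClient = new faunadb.Client({ secret: "<SECRET_FROM_loginUser>" })

userClient
  .query(q.Paginate(q.Match(q.Index("allFiles"))))
  .then(page => console.log(page))
  .catch(err => console.error(err))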

Now, we will use the secret issued in the previous step to validate the identity of the caller in our next query. In order to do so, we need to include the secret as a Bearer Token as part of the request. To achieve this, we have to modify the Authorization header value set by the GraphQL Playground. Since we don’t want to lose the admin secret that is being used by default, we’re going to do this in a new tab.

Click the plus (+) button to create a new tab, and select the HTTP HEADERS panel on the bottom left corner of the GraphQL Playground editor. Then, modify the value of the Authorization header to include the secret obtained earlier, as shown in the following example. Make sure to change the scheme value from Basic to Bearer as well:

{   "authorization": "Bearer fnEDdByZ5JACFANyg5uLcAISAtUY6TKlIIb2JnZhkjU-SWEaino" }

With the secret properly set in the request, let’s try to read all of the files on behalf of the employee user. Run the following query from the GraphQL Playground: 

query ReadFiles {   allFiles {     data {       content       confidential     }   } }

In the response, you should see the public file only:

Since the role we defined for employee users does not allow them to read confidential files, they have been correctly filtered out from the response!

Let’s move on now to verifying our second rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

This time, we are going to log in as the manager user. Since the login mutation requires an admin secret token, we have to go back first to the original tab containing the default authorization configuration. Once there, run the following query:

mutation LoginManagerUser {   loginUser(input: {     username: "bill.lumbergh"     password: "123456"   }) }

You should get a new secret as a response:

Copy the secret, create a new tab, and modify the Authorization header to include the secret as a Bearer Token as we did before. Then, run the following query in order to read all of the files on behalf of the manager user:

query ReadFiles {   allFiles {     data {       content       confidential     }   } }

As long as you’re running this query on a weekday (if not, feel free to update this rule to include weekends), you should be getting both the public and the confidential file in the response:

And, finally, we have verified that all of the access rules are working successfully from the GraphQL API!

Conclusion

In this post, we have learned how a comprehensive authorization model can be implemented on top of the FaunaDB GraphQL API using FaunaDB’s built-in ABAC features. We have also reviewed ABAC’s distinctive capabilities, which allow defining complex access rules based on the attributes of each system component.

While access rules can only be defined through the FQL API at the moment, they are effectively verified for every request executed against the FaunaDB GraphQL API. Providing support for specifying access rules as part of the GraphQL schema definition is already planned for the future.

In short, FaunaDB provides a powerful mechanism for defining complex access rules on top of the GraphQL API covering most common use cases without the need for third-party services.

The post Instant GraphQL Backend with Fine-grained Security Using FaunaDB appeared first on CSS-Tricks.


Build a dynamic JAMstack app with GatsbyJS and FaunaDB

In this article, we explain the difference between single-page apps (SPAs) and static sites, and how we can bring the advantages of both worlds together in a dynamic JAMstack app using GatsbyJS and FaunaDB. We will build an application that pulls in some data from FaunaDB during build time, prerenders the HTML for speedy delivery to the client, and then loads additional data at run time as the user interacts with the page. This combination of technologies gives us the best attributes of statically-generated sites and SPAs. 

In short…<deep breath>…auto-scaling distributed websites with low latency, snappy user interfaces, no reloads, and dynamic data for everyone!

Heavy backends, single-page apps, static sites 

In the old days, when JavaScript was new, it was mainly only used to provide effects and improved interactions. Some animations here, a drop-down there, and that was it. The grunt work was performed on the backend by Perl, Java, or PHP. 

This changed as time went on: client code became heavier, and JavaScript took over more and more of the frontend until we finally shipped mostly empty HTML and rendered the whole UI in the browser, leaving the backend to supply us with JSON data.

This led to a neat separation of concerns and allowed us to build whole applications with JavaScript, called Single Page Applications (SPAs). The most important advantage of SPAs was the absence of reloads. You could click on a link to change what’s displayed, without triggering a complete reload of the page. This in itself provided a superior user experience. However, SPAs increased the size of the client code significantly; a client now had to wait for the sum of several latencies:

  • Serving latency: retrieving the HTML and JavaScript from the server where the JavaScript was bigger than it used to be 
  • Data loading latency: loading additional data requested by the client
  • Frontend framework rendering latency: once the data is received, a frontend framework like React, Vue, or Angular still has to do a lot of work to construct the final HTML 

A royal metaphor

We can analogize the loading of a SPA with the building and delivery of a toy castle. The client needs to retrieve the HTML and JavaScript, then retrieve the data, and then still has to assemble the page. The building blocks are delivered, but they still need to be put together after they’re delivered.

If only there were a way to build the castle beforehand…

Enter the JAMstack

JAMstack applications consist of JavaScript, APIs and Markup. With today’s static site generators like Next.js and GatsbyJS, the JavaScript and Markup parts can be bundled up into a static package and deployed via a Content Delivery Network (CDN) that delivers files to a browser. A CDN geographically distributes the bundles, and other assets, to multiple locations. When a user’s browser fetches the bundle and assets, it can receive them from the closest location on the network, which reduces the serving latency. 

Continuing our toy castle analogy, JAMstack apps are different from SPAs in the sense that the page (or castle) is delivered pre-assembled. We have a lower latency since we receive the castle in one piece and no longer have to build it. 

Making static JAMstack apps dynamic with hydration

In the JAMstack approach, we start with a dynamic application and prerender static HTML pages to be delivered via a speedy CDN. But what if a fully static site is not sufficient and we need to support some dynamic content as the user interacts with individual components, without reloading the entire page? That’s where client-side hydration comes in.

Hydration is the client-side process by which the server-side rendered HTML (DOM) is “watered” by our frontend framework with event handlers and/or dynamic components to make it more interactive. This can be tricky because it depends on reconciling the original DOM with a new virtual DOM (VDOM) that’s kept in memory as the user interacts with the page. If the DOM and VDOM trees do not match, bugs can arise that cause elements to be displayed out of order, or necessitate rebuilding the page.
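To make the idea concrete, here is a minimal sketch of the hydration entry point React exposes; frameworks normally call this for you, so treat it purely as an illustration:

// Illustrative sketch: hydrating server-rendered HTML with React
import React from "react"
import { hydrate } from "react-dom"
import App from "./App"

// The markup inside #root was rendered at build time; hydrate() attaches
// event handlers and state to it instead of re-creating the DOM from scratch.
hydrate(<App />, document.getElementById("root"))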

Luckily, libraries like GatsbyJS and NextJS have been designed so as to minimize the possibility of such hydration-related bugs, handling everything for you out-of-the-box with only a few lines of code. The result is a dynamic JAMstack web application that is simultaneously both faster and more dynamic than the equivalent SPA. 

One technical detail remains: where will the dynamic data come from?

Distributed frontend-friendly databases!

JAMstack apps typically rely on APIs (ergo the “A” in JAM), but if we need to load any kind of custom data, we need a database. And traditional databases are still a performance bottleneck for globally distributed sites that are otherwise delivered via CDN, because traditional databases are only located in one region. Instead of using a traditional database, we’d like our database to be on a distributed network, just like the CDN, that serves the data from a location as close as possible to wherever our clients are. This type of database is called a distributed database. 

In this example, we’ll choose FaunaDB since it is also strongly consistent, which means that our data will be the same wherever our clients access it from, and data won’t be lost. Other features that work particularly well with JAMstack applications are that (a) the database is accessed as an API (GraphQL or FQL) and does not require you to open a connection, and (b) the database has a security layer that makes it possible to access both public and private data in a secure way from the frontend. The implication is that we can keep the low latencies of JAMstack without having to scale a backend, all with zero configuration.

Let’s compare the process of loading a hydrated static site with the building of the toy castle. We still have lower latencies thanks to the CDN, but also less data since most of the site is statically generated and therefore requires less rendering. Only a small part of the castle (or, the dynamic part of the page) needs to be assembled after it has been delivered:

Example app with GatsbyJS & FaunaDB

Let’s build an example application that loads data from FaunaDB at build time and renders it to static HTML, then loads additional dynamic data inside the client browser at run time. For this example, we use GatsbyJS, a JAMstack framework based on React that prerenders static HTML. Since we use GatsbyJS, we can code our website completely in React, generate and deliver the static pages, and then load additional data dynamically at run time. We’ll use FaunaDB as our fully managed serverless database solution. We will build an application where we can list products and reviews. 

Let’s look at an outline of what we have to do to get our example app up and running and then go through every step in detail.

  1. Set up a new database
  2. Add a GraphQL schema to the database
  3. Seed the database with mock-up data
  4. Create a new GatsbyJS project
  5. Install NPM packages
  6. Create the server key for the database
  7. Update GatsbyJS config files with server key and new read-only key
  8. Load the pre-rendered product data at build time
  9. Load the reviews at run time

1. Set up a new database

Before you start, create an account on dashboard.fauna.com. Once you have an account, let’s set up a new database. It should hold products and their reviews, so we can load the products at build-time and the reviews in the browser. 

2. Add a GraphQL schema to the database

Next, we add a GraphQL schema to our database. For this, we create a new file called schema.gql that has the following content:

type Product {   title: String!   description: String   reviews: [Review] @relation }  type Review {   username: String!   text: String!   product: Product! }  type Query {   allProducts: [Product] }

You can upload your schema.gql file via the FaunaDB Console by clicking “GraphQL” on the left sidebar, and then click the “Import Schema” button.

Upon providing FaunaDB with a GraphQL schema, it automatically creates the required collections for the entities in our schema (products and reviews). Besides that, it also creates the indexes that are needed to interact with those collections in a meaningful and efficient manner. You should now be presented with a GraphQL playground where you can test out queries against your new API.
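For example, running the following query in the playground should already work; it will simply return an empty result set until we add data in the next step:

query {
  allProducts {
    data {
      title
      description
    }
  }
}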

3. Seed the database with mock-up data

To seed our database with products and reviews, we can use the Shell at dashboard.fauna.com: 

To create some data, we’ll use the Fauna Query Language (FQL); after that, we’ll continue with GraphQL to build our example application. Paste an FQL query like the following into the Shell to create three product documents (the exact titles and descriptions don’t matter, so feel free to use your own; the three below are just placeholders):

Map(
  [
    { title: "Product one", description: "The first product" },
    { title: "Product two", description: "The second product" },
    { title: "Product three", description: "The third product" }
  ],
  Lambda("product",
    Create(Collection("Product"), { data: Var("product") })
  )
);

We can then write a query that retrieves the products we just made and creates a review document for every product document:

Map(   Paginate(Match(Index("allProducts"))),   Lambda("ref", Create(Collection("Review"), {     data: {       username: "Tina",       text: "Good product!",       product: Var("ref")     }   })) ); 

Both types of documents will be loaded via GraphQL. However, there is a significant difference between products and reviews. The former will not change a lot and is relatively static, while the latter is user-driven. GatsbyJS allows us to load data in two ways:

  • data that is loaded at build time which will be used to generate the static site. 
  • data that is loaded live at request time as a client visits and interacts with your website. 

In this example, we chose to let the products be loaded at build time, and the reviews to be loaded on-demand in the browser. Therefore, we get static HTML product pages served by a CDN that the user immediately sees. Then, as our user interacts with the product page, we load the data for the reviews. 

4. Create a new GatsbyJS project

The following command creates a GatsbyJS project based on the starter template:

$ npx gatsby-cli new hello-world-gatsby-faunadb
$ cd hello-world-gatsby-faunadb

5. Install npm packages

In order to build our new project with Gatsby and Apollo, we need a few additional packages. We can install the packages with the following command: 

 $  npm i gatsby-source-graphql apollo-boost react-apollo

We will use gatsby-source-graphql as a way to link GraphQL APIs into the build process. Using this library, you can make a GraphQL call from which the results will be automagically provided as the properties for your react component. That way, you can use dynamic data to statically generate your application. The apollo-boost package is an easily configurable GraphQL library that will be used to fetch data on the client. Finally, the link between Apollo and React will be taken care of by the react-apollo library.

6. Create the server key for the database

We will create a Server key, which will be used by Gatsby to prerender the page. Remember to copy the secret somewhere since we will use it later on. Protect server keys carefully; they can be used to create, destroy, or manage the database to which they are assigned. To create the key, go to the Fauna dashboard and create the key in the Security tab.

7. Update GatsbyJS config files with server and new read-only keys

To add the GraphQL support to our build process, we need to add the following code into our gatsby-config.js inside the plugins section, where we will insert the FaunaDB server key which we generated a few moments ago.

{   resolve: "gatsby-source-graphql",   options: {     typeName: "Fauna",     fieldName: "fauna",     url: "https://graphql.fauna.com/graphql",     headers: {       Authorization: "Bearer <SERVER KEY>",     },   }, }

For the GraphQL access to work in the browser, we have to create a key that only has permissions to read data from the collections. FaunaDB has an extensive security layer in which you can define that. The easiest way is to go to the FaunaDB Console at dashboard.fauna.com and create a new role for your database by clicking “Security” in the left sidebar, then “Manage Roles,” then “New Custom Role”:

Call the new custom role ‘ClientRead’ and make sure to add all collections and indexes (these are the collections that were created by importing the GraphQL schema). Then, select Read for each of them. Your screen should look like this:

You have probably noticed the Membership tab on this page. Although we are not using it in this tutorial, it is interesting enough to explain, since it’s an alternative way to get security tokens. In the Membership tab you can specify that entities of a collection (let’s say we have a ‘Users’ collection) in FaunaDB are members of a particular role. That means that if you impersonate one of these entities in that collection, the role privileges apply. You impersonate a database entity (e.g. a User) by associating credentials with the entity and using the Login function, which will return a token. That way you can also implement password-based authentication in FaunaDB. We won’t use it in this tutorial, but if that interests you, check the FaunaDB authentication tutorial.

Let’s ignore Membership for now. Once you have created the role, we can create a new key with the new role. As before, click “Security”, then “New Key,” but this time select “ClientRead” from the Role dropdown:

Now, let’s insert this read-only key in the gatsby-browser.js configuration file to be able to call the GraphQL API from the browser:

import React from "react"
import ApolloClient from "apollo-boost"
import { ApolloProvider } from "react-apollo"

const client = new ApolloClient({
  uri: "https://graphql.fauna.com/graphql",
  request: operation => {
    operation.setContext({
      headers: {
        Authorization: "Bearer <CLIENT_KEY>",
      },
    })
  },
})

export const wrapRootElement = ({ element }) => (
  <ApolloProvider client={client}>{element}</ApolloProvider>
)

GatsbyJS will render its Router component as a root element. If we want to use the ApolloClient everywhere in the application on the client, we need to wrap this root element with the ApolloProvider component.

8. Load the pre-rendered product data at build time

Now that everything is set up, we can finally write the actual code to load our data. Let’s start with the products we will load at build time.

For this we need to modify src/pages/index.js file to look like this:

import React from "react"
import { graphql } from "gatsby"
import Layout from "../components/Layout"

const IndexPage = ({ data }) => (
  <Layout>
    <ul>
      {data.fauna.allProducts.data.map(product => (
        <li>{product.title} - {product.description}</li>
      ))}
    </ul>
  </Layout>
)

export const query = graphql`
  {
    fauna {
      allProducts {
        data { _id title description }
      }
    }
  }
`

export default IndexPage

The exported query will automatically get picked up by GatsbyJS and executed before rendering the IndexPage component. The result of that query will be passed as the data prop into the IndexPage component. If we now run the develop script, we can see the pre-rendered documents on the development server on http://localhost:8000/.

 $  npm run develop

9. Load the reviews at run time

To load the reviews of a product on the client, we have to make some changes to the src/pages/index.js: 

import { gql } from "apollo-boost"
import { useQuery } from "@apollo/react-hooks"
import { graphql } from "gatsby"
import React, { useState } from "react"
import Layout from "../components/layout"

// Query for fetching at build-time
export const query = graphql`
  {
    fauna {
      allProducts {
        data {
          _id title description
        }
      }
    }
  }
`

// Query for fetching on the client
const GET_REVIEWS = gql`
  query GetReviews($productId: ID!) {
    findProductByID(id: $productId) {
      reviews {
        data {
          _id username text
        }
      }
    }
  }
`

const IndexPage = props => {
  const [productId, setProductId] = useState(null)
  const { loading, data } = useQuery(GET_REVIEWS, {
    variables: {
      productId
    },
    skip: !productId,
  })
}

export default IndexPage

Let’s go through this step by step.

First, we need to import parts of the apollo-boost and apollo-react packages so we can use the GraphQL client we previously set up in the gatsby-browser.js file.

Then, we need to implement our GET_REVIEWS query. It tries to find a product by its ID and then loads the associated reviews of that product. The query takes one variable, which is the productId.

In the component function, we use two hooks: useState and useQuery.

The useState hook keeps track of the productId for which we want to load reviews. If a user clicks a button, the state will be set to the productId corresponding to that button.

The useQuery hook then applies this productId to load reviews for that product from FaunaDB. The skip parameter of the hook prevents the execution of the query when the page is rendered for the first time because productId will be null.
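The snippet above leaves out the rendering part of the component. A minimal sketch of what the return value could look like (buttons that set the productId, plus the loaded reviews; the markup in the actual example will differ) is:

// Illustrative sketch: buttons that trigger the review query, plus the results
return (
  <Layout>
    <ul>
      {props.data.fauna.allProducts.data.map(product => (
        <li key={product._id}>
          {product.title} - {product.description}
          <button onClick={() => setProductId(product._id)}>Show reviews</button>
        </li>
      ))}
    </ul>
    {loading && <p>Loading reviews...</p>}
    {data &&
      data.findProductByID.reviews.data.map(review => (
        <p key={review._id}>
          <b>{review.username}</b>: {review.text}
        </p>
      ))}
  </Layout>
)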

If we now run the development server again and click on the buttons, our application should execute the query with different productIds as expected.

$  npm run develop

Conclusion

A combination of server-side data fetching and client-side hydration makes JAMstack applications pretty powerful. These methods enable flexible interaction with our data so we can adhere to different business needs.

It’s usually a good idea to load as much data at build time as possible to improve page performance. But if the data isn’t needed by all clients, or too big to be sent to the client all at once, we can split things up and switch to on-demand loading on the client. This is the case for user-specific data, pagination, or any data that changes rather frequently and might be outdated by the time it reaches the user.

In this article, we implemented an approach that loads part of the data at build time, and then loads the rest of the data in the frontend as the user interacts with the page. 

Of course, we have not implemented a login or forms yet to create new reviews. How would we tackle that? That is material for another tutorial where we can use FaunaDB’s attribute-based access control to specify what a client key can read and write from the frontend. 

The code of this tutorial can be found in this repo.

The post Build a dynamic JAMstack app with GatsbyJS and FaunaDB appeared first on CSS-Tricks.


Build a 100% Serverless REST API with Firebase Functions & FaunaDB

Indie and enterprise web developers alike are pushing toward a serverless architecture for modern applications. Serverless architectures typically scale well, avoid the need for server provisioning and, most importantly, are easy and cheap to set up! And that’s why I believe the next evolution for the cloud is serverless: it enables developers to focus on writing applications.

With that in mind, let’s build a REST API (because will we ever stop making these?) using 100% serverless technology.

We’re going to do that with Firebase Cloud Functions and FaunaDB, a globally distributed serverless database with native GraphQL.

Those familiar with Firebase know that Google’s serverless app-building tools also provide multiple data storage options: Firebase Realtime Database and Cloud Firestore. Both are valid alternatives to FaunaDB and are effectively serverless.

But why choose FaunaDB when Firestore offers a similar promise and is available with Google’s toolkit? Since our application is quite simple, it does not matter that much. The main difference is that once my application grows and I add multiple collections, then FaunaDB still offers consistency over multiple collections whereas Firestore does not. In this case, I made my choice based on a few other nifty benefits of FaunaDB, which you will discover as you read along — and FaunaDB’s generous free tier doesn’t hurt, either. 😉

In this post, we’ll cover:

  • Installing Firebase CLI tools
  • Creating a Firebase project with Hosting and Cloud Function capabilities
  • Routing URLs to Cloud Functions
  • Building three REST API calls with Express
  • Establishing a FaunaDB Collection to track your (my) favorite video games
  • Creating FaunaDB Documents, accessing them with FaunaDB’s JavaScript client API, and performing basic and intermediate-level queries
  • And more, of course!

Set Up A Local Firebase Functions Project

For this step, you’ll need Node v8 or higher. Install firebase-tools globally on your machine:

$  npm i -g firebase-tools

Then log into Firebase with this command:

$  firebase login

Make a new directory for your project, e.g. mkdir serverless-rest-api and navigate inside.

Create a Firebase project in your new directory by executing firebase init.

Select Functions and Hosting when prompted.

Choose “functions” and “hosting” when the bubbles appear, create a brand new firebase project, select JavaScript as your language, and choose yes (y) for the remaining options.

Create a new project, then choose JavaScript as your Cloud Function language.

Once complete, enter the functions directory; this is where your code lives and where you’ll add a few NPM packages.

Your API requires Express, CORS, and FaunaDB. Install it all with the following:

$  npm i cors express faunadb

Set Up FaunaDB with NodeJS and Firebase Cloud Functions

Before you can use FaunaDB, you need to sign up for an account.

When you’re signed in, go to your FaunaDB console and create your first database; name it “Games.”

You’ll notice that you can create databases inside other databases. So you could make a database for development, one for production, or even make one small database per unit test suite. For now we only need ‘Games’ though, so let’s continue.

Create a new database and name it “Games.”

Then tab over to Collections and create your first Collection named ‘games’. Collections will contain your documents (games in this case) and are the equivalent of a table in other databases. Don’t worry about payment details: Fauna has a generous free tier, and the reads and writes you perform in this tutorial will definitely not go over it. At all times you can monitor your usage in the FaunaDB console.

For the purpose of this API, make sure to name your collection ‘games’ because we’re going to be tracking your (my) favorite video games with this nerdy little API.

Create a Collection in your Games database and name it “games.”

Tab over to Security and create a new Key; name it “Personal Key.” There are three different types of keys: Admin, Server, and Client. An Admin key is meant to manage multiple databases, a Server key is typically what you use in a backend and allows you to manage one database, and a Client key is meant for untrusted clients such as your browser. Since we’ll be using this key to access one FaunaDB database in a serverless backend environment, choose ‘Server key’.

Under the Security tab, create a new Key. Name it Personal Key.

Save the key somewhere, you’ll need it shortly.

Build an Express REST API with Firebase Functions

Firebase Functions can respond directly to external HTTPS requests, and the functions pass standard Node Request and Response objects to your code — sweet. This makes Google’s Cloud Function requests accessible to middleware such as Express.

Open index.js inside your functions directory, clear out the pre-filled code, and add the following to enable Firebase Functions:

const functions = require('firebase-functions')
const admin = require('firebase-admin')
admin.initializeApp(functions.config().firebase)

Import the FaunaDB library and set it up with the secret you generated in the previous step:

admin.initializeApp(...)

const faunadb = require('faunadb')
const q = faunadb.query
const client = new faunadb.Client({
  secret: 'secrety-secret...that’s secret :)'
})
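If you’d rather not hard-code the secret (optional, and not required for this tutorial), one approach is Firebase’s runtime config; a sketch of that, assuming the functions import from the snippet further up:

// Optional, illustrative alternative: keep the secret in Firebase runtime config.
// Set it once from the CLI:
//   firebase functions:config:set fauna.secret="YOUR_SERVER_KEY"
// Then read it when creating the client:
const client = new faunadb.Client({
  secret: functions.config().fauna.secret
})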

Then create a basic Express app and enable CORS to support cross-origin requests:

const client = new faunadb.Client({...})

const express = require('express')
const cors = require('cors')
const api = express()

// Automatically allow cross-origin requests
api.use(cors({ origin: true }))

You’re ready to create your first Firebase Cloud Function, and it’s as simple as adding this export:

api.use(cors({...}))

exports.api = functions.https.onRequest(api)

This creates a cloud function named “api” and passes all requests directly to your api express server.

Routing an API URL to a Firebase HTTPS Cloud Function

If you deployed right now, your function’s public URL would be something like this: https://project-name.firebaseapp.com/api. That’s a clunky name for an access point if I do say so myself (and I did because I wrote this… who came up with this useless phrase?)

To remedy this predicament, you will use Firebase’s Hosting options to re-route URL globs to your new function.

Open firebase.json and add the following section immediately below the “ignore” array:

"ignore": [...], "rewrites": [   {     "source": "/api/v1**/**",     "function": "api"   } ]

This setting assigns all /api/v1/... requests to your brand new function, making it reachable from a domain that humans won’t mind typing into their text editors.

With that, you’re ready to test your API. Your API that does… nothing!

Respond to API Requests with Express and Firebase Functions

Before you run your function locally, let’s give your API something to do.

Add this simple route to your index.js file right above your export statement:

api.get(['/api/v1', '/api/v1/'], (req, res) => {
  res
    .status(200)
    .send(`<img src="https://media.giphy.com/media/hhkflHMiOKqI/source.gif">`)
})

exports.api = ...

Save your index.js file, open up your command line, and change into the functions directory.

If you installed Firebase globally, you can run your project by entering the following: firebase serve.

This command runs both the hosting and function environments from your machine.

If Firebase is installed locally in your project directory instead, open package.json and remove the --only functions parameter from your serve command, then run npm run serve from your command line.

Visit localhost:5000/api/v1/ in your browser. If everything was set up just right, you will be greeted by a gif from one of my favorite movies.

And if it’s not one of your favorite movies too, I won’t take it personally but I will say there are other tutorials you could be reading, Bethany.

Now you can leave the hosting and functions emulator running. They will automatically update as you edit your index.js file. Neat, huh?

FaunaDB Indexing

To query data in your games collection, FaunaDB requires an Index.

Indexes generally optimize query performance across all kinds of databases, but in FaunaDB, they are mandatory and you must create them ahead of time.

As a developer just starting out with FaunaDB, this requirement felt like a digital roadblock.

“Why can’t I just query data?” I grimaced as the right side of my mouth tried to meet my eyebrow.

I had to read the documentation and become familiar with how Indexes and the Fauna Query Language (FQL) actually work; whereas Cloud Firestore creates Indexes automatically and gives me stupid-simple ways to access my data. What gives?

Typical databases just let you do what you want, and if you do not stop and think, “Is this performant?” or “How many reads will this cost me?”, you might have a problem in the long run. Fauna prevents this by requiring an index whenever you query.
As I created complex queries with FQL, I began to appreciate the level of understanding I had when I executed them. Whereas Firestore just gives you free candy and hopes you never ask where it came from, as it abstracts away all concerns (such as performance and, more importantly, costs).

Basically, FaunaDB has the flexibility of a NoSQL database coupled with the attention to performance one expects from a relational SQL database.

We’ll see more examples of how and why in a moment.

Adding Documents to a FaunaDB Collection

Open your FaunaDB dashboard and navigate to your games collection.

In here, click NEW DOCUMENT and add the following BioShock titles to your collection:

{
  "title": "BioShock",
  "consoles": ["windows", "xbox_360", "playstation_3", "os_x", "ios", "playstation_4", "xbox_one"],
  "release_date": Date("2007-08-21"),
  "metacritic_score": 96
}

{
  "title": "BioShock 2",
  "consoles": ["windows", "playstation_3", "xbox_360", "os_x"],
  "release_date": Date("2010-02-09"),
  "metacritic_score": 88
}

{
  "title": "BioShock Infinite",
  "consoles": ["windows", "playstation_3", "xbox_360", "os_x", "linux"],
  "release_date": Date("2013-03-26"),
  "metacritic_score": 94
}

As with other NoSQL databases, the documents are JSON-style text blocks with the exception of a few Fauna-specific objects (such as Date used in the “release_date” field).

Now switch to the Shell area and clear your query. Paste the following:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Var("ref")))

And click the “Run Query” button. You should see a list of three items: references to the documents you created a moment ago.

In the Shell, clear out the query field, paste the query provided, and click “Run Query.”

It’s a little long in the tooth, but here’s what the query is doing.

Index("all_games") creates a reference to the all_games index which Fauna generated automatically for you when you established your collection.These default indexes are organized by reference and return references as values. So in this case we use the Match function on the index to return a Set of references. Since we do not filter anywhere, we will receive every document in the ‘games’ collection.

The Set that Match returns is then passed to Paginate. This function, as you would expect, adds pagination functionality (forward, backward, skip ahead). Lastly, you pass the result of Paginate to Map, which, much like its software counterpart, lets you perform an operation on each element in a Set and return an array; in this case, it simply returns ref (the reference id).

As we mentioned before, the default index only returns references. The Lambda operation that we fed to Map pulls this ref field from each entry in the paginated set. The result is an array of references.

Now that you have a list of references, you can retrieve the data behind the reference by using another function: Get.

Wrap Var("ref") with a Get call and re-run your query, which should look like this:

Map(
  Paginate(Match(Index("all_games"))),
  Lambda("ref", Get(Var("ref")))
)

Instead of a reference array, you now see the contents of each video game document.

Wrap Var("ref") with a Get function, and re-run the query.

Now that you have an idea of what your game documents look like, you can start creating REST calls, beginning with a POST.

Create a Serverless POST API Request

Your first API call is straightforward and shows off how Express combined with Cloud Functions allows you to serve all routes through one method.

Add this below the previous (and impeccable) API call:

api.get(['/api/v1', '/api/v1/'], (req, res) => {...})

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {
  let addGame = client.query(
    q.Create(q.Collection('games'), {
      data: {
        title: req.body.title,
        consoles: req.body.consoles,
        metacritic_score: req.body.metacritic_score,
        release_date: q.Date(req.body.release_date)
      }
    })
  )
  addGame
    .then(response => {
      res.status(200).send(`Saved! ${response.ref}`)
      return
    })
    .catch(reason => {
      res.status(500).send(reason)
    })
})

Please look past the lack of input sanitization for the sake of this example (all employees must sanitize inputs before leaving the work-room).

But as you can see, creating new documents in FaunaDB is easy-peasy.

The q object acts as a query builder interface that maps one-to-one with FQL functions (find the full list of FQL functions here).

You perform a Create, pass in your collection, and include data fields that come straight from the body of the request.

client.query returns a Promise, the success-state of which provides a reference to the newly-created document.
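If you prefer async/await over .then/.catch, the same handler could be sketched like this (error handling kept deliberately minimal):

// async/await variant of the POST handler (a sketch)
api.post(['/api/v1/games', '/api/v1/games/'], async (req, res) => {
  try {
    // Create the document and wait for the reference to come back
    const response = await client.query(
      q.Create(q.Collection('games'), {
        data: {
          title: req.body.title,
          consoles: req.body.consoles,
          metacritic_score: req.body.metacritic_score,
          release_date: q.Date(req.body.release_date)
        }
      })
    )
    res.status(200).send(`Saved! ${response.ref}`)
  } catch (reason) {
    res.status(500).send(reason)
  }
})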

And to make sure it’s working, you return the reference to the caller. Let’s see it in action.

Test Firebase Functions Locally with Postman and cURL

Use Postman or cURL to make the following request against localhost:5000/api/v1/games to add Halo: Combat Evolved to your list of games (or whichever Halo is your favorite, but absolutely not 4, 5, Reach, Wars, Wars 2, Spartan…)

$  curl http://localhost:5000/api/v1/games -X POST -H "Content-Type: application/json" -d '{"title":"Halo: Combat Evolved","consoles":["xbox","windows","os_x"],"metacritic_score":97,"release_date":"2001-11-15"}'

If everything went right, you should see a reference coming back with your request and a new document show up in your FaunaDB console.

Now that you have some data in your games collection, let’s learn how to retrieve it.

Retrieve FaunaDB Records Using a REST API Request

Earlier, I mentioned that every FaunaDB query requires an Index and that Fauna prevents you from doing inefficient queries. Since our next query will return games filtered by a game console, we can’t simply use a traditional `where` clause since that might be inefficient without an index. In Fauna, we first need to define an index that allows us to filter.

To filter, we need to specify which terms we want to filter on. And by terms, I mean the fields of the document you expect to search on.

Navigate to Indexes in your FaunaDB Console and create a new one.

Name it games_by_console, and set data.consoles as the only term, since we will filter on the consoles. Then set data.title and ref as values. Values are indexed by range, but they are also just the values that will be returned by the query. Indexes are in that sense a bit like views: you can create an index that returns a different combination of fields, and each index can have different security.

To minimize request overhead, we’ve limited the response data (i.e. the index’s values) to titles and the reference.

Your screen should resemble this one:

Under indexes, create a new index named games_by_console using the parameters above.

Click “Save” when you’re ready.
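If you prefer the Shell to clicking through the console, a CreateIndex call along these lines should produce the same definition (a sketch; the field paths match the documents created earlier):

CreateIndex({
  name: "games_by_console",
  source: Collection("games"),
  terms: [{ field: ["data", "consoles"] }],
  values: [{ field: ["data", "title"] }, { field: ["ref"] }]
})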

With your Index prepared, you can draft up your next API call.

I chose to represent consoles as a directory path where the console identifier is the sole parameter, e.g. /api/v1/console/playstation_3, not necessarily best practice, but not the worst either — come on now.

Add this API request to your index.js file:

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {...})

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {
  let findGamesForConsole = client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('games_by_console'), req.params.name.toLowerCase())),
      q.Lambda(['title', 'ref'], q.Var('title'))
    )
  )
  findGamesForConsole
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      res.status(500).send(error)
    })
})

This query looks similar to the one you used in the Shell to retrieve all games, but with a slight modification. Note how your Match function now has a second parameter (req.params.name.toLowerCase()), which is the console identifier that was passed in through the URL.

The Index you made a moment ago, games_by_console, had one Term in it (the consoles array); this corresponds to the second parameter we provide to Match. Basically, the Match function searches the index for the string you pass as its second argument. The next interesting bit is the Lambda function. Your first encounter with Lambda featured a single string as its first argument, “ref.”

However, the games_by_console Index returns two fields per result, the two values you specified earlier when you created the Index (data.title and ref). So we receive a paginated set containing tuples of titles and references, but we only need the titles. When your set contains multiple values, the parameter of your Lambda is an array. The array parameter above (['title', 'ref']) says that the first value is bound to the variable title and the second is bound to the variable ref. These variables can then be retrieved further on in the query by using Var('title'). In this case, both “title” and “ref” were returned by the index, and your Map with its Lambda maps over this list of results and simply returns the list of titles for each game.
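To see that array-style binding in isolation, you can run a toy example like this in the Shell (not part of the app):

Map(
  [["BioShock", 96], ["BioShock 2", 88]],
  Lambda(["title", "score"], Var("title"))
)

It returns ["BioShock", "BioShock 2"]: each two-element entry is destructured into title and score, and only title is returned.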

In Fauna, the composition of queries happens before they are executed. When you write const consoleQuery = q.Match(q.Index('games_by_console')), the variable just contains a query; nothing has been executed yet. Only when you pass the query to client.query(consoleQuery) is it executed. You can even pass such JavaScript variables into other FQL functions to keep composing queries. This is a big benefit of querying in Fauna versus the chained asynchronous queries required by Firestore. If you have ever tried to generate very complex SQL queries dynamically, you will also appreciate the composability and less declarative nature of FQL.
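As a quick illustration of that composition, and again assuming the client and q objects from index.js, you might build a query up in pieces before executing it:

// Nothing is sent to Fauna until client.query() is called
const consoleMatch = q.Match(q.Index('games_by_console'), 'windows')
const windowsTitles = q.Map(
  q.Paginate(consoleMatch),
  q.Lambda(['title', 'ref'], q.Var('title'))
)

client.query(windowsTitles).then(result => console.log(result))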

Save index.js and test out your API with this:

$  curl http://localhost:5000/api/v1/console/xbox
{"data":["Halo: Combat Evolved"]}

Neat, huh? But Match only returns documents whose fields are exact matches, which doesn’t help the user looking for a game whose title they can barely recall.

Although Fauna does not offer fuzzy searching via indexes (yet), we can provide similar functionality by making an index on all words in the string. Or, if we want really flexible fuzzy searching, we can use the Filter function. Note that this is not necessarily a good idea from a performance or cost point of view… but hey, we’ll do it because we can and because it is a great example of how flexible FQL is!

Filtering FaunaDB Documents by Search String

The last API call we are going to construct will let users find titles by name. Head back into your FaunaDB Console, select INDEXES, and click NEW INDEX. Name the new Index games_by_title and leave the Terms empty; you won’t be needing them.

Rather than rely on Match to compare the title to the search string, you will iterate over every game in your collection to find titles that contain the search query.

Remember how we mentioned that indexes are a bit like views? In order to filter on title, we need to include data.title as a value returned by the Index. Since we are using Filter on the results of Match, we have to make sure that Match returns the title so we can work with it.

Add data.title and ref as Values, compare your screen to mine:

Create another index called games_by_title using the parameters above.

Click “Save” when you’re ready.
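Again, if you would rather do this from the Shell, the equivalent CreateIndex call would be roughly this (a sketch):

CreateIndex({
  name: "games_by_title",
  source: Collection("games"),
  values: [{ field: ["data", "title"] }, { field: ["ref"] }]
})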

Back in index.js, add your fourth and final API call:

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {...})

api.get(['/api/v1/games/', '/api/v1/games'], (req, res) => {
  let findGamesByName = client.query(
    q.Map(
      q.Paginate(
        q.Filter(
          q.Match(q.Index('games_by_title')),
          q.Lambda(
            ['title', 'ref'],
            q.GT(
              q.FindStr(
                q.LowerCase(q.Var('title')),
                req.query.title.toLowerCase()
              ),
              -1
            )
          )
        )
      ),
      q.Lambda(['title', 'ref'], q.Get(q.Var('ref')))
    )
  )
  findGamesByName
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      res.status(500).send(error)
    })
})

Take a big breath, because I know there are many brackets (Lisp programmers will love this), but once you understand the components, the full query is quite easy to follow since it’s basically just like coding.

Let’s begin with the first new function you spot: Filter. Filter is again very similar to the filter you encounter in programming languages; it reduces an Array or Set to a subset based on the result of a Lambda function.

In this Filter, you exclude any game titles that do not contain the user’s search query.

You do that by comparing the result of FindStr (a string-finding function similar to JavaScript’s indexOf) to -1; a non-negative value here means FindStr discovered the user’s query in a lowercased version of the game’s title.

And the result of this Filter is passed to Map, where each document is retrieved and placed in the final result output.
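If you want to see that FindStr check on its own, a quick toy query in the Shell makes the behaviour clear:

GT(FindStr(LowerCase("BioShock Infinite"), "shock"), -1)

This returns true because “shock” appears in the lowercased title; searching for a string that is not there makes FindStr return -1, and the comparison returns false.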

Now you may have thought the obvious: performing a string comparison across four entries is cheap; across 2 million…? Not so much.

This is an inefficient way to perform a text search, but it will get the job done for the purpose of this example. (Maybe we should have used ElasticSearch or Solr for this?) Well, in that case, FaunaDB is quite a good fit as the central system that keeps your data safe and feeds it into a search engine, thanks to its temporal features, which let you ask Fauna: “Hey, give me the changes since timestamp X.” So you could set up ElasticSearch next to it and use FaunaDB (push messages are coming soon) to update it whenever there are changes. Anyone who has done this once knows how hard it is to keep such an external search index up to date and correct; FaunaDB makes it quite easy.

Test the API by searching for “Halo”:

$  curl http://localhost:5000/api/v1/games?title=halo

Don’t You Dare Forget This One Firebase Optimization

A lot of Firebase Cloud Functions code snippets make one terribly wrong assumption: that each function invocation is independent of another.

In reality, Firebase Function instances can remain “hot” for a short period of time, prepared to execute subsequent requests.

This means you should lazy-load your variables and cache the results to help reduce computation time (and money!) during peak activity. Here’s how:

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  ... // dump the existing code here
}

exports.api = functions.https.onRequest(api)
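Spelled out a little more, the pattern looks roughly like this (a sketch; the module names match the ones used throughout this tutorial, the secret is shown as an environment-variable placeholder, and the route handlers are elided):

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  // Only pay the require/initialization cost on a cold start
  functions = require('firebase-functions')
  admin = require('firebase-admin') // only needed if you use the Admin SDK elsewhere
  faunadb = require('faunadb')
  q = faunadb.query
  client = new faunadb.Client({ secret: process.env.FAUNADB_SECRET }) // placeholder for your secret
  express = require('express')
  cors = require('cors')

  api = express()
  api.use(cors({ origin: true }))

  // ...your existing /api/v1 route handlers go here
}

exports.api = functions.https.onRequest(api)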

Deploy Your REST API with Firebase Functions

Finally, deploy both your functions and hosting configuration to Firebase by running firebase deploy from your shell.

Without a custom domain name, refer to your Firebase subdomain when making API requests, e.g. https://{project-name}.firebaseapp.com/api/v1/.

What Next?

FaunaDB has made me a conscientious developer.

When using other schemaless databases, I start off with great intentions by treating documents as if I instantiated them with a DDL (strict types, version numbers, the whole shebang).

While that keeps me organized for a short while, soon after, standards fall in favor of speed and my documents splinter, leaving outdated formatting and zombie data behind.

By forcing me to think about how I query my data, which Indexes I need, and how to best manipulate that data before it returns to my server, I remain conscious of my documents.

To aid me in remaining forever organized, my catalog (in FaunaDB Console) of Indexes helps me keep track of everything my documents offer.

And by incorporating this wide range of arithmetic and linguistic functions right into the query language, FaunaDB encourages me to maximize efficiency and keep a close eye on my data-storage policies. Considering the affordable pricing model, I’d sooner run 10k+ data manipulations on FaunaDB’s servers than on a single Cloud Function.

For those reasons and more, I encourage you to take a peek at those functions and consider FaunaDB’s other powerful features.

The post Build a 100% Serverless REST API with Firebase Functions & FaunaDB appeared first on CSS-Tricks.
