
A Complete Walkthrough of GraphQL APIs with React and FaunaDB

As a web developer, there is an interesting bit of back and forth that always comes along with setting up a new application. Even using a full stack web framework like Ruby on Rails can be non-trivial to set up and deploy, especially if it’s your first time doing so in a while.

Personally I have always enjoyed being able to dig in and write the actual bit of application logic more so than setting up the apps themselves. Lately I have become a big fan of React applications together with a GraphQL API and storing state with the Apollo library.

Setting up a React application has become very easy in the past few years, but setting up a backend with a GraphQL API? Not so much. So when I was working on a project recently, I decided to look for an easier way to integrate a GraphQL API and was delighted to find FaunaDB.

FaunaDB is a NoSQL database as a service that makes provisioning a GraphQL API an incredibly simple process, and even comes with a free tier. Frankly I was surprised and really impressed with how quickly I was able to go from zero to a working API.

The service also touts its production readiness, with a focus on making scalability much easier than if you were managing your own backend. Although I haven’t explored its more advanced features yet, if it’s anything like my first impression then the prospects and implications of using FaunaDB are quite exciting. For now, I can confirm that for many of my projects, it provides an excellent solution for managing state together with a React application.

While working on my project, I did run into a few configuration issues when making all of the frameworks work together which I think could’ve been addressed with a guide that focuses on walking through standing up an application in its entirety. So in this article, I’m going to do a thorough walkthrough of setting up a small to-do React application on Heroku, then persisting data to that application with FaunaDB using the Apollo library. You can find the full source code here.

Our Application

For this walkthrough, we’re building a to-do list where a user can take the following actions:

  • Add a new item
  • Mark an item as complete
  • Remove an item

From a technical perspective, we’re going to accomplish this by doing the following:

  • Creating a React application
  • Deploying the application to Heroku
  • Provisioning a new FaunaDB database
  • Declaring a GraphQL API schema
  • Provisioning a new database key
  • Configuring Apollo in our React application to interact with our API
  • Writing application logic and consuming our API to persist information

Here’s a preview of what the final result will look like:

Creating the React Application

First we’ll create a boilerplate React application and make sure it runs. Assuming you have create-react-app installed, the commands to create a new application are:

create-react-app fauna-todo
cd fauna-todo
yarn start

After that, you should be able to head to http://localhost:3000 and see the generated homepage for the application.

Deploying to Heroku

As I mentioned above, deploying React applications has become awesomely easy over the last few years. I’m using Heroku here since it’s been my go-to platform as a service for a while now, but you could just as easily use another service like Netlify (though of course the configuration will be slightly different). Assuming you have a Heroku account and the Heroku CLI installed, then this article shows that you only need a few lines of code to create and deploy a React application.

git init
heroku create -b https://github.com/mars/create-react-app-buildpack.git
git push heroku master

And your app is deployed! To view it you can run:

heroku open

Provisioning a FaunaDB Database

Now that we have a React app up and running, let’s add persistence to the application using FaunaDB. Head to fauna.com to create a free account. After you have an account, click “New Database” on the dashboard, and enter in a name of your choosing:

Creating an API via GraphQL Schema in FaunaDB

In this example, we’re going to declare a GraphQL schema then use that file to automatically generate our API within FaunaDB. As a first stab at the schema for our to-do application, let’s suppose that there is simply a collection of “Items” with “name” as its sole field. Since I plan to build upon this schema later and like being able to see the schema itself at a glance, I’m going to create a schema.graphql file and add it to the top level of my React application. Here is the content for this file:

type Item {
  name: String
}

type Query {
  allItems: [Item!]
}

If you’re unfamiliar with the concept of defining a GraphQL schema, think of it as a manifest for declaring what kinds of objects and queries are available within your API. In this case, we’re saying that there is going to be an Item type with a name string field and that we are going to have an explicit query allItems to look up all item records. You can read more about schemas in this Apollo article and types in this graphql.org article. FaunaDB also provides a reference document for declaring and importing a schema file.

We can now upload this schema.graphql file and use it to generate a GraphQL API. Head to the FaunaDB dashboard again and click on “GraphQL” then upload your newly created schema file here:

Congratulations! You have created a fully functional GraphQL API. This page turns into a “GraphQL Playground” which lets you interact with your API. Click on the “Docs” tab in the sidebar to see the available queries and mutations.

Note that in addition to our allItems query, FaunaDB has generated the following queries/mutations automatically on our behalf:

  • findItemByID
  • createItem
  • updateItem
  • deleteItem

All of these were derived by declaring the Item type in our schema file. Pretty cool right? Let’s give these queries and mutations a spin to familiarize ourselves with them. We can execute queries and mutations directly in the “GraphQL Playground.” Let’s first run a query for items. Enter this query into the left pane of the playground:

query MyItemQuery {
  allItems {
    data {
      name
    }
  }
}

Then click on the play button to run it:

The result is listed on the right pane, and unsurprisingly returns no results since we haven’t created any items yet. Fortunately createItem was one of the mutations that was automatically generated from the schema and we can use that to populate a sample item. Let’s run this mutation:

mutation MyItemCreation {
  createItem(data: { name: "My first todo item" }) {
    name
  }
}

You can see the result of the mutation in the right pane. It seems like our item was created successfully, but just to double check we can re-run our first query and see the result:

If we add our first query back to the left pane of the playground, the play button gives you a choice as to which operation you’d like to perform. Finally, note in step 3 of the screenshot above that our item was indeed created successfully.

In addition to running the query above, we can also look in the “Collections” tab of FaunaDB to view the collection directly:

Provisioning a New Database Key

Now that we have the database itself configured, we need a way for our React application to access it.

For the sake of simplicity in this application, this will be done with a secret key that we can add as an environment variable to our React application. We aren’t going to have authentication for individual users. Instead we’ll generate an application key which has permission to create, read, update, and delete items.

Authentication and authorization are substantial topics on their own — if you would like to learn more on how FaunaDB handles them as a follow up exercise to this guide, you can read all about the topic here.

The application key we generate has an associated set of permissions that are grouped together in a “role.” Let’s begin by defining a role that has permission to perform CRUD operations on items, as well as perform the allItems query. Start by going to the “Security” tab, then clicking on “Manage Roles”:

There are two built-in roles, admin and server. We could in theory use these roles for our key, but this is a bad idea: those keys would allow whoever has access to them to perform database-level operations such as creating new collections or even destroying the database itself. So instead, let’s create a new role by clicking on the “New Custom Role” button:

You can name the role whatever you’d like; here we’re using the name ItemEditor and giving the role permission to read, write, create, and delete items — as well as permission to read the allItems index.

Save this role, then head to the “Security” tab and create a new key:

When creating a key, make sure to select “ItemEditor” for the role and whatever name you please:

Next you’ll be presented with your secret key which you’ll need to copy:

In order for React to load the key’s value as an environment variable, create a new file called .env.local which lives at the root level of your React application. In this file, add an entry for the generated key:

REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: Since it’s not good practice to store secrets directly in source control in plain text, make sure that you also have a .gitignore file in your project’s root directory that contains .env.local so that your secrets won’t be added to your git repo and shared with others.

It’s critical that this variable’s name starts with “REACT_APP_” otherwise it won’t be recognized when the application is started. By adding the value to the .env.local file, it will still be loaded for the application when running locally. You’ll have to explicitly stop and restart your application with yarn start in order to see these changes take.

If you’re interested in reading more about how environment variables are loaded in apps created via create-react-app, there is a full explanation here. We’ll cover adding this secret as an environment variable in Heroku later on in this article.

Connecting to FaunaDB in React with Apollo

In order for our React application to interact with our GraphQL API, we need some sort of GraphQL client library. Fortunately for us, the Apollo client provides an elegant interface for making API requests as well as caching and interacting with the results.

To install the relevant Apollo packages we’ll need, run:

yarn add @apollo/client graphql @apollo/react-hooks

Now in your src directory of your application, add a new file named client.js with the following content:

import { ApolloClient, InMemoryCache } from "@apollo/client";

export const client = new ApolloClient({
  uri: "https://graphql.fauna.com/graphql",
  headers: {
    authorization: `Bearer ${process.env.REACT_APP_FAUNA_SECRET}`,
  },
  cache: new InMemoryCache(),
});

What we’re doing here is configuring Apollo to make requests to our FaunaDB database. Specifically, the uri makes the request to Fauna itself, then the authorization header indicates that we’re connecting to the specific database instance for the provided key that we generated earlier.

There are two important implications from this snippet of code:

  1. The authorization header contains the key with the “ItemEditor” role, and is currently hard coded to use the same header regardless of which user is looking at our application. If you were to update this application to show a different to-do list for each user, you would need to login for each user and generate a token which could instead be passed in this header. Again, the FaunaDB documentation covers this concept if you care to learn more about it.
  2. As with any time you add a layer of caching to a system (as we are doing here with Apollo), you introduce the potential to have stale data. FaunaDB’s operations are strongly consistent, and you can configure Apollo’s fetchPolicy to minimize the potential for stale data (see the sketch after this list). In order to prevent stale reads to our cache, we’ll use a combination of refetch queries and specifying response fields in our mutations.
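
As a minimal sketch of that second point, here’s what opting a single query into a stricter fetch policy might look like. This isn’t code from the walkthrough’s repo; ITEMS_QUERY is the items query we define later in this article, and "cache-and-network" is just one of the policies Apollo supports:

// Inside a component, reusing the useQuery import from @apollo/react-hooks:
const { data, loading } = useQuery(ITEMS_QUERY, {
  // Render cached data immediately, but always revalidate against the API.
  fetchPolicy: "cache-and-network",
});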

Next we’ll replace the contents of the home page’s component. Head to App.js and replace its content with:

import React from "react"; import { ApolloProvider } from "@apollo/client"; import { client } from "./client"; function App() {  return (    <ApolloProvider client={client}>      <div style={{ padding: "5px" }}>        <h3>My Todo Items</h3>        <div>items to get loaded here</div>      </div>    </ApolloProvider>  ); }

Note: For this sample application I’m focusing on functionality over presentation, so you’ll see some inline styles. While I certainly wouldn’t recommend this for a production-grade application, it at least demonstrates any added styling in the most straightforward manner within a small demo.

Visit http://localhost:3000 again and you’ll see:

This page contains the hard-coded values we’ve set in our JSX above. What we would really like to see, however, is the to-do item we created in our database. In the src directory, let’s create a component called ItemList which lists out any items in our database:

import React from "react"; import gql from "graphql-tag"; import { useQuery } from "@apollo/react-hooks"; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; export function ItemList() {  const { data, loading } = useQuery(ITEMS_QUERY); if (loading) {    return "Loading...";  } return (    <ul>      {data.allItems.data.map((item) => {        return <li key={item._id}>{item.name}</li>;      })}    </ul>  ); }

Then update App.js to render this new component — see the full commit in this example’s source code to see this step in its entirety. Previewing your app again, you’ll see that your to-do item has loaded:

Now is a good time to commit your progress in git. And since we’re using Heroku, deploying is a snap:

git push heroku master
heroku open

When you run heroku open though, you’ll see that the page is blank. If we inspect the network traffic and the request to FaunaDB, we’ll see an error about how the database secret is missing:

This makes sense, since we haven’t configured this value in Heroku yet. Let’s set it by going to the Heroku dashboard, selecting your application, then clicking on the “Settings” tab. There, you should add the REACT_APP_FAUNA_SECRET key and value used in the .env.local file earlier. Reusing this key is done for demonstration purposes; in a “real” application, you would probably have a separate database and separate keys for each environment.

If you would prefer to use the command line rather than Heroku’s web interface, you can use the following command and replace the secret with your key instead:

heroku config:set REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: as noted in the Heroku docs, you need to trigger a deploy in order for this environment variable to apply in your app:

git commit --allow-empty -m 'Add REACT_APP_FAUNA_SECRET env var'
git push heroku master
heroku open

After running this last command, your Heroku-hosted app should appear and load the items from your database.

Adding New To-Do Items

We now have all of the pieces in place for accessing our FaunaDB database, both locally and in a hosted Heroku environment. Now adding items is as simple as calling the mutation we used in the GraphQL Playground earlier. Here is the code for an AddItem component, which uses a bare-bones HTML form to call the createItem mutation:

import React from "react"; import gql from "graphql-tag"; import { useMutation } from "@apollo/react-hooks"; const CREATE_ITEM = gql`  mutation CreateItem($ data: ItemInput!) {    createItem(data: $ data) {      _id    }  } `; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; export function AddItem() {  const [showForm, setShowForm] = React.useState(false);  const [newItemName, setNewItemName] = React.useState(""); const [createItem, { loading }] = useMutation(CREATE_ITEM, {    refetchQueries: [{ query: ITEMS_QUERY }],    onCompleted: () => {      setNewItemName("");      setShowForm(false);    },  }); if (showForm) {    return (      <form        onSubmit={(e) => {          e.preventDefault();          createItem({ variables: { data: { name: newItemName } } });        }}      >        <label>          <input            disabled={loading}            type="text"            value={newItemName}            onChange={(e) => setNewItemName(e.target.value)}            style={{ marginRight: "5px" }}          />        </label>        <input disabled={loading} type="submit" value="Add" />      </form>    );  } return <button onClick={() => setShowForm(true)}>Add Item</button>; }

After adding a reference to AddItem in our App component, we can verify that adding items works as expected. Again, you can see the full commit in the demo app for a recap of this step.

Deleting To-Do Items

Similar to how we called the automatically generated createItem mutation to add new items, we can call the generated deleteItem mutation to remove items from our list. Here’s what our updated ItemList component looks like after adding this mutation:

import React from "react"; import gql from "graphql-tag"; import { useMutation, useQuery } from "@apollo/react-hooks"; const ITEMS_QUERY = gql`  {    allItems {      data {        _id        name      }    }  } `; const DELETE_ITEM = gql`  mutation DeleteItem($ id: ID!) {    deleteItem(id: $ id) {      _id    }  } `; export function ItemList() {  const { data, loading } = useQuery(ITEMS_QUERY); const [deleteItem, { loading: deleteLoading }] = useMutation(DELETE_ITEM, {    refetchQueries: [{ query: ITEMS_QUERY }],  }); if (loading) {    return <div>Loading...</div>;  } return (    <ul>      {data.allItems.data.map((item) => {        return (          <li key={item._id}>            {item.name}{" "}            <button              disabled={deleteLoading}              onClick={(e) => {                e.preventDefault();                deleteItem({ variables: { id: item._id } });              }}            >              Remove            </button>          </li>        );      })}    </ul>  ); }

Reloading our app and adding another item should result in a page that looks like this:

If you click on the “Remove” button for any item, the DELETE_ITEM mutation is fired and, upon completion, the entire list of items is refetched, as specified by the refetchQueries option.

One thing you may have noticed is that in our ITEMS_QUERY, we’re specifying _id as one of the fields we’d like in the result set from our query. This _id field is automatically generated by FaunaDB as a unique identifier for each document in a collection, and should be used when updating or deleting a record.

Marking Items as Complete

This wouldn’t be a fully functional to-do list without the ability to mark items as complete! So let’s add that now. By the time we’re done, we expect the app to look like this:

The first step we need to take is updating our Item schema within FaunaDB since right now the only information we store about an item is its name. Heading to our schema.graphql file, we can add a new field to track the completion state for an item:

type Item {
  name: String
  isComplete: Boolean
}

type Query {
  allItems: [Item!]
}

Now head to the GraphQL tab in the FaunaDB console and click on the “Update Schema” link to upload the newly updated schema file.

Note: there is also an “Override Schema” option, which can be used to rewrite your database’s schema from scratch if you’d like. One consideration to make when choosing to override the schema completely is that your existing data is deleted from the database. This may be fine for a test environment, but a staging or production environment would require a proper database migration instead.

Since the changes we’re making here are additive, there won’t be any conflict with the existing schema so we can keep our existing data.

You can view the mutation itself and its expected schema in the GraphQL Playground in FaunaDB:

This tells us that we can call the updateItem mutation with an id parameter and a data parameter of type ItemInput. As with our other requests, we could test this mutation in the playground if we wanted. For now, we can add it directly to the application. In ItemList.js, let’s add this mutation with this code as outlined in the example repository.
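
Since the exact code lives in the linked commit rather than in this article, here’s a rough sketch of the shape it might take. The component name and markup here are my own illustration; the repository’s version may differ:

import React from "react";
import gql from "graphql-tag";
import { useMutation } from "@apollo/react-hooks";

const UPDATE_ITEM = gql`
  mutation UpdateItem($id: ID!, $data: ItemInput!) {
    updateItem(id: $id, data: $data) {
      _id
      name
      isComplete
    }
  }
`;

// A checkbox rendered for each item; toggling it flips isComplete.
// We include the existing name alongside the toggled isComplete value
// so the name isn't lost if the update replaces the document's data.
export function CompleteToggle({ item }) {
  const [updateItem, { loading }] = useMutation(UPDATE_ITEM);

  return (
    <input
      type="checkbox"
      disabled={loading}
      checked={!!item.isComplete}
      onChange={() =>
        updateItem({
          variables: {
            id: item._id,
            data: { name: item.name, isComplete: !item.isComplete },
          },
        })
      }
    />
  );
}

Because the mutation’s response asks for the item’s _id along with the changed fields, Apollo can match the result to the cached item and update it in place.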

The references to UPDATE_ITEM are the most relevant changes we’ve made. It’s interesting to note as well that we don’t need the refetchQueries parameter for this mutation. When the update mutation returns, Apollo updates the corresponding item in the cache based on its identifier field (_id in this case) so our React component re-renders appropriately as the cache updates.

We now have all of the functionality for an initial version of a to-do application. As a final step, push your branch one more time to Heroku:

git push heroku master
heroku open

Conclusion

Take a moment to pat yourself on the back! You’ve created a brand-new React application, added persistence at a database level with FaunaDB, and can make deployments available to the entire world with the push of a branch to Heroku.

Now that we’ve covered some of the concepts behind provisioning and interacting with FaunaDB, setting up any similar project in the future is an amazingly fast process. Being able to provision a GraphQL-accessible database in minutes is a dream for me when it comes to spinning up a new project. Not only that, but this is a production grade database that you don’t have to worry about configuring or scaling — and instead get to focus on writing the rest of your application instead of playing the role of a database administrator.



Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
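
To make that round trip concrete, here’s a quick sketch (the URL is the hypothetical one from the example above) of fetching a single book from a REST endpoint in JavaScript:

// Fetch book #123 from a hypothetical REST API and log its title.
fetch("https://api.example.com/rest/books/123")
  .then((response) => response.json())
  .then((book) => console.log(book.title, book.author));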

That’s the basic idea of two concepts we’ll be looking at here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph, and that graph can help a server do a lot of clever work formatting and sending (or updating) data, saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a resource-specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a JSON-like query:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{   "books" : [     {       "id" : 123       "title" : "The Greatest CSS Tricks Vol. I"       "author" : "Chris Coyier"     }, {       // ...     }   ] }

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.
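
For contrast with the REST call sketched earlier, the same lookup against a GraphQL API is a single POST to one endpoint, with the query sent in the request body. Again, this is an illustrative sketch with the hypothetical URL from above:

// POST a GraphQL query to a single endpoint and log the result.
fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ books { id title author } }" }),
})
  .then((response) => response.json())
  .then(({ data }) => console.log(data.books));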

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: Node.js and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our application in a fully-featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating into it in the terminal, then running npm init. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI (which includes Netlify Dev) as a global dependency:

npm i -g netlify-cli

File structure

There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:

[build]
   # This command builds the site
   command = "npm run build"
   # This is the directory that will be deployed
   publish = "build"
   # This is where our functions are located
   functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();

module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using body-parser, and handle it with express-graphql. The graphiql: true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API — say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
const schema = new GraphQLSchema({   query: new GraphQLObjectType({     name: "HelloWorld",     fields: () => ({       message: {         type: GraphQLString,         resolve: () => "Hello World",       },     }),   }), });

The fields function returns an object, and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
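
To illustrate that last point, here’s a hedged sketch of what a database-backed resolver could look like, reusing the imports from above. fetchGreeting is a hypothetical stand-in for a real database call:

// Hypothetical async data source standing in for a database query.
const fetchGreeting = async () => ({ text: "Hello from the database" });

const schemaWithDb = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        // Resolvers can be async; GraphQL awaits the returned promise.
        resolve: async () => (await fetchGreeting()).text,
      },
    }),
  }),
});
// You'd pass this schema to expressGraphQL just like the one below.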

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(   "/",   expressGraphQL({     schema: schema,     graphiql: true   }) );

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{   "data": {     "message": "Hello World"   } }

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember it and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working more with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see at the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource to go one step deeper into GraphQL. However, it’s also focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reason you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.


Get Started Building GraphQL APIs With Node

We all have a number of interests and passions. For example, I’m interested in JavaScript, 90’s indie rock and hip hop, obscure jazz, the city of Pittsburgh, pizza, coffee, and movies starring John Lurie. We also have family members, friends, acquaintances, classmates, and colleagues who also have their own social relationships, interests, and passions. Some of these relationships and interests overlap, like my friend Riley who shares my interest in 90’s hip hop and pizza. Others do not, like my colleague Harrison, who prefers Python to JavaScript, only drinks tea, and prefers current pop music. All together, we each have a connected graph of the people in our lives, and the ways that our relationships and interests overlap.

These types of interconnected data are exactly the challenge that GraphQL initially set out to solve in API development. By writing a GraphQL API we are able to efficiently connect data, which reduces the complexity and number of requests, while allowing us to serve the client precisely the data that it needs. (If you’re into more GraphQL metaphors, check out Meeting GraphQL at a Cocktail Mixer.)

In this article, we’ll build a GraphQL API in Node.js, using the Apollo Server package. To do so, we’ll explore fundamental GraphQL topics, write a GraphQL schema, develop code to resolve our schema functions, and access our API using the GraphQL Playground user interface.

What is GraphQL?

GraphQL is an open source query and data manipulation language for APIs. It was developed with the goal of providing single endpoints for data, allowing applications to request exactly the data that is needed. This has the benefit of not only simplifying our UI code, but also improving performance by limiting the amount of data that needs to be sent over the wire.

What we’re building

To follow along with this tutorial, you’ll need Node v8.x or later and some familiarity with working with the command line. 

We’re going to build an API application for book highlights, allowing us to store memorable passages from the things that we read. Users of the API will be able to perform “CRUD” (create, read, update, delete) operations against their highlights:

  • Create a new highlight
  • Read an individual highlight as well as a list of highlights
  • Update a highlight’s content
  • Delete a highlight

Getting started

To get started, first create a new directory for our project, initialize a new node project, and install the dependencies that we’ll need:

# make the new directory
mkdir highlights-api
# change into the directory
cd highlights-api
# initiate a new node project
npm init -y
# install the project dependencies
npm install apollo-server graphql
# install the development dependencies
npm install nodemon --save-dev

Before moving on, let’s break down our dependencies:

  • apollo-server is a library that enables us to work with GraphQL within our Node application. We’ll be using it as a standalone library, but the team at Apollo has also created middleware for working with existing Node web applications in Express, hapi, Fastify, and Koa.
  • graphql includes the GraphQL language and is a required peer dependency of apollo-server.
  • nodemon is a helpful library that will watch our project for changes and automatically restart our server.

With our packages installed, let’s next create our application’s root file, named index.js. For now, we’ll console.log() a message in this file:

console.log("📚 Hello Highlights");

To make our development process simpler, we’ll update the scripts object within our package.json file to make use of the nodemon package:

"scripts": {   "start": "nodemon index.js" },

Now, we can start our application by typing npm start in the terminal application. If everything is working properly, you will see 📚 Hello Highlights logged to your terminal.

GraphQL schema types

A schema is a written representation of our data and interactions. By requiring a schema, GraphQL enforces a strict plan for our API. This is because the API can only return data and perform interactions that are defined within the schema. The fundamental components of GraphQL schemas are object types. GraphQL contains five built-in scalar types, each of which appears in the sketch after this list:

  • String: A string with UTF-8 character encoding
  • Boolean: A true or false value
  • Int: A 32-bit integer
  • Float: A floating-point value
  • ID: A unique identifier
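
To see those in context, here’s a hypothetical type (not part of our highlights app) that exercises all five built-in scalars:

const { gql } = require("apollo-server");

// Illustrative only — a Book type using each built-in scalar.
const exampleTypeDefs = gql`
  type Book {
    id: ID
    title: String
    inPrint: Boolean
    pageCount: Int
    rating: Float
  }
`;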

We can construct a schema for an API with these basic components. In a file named schema.js, we can import the gql library and prepare the file for our schema syntax:

const { gql } = require('apollo-server');

const typeDefs = gql`
  # The schema will go here
`;

module.exports = typeDefs;

To write our schema, we first define the type. Let’s consider how we might define a schema for our highlights application. To begin, we would create a new type with a name of Highlight:

const typeDefs = gql`
  type Highlight {
  }
`;

Each highlight will have a unique ID, some content, a title, and an author. The Highlight schema will look something like this:

const typeDefs = gql`
  type Highlight {
    id: ID
    content: String
    title: String
    author: String
  }
`;

We can make some of these fields required by adding an exclamation point:

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
`;

Though we’ve defined an object type for our highlights, we also need to provide a description of how a client will fetch that data. This is called a query. We’ll dive more into queries shortly, but for now let’s describe in our schema the ways in which someone will retrieve highlights. When requesting all of our highlights, the data will be returned as an array (represented as [Highlight]) and when we want to retrieve a single highlight we will need to pass an ID as a parameter.

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
  type Query {
    highlights: [Highlight]!
    highlight(id: ID!): Highlight
  }
`;

Now, in the index.js file, we can import our type definitions and set up Apollo Server:

const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');

const server = new ApolloServer({ typeDefs });

server.listen().then(({ url }) => {
  console.log(`📚 Highlights server ready at ${url}`);
});

If we’ve kept the node process running, the application will have automatically updated and relaunched, but if not, typing npm start  from the project’s directory in the terminal window will start the server. If we look at the terminal, we should see that nodemon is watching our files and the server is running on a local port:

[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node index.js`
📚 Highlights server ready at http://localhost:4000/

Visiting the URL in the browser will launch the GraphQL Playground application, which provides a user interface for interacting with our API.

GraphQL Resolvers

Though we’ve developed our project with an initial schema and Apollo Server setup, we can’t yet interact with our API. To do so, we’ll introduce resolvers. Resolvers perform exactly the action their name implies; they resolve the data that the API user has requested. We will write these resolvers by first defining them in our schema and then implementing the logic within our JavaScript code. Our API will contain two types of resolvers: queries and mutations.

Let’s first add some data to interact with. In an application, this would typically be data that we’re retrieving and writing to from a database, but for our example let’s use an array of objects. In the index.js file add the following:

let highlights = [
  {
    id: '1',
    content: 'One day I will find the right words, and they will be simple.',
    title: 'Dharma Bums',
    author: 'Jack Kerouac'
  },
  {
    id: '2',
    content: 'In the limits of a situation there is humor, there is grace, and everything else.',
    title: 'Arbitrary Stupid Goal',
    author: 'Tamara Shopsin'
  }
]

Queries

A query requests specific data from an API, in its desired format. The query will then return an object, containing the data that the API user has requested. A query never modifies the data; it only accesses it. We’ve already written two queries in our schema. The first returns an array of highlights and the second returns a specific highlight. The next step is to write the resolvers that will return the data.

In the index.js file, we can add a resolvers object, which can contain our queries:

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  }
};

The highlights query returns the full array of highlights data. The highlight query accepts two parameters: parent and args. The parent is the first parameter of any GraphQL query in Apollo Server and provides a way of accessing the context of the query. The args parameter allows us to access the user-provided arguments. In this case, users of the API will be supplying an id argument to access a specific highlight.

We can then update our Apollo Server configuration to include the resolvers:

const server = new ApolloServer({ typeDefs, resolvers });

With our query resolvers written and Apollo Server updated, we can now query the API using the GraphQL Playground. To access the GraphQL Playground, visit http://localhost:4000 in your web browser.

A query is formatted as so:

query {
  queryName {
    field
    field
  }
}

With this in mind, we can write a query that requests the ID, content, title, and author for each of our highlights:

query {
  highlights {
    id
    content
    title
    author
  }
}

Let’s say that we had a page in our UI that lists only the titles and authors of our highlighted texts. We wouldn’t need to retrieve the content for each of those highlights. Instead, we could write a query that only requests the data that we need:

query {
  highlights {
    title
    author
  }
}

We’ve also written a resolver to query for an individual note by including an ID parameter with our query. We can do so as follows:

query {
  highlight(id: "1") {
    content
  }
}

Mutations

We use a mutation when we want to modify the data in our API. In our highlight example, we will want to write a mutation to create a new highlight, one to update an existing highlight, and a third to delete a highlight. Similar to a query, a mutation is also expected to return a result in the form of an object, typically the end result of the performed action.

The first step to updating anything in GraphQL is to write the schema. We can include mutations in our schema, by adding a mutation type to our schema.js file:

type Mutation {
  newHighlight (content: String! title: String author: String): Highlight!
  updateHighlight(id: ID! content: String!): Highlight!
  deleteHighlight(id: ID!): Highlight!
}

Our newHighlight mutation will take the required value of content along with optional title and author values and return a Highlight. The updateHighlight mutation will require that a highlight id and content be passed as argument values and will return the updated Highlight. Finally, the deleteHighlight mutation will accept an ID argument, and will return the deleted Highlight.

With the schema updated to include mutations, we can now update the resolvers in our index.js file to perform these actions. Each mutation will update our highlights array of data.

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  },
  Mutation: {
    newHighlight: (parent, args) => {
      const highlight = {
        id: String(highlights.length + 1),
        title: args.title || '',
        author: args.author || '',
        content: args.content
      };
      highlights.push(highlight);
      return highlight;
    },
    updateHighlight: (parent, args) => {
      const index = highlights.findIndex(highlight => highlight.id === args.id);
      const highlight = {
        id: args.id,
        content: args.content,
        author: highlights[index].author,
        title: highlights[index].title
      };
      highlights[index] = highlight;
      return highlight;
    },
    deleteHighlight: (parent, args) => {
      const deletedHighlight = highlights.find(
        highlight => highlight.id === args.id
      );
      highlights = highlights.filter(highlight => highlight.id !== args.id);
      return deletedHighlight;
    }
  }
};

With these mutations written, we can use the GraphQL Playground to practice mutating the data. The structure of a mutation is nearly identical to that of a query, specifying the name of the mutation, passing the argument values, and requesting specific data in return. Let’s start by adding a new highlight:

mutation {
  newHighlight(author: "Adam Scott" title: "JS Everywhere" content: "GraphQL is awesome") {
    id
    author
    title
    content
  }
}

We can then write mutations to update a highlight:

mutation {
  updateHighlight(id: "3" content: "GraphQL is rad") {
    id
    content
  }
}

And to delete a highlight:

mutation {
  deleteHighlight(id: "3") {
    id
  }
}

Wrapping up

Congratulations! You’ve now successfully built a GraphQL API, using Apollo Server, and can run GraphQL queries and mutations against an in-memory data object. We’ve established a solid foundation for exploring the world of GraphQL API development.

From here, there are plenty of potential next steps to level up, such as connecting the resolvers to a real database.


Instant GraphQL Backend with Fine-grained Security Using FaunaDB

GraphQL is becoming popular and developers are constantly looking for frameworks that make it easy to set up a fast, secure and scalable GraphQL API. In this article, we will learn how to create a scalable and fast GraphQL API with authentication and fine-grained data-access control (authorization). As an example, we’ll build an API with register and login functionality. The API will be about users and confidential files so we’ll define advanced authorization rules that specify whether a logged-in user can access certain files. 

By using FaunaDB’s native GraphQL and security layer, we receive all the necessary tools to set up such an API in minutes. FaunaDB has a free tier so you can easily follow along by creating an account at https://dashboard.fauna.com/. Since FaunaDB automatically provides the necessary indexes and translates each GraphQL query to one FaunaDB query, your API is also as fast as it can be (no n+1 problems!).

Setting up the API is simple: drop in a schema and we are ready to start. So let’s get started!  

The use-case: users and confidential files

We need an example use-case that demonstrates how security and GraphQL API features can work together. In this example, there are users and files. Some files can be accessed by all users, and some are only meant to be accessed by managers. The following GraphQL schema will define our model:

type User {
  username: String! @unique
  role: UserRole!
}

enum UserRole {
  MANAGER
  EMPLOYEE
}

type File {
  content: String!
  confidential: Boolean!
}

input CreateUserInput {
  username: String!
  password: String!
  role: UserRole!
}

input LoginUserInput {
  username: String!
  password: String!
}

type Query {
  allFiles: [File!]!
}

type Mutation {
  createUser(input: CreateUserInput): User! @resolver(name: "create_user")
  loginUser(input: LoginUserInput): String! @resolver(name: "login_user")
}

When looking at the schema, you might notice that the createUser and loginUser Mutation fields have been annotated with a special directive named @resolver. This is a directive provided by the FaunaDB GraphQL API, which allows us to define a custom behavior for a given Query or Mutation field. Since we’ll be using FaunaDB’s built-in authentication mechanisms, we will need to define this logic in FaunaDB after we import the schema. 

Importing the schema

First, let’s import the example schema into a new database. Log into the FaunaDB Cloud Console with your credentials. If you don’t have an account yet, you can sign up for free in a few seconds.

Once logged in, click the “New Database” button from the home page:

Choose a name for the new database, and click the “Save” button: 

Next, we will import the GraphQL schema listed above into the database we just created. To do so, create a file named schema.gql containing the schema definition. Then, select the GRAPHQL tab from the left sidebar, click the “Import Schema” button, and select the newly-created file: 

The import process creates all of the necessary database elements, including collections and indexes, for backing up all of the types defined in the schema. It automatically creates everything your GraphQL API needs to run efficiently. 

You now have a fully functional GraphQL API which you can start testing out in the GraphQL playground. But we do not have data yet. More specifically, we would like to create some users to start testing our GraphQL API. However, since users will be part of our authentication, they are special: they have credentials and can be impersonated. Let’s see how we can create some users with secure credentials!

Custom resolvers for authentication

Remember the createUser and loginUser mutation fields that were annotated with the special directive named @resolver? createUser is exactly what we need to start creating users; however, the schema did not really define how a user needs to be created. Instead, it was tagged with a @resolver tag.

By tagging a specific mutation with a custom resolver such as @resolver(name: "create_user"), we are informing FaunaDB that this mutation is not implemented yet but will be implemented by a User-defined function (UDF). Since our GraphQL schema does not know how to express this, the import process will only create a function template which we still have to fill in.

A UDF is a custom FaunaDB function, similar to a stored procedure, that enables users to define a tailor-made operation in Fauna’s Query Language (FQL). This function is then used as the resolver of the annotated field. 

We will need a custom resolver since we will take advantage of the built-in authentication capabilities, which cannot be expressed in standard GraphQL. FaunaDB allows you to set a password on any database entity. This password can then be used to impersonate this database entity with the Login function, which returns a token with certain permissions. The permissions that this token holds depend on the access rules that we will write.

Let’s continue to implement the UDF for the createUser field resolver so that we can create some test users. First, select the Shell tab from the left sidebar:

As explained before, a template UDF has already been created during the import process. When called, this template UDF prints an error message stating that it needs to be updated with a proper implementation. In order to update it with the intended behavior, we are going to use FQL’s Update function.

So, let’s copy the following FQL query into the web-based shell, and click the “Run Query” button:

Update(Function("create_user"), {   "body": Query(     Lambda(["input"],       Create(Collection("User"), {         data: {           username: Select("username", Var("input")),           role: Select("role", Var("input")),         },         credentials: {           password: Select("password", Var("input"))         }       })       )   ) });

Your screen should look similar to:

The create_user UDF will be in charge of properly creating a User document along with a password value. The password is stored in the document within a special object named credentials that is encrypted and cannot be read back by any FQL function. As a result, the password is securely saved in the database, making it impossible to read from either the FQL or the GraphQL APIs. The password will be used later for authenticating a User through a dedicated FQL function named Login, as explained next.

Now, let’s add the proper implementation for the UDF backing up the loginUser field resolver through the following FQL query:

Update(Function("login_user"), {   "body": Query(     Lambda(["input"],       Select(         "secret",         Login(           Match(Index("unique_User_username"), Select("username", Var("input"))),            { password: Select("password", Var("input")) }         )       )     )   ) });

Copy the query listed above and paste it into the shell’s command panel, and click the “Run Query” button:

The login_user UDF will attempt to authenticate a User with the given username and password credentials. As mentioned before, it does so via the Login function. The Login function verifies that the given password matches the one stored along with the User document being authenticated. Note that the password stored in the database is not output at any point during the login process. Finally, in case the credentials are valid, the login_user UDF returns an authorization token called a secret which can be used in subsequent requests for validating the User’s identity.

With the resolvers in place, we will continue with creating some sample data. This will let us try out our use case and help us better understand how the access rules are defined later on.

Creating sample data

First, we are going to create a manager user. Select the GraphQL tab from the left sidebar, copy the following mutation into the GraphQL Playground, and click the “Play” button:

mutation CreateManagerUser {
  createUser(input: {
    username: "bill.lumbergh"
    password: "123456"
    role: MANAGER
  }) {
    username
    role
  }
}

Your screen should look like this:

Next, let’s create an employee user by running the following mutation through the GraphQL Playground editor:

mutation CreateEmployeeUser {
  createUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
    role: EMPLOYEE
  }) {
    username
    role
  }
}

You should see the following response:

Now, let’s create a confidential file by running the following mutation:

mutation CreateConfidentialFile {
  createFile(data: {
    content: "This is a confidential file!"
    confidential: true
  }) {
    content
    confidential
  }
}

As a response, you should get the following:

And lastly, create a public file with the following mutation:

mutation CreatePublicFile {
  createFile(data: {
    content: "This is a public file!"
    confidential: false
  }) {
    content
    confidential
  }
}

If successful, it should prompt the following response:

Now that all the sample data is in place, we need access rules since this article is about securing a GraphQL API. The access rules determine how the sample data we just created can be accessed, since by default a user can only access his own user entity. In this case, we are going to implement the following access rules: 

  1. Allow employee users to read public files only.
  2. Allow manager users to read both public files and, only during weekdays, confidential files.

As you might have already noticed, these access rules are highly specific. We will see however that the ABAC system is powerful enough to express very complex rules without getting in the way of the design of your GraphQL API.

Such access rules are not part of the GraphQL specification so we will define the access rules in the Fauna Query Language (FQL), and then verify that they are working as expected by executing some queries from the GraphQL API. 

But what is this “ABAC” system that we just mentioned? What does it stand for, and what can it do?

What is ABAC?

ABAC stands for Attribute-Based Access Control. As its name indicates, it’s an authorization model that establishes access policies based on attributes. In simple words, it means that you can write security rules that involve any of the attributes of your data. If our data contains users we could use the role, department, and clearance level to grant or refuse access to specific data. Or we could use the current time, day of the week, or location of the user to decide whether he can access a specific resource. 

In essence, ABAC allows the definition of fine-grained access control policies based on environmental properties and your data. Now that we know what it can do, let’s define some access rules to give you concrete examples.

Defining the access rules

In FaunaDB, access rules are defined in the form of roles. A role consists of the following data:

  • name —  the name that identifies the role
  • privileges — specific actions that can be executed on specific resources 
  • membership — specific identities that should have the specified privileges

Roles are created through the CreateRole FQL function, as shown in the following example snippet:

CreateRole({
  name: "role_name",
  membership: [
    // ...
  ],
  privileges: [
    // ...
  ]
})

You can see two important concepts in this role: membership and privileges. Membership defines who receives the privileges of the role, and privileges defines what those permissions are. Let’s write a simple example rule to start with: “Any user can read all files.”

Since the rule applies on all users, we would define the membership like this: 

membership: {
  resource: Collection("User")
}

Simple, right? We then continue to define the “Can read all files” privilege for all of these users.

privileges: [
  {
    resource: Collection("File"),
    actions: { read: true }
  }
]

The direct effect of this is that any token that you receive by logging in with a user via our loginUser GraphQL mutation can now access all files. 
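Putting those two fragments together, the complete role for this starter rule would look something like this (the role name here is just an illustrative choice):

CreateRole({
  name: "all_users_can_read_files",
  // Who gets the privileges: every document in the User collection.
  membership: {
    resource: Collection("User")
  },
  // What they can do: read any document in the File collection.
  privileges: [
    {
      resource: Collection("File"),
      actions: { read: true }
    }
  ]
})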

This is the simplest rule that we can write, but in our example we want to limit access to some confidential files. To do that, we can replace the {read: true} syntax with a function. Since we have defined that the resource of the privilege is the "File" collection, this function will take each file that would be accessed by a query as the first parameter. You can then write rules such as: “A user can only access a file if it is not confidential”. In FaunaDB's FQL, such a function is written by using Query(Lambda('x', ... <logic that uses Var('x')>)).

Below is the privilege that would only provide read access to non-confidential files: 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      // Read and establish rule based on action attribute
      read: Query(
        // Read and establish rule based on resource attribute
        Lambda("fileRef",
          Not(Select(["data", "confidential"], Get(Var("fileRef"))))
        )
      )
    }
  }
]

This directly uses properties of the “File” resource we are trying to access. Since it’s just a function, we could also take into account environmental properties like the current time. For example, let’s write a rule that only allows access on weekdays. 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      read: Query(
        Lambda("fileRef",
          Let(
            {
              dayOfWeek: DayOfWeek(Now())
            },
            And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))
          )
        )
      )
    }
  }
]

As mentioned in our rules, confidential files should only be accessible by managers. Managers are also users, so we need a rule that applies to a specific segment of our collection of users. Luckily, we can also define the membership as a function; for example, the following Lambda only considers users who have the MANAGER role to be part of the role membership. 

membership: {
  resource: Collection("User"),
  predicate: Query(
    // Read and establish rule based on user attribute
    Lambda("userRef",
      Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
    )
  )
}

In sum, FaunaDB roles are very flexible entities that allow defining access rules based on all of the system elements’ attributes, with different levels of granularity. The place where the rules are defined — privileges or membership — determines their granularity and the attributes that are available, and will differ with each particular use case.

Now that we have covered the basics of how roles work, let’s continue by creating the access rules for our example use case!

In order to keep things neat and tidy, we’re going to create two roles: one for each of the access rules. This will allow us to extend the roles with further rules in an organized way if required later. Nonetheless, be aware that all of the rules could also have been defined together within just one role if needed.

Let’s implement the first rule: 

“Allow employee users to read public files only.”

In order to create a role meeting these conditions, we are going to use the following query:

CreateRole({
  name: "employee_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has EMPLOYEE role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "EMPLOYEE")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve the
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
              },
              // Resource attribute based rule:
              // It grants access to public files only.
              Not(Select(["data", "confidential"], Var("file")))
            )
          )
        )
      }
    }
  ]
})

Select the Shell tab from the left sidebar, copy the above query into the command panel, and click the “Run Query” button:

Next, let’s implement the second access rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

In this case, we are going to use the following query:

CreateRole({
  name: "manager_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has MANAGER role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
                dayOfWeek: DayOfWeek(Now())
              },
              Or(
                // Resource attribute based rule:
                // It grants access to public files.
                Not(Select(["data", "confidential"], Var("file"))),
                // Resource and environmental attribute based rule:
                // It grants access to confidential files only on weekdays.
                And(
                  Select(["data", "confidential"], Var("file")),
                  And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))
                )
              )
            )
          )
        )
      }
    }
  ]
})

Copy the query into the command panel, and click the “Run Query” button:

At this point, we have created all of the necessary elements for implementing and trying out our example use case! Let’s continue with verifying that the access rules we just created are working as expected…

Putting everything in action

Let’s start by checking the first rule: 

“Allow employee users to read public files only.”

The first thing we need to do is log in as an employee user so that we can verify which files can be read on its behalf. In order to do so, execute the following mutation from the GraphQL Playground console:

mutation LoginEmployeeUser {
  loginUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
  })
}

As a response, you should get a secret access token. The secret indicates that the user has been authenticated successfully:

At this point, it’s important to remember that the access rules we defined earlier are not directly associated with the secret that is generated as a result of the login process. Unlike other authorization models, the secret token does not contain any authorization information on its own; it is simply proof that a given document has been authenticated.

As explained before, access rules are stored in roles, and roles are associated with documents through their membership configuration. After authentication, the secret token can be used in subsequent requests to prove the caller’s identity and determine which roles are associated with it. This means that access rules are effectively verified in every subsequent request and not only during authentication. This model enables us to modify access rules dynamically without requiring users to authenticate again.

Now, we will use the secret issued in the previous step to validate the identity of the caller in our next query. In order to do so, we need to include the secret as a Bearer Token as part of the request. To achieve this, we have to modify the Authorization header value set by the GraphQL Playground. Since we don’t want to lose the admin secret that is being used by default, we’re going to do this in a new tab.

Click the plus (+) button to create a new tab, and select the HTTP HEADERS panel on the bottom left corner of the GraphQL Playground editor. Then, modify the value of the Authorization header to include the secret obtained earlier, as shown in the following example. Make sure to change the scheme value from Basic to Bearer as well:

{
  "authorization": "Bearer fnEDdByZ5JACFANyg5uLcAISAtUY6TKlIIb2JnZhkjU-SWEaino"
}

With the secret properly set in the request, let’s try to read all of the files on behalf of the employee user. Run the following query from the GraphQL Playground: 

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

In the response, you should see the public file only:

Since the role we defined for employee users does not allow them to read confidential files, they have been correctly filtered out from the response!
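By the way, the Playground isn’t the only way to send that header. A client application would authenticate its requests the same way; here’s a minimal sketch using plain fetch, assuming FaunaDB’s hosted GraphQL endpoint and with the secret value as a placeholder:

// Minimal sketch: querying the GraphQL API on behalf of a logged-in user.
// The secret below is a placeholder for the value returned by loginUser.
const USER_SECRET = "<secret returned by the loginUser mutation>";

fetch("https://graphql.fauna.com/graphql", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The secret proves the caller's identity; the access rules attached
    // to the user's roles are evaluated on every single request.
    Authorization: `Bearer ${USER_SECRET}`,
  },
  body: JSON.stringify({
    query: "query ReadFiles { allFiles { data { content confidential } } }",
  }),
})
  .then((response) => response.json())
  .then((result) => console.log(result.data.allFiles.data));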

Let’s move on now to verifying our second rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

This time, we are going to log in as the manager user. Since the login mutation requires an admin secret token, we have to go back first to the original tab containing the default authorization configuration. Once there, run the following query:

mutation LoginManagerUser {
  loginUser(input: {
    username: "bill.lumbergh"
    password: "123456"
  })
}

You should get a new secret as a response:

Copy the secret, create a new tab, and modify the Authorization header to include the secret as a Bearer Token as we did before. Then, run the following query in order to read all of the files on behalf of the manager user:

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

As long as you’re running this query on a weekday (if not, feel free to update this rule to include weekends), you should be getting both the public and the confidential file in the response:

And, finally, we have verified that all of the access rules are working successfully from the GraphQL API!

Conclusion

In this post, we have learned how a comprehensive authorization model can be implemented on top of the FaunaDB GraphQL API using FaunaDB’s built-in ABAC features. We have also reviewed ABAC’s distinctive capabilities, which allow defining complex access rules based on the attributes of each system component.

While access rules can only be defined through the FQL API at the moment, they are effectively verified for every request executed against the FaunaDB GraphQL API. Providing support for specifying access rules as part of the GraphQL schema definition is already planned for the future.

In short, FaunaDB provides a powerful mechanism for defining complex access rules on top of the GraphQL API covering most common use cases without the need for third-party services.

The post Instant GraphQL Backend with Fine-grained Security Using FaunaDB appeared first on CSS-Tricks.


Apollo GraphQL without JavaScript

It’s cool to see progressive enhancement being done even while using the fanciest of the fancy front-end technologies.

This is a button in a JSX React component that has a click handler applied directly to it that fires a data mutation Ajax request through Apollo GraphQL. That is about the least friendly environment for progressive enhancement I can imagine.

Hugo Giraudel writes that they do server-side rendering already, so the next tricky part is the click handler. Without JavaScript, the only mechanism we have for posting data is a <form>, so that’s what they do. It submits to the /graphql endpoint with the data it needs to perform the mutation via hidden inputs, plus additional data on where to redirect upon success or failure.
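A rough sketch of the idea in JSX is below. To be clear, the component, field names, and mutation here are all made up for illustration; the actual implementation is described in Hugo’s article:

// A button that still works without JavaScript: when JS is available,
// onSubmit intercepts the form and fires the Apollo mutation instead;
// without JS, the browser POSTs the hidden inputs to /graphql.
function LikeButton({ postId, onLike }) {
  return (
    <form method="POST" action="/graphql" onSubmit={onLike}>
      {/* Hidden inputs carry the mutation data for the no-JS fallback. */}
      <input
        type="hidden"
        name="query"
        value={`mutation { likePost(id: "${postId}") { id } }`}
      />
      {/* Where the server should redirect upon success or failure. */}
      <input type="hidden" name="redirect" value={`/posts/${postId}`} />
      <button type="submit">Like</button>
    </form>
  );
}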

Pretty neat.

Direct Link to ArticlePermalink

The post Apollo GraphQL without JavaScript appeared first on CSS-Tricks.


Practice GraphQL Queries With the State of JavaScript API

Learning how to build GraphQL APIs can be quite challenging. But you can learn how to use GraphQL APIs in 10 minutes! And it so happens I’ve got the perfect API for that: the brand new, fresh-off-the-VS-Code State of JavaScript GraphQL API.

The State of JavaScript survey is an annual survey of the JavaScript landscape. We’ve been doing it for four years now, and the most recent edition reached over 20,000 developers.

We’ve always relied on Gatsby to build our showcase site, but until this year, we were feeding our data to Gatsby in the form of static YAML files generated through some kind of arcane magic known to mere mortals as “ElasticSearch.”

But since Gatsby poops out all the data sources it eats as GraphQL anyway, we thought we might as well skip the middleman and feed it GraphQL directly! Yes, I know, this metaphor is getting grosser by the second and I already regret it. My point is: we built an internal GraphQL API for our data, and now we’re making it available to everybody so that you too can easily exploit our dataset!

“But wait,” you say. “I’ve spent all my life studying the blade which has left me no time to learn GraphQL!” Not to worry: that’s where this article comes in.

What is GraphQL?

At its core, GraphQL is a syntax that lets you specify what data you want to receive from an API. Note that I said API, and not database. Unlike SQL, a GraphQL query does not go to your database directly but to your GraphQL API endpoint which, in turn, can connect to a database or any other data source.

The big advantage of GraphQL over older paradigms like REST is that it lets you ask for what you want. For example:

query {
  user(id: "foo123") {
    name
  }
}

Would get you a user object with a single name field. Also need the email? Just do:

query {
  user(id: "foo123") {
    name
    email
  }
}

As you can see, the user field in this example supports an id argument. And now we come to the coolest feature of GraphQL, nesting:

query {
  user(id: "foo123") {
    name
    email
    posts {
      title
      body
    }
  }
}

Here, we’re saying that we want to find the user’s posts, and load their title and body. The nice thing about GraphQL is that our API layer can do the work of figuring out how to fetch that extra information in that specific format since we’re not talking to the database directly, even if it’s not stored in a nested format inside our actual database.

Sebastian Scholl does a wonderful job explaining GraphQL as if you were meeting it for the first time at a cocktail mixer.

Introducing GraphiQL

Building our first query with GraphiQL, the IDE for GraphQL

GraphiQL (note the “i” in there) is the most common GraphQL IDE out there, and it’s the tool we’ll use to explore the State of JavaScript API. You can launch it right now at graphiql.stateofjs.com and it’ll automatically connect to our endpoint (which is api.stateofjs.com/graphql). The UI consists of three main elements: the Explorer panel, the Query Builder and the Results panels. We’ll later add the Docs panels to that but let’s keep it simple for now.

The Explorer tab is part of a turbo-boosted version of GraphiQL developed and maintained by OneGraph. Much thanks to them for helping us integrate it. Be sure to check out their example repo if you want to deploy your own GraphiQL instance.

Don’t worry, I’m not going to make you write any code just yet. Instead, let’s start from an existing GraphQL query, such as the one corresponding to developer experience with React over the past four years.

Remember how I said we were using GraphQL internally to build our site? Not only are we exposing the API, we’re also exposing the queries themselves. Click the little “Export” button, copy the query in the “GraphQL” tab, paste it inside GraphiQL’s query builder window, and click the “Play” button.

The GraphQL tab in the modal that triggers when clicking Export.

If everything went according to plan, you should see your data appear in the Results panel. Let’s take a moment to analyze the query.

query react_experienceQuery {
  survey(survey: js) {
    tool(id: react) {
      id
      entity {
        homepage
        name
        github {
          url
        }
      }
      experience {
        allYears {
          year
          total
          completion {
            count
            percentage
          }
          awarenessInterestSatisfaction {
            awareness
            interest
            satisfaction
          }
          buckets {
            id
            count
            percentage
          }
        }
      }
    }
  }
}

First comes the query keyword which defines the start of our GraphQL query, along with the query’s name, react_experienceQuery. Query names are optional in GraphQL, but can be useful for debugging purposes.

We then have our first field, survey, which takes a survey argument. (We also have a State of CSS survey so we needed to specify the survey in question.) We then have a tool field which takes an id argument. And everything after that is related to the API results for that specific tool. entity gives you information on the specific tool selected (e.g. React) while experience contains the actual statistical data.

Now, rather than keep going through all those fields here, I’m going to teach you a little trick: Command + click (or Control + click) any of those fields inside GraphiQL, and it will bring up the Docs panel. Congrats, you’ve just witnessed another one of GraphQL’s nifty tricks, self-documentation! You can write documentation directly into your API definition and GraphiQL will in turn make it available to end users.

Changing variables

Let’s tweak things a bit: in the Query Builder, replace “react” with “vuejs” and you should notice another cool GraphQL thing: auto-completion. This is quite helpful to avoid making mistakes or to save time! Press “Play” again and you’ll get the same data, but for Vue this time.

Using the Explorer

We’ll now unlock one more GraphQL power tool: the Explorer. The Explorer is basically a tree of your entire API that not only lets you visualize its structure, but also build queries without writing a single line of code! So, let’s try recreating our React query using the explorer this time.

First, let’s open a new browser tab and load graphiql.stateofjs.com in it to start fresh. Click the survey node in the Explorer and, under it, the tool node, then click “Play.” The tool’s id field should be automatically added to the results and — by the way — this is a good time to change the default argument value (“typescript”) to “react.”

Next, let’s keep drilling down. If you add entity without any subfields, you should see a little squiggly red line underneath it letting you know you need to specify at least one subfield. So, let’s add id, name and homepage at a minimum. Another useful trick: you can automatically tell GraphiQL to add all of a field’s subfields by option+control-clicking it in the Explorer.

Next up is experience. Keep adding fields and subfields until you get something that approaches the query you initially copied from the State of JavaScript site. Any item you select is instantly reflected inside the Query Builder panel. There you go, you just wrote your first GraphQL query!

Filtering data

You might have noticed a purple filters item under experience. This is actually the key reason why you’d want to use our GraphQL API as opposed to simply browsing our site: any aggregation provided by the API can be filtered by a number of factors, such as the respondent’s gender, company size, salary, or country.

Expand filters and select companySize and then eq and range_more_than_1000. You’ve just calculated React’s popularity among large companies! Select range_1 instead and you can now compare it with the same datapoint among freelancers and independent developers.

It’s important to note that GraphQL only defines very low-level primitives, such as fields and arguments, so these eq, in, nin, etc., filters are not part of GraphQL itself, but simply arguments we’ve defined ourselves when setting up the API. This can be a lot of work at first, but it does give you total control over how clients can query your API.
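To make that concrete, a filtered query might look roughly like the sketch below. The filter structure is exactly the part we defined ourselves when setting up the API, so treat the field and argument names here as illustrative rather than authoritative; the Explorer will show you the real shape:

query react_experience_at_large_companies {
  survey(survey: js) {
    tool(id: react) {
      experience(
        # Hypothetical filter shape: only respondents from companies
        # with more than 1,000 employees.
        filters: { companySize: { eq: range_more_than_1000 } }
      ) {
        allYears {
          year
          buckets {
            id
            count
            percentage
          }
        }
      }
    }
  }
}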

Conclusion

Hopefully you’ve seen that querying a GraphQL API isn’t that big a deal, especially with awesome tools like GraphiQL to help you do it. Now sure, actually integrating GraphQL data into a real-world app is another matter, but this is mostly due to the complexity of handling data transfers between client and server. The GraphQL part itself is actually quite easy!

Whether you’re hoping to get started with GraphQL or just learn enough to query our data and come up with some amazing new insights, I hope this guide will have proven useful!

And if you’re interested in taking part in our next survey (which should be the State of CSS 2020) then be sure to sign up for our mailing list so you can be notified when we launch it.

State of JavaScript API Reference

You can find more info about the API (including links to the actual endpoint and the GitHub repo) at api.stateofjs.com.

Here’s a quick glossary of the terms used inside the State of JS API.

Top-Level Fields

  • Demographics: Regroups all demographics info such as gender, company size, salary, etc.
  • Entity: Gives access to more info about a specific “entity” (library, framework, programming language, etc.).
  • Feature: Usage data for a specific JavaScript or CSS feature.
  • Features: Same, but across a range of features.
  • Matrices: Gives access to the data used to populate our cross-referential heatmaps.
  • Opinion: Opinion data for a specific question (e.g. “Do you think JavaScript is moving in the right direction?”).
  • OtherTools: Data for the “other tools” section (text editors, browsers, bundlers, etc.).
  • Resources: Data for the “resources” section (sites, blogs, podcasts, etc.).
  • Tool: Experience data for a specific tool (library, framework, etc.).
  • Tools: Same, but across a range of tools.
  • ToolsRankings: Rankings (awareness, interest, satisfaction) across a range of tools.

Common Fields

  • Completion: Which proportion of survey respondents answered any given question.
  • Buckets: The array containing the actual data.
  • Year/allYears: Get the data for a specific survey year, or an array containing the data for all years.

The post Practice GraphQL Queries With the State of JavaScript API appeared first on CSS-Tricks.


Raw GraphQL Querying

GraphQL has all kinds of awesome tooling built around it. But like everything on the web, it ultimately comes down to data shootin’ across the ol’ network and responses coming back. If you need to talk to a GraphQL API endpoint, you don’t absolutely have to use some kind of framework or library to make requests against it. As a matter of fact, you can do it pretty cleanly in vanilla JavaScript.

Take the Pokémon API, available in GraphQL, which I found via this list of APIs. It has a GraphiQL interface for exploring.

From that page, you can execute queries and see the results. You can also poke into the Network tab in DevTools and see the payload for requests it sends off, and see that it’s just a bit of JSON that is POSTed.

We can make our own JSON in that format too! First, we’ll make it an object in the right format, utilizing template literals to keep the GraphQL query looking nice (the backticks let the query keep its line breaks, which keeps it readable). Then we JSON-ify it with the native JavaScript API and POST it via the also-native fetch API.
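As a rough sketch, the pattern looks like this. The endpoint URL is a placeholder and the field names are only illustrative (check the API’s GraphiQL interface for the exact schema); the Pen below has the real, working demo:

// Template literals keep the query readable across multiple lines.
// The query shape here is illustrative; consult the API's own docs.
const query = `
  {
    pokemon(name: "Pikachu") {
      id
      name
    }
  }
`;

// JSON-ify the payload and POST it to the GraphQL endpoint.
// Swap the placeholder URL for the real endpoint.
fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
})
  .then((response) => response.json())
  .then((result) => console.log(result.data));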

See the Pen Raw GraphQL by Chris Coyier (@chriscoyier) on CodePen.

Easy cheesy. And no fancy tooling to request exactly what you need, which is the core benefit of GraphQL.

The post Raw GraphQL Querying appeared first on CSS-Tricks.


Meeting GraphQL at a Cocktail Mixer

GraphQL and REST are two specifications used when building APIs for websites to use. REST defines a series of unique identifiers (URLs) that applications use to request and send data. GraphQL defines a query language that allows client applications to specify precisely the data they need from a single endpoint. They are related technologies and used for largely the same things (in fact, they can and often do co-exist), but they are also very different.

That’s a little dry, eh? Let’s explain it a much more entertaining way that might help you understand better, and maybe just get you a little excited about GraphQL!

🍹 You’re at a cocktail mixer

You’re attending this mixer to help build your professional network, so naturally, you want to collect some data about the people around you. Close by, there are four other attendees.

Their name tags read:

  • Richy REST
  • Friend of Richy REST
  • Employer of Richy REST
  • Georgia GraphQL

Being the dynamic, social, outgoing animal that you are, you walk right up to Richy REST and say, “Hi, I’m Adam Application, who are you?” Richy REST responds:

{
  name: "Richy REST",
  age: 33,
  married: false,
  hometown: "Circuits-ville",
  employed: true
  // ... 20 other things about Richy REST
}

“Whoa, that was a lot to take in,” you think to yourself. In an attempt to avoid any awkward silence, you remember Richy REST specifying that he was employed and ask, “Where do you work?”

Strangely, Richy REST has no idea where he works. Maybe the Employer of Richy REST does?

You ask the same question to the Employer of Richy REST, who is delighted to answer your inquiry! He responds like so:

{
  company: "Mega Corp",
  employee_count: 11230,
  head_quarters: "1 Main Avenue, Big City, 10001, PL",
  year_founded: 2005,
  revenue: 100000000,
  // ... 20 other things about Richy REST's Employer
}

At this point, you’re exhausted. You don’t even want to meet the Friend of Richy REST! That might take forever and use up all your energy, and you don’t have the time.

However, Georgia GraphQL has been standing there politely, so you decide to engage her.

“Hi, what’s your name?”

{
  name: "Georgia GraphQL"
}

“Where are you from, and how old are you?”

{
  hometown: "Pleasant-Ville",
  age: 28
}

“How many hobbies and friends do you have and what are your friends’ names?”

{
  hobbies_count: 12,
  friends_count: 50,
  friends: [
    { name: "Steve" },
    { name: "Anjalla" },
    // ...etc
  ]
}

Georgia GraphQL is incredible, articulate, concise, and to the point. You 100% want to swap business cards and work together on future projects with Georgia.

This anecdote encapsulates the developer’s experience of working with GraphQL over REST. GraphQL allows developers to articulate what they want with a tidy query and, in response, only receive what they specified. Those queries are fully dynamic, so only a single endpoint is required. REST, on the other hand, has predefined responses and often requires applications to utilize multiple endpoints to satisfy a full data requirement.

Metaphor over! Let’s talk brass tacks.

To expand further upon the essential concepts presented in the cocktail mixer metaphor, let’s specifically address two limitations that often surface when using REST.

1. Several trips when fetching related resources

Data-driven mobile and web applications often require related resources and data sets. Thus, retrieving data using a REST API can entail multiple requests to numerous endpoints. For instance, requesting a Post entity and related Author might be completed by two requests to different endpoints:

someServer.com/authors/:id
someServer.com/posts/:id

Multiple trips to an API impact the performance and readiness of an application. This is an even more significant issue for low-bandwidth devices (e.g. smartwatches, IoT devices, older mobile devices, and others).
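With GraphQL, that same data requirement collapses into a single round trip. Here’s a sketch of what the query might look like, assuming a hypothetical schema that exposes a post field with a nested author relation:

query {
  post(id: "123") {
    title
    body
    # The related Author arrives in the same response,
    # so no second request is needed.
    author {
      name
    }
  }
}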

2. Over-fetching and under-fetching

Over/under-fetching is inevitable with RESTful APIs. Using the example above, the endpoint someServer.com/posts/:id fetches data for a specific Post. Every Post is made up of attributes, such as id, body, title, publishingDate, authorId, and others. In REST, the same data object always gets returned; the response is predefined.

In situations where only a Post title and body are needed, over-fetching happens, as more data gets sent over the network than is actually utilized. When an entire Post is required, along with related data about its Author, under-fetching occurs, as less data gets sent over the network than is actually needed. Under-fetching leads to bandwidth overuse from multiple requests to the API.
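Sticking with the same hypothetical schema, a GraphQL client that only needs the title and body simply asks for exactly those fields and nothing more:

query {
  post(id: "123") {
    # Only the fields the client actually uses are requested,
    # so nothing extra crosses the network.
    title
    body
  }
}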

Client-side querying with GraphQL

GraphQL introduces a genuinely unique approach that provides a tremendous amount of flexibility to client-apps. With GraphQL, a query gets sent to your API and precisely what you need is returned — nothing more and nothing less — in a single request. Query results are returned in the same shape as your query, ensuring that response structures are always predictable. These factors allow apps to run faster and be more stable because they are in control of the data they get, and not the server.

“Results are returned in the same shape as queries.”

/* Query */
{
  myFriends(first: 2) {
    items {
      name
      age
    }
  }
}

/* Response */
{
  "data": {
    "myFriends": {
      "items": [
        { "name": "Steve", "age": 27 },
        { "name": "Kelly", "age": 31 }
      ]
    }
  }
}

Reality Check ✅

Now, at this point, you probably think GraphQL is as easy as slicing warm butter with a samurai sword. That might be the reality for front-end developers — specifically developers consuming GraphQL APIs. However, when it comes to server-side setup, someone has to make the sausage. Our friend Georgia GraphQL put in some hard work to become the fantastic professional she is!

Building a GraphQL API, server-side, is something that takes time, energy, and expertise. That said, it’s nothing that someone ready for a challenge is unable to handle! There are many different ways to hop in at all levels of abstraction. For example:

  • Un-assisted: If you really want to get your hands dirty, try building a GraphQL API with the help of packages. For instance, like Rails? Check out graphql-ruby. Love Node.js? Try express-graphql.
  • Assisted: If fully maintaining your server/self-hosting is a priority, something like Graph.cool can help you get going on a GraphQL project.
  • Instant: Tired of writing CRUD boilerplate and want to get going fast? 8base provides an instant GraphQL API and serverless backend that’s fully extensible.

Wrapping up

REST was a significant leap forward for web services in enabling highly available resource-specific APIs. That said, its design didn’t account for today’s proliferation of connected devices, all with differing data constraints and requirements. This oversight has quickly led to the spread of GraphQL — open-sourced by Facebook in 2015 — for the tremendous flexibility that it gives front-end developers. Working with GraphQL is a fantastic developer experience, both for individual developers as well as teams.

The post Meeting GraphQL at a Cocktail Mixer appeared first on CSS-Tricks.


Using GraphQL Playground with Gatsby

I’m assuming most of you have already heard about Gatsby, and at least loosely know that it’s basically a static site generator for React sites. It generally runs like this:

  1. Data Sources → Pull data from anywhere.
  2. Build → Generate your website with React and GraphQL.
  3. Deploy → Send the site to any static site host.

What this typically means is that you can get your data from any recognizable data source — CMS, markdown, file systems and databases, to name a few — manage the data through GraphQL to build your website, and finally deploy your website to any static web host (such as Netlify or Zeit).

The Gatsby homepage illustration of the Gatsby workflow.

In this article, we are concerned with the build process, which is powered by GraphQL. This is the part where your data is managed. Unlike traditional REST APIs where you often need to send anonymous data to test your endpoints, GraphQL consolidates your APIs into a self-documenting IDE. Using this IDE, you can perform GraphQL operations such as queries, mutations, and subscriptions, as well as view your GraphQL schema, and documentation.

GraphQL has an IDE built right into it, but what if we want something a little more powerful? That’s where GraphQL Playground comes in and we’re going to walk through the steps to switch from the default over to GraphQL Playground so that it works with Gatsby.

GraphiQL and GraphQL Playground

GraphiQL is GraphQL’s default IDE for exploring GraphQL operations, but you could switch to something else, like GraphQL Playground. Both have their advantages. For example, GraphQL Playground is essentially a wrapper over GraphiQL but includes additional features such as:

  • Interactive, multi-column schema documentation
  • Automatic schema reloading
  • Support for GraphQL Subscriptions
  • Query history
  • Configuration of HTTP headers
  • Tabs
  • Extensibility (themes, etc.)

Choosing either GraphQL Playground or GraphiQL most likely depends on whether you need to use those additional features. There’s no strict rule that will make you write better GraphQL operations, or build a better website or app.

This post isn’t meant to sway you toward one versus the other. We’re looking at GraphQL Playground in this post specifically because it’s not the default IDE, and there may be use cases where you need the additional features it provides and need to set things up to work with Gatsby. So, let’s dig in and set up a new Gatsby project from scratch. We’ll integrate GraphQL Playground and configure it for the project.

Setting up a Gatsby Project

To set up a new Gatsby project, we first need to install the gatsby-cli. This will give us Gatsby-specific commands we can use in the Terminal.

npm install -g gatsby-cli

Now, let’s set up a new site. I’ve decided to call this example “gatsby-playground” but you can name it whatever you’d like.

gatsby new gatsby-playground

Let’s navigate to the directory where it was installed.

cd gatsby-playground

And, finally, flip on our development server.

gatsby develop

Head over to http://localhost:8000 in the browser for the opening page of the project. Your Gatsby GraphQL operations are going to be located at http://localhost:8000/___graphql.

The starter page for a new Gatsby project.

The GraphiQL interface, with panels for the Explorer, query variables and documentation.

At this point, I think it’s worth calling out that there is a desktop app for GraphQL Playground. You could just access your Gatsby GraphQL operations with the URL Endpoint localhost:8000/___graphql without going any further with this article. But, we want to get our hands dirty and have some fun under the hood!

GraphQL Playground running Gatsby GraphQL Operations.

Gatsby’s Environmental Variables

Still around? Cool. Moving on.

Since we’re not going to be relying on the desktop app, we’ll need to do a little bit of Environmental Variable setup.

Environmental Variables are variables used specifically to customize the behavior of a website in different environments. These environments could be when the website is in active development, or perhaps when it is live in production and available for the world to see. We can have as many environments as we want, and define different Environmental Variables for each of the environments.

Learn more about Environmental Variables in Gatsby.

Gatsby supports two environments: development and production. To set a development environmental variable, we need to have a .env.development file at the root of the project. Same sort of deal for production, but it’s .env.production.

To swap out both environments, we’ll need to set an environment variable in a cross-platform compatible way. Let’s create a .env.development file at the root of the project. Here, we set a key/value pair for our variables. The key will be GATSBY_GRAPHQL_IDE, and the value will be playground, like so:

GATSBY_GRAPHQL_IDE=playground

Accessing Environment Variables in JavaScript

In Gatsby, our Environmental Variables are only available at build time, or when Node.js is running (what we’ll call run time). Since the variables are loaded client-side at build time, we need to use them dynamically at run time. It is important that we restart our server or rebuild our website every time we modify any of these variables.
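As an aside, Gatsby embeds any variable prefixed with GATSBY_ into the client-side bundle at build time, which is exactly why a restart or rebuild is needed after a change. A quick sketch of what reading our variable looks like (the console.log is just for illustration):

// Gatsby replaces process.env.GATSBY_* references with their values
// at build time, so this also works in code that runs in the browser.
const ide = process.env.GATSBY_GRAPHQL_IDE;

console.log(ide); // "playground", assuming our .env file was loaded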

To load our environmental variables into our project, we need to install a package:

yarn add env-cmd --dev // npm install --save-dev env-cmd

With that, we will change the develop script in package.json as the final step, to this instead:

"develop": "env-cmd --file .env.development --fallback gatsby develop"

The develop script instructs the env-cmd package to load environmental variables from a custom environmental variable file (.env.development in this case) and, if it can’t find it, fall back to .env (if you see the need to, create a .env file at the root of your project with the same content as .env.development).

And that’s it! But, hey, remember to restart the server since we changed the variable.

yarn start // npm run develop

If you refresh http://localhost:8000/___graphql in the browser, you should now see GraphQL Playground. Cool? Cool!

GraphQL Playground with Gatsby.

And that’s how we get GraphQL Playground to work with Gatsby!

So that’s how we get from GraphQL’s default GraphiQL IDE to GraphQL Playground. Like we covered earlier, the decision of whether or not to make the switch at all comes down to whether the additional features offered in GraphQL Playground are required for your project. Again, we’re basically working with a GraphiQL wrapper that piles on more features.

Resources

Here are some additional articles around the web to get you started with Gatsby and more familiar with GraphiQL, GraphQL Playground, and Environment Variables.

The post Using GraphQL Playground with Gatsby appeared first on CSS-Tricks.


Weekly Platform News: Mozilla WebThings, Internet Explorer mode, GraphQL

Šime posts regular content for web developers on webplatform.news.

Mozilla WebThings provides complete privacy for user data

Josephine Lau: Smart home companies require that users’ data goes through their servers, which means that people are giving up their privacy for the convenience of a smart home device (e.g., smart light bulb).

We’ve learned that people are concerned about the privacy of their smart home data. And yet, when there’s no alternative, they feel the need to trade away their privacy for convenience.

Mozilla WebThings is an alternative approach to the Internet of Things that stores user data in the user’s home. Devices can be controlled locally via a web interface, and the data is tunneled through a private HTTPS connection.

A diagram showing how Mozilla doesn’t store user data in the cloud, unlike smart home vendors.

An Internet Explorer mode is coming to Edge

Fred Pullen: The next version of Edge will include an Internet Explorer mode for backward compatibility with legacy websites. Edge will also for the first time be available on older versions of Windows (including Windows 7 and 8.1).

By introducing Internet Explorer mode, we’re effectively blurring the lines between the browsers. From an end-user standpoint, it seems like a single browser. … You can use IE mode to limit the sites that instantiate Internet Explorer just to the sites that you approved.

Quick hits: Other interesting articles

Introducing the first Microsoft Edge preview builds for macOS (Microsoft Edge Blog)

Edge Canary (analogous to Chrome Canary) is now officially available on macOS. This version of Edge updates daily.

With our new Chromium foundation, you can expect a consistent rendering experience across the Windows and macOS versions of Microsoft Edge.


#EmberJS2019 More Accessible Than Ever (Yehuda Katz)

Navigating from one page to another in a client-side web app provides no feedback by default in virtually all popular routing solutions across the client-side ecosystem.

Their goal is to make Ember’s router more accessible and screen reader friendly.


Opinion: Five developer trends to watch in 2019 (DeveloperTech)

The article includes a good, short explanation of what GraphQL is and what problems it solves.


Part 2: What the Fr(action)? (CSS IRL)

Read the last section (“Intrinsic and extrinsic sizing”). All three columns have the size 1fr but the middle one is wider because of its content. This can be prevented by using the size minmax(0, 1fr) instead.


Parallel streaming of progressive images (Cloudflare Blog)

Instead of loading from top to bottom, progressive images appear blurry at first and become sharper as more data loads.

The benefits of progressive rendering are unique to JPEG (supported in all browsers) and JPEG 2000 (supported in Safari). GIF and PNG have interlaced modes, but these modes come at a cost of worse compression. WebP doesn’t even support progressive rendering at all. This creates a dilemma: WebP is usually 20%-30% smaller than a JPEG of equivalent quality, but progressive JPEG appears to load 50% faster.

The post Weekly Platform News: Mozilla WebThings, Internet Explorer mode, GraphQL appeared first on CSS-Tricks.
