QR codes are funny, right? We love them, then hate them, then love them again. Anyways, they’ve lately been popping up again and it got me thinking about how they’re made. There are like a gazillion QR code generators out there, but say it’s something you need to do on your own website. This package can do that. But it also weighs in at a hefty 180 KB for everything it needs to generate stuff. You wouldn’t want to serve all that along with the rest of your scripts.
Now, I’m relatively new to the concept of cloud functions, but I hear that’s the bee’s knees for something just like this. That way, the function lives somewhere on a server that can be called when it’s needed. Sorta like a little API to run the function.
DigitalOcean has a CLI with a command that’ll scaffold things for us, so cd wherever you want to set things up and run:
doctl serverless init --language js qr-generator
Notice the language is explicitly declared. DigitalOcean functions also support PHP and Python.
We get a nice clean project called qr-generator with a /packages folder that holds all the project’s functions. There’s a sample function in there, but we can overlook it for now and create a qr folder right next to it:
That folder is where both the qrcode package and our qr.js function are going to live. So, let’s cd into packages/sample/qr and install the package:
npm install --save qrcode
Now we can write the function in a new qr.js file:
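The original file isn’t reproduced here, but based on the description that follows, a minimal qr.js could look something like this (the exported main function and the args handling are assumptions about how DigitalOcean passes parameters in):

const QRCode = require('qrcode');

// Accept a `text` parameter and respond with an <img> tag whose src is a base64 PNG data URI
exports.main = async (args) => {
  const src = await QRCode.toDataURL(args.text || 'https://css-tricks.com');
  return {
    headers: { 'Content-Type': 'text/html' },
    body: `<img src="${src}" alt="QR code" />`,
  };
};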
All that’s doing is requiring the qrcode package and exporting a function that basically generates an <img> tag with a base64 PNG for the source. We can even test it out in the terminal:
There is one extra step we need here. When the project was scaffolded, we got this little project.yml file and it configures the function with some information about it. This is what’s in there by default:
See those highlighted lines? The packages: name property is the folder inside packages where the function lives, which is a folder called sample in this case. The actions: name property is the name of the function itself, which matches the name of the file. It’s hello by default when we spin up the project, but we named ours qr.js, so we oughta change that line from hello to qr before moving on.
Deploy the function
We can do it straight from the command line! First, we connect to the DigitalOcean sandbox environment so we have a live URL for testing:
# You will need a DO API key handy
doctl sandbox connect
Now we can deploy the function:
doctl sandbox deploy qr-generator
Once deployed, we can access the function at a URL. What’s the URL? There’s a command for that:
doctl sbx fn get sample/qr --url
https://faas-nyc1-2ef2e6cc.doserverless.co/api/v1/web/fn-10a937cb-1f12-427b-aadd-f43d0b08d64a/sample/qr
Heck yeah! No more need to ship that entire package with the rest of the scripts! We can hit that URL and generate the QR code from there.
Demo
We fetch the API and that’s really all there is to it!
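Something along these lines would do it (the element ID and the response handling here are just placeholders for the demo):

// Hit the deployed function and drop the returned markup into the page
const showQRCode = async () => {
  const res = await fetch('https://faas-nyc1-2ef2e6cc.doserverless.co/api/v1/web/fn-10a937cb-1f12-427b-aadd-f43d0b08d64a/sample/qr');
  const html = await res.text();
  document.querySelector('#qr-container').innerHTML = html;
};

showQRCode();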
The first time cloud functions / serverless functions clicked for me was when I saw and tried Auth0’s (now defunct) Webtask. It was a little CodePen-like IDE but you didn’t really see anything aside from code and logs. The point was to write little bits of Node that run when you hit the function’s URL (that’s literally exactly what a serverless function is). It would even store your secrets for you, meaning that you could use the serverless function as a proxy. You hit the function, the function hits the API using your unexposed API key secrets, the API returns data, the function then returns data, and that data is available to the client side for you to work with. The entire point was 1) you can snag data from an otherwise totally static website, and 2) your API keys are protected. Brilliant.
I still miss Webtask, but I’m sure there are better and fancier things these days. I don’t have a solid handle on the whole landscape. Even AWS has an online editor for lambdas (a “lambda” is AWS’s standards-setting implementation of what a serverless function is), but using the AWS console directly for anything isn’t usually… very good. Functions in AWS Amplify are probably a better bet there.
My guess is the proper modern way of building these things is with things like…
But what makes me think of all this, and is also in the category of things I don’t have any personal experience with, is Pipedream. I heard about it via Raymond, who has a similar story to mine:
One of the first things that intrigued me about serverless, and honestly it’s not really that novel, is the ability to build proxies to other APIs. So for example, imagine a cool API that requires authentication of some sort to use, like an API key. If you use this in client-side JavaScript, anyone can look at your code and get your key. Better services let you lock a key to a domain, but if you don’t have that option, then a simple use of serverless is to simply give you an endpoint that makes the call to the API with your key.
Not only is it a web-based IDE for crafting functions, but I can trigger it a bunch of ways—a URL of course, but also on a CRON, or things like via email or RSS. Neat. Look at all the other options too. Slack? GitHub? Twitter? It’s kinda like how Zapier looks in that way, only where Zapier is entirely no-code (I think). Pipedream makes code a first-class citizen.
Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:
building modern, accessible user interfaces
creating interactive elements and complex animations
managing complex application state
meta-programming: build scripts, transpilers, bundlers, linters, etc.
reading from REST, GraphQL, and other APIs
middle-tier programming: proxies, redirects, routing, middleware, auth, etc.
This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure introduces hidden responsibilities that introduce risk, slowdowns, and frustration.
Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.
The sneaky middle tier — where front-end tasks can balloon in complexity
Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?
An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.
There’s no clear answer as to which back-end team should own an API like this. Getting it onto another team’s backlog — and getting updates made in the future — can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.
This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:
Build an Express app on Node to create the REST API
Use serverless functions to create the REST API
Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.
Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)
Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called server.js:
const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

app.use(express.static('site'));

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});
This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.
If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details for the movie with the slug some-movie (assuming you’ve defined that in data.json).
Deployment introduces a ton of extra overhead
Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.
We need several more tools:
somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
a container to keep local dev and production consistent (i.e. Docker)
a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)
At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.
The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:
FROM node:14

WORKDIR /usr/src/app

COPY package*.json ./
RUN npm install
COPY . .

EXPOSE 8080
CMD [ "node", "server.js" ]
Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back end teams at the company use, so we should follow best practices.
Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.
Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.
With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.
Solution 2: Build the same REST API using serverless functions
If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps: it gives front-end developers the ability to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.
There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.
Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.
First, we can define our REST API in a serverless function at netlify/functions/movie-by-slug.js:
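The function body isn’t reproduced here, but a sketch of it might look like this, assuming the same data.json movie list from the Express example:

// netlify/functions/movie-by-slug.js
const movies = require('./data.json');

exports.handler = async (event) => {
  // The slug is the last segment of the requested path
  const slug = event.path.split('/').pop();
  const movie = movies.find((m) => m.slug === slug);

  if (!movie) {
    return { statusCode: 404, body: JSON.stringify({ error: 'Not found' }) };
  }

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};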
To add the proper routing, we can create a netlify.toml at the root of the project:
[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200
This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.
If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.
Visiting http://localhost:8888/api/movies/booper will show a JSON object containing details about the “booper” movie.
So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:
Commit the serverless function and netlify.toml to the repo and push it up to GitHub, Bitbucket, or GitLab
Use the Netlify CLI to create a new site connected to your git repo: ntl init
That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.
Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.
If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.
Svelte is a free and open-source front end JavaScript framework that enables developers to build highly performant applications with smaller application bundles. Svelte also empowers developers with its awesome developer experience.
Svelte provides a different approach to building web apps than some of the other frameworks such as React and Vue. While frameworks like React and Vue do the bulk of their work in the user’s browser while the app is running, Svelte shifts that work into a compile step that happens only when you build your app, producing highly-optimized vanilla JavaScript.
The outcome of this approach is not only smaller application bundles and better performance, but also a developer experience that is more approachable for people who have limited experience with the modern tooling ecosystem.
Svelte sticks closely to the classic web development model of HTML, CSS, and JS, just adding a few extensions to HTML and JavaScript. It arguably has fewer concepts and tools to learn than some of the other framework options.
Project Setup
The recommended way to initialize a Svelte app is by using degit, which sets up everything automatically for you.
You will need to have either yarn or npm installed.
# for Rollup
npx degit "sveltejs/sapper-template#rollup" hn-clone

# for webpack
npx degit "sveltejs/sapper-template#webpack" hn-clone
In this tutorial, we will build a basic HN clone with the ability to create a post and comment on that post.
Setting Up Fauna
yarn add faunadb
Creating Your Own Database on Fauna
To hold all our application’s data, we will first need to create a database. Fortunately, this is just a single command or line of code, as shown below. Don’t forget to create a Fauna account before continuing!
Fauna Shell
Fauna’s API has many interfaces/clients, such as drivers in JS, GO, Java and more, a cloud console, local and cloud shells, and even a VS Code extension! For this article, we’ll start with the local Fauna Shell, which is almost 100% interchangeable with the other interfaces.
npm install -g fauna-shell
After installing the Fauna Shell with npm, log in with your Fauna credentials:
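The exact commands aren’t shown above, but with the Fauna Shell, logging in and creating the database looks roughly like this (subcommand names can vary between fauna-shell versions):

# Log in with your Fauna account credentials
fauna cloud-login

# Create the database for the app
fauna create-database hn-clone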
Now that we have our database created, it’s time to create our collections.
In Fauna, a database is made up of one or more collections. The data you create is represented as documents and saved in a collection. A collection is like an SQL table. Or rather, a collection is a collection of documents.
A fair comparison with a traditional SQL database would be as below.
FaunaDB Terminology | SQL Terminology
Database | Database
Collection | Table
Document | Row
Index | Index
For our two microservices, we will create two collections in our database. Namely:
a posts collection, and
a comments collection.
To start an interactive shell for querying our new database, we need to run:
fauna shell hn-clone
We can now operate our database from this shell.
$ fauna shell hn-clone
Starting shell for database hn-clone
Connected to https://db.fauna.com
Type Ctrl+D or .exit to exit the shell
hn-clone>
To create our posts collection, run the following command in the shell to create the collection with the default configuration settings for collections.
hn-clone> CreateCollection({ name: "posts" })
Next, let’s do the same for the comments collection.
hn-clone> CreateCollection({ name: "comments" })
Creating a Posts/Feed Page
To view all our posts, we will create a page that displays them in a time-ordered feed.
In our src/routes/index.svelte file add the following content. This will create the list of all available posts that are stored in our Fauna database.
<script context="module">
  import faunadb, { query as q } from "faunadb";
  import Comment from "../components/Comment.svelte";

  const client = new faunadb.Client({
    secret: process.env.FAUNA_SECRET,
  });

  export async function preload(page, session) {
    let posts = await client.query(
      q.Get(
        q.Paginate(q.Documents(q.Collection("posts")))
      )
    );
    console.log(posts);
    return { posts };
  }
</script>

<script>
  import Post from "../components/Post.svelte";
  export let posts;
  console.log(posts);
</script>

<main class="container">
  {#each posts as post}
    <!-- content here -->
    <div class="card">
      <div class="card-body">
        <p>{post}</p>
        <Comment postId={post.id} />
      </div>
    </div>
  {/each}
</main>
Creating a Comments Component
To create a comment we will create a component to send our data to Fauna using the query below.
<script>
  export let postId;

  import faunadb, { query as q } from 'faunadb';

  const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

  let comment;

  let postComment = async () => {
    if (!comment) {
      return;
    }
    let response = await client.query(
      q.Create(
        q.Collection('comments'),
        { data: { title: comment, post: postId } },
      )
    );
    console.log(response);
  };
</script>

<div class="row">
  <div class="col-sm-12 col-lg-6 col-sm-8">
    <p></p>
    <textarea
      type="text"
      class="form-control"
      bind:value={comment}
      placeholder="Comment"
    ></textarea>
    <p></p>
    <button class="btn btn-warning" style="float:right;" on:click={postComment}>Post comment</button>
  </div>
</div>
We run the dev server:
yarn dev
When you visit http://localhost:5000 you will be greeted with the feed, along with a panel for commenting on the same page.
Conclusion
In this tutorial, we saw how fast it can be to develop a full-stack application with Fauna and Svelte.
Svelte provides a highly productive, powerful and fast framework that we can use to develop both backend and frontend components of our full stack app while using the pre-rendered framework Sapper.
Secondly, we saw how Fauna is indeed a powerful database, with a powerful query language (FQL) that supports complex querying and integrates with the serverless and Jamstack ecosystem through its API-first approach. This enables developers to simplify code and ship faster.
I hope you find Fauna to be exciting, like I do, and that you enjoyed this article. Feel free to follow me on Twitter @theAmolo if you enjoyed this!
In this article, we walk through building out a full-stack real-time and completely serverless application that allows you to create polls! All of the app’s static bits (HTML, CSS, JS, & Media) will be hosted and globally distributed via the GitHub Pages CDN (Content Delivery Network). All of the data and dynamic requests for data (i.e., the back end) will be globally distributed and stateful via the Macrometa GDN (Global Data Network).
Macrometa is a geo-distributed stateful serverless platform designed from the ground up to be lightning-fast no matter where the client is located, optimized for both reads and writes, and elastically scalable. We will use it as a database for data collection and maintaining state and stream to subscribe to database updates for real-time action.
We will be using Gatsby to manage our app and deploy it to Github Pages. Let’s do this!
Intro
This demo uses the Macrometa c8db-source-plugin to get some of the data as markdown and then transform it to HTML to display directly in the browser, and the Macrometa JSC8 SDK to keep an open socket for real-time fun and manage working with Macrometa’s API.
Once you’re logged in to Macrometa, create a document collection called markdownContent. Then create a single document with title and content fields in markdown format. This creates the data model the app will use for its static content.
Here’s an example of what the markdownContent collection should look like:
{ "title": "## Real-Time Polling Application", "content": "Full-Stack Geo-Distributed Serverless App Built with GatsbyJS and Macrometa!" }
content and title keys in the document are in the markdown format. Once they go through gatsby-source-c8db, data in title is converted to <h2></h2>, and content to <p></p>.
Now create a second document collection called polls. This is where the poll data will be stored.
In the polls collection each poll will be stored as a separate document. A sample document is mentioned below:
Your Macrometa login details, along with the collection to be used and markdown transformations, have to be provided in the application’s gatsby-config.js like below:
Under password you will notice that it says process.env.MM_PW. Instead of putting your password there, we are going to create some .env files and make sure that those files are listed in our .gitignore file, so we don’t accidentally push our Macrometa password back to GitHub. In your root directory create .env.development and .env.production files.
You will only have one thing in each of those files: MM_PW='<your-password-here>'
Running the app locally
We have the frontend code already done, so you can fork the repo, set up your Macrometa account as described above, add your password as described above, and then deploy. Go ahead and do that and then I’ll walk you through how the app is set up so you can check out the code.
In the terminal of your choice:
Fork this repo and clone your fork onto your local machine
Run npm install
Once that’s done, run npm run develop to start the local server. This will start the local development server on http://localhost:<some_port> and the GraphQL server at http://localhost:<some_port>/___graphql
How to deploy app (UI) on GitHub Pages
Once you have the app running as expected in your local environment simply run npm run deploy!
Gatsby will automatically generate the static code for the site, create a branch called gh-pages, and deploy it to Github.
Now you can access your site at <your-github-username>.github.io/tutorial-jamstack-pollingapp
If your app isn‘t showing up there for some reason go check out your repo’s settings and make sure Github Pages is enabled and configured to run on your gh-pages branch.
Walking through the code
First, we made a file that loaded the Macrometa JSC8 Driver, made sure we opened up a socket to Macrometa, and then defined the various API calls we will be using in the app. Next, we made the config available to the whole app.
After that we wrote the functions that handle various front-end events. Here’s the code for handling a vote submission:
Are you hosting one or more websites and using a headless CMS? Are you hosting your CMS on a virtual machine or a container, or using a SaaS solution? If so, then you’re paying for the uptime, regardless of whether the server or service is serving requests or not. Essentially, you are paying for stuff you are not using. In this article, we look at how you can change that and save up to 80% of your hosting cost along the way.
Serverless — what’s that about?
If you’re new to serverless, in short, serverless is a set of services you consume without worrying about the underlying infrastructure. There are services for compute, like AWS Lambda, that allow you to run Node.js code, services for storage like S3, database-as-a-service offerings like DynamoDB, and many others.
The benefits of serverless are:
You are billed based on your consumption
There are no servers for you to manage
Services scale automatically
Services are more secure than your regular server
Servers are still there, but they are abstracted away — out of sight, out of mind.
Out of all the benefits, the first one plays a big role. Picture an API on a regular server or a virtual machine. If that server is not handling a new request every few seconds, there is a lot of idle time where the server is not doing anything, but you’re still paying for it.
With serverless, you pay for your consumption; if your API is not handling any requests at a given point in time, your cost is $0. To further back this case, research by Deloitte found that a larger system can save anywhere between 60-80% in infrastructure costs and up to 60% in management costs just by switching to serverless.
Although serverless sounds great, there is a downside to it. It’s quite complex and time-consuming to create new solutions from scratch, and existing solutions are not designed for such environments. This is where Webiny comes in.
Webiny Serverless CMS
To help you adopt serverless and build websites on top of this modern infrastructure, there is one solution you can use today, for free. Webiny Serverless CMS is an open source solution that comes with a few apps, including a GraphQL-based Headless CMS.
Some of its features:
GraphQL API
Content versioning and modeling through a UI
Multi-tenancy & Multi-language support
Powerful user access control
Built-in image optimization and image editor
Works with existing static page generators like Gatsby and others
This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.
In other words, lots to learn! By the end, you’ll not only get to dive into Jamstack and serverless concepts, but also hands-on experience with a really neat combination of tech that I think you’ll really like.
Creating a Redwood app
Redwood is a framework for serverless applications that pulls together React (for front-end components), GraphQL (for data) and Prisma (for database queries).
There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.
There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.
We will need to install Yarn to use the Redwood CLI to get going. Once that’s good to go, here’s what to run in a terminal:
yarn create redwood-app ./csstricks
We’ll now cd into our new project directory and start our development server.
cd csstricks
yarn rw dev
Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.
Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.
The Redwood starting page indicates that the front end of our app is ready to go. It also provides a nice instruction for how to start creating custom routes for the app.
Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.
Directory structure
Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.
Don’t worry too much about what all this means yet; the first thing to notice is things are split into two main directories: web and api. Yarn workspaces allows each side to have its own path in the codebase.
web contains our front-end code for:
Pages
Layouts
Components
api contains our back-end code for:
Function handlers
Schema definition language
Services for back-end business logic
Database client
Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Well, it’s just a personal preference. After Google purchased Firebase they launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than what Fauna offers. At the same time, there are even a handful of community projects that have experimented with Firestore and GraphQL but there isn’t first class GraphQL support from Google.
Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.
index.html
We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.
But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.
While Redwood uses Jamstack in the documentation and marketing of itself, Redwood doesn’t do pre-rendering yet (like Next or Gatsby can), but is still Jamstack in that it’s shipping static files and hitting APIs with JavaScript for data.
index.js
index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects an element with an id containing redwood-app, and ReactDOM.render() renders our application into the root DOM element.
RedwoodProvider
The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Redwood’s Flash messaging system uses the Context API for passing message objects between deeply nested components. It provides a typical message display unit for rendering the messages provided to FlashContext.
FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.
FatalErrorBoundary
The provider itself is then contained within the <FatalErrorBoundary> component which is taking in <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.
import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'

import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)
Routes.js
Router contains all of our routes and each route is specified with a Route. The Redwood Router attempts to match the current URL to each route, stopping when it finds a match and then renders only that route. The only exception is the notfound route which renders a single Route with a notfound prop when no other route matches.
Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.
We can also use rw instead of redwood and g instead of generate to save some typing.
yarn rw g page home /
This command performs four separate actions:
It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.
HomePage
If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:
// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
The HomePage.js file has been set as the main route, /.
We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:
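That should leave HomePage looking roughly like this:

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
    </>
  )
}

export default HomePage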
To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.
yarn rw g page about
AboutPage.js is now available at /about.
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage
We’ll make a few edits to the About page like we did with our Home page. That includes taking out the <Link> and routes imports and deleting <Link to={routes.about()}>About</Link>.
Here’s the end result:
// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage
If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!
Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.
BlogLayout
You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical as what we’ve been doing so far, except with rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.
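For a layout named “blog”, that would be:

yarn rw g layout blog

And a BlogLayout with a simple navigation header might look something like this (the markup and the “Redwood+Fauna” title below are just a sketch, using the home and about routes we already have):

// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>
          <Link to={routes.home()}>Redwood+Fauna</Link>
        </h1>
        <nav>
          <ul>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>
      <main>{children}</main>
    </>
  )
}

export default BlogLayout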
We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.
// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
Hey look, the navigation is taking shape!
If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:
// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage
Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll now create our GraphQL schema so we can start working with data.
Fauna schema definition language
To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.
# sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}
Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.
The GraphQL Playground is located in the selected database.The Fauna shell allows us to write, run and test queries.
It’s very important that Redwood and Fauna agree on the SDL, so we cannot use the original SDL that was entered into Fauna because that is no longer an accurate representation of the types as they exist on our Fauna database.
The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.
Redwood schema definition language
This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.
// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`
Services
The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.
// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
    {
      posts {
        data {
          title
          body
        }
      }
    }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}
At this point, we can install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:
cd api
yarn add graphql-request graphql
Attach the Fauna authorization token to the request header
So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and use it to query an endpoint that is set to https://graphql.fauna.com/graphql.
A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.
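The db.js code isn’t shown above, but a minimal version might look like this, assuming the Fauna secret is exposed to the API side as a FAUNADB_SECRET environment variable (the same variable we add to Vercel later):

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query, endpoint) => {
  // Fauna's GraphQL endpoint expects the database secret as a Bearer token
  const client = new GraphQLClient(endpoint, {
    headers: {
      authorization: `Bearer ${process.env.FAUNADB_SECRET}`,
    },
  })

  return client.request(query)
}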
Create
We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.
Create( Collection("Post"), { data: { title: "Deno is a secure runtime for JavaScript and TypeScript.", body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem." } } )
{ ref: Ref(Collection("Post"), "282083736060690956"), ts: 1605274864200000, data: { title: "Deno is a secure runtime for JavaScript and TypeScript.", body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem." } }
Let’s create another one.
Create( Collection("Post"), { data: { title: "NextJS is a React framework for building production grade applications that scale.", body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering." } } )
{ ref: Ref(Collection("Post"), "282083760102441484"), ts: 1605274887090000, data: { title: "NextJS is a React framework for building production grade applications that scale.", body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering." } }
And maybe one more just to fill things up.
Create( Collection("Post"), { data: { title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.", body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop." } } )
{ ref: Ref(Collection("Post"), "282083792286384652"), ts: 1605274917780000, data: { title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.", body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop." } }
Cells
Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.
By default, the generated cell renders the data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes to make the query and render the data we need. So, let’s:
Change blogPosts to posts.
Change BlogPostsQuery to POSTS.
Change the query itself to return the title and body of each post.
Map over the data object in the success component.
Create a component with the title and body of the posts returned through the data object.
The POSTS query is sending a query for posts, and when it’s queried, we get back a data object containing an array of posts. We need to pull out the data object so we can loop over it and get the actual posts. We do this with object destructuring to get the data object and then we use the map() function to map over the data object and pull out each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.
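Putting those changes together, the finished cell might look roughly like this (the Loading, Empty, and Failure markup is placeholder, and gql is assumed to be available the way it is in Redwood’s generated cells):

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`

export const Loading = () => <div>Loading...</div>

export const Empty = () => <div>No posts yet.</div>

export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  // Destructure the data object and map over it to render each post
  return posts.data.map((post) => (
    <article key={post.title}>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </article>
  ))
}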
Import BlogPostsCell to HomePage
// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
Check that out! Posts are returned to the app and rendered on the front end.
Vercel
We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this work.
Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.
We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open netlify.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.
I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.
Redwood has a preset build command that works out of the box in Vercel:
Simply select “Redwood” from the preset options and we’re good to go.
We’re pretty far along, but even though the site is now “live” the database isn’t connected:
To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:
We did it! I hope this not only gets you super excited about working with Jamstack and serverless, but got a taste of some new technologies in the process.
The best way to learn is to build. Let’s learn about this hot new buzzword, Jamstack, by building a site with React, Netlify (Serverless) Functions, and Airtable. One of the ingredients of Jamstack is static hosting, but that doesn’t mean everything on the site has to be static. In fact, we’re going to build an app with full-on CRUD capability, just like a tutorial for any web technology with more traditional server-side access might.
Why these technologies, you ask?
You might already know this, but the “JAM” in Jamstack stands for JavaScript, APIs, and Markup. These technologies individually are not new, so the Jamstack is really just a new and creative way to combine them. You can read more about it over at the Jamstack site.
One of the most important benefits of Jamstack is ease of deployment and hosting, which heavily influence the technologies we are using. By incorporating Netlify Functions (for backend CRUD operations with Airtable), we will be able to deploy our full-stack application to Netlify. The simplicity of this process is the beauty of the Jamstack.
As far as the database, I chose Airtable because I wanted something that was easy to get started with. I also didn’t want to get bogged down in technical database details, so Airtable fits perfectly. Here are a few of the benefits of Airtable:
You don’t have to deploy or host a database yourself
It comes with an Excel-like GUI for viewing and editing data
There’s a nice JavaScript SDK
What we’re building
For context going forward, we are going to build an app that you can use to track online courses you want to take. Personally, I take lots of online courses, and sometimes it’s hard to keep up with the ones in my backlog. This app will let us track those courses, similar to a Netflix queue.
One of the reasons I take lots of online courses is because I make courses. In fact, I have a new one available where you can learn how to build secure and production-ready Jamstack applications using React and Netlify (Serverless) Functions. We’ll cover authentication, data storage in Airtable, Styled Components, Continuous Integration with Netlify, and more! Check it out →
Airtable setup
Let me start by clarifying that Airtable calls their databases “bases.” So, to get started with Airtable, we’ll need to do a couple of things.
Next, let’s create a new database. We’ll log into Airtable, click on “Add a Base” and choose the “Start From Scratch” option. I named my new base “JAMstack Demos” so that I can use it for different projects in the future.
Next, let’s click on the base to open it.
You’ll notice that this looks very similar to an Excel or Google Sheets document. This is really nice for being able to work with data right inside of the dashboard. There are a few columns already created, but we can add our own. Here are the columns we need and their types:
name (single line text)
link (single line text)
tags (multiple select)
purchased (checkbox)
We should add a few tags to the tags column while we’re at it. I added “node,” “react,” “jamstack,” and “javascript” as a start. Feel free to add any tags that make sense for the types of classes you might be interested in.
I also added a few rows of data in the name column based on my favorite online courses:
The last thing to do is rename the table itself. It’s called “Table 1” by default. I renamed it to “courses” instead.
Locating Airtable credentials
Before we get into writing code, there are a couple of pieces of information we need to get from Airtable. The first is your API key. The easiest way to get this is to go to your account page and look in the “Overview” section.
Next, we need the ID of the base we just created. I would recommend heading to the Airtable API page because you’ll see a list of your bases. Click on the base you just created, and you should see the base ID listed. The documentation for the Airtable API is really handy and has more detailed instructions for finding the ID of a base.
Lastly, we need the table’s name. Again, I named mine “courses” but use whatever you named yours if it’s different.
Project setup
To help speed things along, I’ve created a starter project for us in the main repository. You’ll need to do a few things to follow along from here:
Check out the starter branch with git checkout starter
There are lots of files already there. The majority of the files come from a standard create-react-app application with a few exceptions. There is also a functions directory which will host all of our serverless functions. Lastly, there’s a netlify.toml configuration file that tells Netlify where our serverless functions live. Also in this config is a redirect that simplifies the path we use to call our functions. More on this soon.
The last piece of the setup is to incorporate environment variables that we can use in our serverless functions. To do this, install the dotenv package.
npm install dotenv
Then, create a .env file in the root of the repository with the following. Make sure to use your own API key, base ID, and table name that you found earlier.
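The variable names below match the ones the Airtable configuration file reads later on; the values are whatever you copied from your Airtable account:

AIRTABLE_API_KEY=<your-api-key>
AIRTABLE_BASE_ID=<your-base-id>
AIRTABLE_TABLE_NAME=courses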
To create serverless functions with Netlify, we need to create a JavaScript file inside of our /functions directory. There are already some files included in this starter directory. Let’s look in the courses.js file first.
The core part of a serverless function is the exports.handler function. This is where we handle the incoming request and respond to it. In this case, we are accepting an event parameter which we will use in just a moment.
We are returning a call inside the handler to the formattedReturn function, which makes it a bit simpler to return a status and body data. Here’s what that function looks like for reference.
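It isn’t reproduced above, but a helper like that is likely just a thin wrapper along these lines:

// functions/formattedReturn.js (sketch)
module.exports = (statusCode, body) => {
  return {
    statusCode,
    body: JSON.stringify(body),
  };
};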
Notice also that we are importing several helper functions to handle the interaction with Airtable. We can decide which one of these to call based on the HTTP method of the incoming request.
HTTP GET → getCourses
HTTP POST → createCourse
HTTP PUT → updateCourse
HTTP DELETE → deleteCourse
Let’s update this function to call the appropriate helper function based on the HTTP method in the event parameter. If the request doesn’t match one of the methods we are expecting, we can return a 405 status code (method not allowed).
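Here’s a sketch of what that updated handler could look like (the helper file paths are assumptions based on the names above):

// functions/courses.js (sketch)
const formattedReturn = require('./formattedReturn');
const getCourses = require('./getCourses');
const createCourse = require('./createCourse');
const updateCourse = require('./updateCourse');
const deleteCourse = require('./deleteCourse');

exports.handler = async (event) => {
  if (event.httpMethod === 'GET') {
    return await getCourses(event);
  } else if (event.httpMethod === 'POST') {
    return await createCourse(event);
  } else if (event.httpMethod === 'PUT') {
    return await updateCourse(event);
  } else if (event.httpMethod === 'DELETE') {
    return await deleteCourse(event);
  } else {
    // Anything else is a method we don't support
    return formattedReturn(405, {});
  }
};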
Since we are going to be interacting with Airtable in each of the different helper files, let’s configure it once and reuse it. Open the airtable.js file.
In this file, we want to get a reference to the courses table we created earlier. To do that, we create a reference to our Airtable base using the API key and the base ID. Then, we use the base to get a reference to the table and export it.
require('dotenv').config();

var Airtable = require('airtable');

var base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(
  process.env.AIRTABLE_BASE_ID
);

const table = base(process.env.AIRTABLE_TABLE_NAME);

module.exports = { table };
Getting courses
With the Airtable config in place, we can now open up the getCourses.js file and retrieve courses from our table by calling table.select().firstPage(). The Airtable API uses pagination so, in this case, we are specifying that we want the first page of records (which is 20 records by default).
Airtable returns back a lot of extra information in its records. I prefer to simplify these records with only the record ID and the values for each of the table columns we created above. These values are found in the fields property. To do this, I used an Array map to format the data the way I want.
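A sketch of the full helper based on that description:

// functions/getCourses.js (sketch)
const formattedReturn = require('./formattedReturn');
const { table } = require('./airtable');

module.exports = async (event) => {
  try {
    const records = await table.select().firstPage();
    // Keep only the record ID plus the field values for each record
    const formattedRecords = records.map((record) => ({
      id: record.id,
      ...record.fields,
    }));
    return formattedReturn(200, formattedRecords);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};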
How do we test this out? Well, the netlify-cli provides us a netlify dev command to run our serverless functions (and our front-end) locally. First, install the CLI:
npm install -g netlify-cli
Then, run the netlify dev command inside of the directory.
This beautiful command does a few things for us:
Runs the serverless functions
Runs a web server for your site
Creates a proxy for front end and serverless functions to talk to each other on Port 8888.
Let’s open up the following URL to see if this works: http://localhost:8888/api/courses
We are able to use /api/* for our API because of the redirect configuration in the netlify.toml file.
If successful, we should see our data displayed in the browser.
Creating courses
Let’s add the functionality to create a course by opening up the createCourse.js file. We need to grab the properties from the incoming POST body and use them to create a new record by calling table.create().
The incoming event.body comes in a regular string which means we need to parse it to get a JavaScript object.
const fields = JSON.parse(event.body);
Then, we use those fields to create a new course. Notice that the create() function accepts an array which allows us to create multiple records at once.
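Putting that together, createCourse.js might look like this (the response formatting mirrors the getCourses sketch above):

// functions/createCourse.js (sketch)
const formattedReturn = require('./formattedReturn');
const { table } = require('./airtable');

module.exports = async (event) => {
  const fields = JSON.parse(event.body);
  try {
    // create() takes an array, so several records could be created in one call
    const createdRecords = await table.create([{ fields }]);
    const formattedRecords = createdRecords.map((record) => ({
      id: record.id,
      ...record.fields,
    }));
    return formattedReturn(200, formattedRecords);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};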
Since we can’t perform a POST, PUT, or DELETE directly in the browser web address (like we did for the GET), we need to use a separate tool for testing our endpoints from now on. I prefer Postman, but I’ve heard good things about Insomnia as well.
Inside of Postman, I need the following configuration.
url: localhost:8888/api/courses
method: POST
body: JSON object with name, link, and tags
After running the request, we should see the new course record is returned.
We can also check the Airtable GUI to see the new record.
Tip: Copy and paste the ID from the new record to use in the next two functions.
Updating courses
Now, let’s turn to updating an existing course. From the incoming request body, we need the id of the record as well as the other field values.
We can specifically grab the id value using object destructuring, like so:
const {id} = JSON.parse(event.body);
Then, we can use the spread operator to grab the rest of the values and assign it to a variable called fields:
const {id, ...fields} = JSON.parse(event.body);
From there, we call the update() function which takes an array of objects (each with an id and fields property) to be updated:
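A sketch of updateCourse.js with that call in place:

// functions/updateCourse.js (sketch)
const formattedReturn = require('./formattedReturn');
const { table } = require('./airtable');

module.exports = async (event) => {
  const { id, ...fields } = JSON.parse(event.body);
  try {
    // update() takes an array of { id, fields } objects
    const updatedRecords = await table.update([{ id, fields }]);
    return formattedReturn(200, updatedRecords);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};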
To test this out, we’ll turn back to Postman for the PUT request:
url: localhost:8888/api/courses
method: PUT
body: JSON object with id (the id from the course we just created) and the fields we want to update (name, link, and tags)
I decided to append “Updated!!!” to the name of a course once it’s been updated.
We can also see the change in the Airtable GUI.
Deleting courses
Lastly, we need to add delete functionality. Open the deleteCourse.js file. We will need to get the id from the request body and use it to call the destroy() function.
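A sketch of deleteCourse.js based on that description:

// functions/deleteCourse.js (sketch)
const formattedReturn = require('./formattedReturn');
const { table } = require('./airtable');

module.exports = async (event) => {
  const { id } = JSON.parse(event.body);
  try {
    // destroy() takes an array of record IDs
    const deletedRecords = await table.destroy([id]);
    return formattedReturn(200, deletedRecords);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};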
Here’s the configuration for the Delete request in Postman.
url: localhost:8888/api/courses
method: DELETE
body: JSON object with an id (the same id from the course we just updated)
And, of course, we can double-check that the record was removed by looking at the Airtable GUI.
Displaying a list of courses in React
Whew, we have built our entire back end! Now, let’s move on to the front end. The majority of the code is already written. We just need to write the parts that interact with our serverless functions. Let’s start by displaying a list of courses.
Open the App.js file and find the loadCourses function. Inside, we need to make a call to our serverless function to retrieve the list of courses. For this app, we are going to make an HTTP request using fetch, which is built right in.
Thanks to the netlify dev command, we can make our request using a relative path to the endpoint. The beautiful thing is that this means we don’t need to make any changes after deploying our application!
const res = await fetch('/api/courses');
const courses = await res.json();
Then, store the list of courses in the courses state variable.
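Put together, loadCourses might look roughly like this — a sketch that assumes the starter exposes a setCourses state setter:

const loadCourses = async () => {
  const res = await fetch('/api/courses');
  const courses = await res.json();
  // Save the records in state so the list re-renders
  setCourses(courses);
};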
Open up localhost:8888 in the browser and we should see our list of courses.
Adding courses in React
Now that we have the ability to view our courses, we need the functionality to create new courses. Open up the CourseForm.js file and look for the submitCourse function. Here, we’ll need to make a POST request to the API and send the inputs from the form in the body.
The JavaScript Fetch API makes GET requests by default, so to send a POST, we need to pass a configuration object with the request. This options object needs two properties: the method (POST) and the body (the form values, stringified as JSON).
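As a sketch, assuming the form inputs live in name, link, and tags state variables:

const submitCourse = async (e) => {
  e.preventDefault();

  await fetch('/api/courses', {
    method: 'POST',
    body: JSON.stringify({ name, link, tags }),
  });

  // The starter can then clear the form inputs and re-fetch the course list
};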
Test this out in the browser. Fill in the form and submit it.
After submitting the form, the form should be reset, and the list of courses should update with the newly added course.
Updating purchased courses in React
The list of courses is split into two different sections: one with courses that have been purchased and one with courses that haven’t been purchased. We can add the functionality to mark a course “purchased” so it appears in the right section. To do this, we’ll send a PUT request to the API.
Open the Course.js file and look for the markCoursePurchased function. In here, we’ll make the PUT request and include both the id of the course as well as the properties of the course with the purchased property set to true. We can do this by passing in all of the properties of the course with the spread operator and then overriding the purchased property to be true.
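A sketch of that request, assuming the component receives the course record (with its id and fields) as a prop:

const markCoursePurchased = async () => {
  await fetch('/api/courses', {
    method: 'PUT',
    // Spread the existing fields, then override purchased
    body: JSON.stringify({ id: course.id, ...course.fields, purchased: true }),
  });
};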
To test this out, click the button to mark one of the courses as purchased and the list of courses should update to display the course in the purchased section.
Deleting courses in React
And, following with our CRUD model, we will add the ability to delete courses. To do this, locate the deleteCourse function in the Course.js file we just edited. We will need to make a DELETE request to the API and pass along the id of the course we want to delete.
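And a sketch of the matching request, with the same assumption about the course prop:

const deleteCourse = async () => {
  await fetch('/api/courses', {
    method: 'DELETE',
    body: JSON.stringify({ id: course.id }),
  });
};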
To test this out, click the “Delete” button next to the course and the course should disappear from the list. We can also verify it is gone completely by checking the Airtable dashboard.
Deploying to Netlify
Now, that we have all of the CRUD functionality we need on the front and back end, it’s time to deploy this thing to Netlify. Hopefully, you’re as excited as I am about how easy this is. Just make sure everything is pushed up to GitHub before we move into deployment.
If you don’t have a Netlify account, you’ll need to create one (like Airtable, it’s free). Then, in the dashboard, click the “New site from Git” option. Select GitHub, authenticate it, then select the project repo.
Next, we need to tell Netlify which branch to deploy from. We have two options here.
Use the starter branch that we’ve been working in
Choose the master branch with the final version of the code
For now, I would choose the starter branch to ensure that the code works. Then, we need to choose a command that builds the app and the publish directory that serves it.
Build command: npm run build
Publish directory: build
Netlify recently shipped an update that treats React warnings as errors during the build process, which may cause the build to fail. I have updated the build command to CI= npm run build to account for this.
Lastly, click on the “Show Advanced” button, and add the environment variables. These should be exactly as they were in the local .env that we created.
The site should automatically start building.
We can click on the “Deploys” tab in the Netlify dashboard and track the build progress, although it does go pretty fast. When it is complete, our shiny new app is deployed for the world to see!
Welcome to the Jamstack!
The Jamstack is a fun new place to be. I love it because it makes building and hosting fully-functional, full-stack applications like this pretty trivial. I love that Jamstack makes us mighty, all-powerful front-end developers!
I hope you see the same power and ease with the combination of technology we used here. Again, Jamstack doesn’t require that we use Airtable, React or Netlify, but we can, and they’re all freely available and easy to set up. Check out Chris’ serverless site for a whole slew of other services, resources, and ideas for working in the Jamstack. And feel free to drop questions and feedback in the comments here!
I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.
Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.
What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.
At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.
Let’s get started!
A rundown of the API concepts
There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.
Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
That’s the basic idea of two concepts we’ll be looking at here: GraphQL and Serverless.
Concept 1: GraphQL
Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.
Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.
GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a JSON object:
{
  books {
    id
    title
    author
  }
}
The server sees that request, formats our data, and sends it back in another JSON object:
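Something along these lines, where the books themselves are just made-up sample data:

{
  "data": {
    "books": [
      { "id": "1", "title": "The Pragmatic Programmer", "author": "Andrew Hunt" },
      { "id": "2", "title": "Eloquent JavaScript", "author": "Marijn Haverbeke" }
    ]
  }
}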
Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.
Concept 2: Serverless
Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.
Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:
What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.
The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.
If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.
Tool 1: Node.js and Express
Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.
Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.
I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!
Tool 2: Netlify functions
The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.
Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.
Netlify includes Netlify Dev which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our project in a fully-featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.
At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.
Let’s get building!
Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, then running npm init on it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:
Express
GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.
That’s it! We can install them all in a single command:
npm i express express-graphql graphql body-parser serverless-http
We also need Netlify Dev, which ships with the Netlify CLI, installed as a global dependency so we can use it from the command line:
npm i -g netlify-cli
File structure
There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:
[build]
  # This command builds the site
  command = "npm run build"
  # This is the directory that will be deployed
  publish = "build"
  # This is where our functions are located
  functions = "functions"
That functions line is super important; it tells Netlify where we’ll be putting our API code.
Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js. Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:
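A sketch of how that might look — note that express-graphql has shipped both a default export and a named graphqlHTTP export over the years, so match whichever version you installed:

const express = require("express");
const bodyParser = require("body-parser");
const { graphqlHTTP } = require("express-graphql");
const serverless = require("serverless-http");

const app = express();

// Handing the Express app to serverless-http is what lets Netlify run it as a function
module.exports.handler = serverless(app);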
These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.
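Below that, we add the middleware (the schema gets passed in a bit later):

// Parse JSON request bodies, then hand everything off to the GraphQL middleware
app.use(bodyParser.json());
app.use(
  "/",
  graphqlHTTP({
    graphiql: true
  })
);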
These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.
Defining the GraphQL schema
In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.
The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API— say, api.example.com/graphql.
For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”
So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({
      /* we'll put our response here */
    })
  })
})
The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:
name – A reference used for documentation purposes
fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
The fields function returns an object and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
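Filled in, the schema might look like this — a sketch; the GraphQLSchema, GraphQLObjectType, and GraphQLString constructors all come from the graphql package we installed:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({
      message: {
        type: GraphQLString,
        // resolve runs on the server to produce this field's value
        resolve: () => "Hello World"
      }
    })
  })
});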
That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:
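A sketch of that updated middleware, now receiving the schema:

app.use(bodyParser.json());
app.use(
  "/",
  graphqlHTTP({
    schema: schema,
    graphiql: true
  })
);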
Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.”
If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.
But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!
Now, that’s more like it!
The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:
{ message }
This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:
{ "data": { "message": "Hello World" } }
I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!
Bonus: Redirecting requests
One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember it and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:
/api /.netlify/functions/api 200!
This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
Deploying the API
To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.
After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.
Conclusion
Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free.
The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working with GraphQL.
Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing deep technical back-end knowledge.
Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource to go one step deeper into GraphQL. However, it’s also focused on a more traditional, server-full Express setup.
If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
² The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
³ Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.
Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces it can be daunting, and frustratingly hard to see how they fit seamlessly into a normal web application.
The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.
There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.
Your first Serverless service
Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.
First, let’s create our service. Open any new or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from terminal and run the following command:
sls create -t aws-nodejs --path hello-world
That creates a new directory called hello-world. Let’s crack it open and see what’s in there.
If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With Serverless, all we need is a little bit of config.
Still in the hello-world directory? Open the serverless.yaml file that was created in there.
The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: msg
          method: get
That’s it. Serverless does all the grunt work described above.
CORS configuration
Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.
Below the configuration above, add cors: true to the http event.
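With that in place, the events block looks something like this:

events:
  - http:
      path: msg
      method: get
      cors: true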
That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.
CORS Lambda tweak
While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:
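Something along these lines should work — corsify is just my name for it, and the key piece is the Access-Control-Allow-Origin header:

// Wrap a Lambda response so it goes out with the headers CORS requires
const corsify = response => ({
  ...response,
  headers: {
    ...response.headers,
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type"
  }
});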
Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:
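Roughly, that gives us something like this (the response message itself is just filler):

"use strict";

// Wrap a Lambda response so it goes out with the headers CORS requires
const corsify = response => ({
  ...response,
  headers: {
    ...response.headers,
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "Content-Type"
  }
});

module.exports.hello = async event => {
  return corsify({
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from our Serverless endpoint!" })
  });
};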
Let’s run it. Type sls deploy into your terminal from the hello-world folder.
When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the gateway API that serverless created for us, then find the Invoke URL. It would look something like this.
Fortunately, there is an easier way, which is to type sls info into our terminal:
Just like that, we can see that our Lambda function is available at the following path:
Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.
Building a useful Lambda
OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.
sls create -t aws-nodejs --path cover-art
As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:
# Same as before
events:
  - http:
      path: upload
      method: post
      cors: true
Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:
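Here’s a sketch of that section with just enough access for our uploads — your-bucket-name is a placeholder, and you may want to scope the actions differently:

provider:
  # ...existing name/runtime settings stay as they are
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:PutObject"
      Resource: "arn:aws:s3:::your-bucket-name/*"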
Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.
Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:
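On reasonably recent versions of the Serverless framework, the binary media types can be declared right under provider — a sketch (older releases leaned on a plugin for this):

provider:
  name: aws
  runtime: nodejs14.x # or whatever runtime the template generated
  apiGateway:
    binaryMediaTypes:
      - "multipart/form-data"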
For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.
Now we get to write some code
First, let’s grab some helpers.
npm i jimp uuid lambda-multipart-parser
Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names of the sized resources, before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.
Next, let’s make a convenience helper for S3 uploading:
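Something like this should do the trick — a sketch that relies on the aws-sdk Lambda already provides, with your-bucket-name and the returned URL format as placeholders:

const AWS = require("aws-sdk");
const s3 = new AWS.S3();

// Push a buffer up to S3 and resolve with the object's public URL
const uploadToS3 = (key, body, contentType) =>
  s3
    .putObject({
      Bucket: "your-bucket-name",
      Key: key,
      Body: body,
      ContentType: contentType,
      ACL: "public-read"
    })
    .promise()
    .then(() => `https://your-bucket-name.s3.amazonaws.com/${key}`);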
Lastly, we’ll plug in some code that reads the upload files, resizes them with Jimp (if needed) and uploads the result to S3. The final result is below.
I’m sorry to dump so much code on you but — this being a post about Amazon Lambda and serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.
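Here’s a sketch of the upload handler. The 1,000px width limit and the PNG output are my choices, and it leans on the uploadToS3 helper sketched above:

const Jimp = require("jimp");
const { v4: uuid } = require("uuid");
const parser = require("lambda-multipart-parser");

module.exports.upload = async event => {
  try {
    // lambda-multipart-parser pulls the uploaded files out of the multipart body
    const formPayload = await parser.parse(event);
    const file = formPayload.files[0];

    // Resize anything wider than 1,000px, preserving the aspect ratio
    const image = await Jimp.read(file.content);
    if (image.bitmap.width > 1000) {
      image.resize(1000, Jimp.AUTO);
    }
    const resized = await image.getBufferAsync(Jimp.MIME_PNG);

    // Give the sized image a unique name and send it to S3
    const key = `${uuid()}.png`;
    const url = await uploadToS3(key, resized, Jimp.MIME_PNG);

    return {
      statusCode: 200,
      headers: { "Access-Control-Allow-Origin": "*" },
      body: JSON.stringify({ url })
    };
  } catch (err) {
    console.error(err);
    return {
      statusCode: 500,
      headers: { "Access-Control-Allow-Origin": "*" },
      body: JSON.stringify({ error: "Upload failed" })
    };
  }
};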
Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:
<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>
The onDrop function looks like this:
const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
    .then(resp => resp.json())
    .then(res => {
      if (res.error) {
        // handle errors
      } else {
        // success - woo hoo - update state as needed
      }
    });
};
And just like that, we can upload a file and see it appear in our S3 bucket!
An optional detour: bundling
There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire services folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time, data usage, cold start performance, etc. In other words, it’s a nice thing to have.
Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:
npm i serverless-webpack --save-dev
…and add it via our YAML config file. We can drop this in at the very end:
# Same as before
plugins:
  - serverless-webpack
Naturally, we need a webpack.config.js file, so let’s add that to the mix:
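Here’s a sketch of one that matches the notes below — the mode and output path are my choices:

// webpack.config.js — a sketch; adjust the entry and output to your project
const path = require("path");

module.exports = {
  entry: "./handler.js",
  target: "node",
  mode: "production",
  output: {
    // Lambda handlers need a CommonJS export, and our config expects handler.js
    libraryTarget: "commonjs2",
    path: path.resolve(__dirname, ".webpack"),
    filename: "handler.js"
  },
  // Lambdas provide aws-sdk at runtime, so keep it out of the bundle
  externals: ["aws-sdk"],
  resolve: {
    // Ignore ESM entry points so Jimp resolves correctly
    mainFields: ["main"]
  }
};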
Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] is to tell webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.
Now let’s re-deploy, and hopefully we’ll see webpack running.
Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!
Odds and ends
If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:
request.append("xyz", "Hi there");
…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
Wrapping up
Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.
If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify, and former member of the serverless team, as well as the Serverless Handbook by Swizec Teller.