
Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel

This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.

In other words, lots to learn! By the end, you’ll not only get to dive into Jamstack and serverless concepts, but also hands-on experience with a really neat combination of tech that I think you’ll really like.

Creating a Redwood app

Redwood is a framework for serverless applications that pulls together React (for front-end component), GraphQL (for data) and Prisma (for database queries).

There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.

There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.

We will need Yarn installed to use the Redwood CLI. Once that’s good to go, here’s what to run in a terminal:

yarn create redwood-app ./csstricks

We’ll now cd into our new project directory and start our development server.

cd csstricks
yarn rw dev

Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.

Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.

The Redwood starting page indicates that the front end of our app is ready to go. It also provides nice instructions for how to start creating custom routes for the app.

Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.

Directory structure

Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.

Here’s how the project is structured:

├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Don’t worry too much about what all this means yet; the first thing to notice is that things are split into two main directories: web and api. Yarn workspaces allow each side to have its own path in the codebase.

web contains our front-end code for:

  • Pages
  • Layouts
  • Components

api contains our back-end code for:

  • Function handlers
  • Schema definition language
  • Services for back-end business logic
  • Database client

Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Well, it’s just a personal preference. After Google purchased Firebase, it launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than what Fauna offers. At the same time, while a handful of community projects have experimented with combining Firestore and GraphQL, there isn’t first-class GraphQL support from Google.

Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.

index.html

We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.

But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/png" href="/favicon.png" />
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>

  <body>
    <div id="redwood-app"></div>
  </body>
</html>

While Redwood uses the term Jamstack in its documentation and marketing, it doesn’t do pre-rendering yet (like Next or Gatsby can). It is still Jamstack, though, in the sense that it ships static files and hits APIs with JavaScript for data.

index.js

index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects the element with the id redwood-app, and ReactDOM.render() renders our application into that root DOM element.

RedwoodProvider

The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Redwood’s Flash message system uses the Context API for passing message objects between deeply nested components, and it provides a typical message display unit for rendering the messages provided to FlashContext.

FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.
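To make this concrete, here’s a minimal sketch of a component using that hook. The component name is hypothetical, and it assumes the addMessage helper exposed by useFlash:

// A hypothetical component using Redwood's flash message system
import { useFlash } from '@redwoodjs/web'

const SaveButton = () => {
  const { addMessage } = useFlash()

  // Queue a message; any flash display unit subscribed to
  // FlashContext will render it
  const onClick = () => addMessage('Post saved!', { classes: 'success' })

  return <button onClick={onClick}>Save</button>
}

export default SaveButton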

FatalErrorBoundary

The provider itself is then contained within the <FatalErrorBoundary> component, which takes <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.

import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'
import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)

Routes.js

Router contains all of our routes, and each route is specified with a Route. The Redwood Router attempts to match the current URL to each route, stopping when it finds a match, and then renders only that route. The only exception is the notfound route, which renders a single Route with a notfound prop when no other route matches.

import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes

Pages

Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.

We can also use rw instead of redwood and g instead of generate to save some typing.

yarn rw g page home /

This command performs four separate actions:

  • It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
  • It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
  • It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
  • It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.

HomePage

If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:

// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
The HomePage.js file has been set as the main route, /.

We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>
      <p>Taking Fullstack to the Jamstack</p>
    </>
  )
}

export default HomePage

AboutPage

To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.

yarn rw g page about
AboutPage.js is now available at /about.
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage

We’ll make a few edits to the About page like we did with our Home page. That includes taking out the <Link> and routes imports and deleting <Link to={routes.about()}>About</Link>.

Here’s the end result:

// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage

If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!

const Routes = () => {
  return (
    <Router>
      <Route path="/about" page={AboutPage} name="about" />
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

Layouts

Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.

BlogLayout

You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical to what we’ve been doing so far, except it’s rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.

yarn rw g layout blog
// web/src/layouts/BlogLayout/BlogLayout.js

const BlogLayout = ({ children }) => {
  return <>{children}</>
}

export default BlogLayout

To create links between different pages we’ll need to:

  • Import Link and routes from @redwoodjs/router into BlogLayout.js
  • Create a <Link to={}></Link> component for each link
  • Pass a named route function, such as routes.home(), into the to={} prop for each route
// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>

        <nav>
          <ul>
            <li>
              <Link to={routes.home()}>Home</Link>
            </li>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <p>{children}</p>
      </main>
    </>
  )
}

export default BlogLayout

We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
Hey look, the navigation is taking shape!

If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:

// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage

Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll create our GraphQL schema so we can start working with data.

Fauna schema definition language

To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.

// sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}

Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.

The GraphQL Playground is located in the selected database.
The Fauna shell allows us to write, run and test queries.

It’s very important that Redwood and Fauna agree on the SDL, so we cannot use the original SDL that was entered into Fauna because that is no longer an accurate representation of the types as they exist on our Fauna database.

The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.
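To illustrate, querying posts against the transformed schema nests the array of posts under that data object, so a response comes back shaped roughly like this (values are illustrative):

{
  "data": {
    "posts": {
      "data": [
        { "title": "First post", "body": "Hello world" }
      ]
    }
  }
}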

Redwood schema definition language

This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.

// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`

Services

The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.

// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
  {
    posts {
      data {
        title
        body
      }
    }
  }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}

At this point, we can install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:

cd api
yarn add graphql-request graphql

Attach the Fauna authorization token to the request header

So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and using it to query an endpoint set to https://graphql.fauna.com/graphql.

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer ' + process.env.FAUNADB_SECRET
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}

A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.
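During development, Redwood loads environment variables from a .env file at the project root, so the secret can live there. The variable name just has to match what db.js reads; the value below is a placeholder for your own key:

# .env
FAUNADB_SECRET=<YOUR_FAUNA_KEY_SECRET>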

Create

We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.

Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript.",
      body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083736060690956"),
  ts: 1605274864200000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript.",
    body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
  }
}

Let’s create another one.

Create(
  Collection("Post"),
  {
    data: {
      title: "NextJS is a React framework for building production grade applications that scale.",
      body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083760102441484"),
  ts: 1605274887090000,
  data: {
    title: "NextJS is a React framework for building production grade applications that scale.",
    body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
  }
}

And maybe one more just to fill things up.

Create(
  Collection("Post"),
  {
    data: {
      title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
      body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083792286384652"),
  ts: 1605274917780000,
  data: {
    title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
    body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
  }
}

Cells

Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.

BlogPostsCell

yarn rw generate cell BlogPosts

export const QUERY = gql`
  query BlogPostsQuery {
    blogPosts {
      id
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ blogPosts }) => {
  return JSON.stringify(blogPosts)
}

By default we have the query render the data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes to make the query and render the data we need. So, let’s:

  • Change blogPosts to posts.
  • Change BlogPostsQuery to POSTS.
  • Change the query itself to return the title and body of each post.
  • Map over the data object in the success component.
  • Create a component with the title and body of the posts returned through the data object.

Here’s how that looks:

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return data.map(post => (
    <>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </>
  ))
}

The POSTS query is sending a query for posts, and when it’s queried, we get back a data object containing an array of posts. We need to pull out the data object so we can loop over it and get the actual posts. We do this with object destructuring to get the data object and then we use the map() function to map over the data object and pull out each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.

Import BlogPostsCell to HomePage

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
Check that out! Posts are returned to the app and rendered on the front end.

Vercel

We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this work.

Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.

We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open redwood.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.

I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.

Redwood has a preset build command that works out of the box in Vercel:

Simply select “Redwood” from the preset options and we’re good to go.

We’re pretty far along, but even though the site is now “live” the database isn’t connected:

To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:

Now our application is complete!

We did it! I hope this not only gets you super excited about working with Jamstack and serverless, but also gives you a taste of some new technologies in the process.



How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • There’s automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted to a specific use case. We will see such examples of custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of already existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • A client side for the app built using a static site generator called Gatsbyjs (Gatsby).
  • Deployment of the app on a CDN configured and managed by Netlify.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with any traditional RDBMS, a major difference with Fauna is that it is a relational NoSQL system that offers all the capabilities of a legacy RDBMS. It is very flexible without compromising scalability and performance.

Fauna supports multiple APIs for data access:

  • GraphQL: An open-source data query and manipulation language. If you are new to GraphQL, you can find more details here: https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language-specific drivers, which make it easy to use with languages like JavaScript, Java, Go, etc. Find more details of FQL here.

In this article, we will explain the usage of GraphQL for the ShopNote application.

First things first, sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our purposes.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will be defining the GraphQL schema and importing it into the database. A GraphQL schema defines the structure of the data. It defines the data types and the relationship between them. With schema we can also specify what kind of queries are allowed.

At this stage, let us create our project folder. Create a project folder somewhere on your hard drive with the name shopnote. Then create a file with the name shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, a description, an update time, and a list of Items. Each Item type has properties like name, urgent, and checked, and tracks which ShopNote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote, allShopNotes; each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also help to create the documents automatically. We will see that in a while.

Fauna supports a schema-free object relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each of the data records are inserted into the document. This forms a hierarchy which can be visualized as:

Here the data records can be arrays, objects, or any other supported types. With the Fauna data model, we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the documents getting created as well. We will be able to view and query the records and utilize the power of indexes. You may see the data model structure appearing in your Fauna dashboard like this in a while:

Point to note here, each of the documents is identified by the unique ref attribute. There is also a ts field which returns the timestamp of the recent modification to the document. The data record is part of the data field. This understanding is really important when you interact with collections, documents, records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
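For instance, if you ever want to inspect a document directly in the Fauna shell, a Get on its ref returns the full document with those ref, ts, and data fields (the document ID is a placeholder):

// FQL: fetch a single document by its ref
Get(Ref(Collection("ShopNote"), "<DOCUMENT_ID>"))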

With all of this understanding, let us start using our shopnotes database, which has been created successfully and is ready for use.

Let us try some queries

Even though we have imported the schema and the underlying pieces are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation into the left panel of the GraphQL Playground screen and execute it.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name,
        urgent
      }
    }
  }
}

Note that, as Fauna has already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once the mutation is successfully executed, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, and it is fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items in a shopnote, as well as updating and deleting a shopnote itself. These queries will be used at a later point in time when we create the serverless functions.
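As one example, Fauna auto-generates mutations like updateShopNote and deleteShopNote alongside createShopNote, so deleting a note is a one-liner in the playground (the ID shown is illustrative):

mutation {
  deleteShopNote(id: "281000000000000000") {
    _id
    name
  }
}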

If you are interested in running other queries in the GraphQL editor, you can find them here.

Create a Server Secret Key

Next, we need to create a secure server key to make sure that access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with everything we had to do for Fauna’s setup. Let us move to the next phase, creating serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this:

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it, and the simple functions we write automatically become APIs.
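To get a feel for the shape of a Netlify function before we wire in Fauna, here is a minimal hello-world sketch (the file name is arbitrary). A module exports a handler that returns a status code and a body, and that is the whole contract:

// functions/hello.js: a minimal Netlify function
exports.handler = async () => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from a serverless function!" })
  };
};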

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies:

npm init -y

Install the netlify-cli utility so that we can run the serverless functions locally:

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First, we import the axios library and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make the call using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id,
            name,
            checked,
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one-time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This is to tell Netlify about the location of the functions we have written so that it is known at build time.

Netlify automatically provides the APIs for the functions. The URL to access the API is of this form, /.netlify/functions/get-shopnotes, which may not be very user-friendly. We have written a redirect to make it look like /api/get-shopnotes.

Ok, we are done. Now in command prompt type,

netlify dev

By default, the app will run on localhost:8888, and we can access the serverless function as an API there.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

//create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  // The mutation declares four non-null variables, so we pass all of them
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please pay attention to the parameter ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the mutation as well.

We have de-structured the payload to get the required information. Once we have it, we just call the query method to create a ShopNote.
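For reference, the handler above expects a JSON request body shaped like this (values are illustrative). Note that items follows the nested create format of Fauna’s relation input:

{
  "name": "Weekend list",
  "description": "Things to pick up on Saturday",
  "updatedAt": "2020-11-13T10:00:00Z",
  "items": {
    "create": [
      { "name": "Eggs - 1 dz", "urgent": false }
    ]
  }
}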

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is the screenshot from Postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember how ShopNote and Item are related? To create an item, we have to specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

//add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties: the name, whether it is urgent, the checked value, and the note the item should be part of. Let’s see how this API can be called using Postman:

As you see, we are passing the id of the note while creating an item for it.
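Concretely, a request body for this function might look like the following (the ID is illustrative). Fauna’s generated relation input accepts connect to attach the new item to an existing note by its _id:

{
  "name": "Bread - 1 loaf",
  "urgent": false,
  "checked": false,
  "note": { "connect": "282000000000000000" }
}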

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. In case you are interested, you can look into those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it! Next, let us build some user interfaces to show the shop notes and add items to them. To do that, we will use Gatsby.js (aka Gatsby), which is a super cool, React-based static site generator.

The following section requires you to have basic knowledge of ReactJS. If you are new to it, you can learn it from here. If you are familiar with any other user interface technologies, like Angular, Vue, etc., feel free to skip the next section and build your own using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called, gatsby-config.js. Please create a file named, gatsby-config.js at the root of the project folder with the following content,

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  )
}

Let’s run it. We would generally use the command gatsby develop to run the app locally, but since we have to run the client-side application along with the netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

The Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, heading, and the create button for creating a shopnote.
  • Shopnotes: This component contains the list of shop notes (Note components).
  • Note: An individual note. Each note contains one or more items.
  • Item: An individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies that are required for the user interface to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies:

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use the React useEffect hook to make the API call and update the shopnotes and loading state variables. Here is the code to fetch all the shop notes.

useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we check if the data has loaded. If so, we render the Shopnotes component, passing the data we received from the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={shopnotes} /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file here. The index.js file creates the initial route (/) for the user interface. It uses other components, like Shopnotes, Note, and Item, to make the UI fully operational. We will not go to great lengths to explain each of these UI components here. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content from this CSS file into it. Then make sure both Bootstrap’s stylesheet and our new index.css are imported in index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. We won’t get into each of the actions on items and notes here, so as not to make the article too lengthy. You can find all the code in the GitHub repo. At this stage, the directory structure may look like this:

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!
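If you want a nudge, the click handler only needs to POST to the endpoint we built earlier. Here is a rough sketch; the function name and payload values are hypothetical, not a finished implementation:

// Hypothetical click handler for a "Create Note" button in the Header
import axios from 'axios';

const createShopnote = async () => {
  const payload = {
    name: 'New shopnote',
    description: 'Created from the UI',
    updatedAt: new Date().toISOString(),
    items: { create: [] }
  };
  // POSTs to the Netlify function via the /api redirect
  const result = await axios.post('/api/create-shopnote', payload);
  console.log(result.data.shopnote);
};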

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already. Please log in and click on the button, New site from Git.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details, like the build command and publish directory, as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields. Click on deploy.

You should see the build succeed in a couple of minutes, and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70-80% of the features that once required a custom back end can now be handled either on the front end or by existing APIs and services.
  • Fauna provides the data API for the client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you have the need for custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack to give a fast end user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.



WordPress and Jamstack

I recently moderated a panel at Netlify’s virtual Jamstack Conf that included Netlify CEO Matt Biilman and Automattic founder Matt Mullenweg. The whole thing was built up — at least to some — as a “Jamstack vs. WordPress” showdown.

I have lots of thoughts of my own on this and think I’m more useful as a pundit than a moderator. This is one of my favorite conversations in tech right now! So allow me to blog.

Disclosure: both Automattic and Netlify are active sponsors of this site. I have production sites that use both, and honestly, I’m a fan of both, which is an overarching point I’ll try to make. I also happen to be writing and publishing this on a WordPress site.

History

  1. Richard MacManus published “WordPress Co-Founder Matt Mullenweg Is Not a Fan of JAMstack” with quotes from an email conversation between them, including a line from Matt saying, “JAMstack is a regression for the vast majority of the people adopting it.”
  2. Matt Biilmann published a response “On Mullenweg and the Jamstack – Regression or Future?” with a whole section titled “The end of the WordPress era.”
  3. People chimed in along the way. Netlify board member Ohad Eder-Pressman wrote an open letter. Sarah Gooding rounded up some of the activity on WP Tavern (which is owned by Matt Mullenweg). I chimed in as well.
  4. Matt Mullenweg clarified remarks with some new zingers.

The debate was on October 6th at Jamstack Conf Virtual 2020. There is no public video of it (sorry).

The Stack

Comparing Jamstack to WordPress is a bit weird. What is comparable is the fact that they are both roads you might travel when building a website. Much of this post will keep that in mind and compare the two that way. They aren’t directly comparable, though, because:

  • Jamstack is a loose descriptor of an architectural philosophy that encourages static files on CDNs and JavaScript-accessed services for any dynamic needs.
  • WordPress is a CMS on the LAMP stack.

Those things are not apples to apples.

If we stick just with the stack for a moment, the comparison would be between:

  • Static Hosting + Services
  • LAMP

An example of Static + Services is using Netlify for hosting (which is static) and using services to do anything dynamic you need to do. Maybe you use Netlify’s own forms and auth functionality and Hasura for data storage.

On a LAMP stack, you have MySQL to store data in, so you aren’t reaching for an outside service there. You also have PHP available. So with those (in addition to open-source software), you have what you need for auth. It doesn’t mean that you never reach for services; you just do so less often as you have more technology at your fingertips from the server you already have.

Jamstack: Static Hosting as the hub, reaching out to services for everything else.

LAMP: Server that does a bunch of work as the hub, reaching out to a few services if needed.

Matt B. called the LAMP stack a “monolith.” Matt M. objected to that term and called it an “integrated approach.” I’m not a computer scientist, but I could see this going either way. Here’s Wikipedia:

[…] a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program.

Defined that way, yes, WordPress does appear to be a monolith, and yet the Wikipedia article continues:

[…] a monolithic application describes a software application that is designed without modularity.

Seen that way, it would appear WordPress doesn’t qualify as a monolith. WordPress’ hook and plugin architecture is modular. 🤷‍♂️

It would be interesting to hear these two guys dig into the nuance there, but the software is what it is. A self-hosted WordPress site runs on a server with a full stack of technology available to it. It makes sense to ask as much of that server as you can (i.e. integrated). In a Jamstack approach, the server is abstracted from you. Everything else you need to do is split into different services (i.e. not integrated).

The WordPress approach doesn’t mean that you never reach for outside services. In both stacks, you’d likely use something like Stripe for eCommerce APIs. You might reach for something like Cloudinary for robust media storage and serving. Even WordPress’ Jetpack service (which I use and like) brings a lot of power to a self-hosted WordPress site by behaving like a third-party service and moving things like asset-hosting and search technology off your own servers and over to cloud servers. Both stacks are conglomerations of technologies.

Neither stack is any more a “house of cards” or failure-prone than the other. All websites can have that “only as strong as its weakest link” metaphor applied to them. If a WordPress plugin ships a borked version or somehow is corrupted on upload, it may screw up my site until I fix it. If my API keys become invalid for my serverless database, my Jamstack site might be hosed until I fix it. If Stripe is down, I’m not selling any products on any kind of site until they are back up.

A missing semicolon can sink any site.

Pricing

WordPress.com has a free plan, and that’s absolutely a place you can build a site. (I have several.) But you don’t really have developer-style access to it until you’re on the business plan at $25 per month. Self-hosted WordPress itself is open-source and free, but you’re not going to find a place to spin up a self-hosted WordPress site for free. It starts cheap and scales up. You need LAMP hosting to run WordPress. Here’s a look around at fairly inexpensive hosting plans:

There is money involved right off the bat.

Starting free is much more common with Jamstack, then you incur costs at different points. Jamstack being newer, it feels like a market that’s still figuring itself out.

  • Vercel is free until you need team members or features like password protected sites. A single password-protected site is $150/month. You can toss basic auth on any server with Apache for no additional cost (see the sketch after this list).
  • Netlify is very similar, unlocking features on higher plans, and offering a la carte per-site features, like analytics ($9/month) and auth (5,000 active users is $99/month).
  • AWS Amplify starts free, but like everything on AWS, your usage is metered on lots of levels, like build minutes, storage, and bandwidth. They have an example calculation of a web app that has 10,000 daily active users and is updated two times per month, which costs $65.98/month.
  • Azure Static Web Apps hasn’t released pricing yet, but will almost certainly have a free tier or free usage of some kind.
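For comparison, the Apache basic auth mentioned above is just a few lines of configuration (paths are illustrative):

# .htaccess: password-protect a site on any Apache server
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /path/to/.htpasswd
Require valid-user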

All of this is a good reminder that Netlify isn’t the only one in the Jamstack game. Jamstack just means static hosting plus services.

You can’t make blanket statements like Jamstack is cheaper. It’s far too dependent on the site’s usage and needs. With high usage and a bunch of premium services, Jamstack (much like serverless in general) can get super expensive. Netlify says their enterprise pricing starts at $3,000/month, and while you get things like auth, forms, and media handling, you don’t get a CMS or any data storage, which is likely to kick you up much higher.

While this WordPress site isn’t enterprise, I can tell you it requires a server in the vicinity of $1,000/month, and that assumes Cloudflare is in front of it to help reduce bandwidth directly to the host and Jetpack handling things like media hosting and search functionality. Mailchimp sends our newsletter. Wufoo powers our forms. We also have paid plugins, like Advanced Custom Fields Pro and a few WooCommerce add-ons. That’s not all of it. It’s probably a few thousand per month, all told. This isn’t unique to any integrated approach, but helps illustrate that the cost of a WordPress site can be quite high as well. Automattic’s own WordPress VIP hosting service doesn’t publish prices (a common enterprise tactic), but it surely starts at mid-four-figures before you start adding third-party stuff.

Bottom line: there is no sea change in pricing happening here.

Performance

80% of web performance is a front-end concern.

That’s a true story, but it’s also built on the foundation of the server (accounting for the first 20%). The speediest front-end in the world doesn’t feel speedy at all if the first request back from the server takes multiple seconds. You’ve gotta make sure that first request is smoking fast if you want a fast site.

The first rectangle is the first server response, and the second is the entire front end. Even a fast front end can’t save you from a slow back end.

You know what’s super fast? Global CDNs serving static files. That’s what you want to make happen on any website, regardless of the stack. While that’s the foundation of Jamstack (static CDN-backed hosting), it doesn’t mean WordPress can’t do it.

You take an index.html file with static content, put that on Netlify, and it’s gonna be smoking fast. Maybe your static site generator makes that file (which, it’s worth pointing out, could very well get content from WordPress). There is something very nice about the robustness and steady foundation of that.

By default, WordPress isn’t making static files that are cacheable on a global CDN. WordPress responds to requests from a single origin: it runs PHP, which asks the database for data, assembles a response, and then, finally, returns the page. That can be pretty fast, but it’s far less sturdy than a static file on a global CDN and it’s far easier to overwhelm with requests.

WordPress hosts know this, and they try to solve the problem at the hosting level. Just look at WP Engine’s approach. Without you doing anything, they use a page cache so that the site can essentially return a static asset rather than needing to run PHP or hit a database. They employ all sorts of other caching as well, including partnering with Cloudflare to do the best possible caching. As I’m writing this, my shoptalkshow.com site literally went down. I wrote to the host, Flywheel, to see what was up. Turns out that when I went in there to turn on a staging site, I flipped a wrong switch and turned off their caching. The site couldn’t handle the traffic and just died. Flipping the caching switch back on instantly solved it. I didn’t have Cloudflare in front of the site, but I should have.

Cloudflare is part of the magic sauce of making WordPress fast. Just putting it in front of your self-hosted WordPress site is going to do a ton of work in making it fast and reliable. One of the missing pieces has been great caching of the HTML itself, which they literally dealt with this month and now that can be cached as well. There is kind of a funny irony in that caching WordPress means caching requests as static HTML and static assets, and serving them from a global CDN, which is essentially what Jamstack is at the end of the day.

Matt M. mentioned that WordPress.com employs global CDNs that kick in at certain levels of traffic. I’m not sure if that’s Cloudflare or not, but I wouldn’t doubt it.

With Cloudflare in front of a WordPress site, I see the same first-response numbers as I do on Netlify sites without Cloudflare (because they do not recommend using Cloudflare in front of Netlify-hosted sites). That’s mid-double-digit millisecond numbers, which is very, very good.

First request on WordPress site css-tricks.com, hosted by Flywheel with Cloudflare in front. Very fast.
First request on my Jamstack site, conferences.css-tricks.com, hosted by Netlify. Very fast.

From that foundation, any discussion about performance becomes front-end specific. Front-end tactics for speed are the same no matter what the server, hosting, or CMS situation is on the back end.

Security

There are far more stories about WordPress sites getting hacked than Jamstack sites. But is it fair to say that WordPress is less secure? WordPress is going on a couple of decades of history and has a couple orders of magnitude more sites built on it than Jamstack does. Security aside, you’re going to get more stories from WordPress with those numbers.

Matt M. brought up that whitehouse.gov is on WordPress, which is obviously a site that needs the highest levels of security. It’s not that WordPress itself is insecure software. It’s what you do with it. Do you have insecure passwords? That’s insecure no matter what platform you’re using. Is the server itself insecure via file permissions or access levels? That’s not exactly the software’s fault, but you may be in that position because of the software. Are you running the latest version of WordPress? Usage is fragmented, at best, and the older the version, the less secure it’s going to be. Tricky.

It may be more interesting to think about security vectors. That is, the points at which it’s possible to get hacked. If you have a static file sitting on static hosting, I think it’s safe to say there are fairly few attack vectors. But still, there are some:

  • Your hosting account could be hacked
  • Your Git repo could be hacked
  • Your Cloudflare account could be hacked
  • Your domain name could be stolen (it happens)

That’s all true of a WordPress site, too, only there are additional attack vectors like:

  • Server-side code: XSS, bad plugins, remote execution, etc.
  • Database vulnerabilities
  • Running an older, outdated version of WordPress
  • The login system is right on the site itself, e.g. bad guys can hammer /wp-login.php

I think it’s fair to say there are more attack vectors on a WordPress site, but there are plenty of vectors on any site. The hosting account of any site is a big vector. Anything that sits in the DNS chain. Any third-party services with logins. Anything with an API key.

Personal experience: this site is on WordPress and has never been hacked, but not for lack of trying. I do feel like I need to think more about security on my WordPress sites than my sites that are only built from static site generators.

Scaling

Scaling either approach costs money. This WordPress site isn’t massively scaled, but does require some decent scaling up from entry-level server requirements. I serve all traffic through Cloudflare, so a peek at the last 30 days tells me I serve 5 TB of bandwidth a month.

On a Netlify Business plan (600 GB of traffic for $99, then $20 per additional 100 GB) that math works out to $979. Remember when I said this site requires a server that costs about $1,000/month? I wrote that before I ran these numbers, so I was super close (go me). Jamstack versus WordPress at the scale of this site is pretty neck-and-neck. All hosts are going to charge for bandwidth and have caps with overage charges. Amplify charges $0.15/GB over a 15 GB monthly cap. Flywheel (my WordPress host) charges based on a monthly visitor cap and, over that, it’s $1 per 1,000.
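If you want to check my math, here’s that Netlify calculation as a quick back-of-the-napkin script (the plan numbers are the published ones mentioned above):

// 5 TB of monthly traffic on a Netlify Business plan
const totalGB = 5000;   // roughly 5 TB served per month
const includedGB = 600; // included in the $99 base plan
const extraBlocks = Math.ceil((totalGB - includedGB) / 100); // 44 blocks of 100 GB
const monthlyCost = 99 + extraBlocks * 20;
console.log(monthlyCost); // 979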

The story with WordPress scaling is:

  • Use a host that can handle it and that has their own proven caching strategy.
  • CDN everything (which usually means putting Cloudflare in front of it).
  • Ultimately, you’re going to pay for it.

The story with Jamstack scaling is:

  • The host and services are built to scale.
  • You have to think about scaling less in terms of “Can this service handle this, or do I have to move?”
  • You have to think about scaling more in terms of the fact that every aspect of every service will have pricing you need to keep an eye on.
  • Ultimately, you’re going to pay for it.

I’ve had to move around a bit with my WordPress hosting, finding hosts that are in line with the current needs of the site. Moving a WordPress site is non-trivial, but it’s far easier than moving to another CMS. For example, if you build a Jamstack site on a headless CMS that becomes too pricey, the cost of moving is a bigger job than switching hosts.

I like what Dave Rupert wrote the other day (in a Slack conversation) about comparing performance between the two:

Jamstack: Use whatever thing to build your thing, there’s addons to help you, and use our thing to deploy it out to a CDN so it won’t fall over.

WordPress: Use our thing to build your thing, there’s addons to help you, and you have to use certain hosts to get it to not fall over.

There are other kinds of “scaling” as well. I think of something like number of users. That’s something that all sorts of services use for pricing tiers, which is an understandable metric. But that’s free in WordPress. You can have as many users with as many nuanced permissions as you like. That’s just the CMS, so adding on other services might still charge you by the head. Vercel or Netlify charge you by the head for team accounts. Contentful (a popular headless CMS) starts at $489/month for teams. Even GitHub’s Team tier is $4 per user if you need anything the free account can’t do.

Splitting the Front and Back

This is one of the big things that gets people excited about building with Jamstack. If all of my site’s functionality and content are behind APIs, that frees up the front end to build however it wants to.

  • Wanna build an all-static site? OK, hit that API during the build process and do that.
  • Wanna build a client-rendered site with React or Vue or whatever? Fine, hit the API client-side.
  • Wanna split the middle, pre-rendering some, client-rendering some, and server-rendering some? Cool, it’s an API, you can hit it however you want.

That flexibility is neat on green-field builds, but people are just as excited about theoretical future flexibility. If all functionality and content is API-driven, you’ve entirely split the front and back, meaning you can change either in the future with more flexibility.

  • As long as your APIs keep spitting out what the front end expects, you can re-architect the back end without troubling the front end.
  • As long as you’re getting the data you need, you can re-architect the front end without troubling the back end.

This kind of split feels “future safe” for sites at a certain size and scale. I can’t quite put my finger on what those scale numbers are, but they are there.

If you’ve ever done any major site re-architecture just to accommodate one side or the other, moving to a system where you’ve split the back end and front end surely feels like a smart move.

Once we’ve split, as long as the expectations are maintained, back (B) and front (F) are free to evolve independently.

You can split a WordPress site (we’ll get to that in the “Using Both” section), but by default, WordPress is very much an integrated approach where the front end is built from themes in PHP using very WordPress-specific APIs. Not split at all.

Developer Experience

Jamstack has done a good job of heavily prioritizing developer experience (DX). I’ve heard someone call it “a local optimum,” meaning Jamstack is designed around local development (and local developer) experience.

  • You’re expected to work locally. You work in your own comfortable (local, fast, customized) development environment.
  • Git is a first-class citizen. You push to your production branch (e.g. master or main), then your build process runs, and your site is deployed. You even get a preview URL of what the production site will be for every pull request, which is an impressively great feature.
  • Use whatever tooling you like. You wanna pre-build a site in Hugo? Go for it. You learned create-react-app in school? Use that. Wanna experiment with the cool new framework du jour? Have at it. There is a lot of freedom to build however you want, leveraging the fact that you can run a build and deploy whatever folder in your repo you want.
  • What you don’t have to do is important, too. You don’t have to deal with HTTPS, you don’t have to deal with caching, you don’t have to worry about file permissions, you don’t have to configure a CDN. Even advanced developers appreciate having to do less.

It’s not that WordPress doesn’t consider developer experience (for example, they have a CLI and it can do helpful things, like scaffold blocks), but the DX doesn’t feel as to-the-core of the project to me.

  • Running WordPress locally is tricky, requiring you to run a (X)AMP stack somehow, which involves notoriously finicky third-party software. Thank god for Local by Flywheel. There is some guidance but it doesn’t feel like a priority.
  • What should go in Git? To this day, I don’t really know, but I’ve largely settled on the entire /wp-content folder. It feels weird to me there is no guidance or obvious best practices.
  • You’re totally on your own for deployment. Even WordPress-specific hosts don’t really nail it here. It’s largely just: here’s your SFTP credentials.
  • Even if you have a nice local development and deployment pipeline set up (I’m happy with mine), that doesn’t really help deal with moving the database around, so you’re on your own there as well.

These are all solvable things, and the WordPress community is so big that you’ll find plenty of information on it, but I think it’s fair to say that WordPress doesn’t have DX down to the core. It’s a little wild-west-y even after all these years.

In fact, I’ve found that because the encouragement of a healthy local development environment is so sidelined, a lot of people just don’t have one at all. This is anecdotal, but twice in as many years now, I’ve found myself involved in other people’s sites that work entirely production-only. That would be one thing if they were very simple sites with largely default behavior, but these have been anything but. They’re very complicated (much more so than this site), involving public user logins, paid memberships and permissions, page builders, custom shortcodes, custom CSS, and just a heck of a lot of moving parts. It scared me to death. I didn’t want to touch anything. They were live-editing PHP to make things work — cowboy coding, as people jokingly call that. One syntax error and the site is hosed, maybe even the very page you’re looking at.

The fact that WordPress powers such a huge swath of the web without particularly good DX is very interesting. There is no Jamstack without DX. It’s an entirely developer-focused thing. With WordPress, there probably isn’t a developer at all on most sites. It’s installed (or just activated, in the case of WordPress.com) and the site owner takes it from there. The site owner is like a developer in that they have lots of power, but perhaps doesn’t write any code at all.

To that end, I’d say WordPress has far more focus on UX than DX, which is a huge part of all this…

CMS and End User UX

WordPress is a damn fine CMS. Even if you don’t like it, there are a hell of a lot of people that do, and the numbers speak for themselves. What you get, when you decide to build a site with WordPress, is a heaping helping of ability to build just about any kind of site you want. It’s unlikely that you’ll have that oops, painted myself into a corner here situation with WordPress.

That’s a big deal. Jenn put her finger on this, noting that the people who use WordPress are a bigger story than a developer’s needs.

WordPress can do an absolute ton of things:

  • Blog (or be any type of content-driven CMS-style site)…
    • With content previews, which is possible-but-tricky on Jamstack
  • Deal with users/permissions…
  • eCommerce
  • Process forms
  • Handle plugins to the moon and back

Jamstack can absolutely do all these things too, but now it is Jamstack that is in Wild West territory. When you look at tutorials about how to store data, they often involve explaining how to write individual CRUD functions for a cloud database. That’s down-to-the-metal stuff, which can be very powerful, but it’s a far cry from clicking a few buttons, which is what WordPress feels like a lot of the time.

I bet I could probably cobble together a basic Jamstack eCommerce setup with Stripe APIs, which is very cool. But then I’d get nervous when I need to start thinking about inventory management, shipping zones, product variations, and who knows what else that gets complicated in eCommerce-land, making me wish I had something super robust that did it all for me.

Sometimes, we developers are building sites just for us (I do more than my fair share of that), but I’d say developers are mostly building sites for other people. So the most important question is: am I building something that is empowering for the people I’m building it for?

You can pull off a good site manager experience no matter what, but WordPress has surely proven that it delivers in that department without asking terribly much in terms of custom development.

Jamstack has some tricks that I wish I would pull off on WordPress, though. Here’s a big one for me: user-submitted content and updates. I literally have three websites now that benefit from this. A site about conferences, a site about serverless, and an upcoming site about coding fonts. WordPress could have absolutely done a great job at all three of those sites. But, what I really want is for people to be able to update and submit content in a way that I can be like: Yep, looks good, merge. By having gone with a Jamstack approach, the content is in public GitHub repos, and anyone can participate.

I think that’s super great. It doesn’t even necessarily require someone from the public knowing or understanding Git or GitHub, as Netlify CMS has this concept of Open Authoring, which keeps the whole contribution experience in the browser with UI for the editing.

Using Both

This is a big one that I see brought up a lot. Even Netlify themselves say “There is no Versus.”

Here’s the deal:

  • The “A” in “Jam” means APIs. Use APIs to build your site either at build time or client-side.
  • WordPress sites, by default, have a REST API (and can have a GraphQL API as well).
  • So, hit that API for CMS data on your Jamstack site.

Yep, totally. This works and people do it. I think it’s pretty cool.
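As a minimal sketch of the idea, assuming a standard self-hosted WordPress install at example.com (the /wp-json/wp/v2/ routes ship with WordPress by default), pulling posts at build time or client-side could look like this:

// Fetch the latest posts from WordPress' built-in REST API
async function getPosts() {
  const res = await fetch('https://example.com/wp-json/wp/v2/posts');
  const posts = await res.json();
  // Each post exposes rendered HTML for its title and content
  return posts.map((post) => ({
    title: post.title.rendered,
    content: post.content.rendered,
    link: post.link,
  }));
}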

But…

  • Running a WordPress site somewhere in addition to your Jamstack site means… you’re running a WordPress site in addition to your Jamstack site. There is cost and technical debt to that.
  • You often aren’t getting all the value of WordPress. Hitting an API for data might be all you need, but this is a very different approach to building a site than building a WordPress theme. You’re getting none of the other value of WordPress. I think of situations like this: you find a neat plugin that adds a fancy Gutenberg block to your site. That’ll “just work” on a WordPress site, but it likely has some special front-end behavior that won’t work if all you’re doing is sucking the HTML from an API. It probably enqueues some additional scripts and styles that you’ll be on your own to figure out how to incorporate where your front-end is hosted, and on your own to maintain updates.

Here’s some players that all have a unique approach to “using both”:

  • Frontity: A React framework for WordPress. Looks like you can run it with a Node server so the pages are server-side rendered, or as a static site generator. Either way, WordPress is the CMS but you’re building the front end with React and getting data from the REST API.
  • WP2Static: A WordPress plugin that builds a static version of your site and can auto-deploy it when changes are made.
  • Strattic: They host the dynamic WordPress site for you (which they refer to as “staging”) and you work with WordPress normally there. Then you choose to deploy, and they also host a static version of your site for you.

There are loads of other ways to integrate both. Here’s our own Geoff and Sarah talking about using WordPress and Jamstack together by using Vue/Nuxt with the REST API and hosting on Netlify.

Using Neither

Just in case this isn’t clear, there are absolutely loads of ways to build websites. If you’re building a Ruby on Rails site, that’s not Jamstack or WordPress. You could argue it’s more like a WordPress site in that it requires a server and you’ll be using that server to do as much as you can. You could also argue it’s more like Jamstack in that, even though it’s not static hosting, it encourages using APIs and piecing together services.

The web is a big place, gang, and this isn’t a zero-sum game. I fully expect WordPress to continue to grow and Jamstack to continue to grow because the web itself is growing. Even if we’re only considering the percentage of market share, I’d still bet that both will grow, pushing whatever else into smaller slices.

Choosing

I’m not even going to go here. Not because I’m avoiding playing favorites, but because it isn’t necessary. I don’t see developers out there biting their fingernails trying to decide between a WordPress or Jamstack approach to building a website. We’re at the point where the technologies are well-understood enough that the process goes like:

  1. Put adult pants on
  2. Evaluate needs and outcomes
  3. Pick technologies


Jamstack Conf

Here’s an important detail: it’s free!

Jamstack Conf Virtual is coming up October 6th and 7th, 2020. The sessions are on October 6th. That’s the free part (register here). Then on October 7th there are a variety of workshops (they all look great to me) that are $100 USD each. That’s the classic conference one-two punch. Sessions are for getting a broad sense of what’s happening and will very likely open your eyes to some new concepts; workshops are for deep learning and walking away with some new skills.

The speaker lineup is top-notch!

I’ve been to several Jamstack Confs myself, and I’m a fan. Some of my favorite conferences ever are ones that are focused around a technological idea at the rise of its relevance. That’s exactly what’s happening here and I’m excited to check it out. You can even watch videos from the 2019 conference to see just how awesome of an experience it is.




State of Jamstack 2020: Data Deep Dive

(This is a sponsored post.)

The Jamstack, a modern approach to building websites and apps, delivers better performance, higher security, lower cost of scaling, and a better developer experience. But how popular is it among developers worldwide, and what do they love and hate about it?

We at Kentico Kontent decided to take a closer look at the current state of Jamstackʼs adoption and use. Surveying more than 500 developers in four countries, we wanted to find out how long they’ve been working with the Jamstack, what they’re using this architecture for, where they typically deploy and host their projects, and more.

Based on the findings, we created The State of Jamstack 2020 Report that provides an overview of the results and comments on the most interesting facts. Now we’ve released something just as exciting:

Our free interactive data visualization page lets you dive into the data from our survey and discover how the web developers’ answers varied according to age, gender, primary programming language, and other factors.

Do you know where the majority of US developers working for large companies store their content? Or what’s their favorite static site generator? The answers may surprise you—click below to find out:




Going Jamstack with React, Serverless, and Airtable

The best way to learn is to build. Let’s learn about this hot new buzzword, Jamstack, by building a site with React, Netlify (Serverless) Functions, and Airtable. One of the ingredients of Jamstack is static hosting, but that doesn’t mean everything on the site has to be static. In fact, we’re going to build an app with full-on CRUD capability, just like a tutorial for any web technology with more traditional server-side access might.

Why these technologies, you ask?

You might already know this, but the “JAM” in Jamstack stands for JavaScript, APIs, and Markup. These technologies individually are not new, so the Jamstack is really just a new and creative way to combine them. You can read more about it over at the Jamstack site.

One of the most important benefits of Jamstack is ease of deployment and hosting, which heavily influence the technologies we are using. By incorporating Netlify Functions (for backend CRUD operations with Airtable), we will be able to deploy our full-stack application to Netlify. The simplicity of this process is the beauty of the Jamstack.

As far as the database goes, I chose Airtable because I wanted something that was easy to get started with. I also didn’t want to get bogged down in technical database details, so Airtable fits perfectly. Here are a few of the benefits of Airtable:

  1. You don’t have to deploy or host a database yourself
  2. It comes with an Excel-like GUI for viewing and editing data
  3. There’s a nice JavaScript SDK

What we’re building

For context going forward, we are going to build an app that you can use to track online courses you want to take. Personally, I take lots of online courses, and sometimes it’s hard to keep up with the ones in my backlog. This app will let us track those courses, similar to a Netflix queue.

 

One of the reasons I take lots of online courses is because I make courses. In fact, I have a new one available where you can learn how to build secure and production-ready Jamstack applications using React and Netlify (Serverless) Functions. We’ll cover authentication, data storage in Airtable, Styled Components, Continuous Integration with Netlify, and more! Check it out  →

Airtable setup

Let me start by clarifying that Airtable calls their databases “bases.” So, to get started with Airtable, we’ll need to do a couple of things.

  1. Sign up for a free account
  2. Create a new “base”
  3. Define a new table for storing courses

Next, let’s create a new database. We’ll log into Airtable, click on “Add a Base” and choose the “Start From Scratch” option. I named my new base “JAMstack Demos” so that I can use it for different projects in the future.

Next, let’s click on the base to open it.

You’ll notice that this looks very similar to an Excel or Google Sheets document. This is really nice for being able to work with data right inside of the dashboard. There are a few columns already created, but we’ll add our own. Here are the columns we need and their types:

  1. name (single line text)
  2. link (single line text)
  3. tags (multiple select)
  4. purchased (checkbox)

We should add a few tags to the tags column while we’re at it. I added “node,” “react,” “jamstack,” and “javascript” as a start. Feel free to add any tags that make sense for the types of classes you might be interested in.

I also added a few rows of data in the name column based on my favorite online courses:

  1. Build 20 React Apps
  2. Advanced React Security Patterns
  3. React and Serverless

The last thing to do is rename the table itself. It’s called “Table 1” by default. I renamed it to “courses” instead.

Locating Airtable credentials

Before we get into writing code, there are a couple of pieces of information we need to get from Airtable. The first is your API key. The easiest way to get this is to go to your account page and look in the “Overview” section.

Next, we need the ID of the base we just created. I would recommend heading to the Airtable API page because you’ll see a list of your bases. Click on the base you just created, and you should see the base ID listed. The documentation for the Airtable API is really handy and has more detailed instructions for finding the ID of a base.

Lastly, we need the table’s name. Again, I named mine “courses” but use whatever you named yours if it’s different.

Project setup

To help speed things along, I’ve created a starter project for us in the main repository. You’ll need to do a few things to follow along from here:

  1. Fork the repository by clicking the fork button
  2. Clone the new repository locally
  3. Check out the starter branch with git checkout starter

There are lots of files already there. The majority of the files come from a standard create-react-app application with a few exceptions. There is also a functions directory which will host all of our serverless functions. Lastly, there’s a netlify.toml configuration file that tells Netlify where our serverless functions live. Also in this config is a redirect that simplifies the path we use to call our functions. More on this soon.

The last piece of the setup is to incorporate environment variables that we can use in our serverless functions. To do this, install the dotenv package.

npm install dotenv

Then, create a .env file in the root of the repository with the following. Make sure to use your own API key, base ID, and table name that you found earlier.

AIRTABLE_API_KEY=<YOUR_API_KEY>
AIRTABLE_BASE_ID=<YOUR_BASE_ID>
AIRTABLE_TABLE_NAME=<YOUR_TABLE_NAME>

Now let’s write some code!

Setting up serverless functions

To create serverless functions with Netlify, we need to create a JavaScript file inside of our /functions directory. There are already some files included in this starter directory. Let’s look in the courses.js file first.

const formattedReturn = require('./formattedReturn');
const getCourses = require('./getCourses');
const createCourse = require('./createCourse');
const deleteCourse = require('./deleteCourse');
const updateCourse = require('./updateCourse');

exports.handler = async (event) => {
  return formattedReturn(200, 'Hello World');
};

The core part of a serverless function is the exports.handler function. This is where we handle the incoming request and respond to it. In this case, we are accepting an event parameter which we will use in just a moment.

We are returning a call inside the handler to the formattedReturn function, which makes it a bit simpler to return a status and body data. Here’s what that function looks like for reference.

module.exports = (statusCode, body) => {
  return {
    statusCode,
    body: JSON.stringify(body),
  };
};

Notice also that we are importing several helper functions to handle the interaction with Airtable. We can decide which one of these to call based on the HTTP method of the incoming request.

  • HTTP GET → getCourses
  • HTTP POST → createCourse
  • HTTP PUT → updateCourse
  • HTTP DELETE → deleteCourse

Let’s update this function to call the appropriate helper function based on the HTTP method in the event parameter. If the request doesn’t match one of the methods we are expecting, we can return a 405 status code (method not allowed).

exports.handler = async (event) => {
  if (event.httpMethod === 'GET') {
    return await getCourses(event);
  } else if (event.httpMethod === 'POST') {
    return await createCourse(event);
  } else if (event.httpMethod === 'PUT') {
    return await updateCourse(event);
  } else if (event.httpMethod === 'DELETE') {
    return await deleteCourse(event);
  } else {
    return formattedReturn(405, {});
  }
};

Updating the Airtable configuration file

Since we are going to be interacting with Airtable in each of the different helper files, let’s configure it once and reuse it. Open the airtable.js file.

In this file, we want to get a reference to the courses table we created earlier. To do that, we create a reference to our Airtable base using the API key and the base ID. Then, we use the base to get a reference to the table and export it.

require('dotenv').config();
var Airtable = require('airtable');
var base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(
  process.env.AIRTABLE_BASE_ID
);
const table = base(process.env.AIRTABLE_TABLE_NAME);
module.exports = { table };

Getting courses

With the Airtable config in place, we can now open up the getCourses.js file and retrieve courses from our table by calling table.select().firstPage(). The Airtable API uses pagination so, in this case, we are specifying that we want the first page of records (which is 20 records by default).

const courses = await table.select().firstPage();
return formattedReturn(200, courses);

Just like with any async/await call, we need to handle errors. Let’s surround this snippet with a try/catch.

try {
  const courses = await table.select().firstPage();
  return formattedReturn(200, courses);
} catch (err) {
  console.error(err);
  return formattedReturn(500, {});
}

Airtable returns a lot of extra information in its records. I prefer to simplify these records with only the record ID and the values for each of the table columns we created above. These values are found in the fields property. To do this, I used an Array map to format the data the way I want.

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  try {
    const courses = await table.select().firstPage();
    const formattedCourses = courses.map((course) => ({
      id: course.id,
      ...course.fields,
    }));
    return formattedReturn(200, formattedCourses);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};
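A quick aside: if the default first page of 20 records ever becomes limiting, the select() function accepts options. This is just a sketch with arbitrary values:

// Ask for up to 100 records, sorted by the name column
const courses = await table
  .select({ maxRecords: 100, sort: [{ field: 'name', direction: 'asc' }] })
  .firstPage();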

How do we test this out? Well, the netlify-cli provides us with a netlify dev command to run our serverless functions (and our front end) locally. First, install the CLI:

npm install -g netlify-cli

Then, run the netlify dev command inside of the directory.

This beautiful command does a few things for us:

  • Runs the serverless functions
  • Runs a web server for your site
  • Creates a proxy for the front end and serverless functions to talk to each other on port 8888.

Let’s open up localhost:8888/api/courses in the browser to see if this works.

We are able to use /api/* for our API because of the redirect configuration in the netlify.toml file.
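For reference, that redirect in netlify.toml likely looks something like this (the exact function path depends on the starter’s setup):

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200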

If successful, we should see our data displayed in the browser.

Creating courses

Let’s add the functionality to create a course by opening up the createCourse.js file. We need to grab the properties from the incoming POST body and use them to create a new record by calling table.create().

The incoming event.body comes in as a regular string, which means we need to parse it to get a JavaScript object.

const fields = JSON.parse(event.body);

Then, we use those fields to create a new course. Notice that the create() function accepts an array which allows us to create multiple records at once.

const createdCourse = await table.create([{ fields }]);

Then, we can return the createdCourse:

return formattedReturn(200, createdCourse);

And, of course, we should wrap things with a try/catch:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const fields = JSON.parse(event.body);
  try {
    const createdCourse = await table.create([{ fields }]);
    return formattedReturn(200, createdCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Since we can’t perform a POST, PUT, or DELETE directly in the browser’s address bar (like we did for the GET), we need to use a separate tool for testing our endpoints from now on. I prefer Postman, but I’ve heard good things about Insomnia as well.

Inside of Postman, I need the following configuration.

  • url: localhost:8888/api/courses
  • method: POST
  • body: JSON object with name, link, and tags
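If you’d rather stay in the terminal than reach for Postman, an equivalent request with curl would look something like this (the course data is just an example):

curl -X POST localhost:8888/api/courses \
  -H "Content-Type: application/json" \
  -d '{"name": "Build 20 React Apps", "link": "https://example.com", "tags": ["react"]}'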

After running the request, we should see the new course record is returned.

We can also check the Airtable GUI to see the new record.

Tip: Copy and paste the ID from the new record to use in the next two functions.

Updating courses

Now, let’s turn to updating an existing course. From the incoming request body, we need the id of the record as well as the other field values.

We can specifically grab the id value using object destructuring, like so:

const {id} = JSON.parse(event.body);

Then, we can use the spread operator to grab the rest of the values and assign it to a variable called fields:

const {id, ...fields} = JSON.parse(event.body);

From there, we call the update() function which takes an array of objects (each with an id and fields property) to be updated:

const updatedCourse = await table.update([{id, fields}]);

Here’s the full file with all that together:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id, ...fields } = JSON.parse(event.body);
  try {
    const updatedCourse = await table.update([{ id, fields }]);
    return formattedReturn(200, updatedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

To test this out, we’ll turn back to Postman for the PUT request:

  • url: localhost:8888/api/courses
  • method: PUT
  • body: JSON object with id (the id from the course we just created) and the fields we want to update (name, link, and tags)

I decided to append “Updated!!!” to the name of a course once it’s been updated.

We can also see the change in the Airtable GUI.

Deleting courses

Lastly, we need to add delete functionality. Open the deleteCourse.js file. We will need to get the id from the request body and use it to call the destroy() function.

const { id } = JSON.parse(event.body);
const deletedCourse = await table.destroy(id);

The final file looks like this:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id } = JSON.parse(event.body);
  try {
    const deletedCourse = await table.destroy(id);
    return formattedReturn(200, deletedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Here’s the configuration for the Delete request in Postman.

  • url: localhost:8888/api/courses
  • method: DELETE
  • body: JSON object with an id (the same id from the course we just updated)

And, of course, we can double-check that the record was removed by looking at the Airtable GUI.

Displaying a list of courses in React

Whew, we have built our entire back end! Now, let’s move on to the front end. The majority of the code is already written. We just need to write the parts that interact with our serverless functions. Let’s start by displaying a list of courses.

Open the App.js file and find the loadCourses function. Inside, we need to make a call to our serverless function to retrieve the list of courses. For this app, we are going to make an HTTP request using fetch, which is built right in.

Thanks to the netlify dev command, we can make our request using a relative path to the endpoint. The beautiful thing is that this means we don’t need to make any changes after deploying our application!

const res = await fetch('/api/courses');
const courses = await res.json();

Then, store the list of courses in the courses state variable.

setCourses(courses)

Put it all together and wrap it with a try/catch:

const loadCourses = async () => {
  try {
    const res = await fetch('/api/courses');
    const courses = await res.json();
    setCourses(courses);
  } catch (error) {
    console.error(error);
  }
};

Open up localhost:8888 in the browser and we should see our list of courses.

Adding courses in React

Now that we have the ability to view our courses, we need the functionality to create new courses. Open up the CourseForm.js file and look for the submitCourse function. Here, we’ll need to make a POST request to the API and send the inputs from the form in the body.

The JavaScript Fetch API makes GET requests by default, so to send a POST, we need to pass a configuration object with the request. This options object will have these two properties.

  1. method → POST
  2. body → a stringified version of the input data

await fetch('/api/courses', {
  method: 'POST',
  body: JSON.stringify({
    name,
    link,
    tags,
  }),
});

Then, surround the call with try/catch and the entire function looks like this:

const submitCourse = async (e) => {
  e.preventDefault();
  try {
    await fetch('/api/courses', {
      method: 'POST',
      body: JSON.stringify({
        name,
        link,
        tags,
      }),
    });
    resetForm();
    courseAdded();
  } catch (err) {
    console.error(err);
  }
};

Test this out in the browser. Fill in the form and submit it.

After submitting the form, the form should be reset, and the list of courses should update with the newly added course.

Updating purchased courses in React

The list of courses is split into two different sections: one with courses that have been purchased and one with courses that haven’t been purchased. We can add the functionality to mark a course “purchased” so it appears in the right section. To do this, we’ll send a PUT request to the API.

Open the Course.js file and look for the markCoursePurchased function. In here, we’ll make the PUT request and include both the id of the course as well as the properties of the course with the purchased property set to true. We can do this by passing in all of the properties of the course with the spread operator and then overriding the purchased property to be true.

const markCoursePurchased = async () => {
  try {
    await fetch('/api/courses', {
      method: 'PUT',
      body: JSON.stringify({ ...course, purchased: true }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the button to mark one of the courses as purchased and the list of courses should update to display the course in the purchased section.

Deleting courses in React

And, following with our CRUD model, we will add the ability to delete courses. To do this, locate the deleteCourse function in the Course.js file we just edited. We will need to make a DELETE request to the API and pass along the id of the course we want to delete.

const deleteCourse = async () => {
  try {
    await fetch('/api/courses', {
      method: 'DELETE',
      body: JSON.stringify({ id: course.id }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the “Delete” button next to the course and the course should disappear from the list. We can also verify it is gone completely by checking the Airtable dashboard.

Deploying to Netlify

Now that we have all of the CRUD functionality we need on the front and back end, it’s time to deploy this thing to Netlify. Hopefully, you’re as excited as I am about how easy this is. Just make sure everything is pushed up to GitHub before we move into deployment.

If you don’t have a Netlify account, you’ll need to create one (like Airtable, it’s free). Then, in the dashboard, click the “New site from Git” option. Select GitHub, authenticate it, then select the project repo.

Next, we need to tell Netlify which branch to deploy from. We have two options here.

  1. Use the starter branch that we’ve been working in
  2. Choose the master branch with the final version of the code

For now, I would choose the starter branch to ensure that the code works. Then, we need to choose a command that builds the app and the publish directory that serves it.

  1. Build command: npm run build
  2. Publish directory: build

Netlify recently shipped an update that treats React warnings as errors during the build process, which may cause the build to fail. I have updated the build command to CI= npm run build to account for this.

Lastly, click on the “Show Advanced” button, and add the environment variables. These should be exactly as they were in the local .env that we created.

The site should automatically start building.

We can click on the “Deploys” tab in Netlify and track the build progress, although it does go pretty fast. When it is complete, our shiny new app is deployed for the world to see!

Welcome to the Jamstack!

The Jamstack is a fun new place to be. I love it because it makes building and hosting fully-functional, full-stack applications like this pretty trivial. I love that Jamstack makes us mighty, all-powerful front-end developers!

I hope you see the same power and ease with the combination of technology we used here. Again, Jamstack doesn’t require that we use Airtable, React or Netlify, but we can, and they’re all freely available and easy to set up. Check out Chris’ serverless site for a whole slew of other services, resources, and ideas for working in the Jamstack. And feel free to drop questions and feedback in the comments here!



Where Does Logic Go on Jamstack Sites?

Here’s something I had to get my head wrapped around when I started building Jamstack sites. There are these different stages your site goes through where you can put logic.

Let’s look at a specific example so you can see what I mean. Say you’re making a website for a music venue. The most important part of the site is a list of events, some in the past and some upcoming. You want to make sure to label them as such or design that to be very clear. That is date-based logic. How do you do that? Where does that logic live?

There are at least four places to consider when it comes to Jamstack.

Option 1: Write it into the HTML ourselves

Literally sit down and write an HTML file that represents all of the events. We’d look at the date of the event, decide whether it’s in the past or the future, and write different content for either case. Commit and deploy that file.

<h1>Upcoming Event: Bill's Banjo Night</h1>
<h1>Past Event: 70s Classics with Jill</h1>

This would totally work! But the downside is that we’d have to update that HTML file all the time — once Bill’s Banjo Night is over, we have to open our code editor, change “Upcoming” to “Past” and re-upload the file.

Option 2: Write structured data and do logic at build time

Instead of writing all the HTML by hand, we create a Markdown file to represent each event. Important information like the date and title is in there as structured data. That’s just one option. The point is we have access to this data directly. It could be a headless CMS or something like that as well.

Then we set up a static site generator, like Eleventy, that reads all the Markdown files (or pulls the information down from your CMS) and builds them into HTML files. The neat thing is that we can run any logic we want during the build process. Do fancy math, hit APIs, run a spell-check… the sky is the limit.

For our music venue site, we might represent events as Markdown files like this:

---
title: Bill's Banjo Night
date: 2020-09-02
---

The event description goes here!

Then, we run a little bit of logic during the build process by writing a template like this:

{% if event.date > now %}
  <h1>Upcoming Event: {{event.title}}</h1>
{% else %}
  <h1>Past Event: {{event.title}}</h1>
{% endif %}

Now, each time the build process runs, it looks at the date of the event, decides if it’s in the past or the future and produces different HTML based on that information. No more changing HTML by hand!

The problem with this approach is that the date comparison only happens one time, during the build process. The now variable in the example above is going to refer to the date and time the build happens to run. And once we’ve uploaded the HTML files that build produced, those won’t change until we run the build again. This means that once an event at our music venue is over, we’d have to re-run the build to make sure the website reflects that.
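As an aside, one way to expose that now variable to templates in Eleventy is a global JavaScript data file. This is a sketch with a made-up file name:

// src/_data/now.js
// Eleventy runs this once per build, so "now" is frozen at build time
module.exports = () => new Date();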

Now, we could automate the rebuild so it happens once a day, or heck, even once an hour. That’s literally what the CSS-Tricks conferences site does via Zapier.

The conferences site is deployed daily using a Zapier automation that triggers a Netlify deploy, ensuring information is current.

But this could rack up build minutes if you’re using a service like Netlify, and there might still be edge cases where someone gets an outdated version of the site.

Option 3: Do logic at the edge

Edge workers are a way of running code at the CDN level whenever a request comes in. They’re not widely available at the time of this writing but, once they are, we could write our date comparison like this:

// THIS DOES NOT WORK
import eventsList from "./eventsList.json"

function onRequest(request) {
  const now = new Date();
  // Flag any event that hasn't happened yet
  eventsList.forEach(event => {
    if (event.date > now) {
      event.upcoming = true;
    }
  })
  const props = {
    events: eventsList,
  }
  request.respondWith(200, render(props), {})
}

The render() function would take our processed list of events and turn it into HTML, perhaps by injecting it into a pre-rendered template. The big promise of edge workers is that they’re extremely fast, so we could run this logic server-side while still enjoying the performance benefits of a CDN.
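To make that concrete, a render() function along those lines might look like this rough sketch (the markup is made up for illustration):

function render(props) {
  // Turn each event into a labeled list item using the flag set above
  const items = props.events
    .map((event) => `<li>${event.upcoming ? 'Upcoming' : 'Past'} Event: ${event.title}</li>`)
    .join('');
  // Inject the list into a bare-bones page shell
  return `<html><body><ul>${items}</ul></body></html>`;
}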

And because the edge worker runs every time someone requests the website, we can be sure that they’re going to get an up-to-date version of it.

Option 4: Do logic at run time

Finally, we could pass our structured data to the front end directly, for example, in the form of data attributes. Then we write JavaScript that’s going to do whatever logic we need on the user’s device and manipulates the DOM on the fly.

For our music venue site, we might write a template like this:

<h1 data-date="{{event.date}}">{{event.title}}</h1>

Then, we do our date comparison in JavaScript after the page is loaded:

function processEvents() {
  const now = new Date()
  // Grab every element that carries an event date
  const events = document.querySelectorAll('[data-date]')
  events.forEach(event => {
    const eventDate = new Date(event.getAttribute('data-date'))
    if (eventDate > now) {
      event.classList.add('upcoming')
    } else {
      event.classList.add('past')
    }
  })
}

The now variable reflects the time on the user’s device, so we can be pretty sure the list of events will be up-to-date. Because we’re running this code on the user’s device, we could even get fancy and do things like adjust the way the date is displayed based on the user’s language or timezone.

And unlike the previous points in the lifecycle, run time lasts as long as the user has our website open. So, if we wanted to, we could run processEvents() every few seconds and our list would stay perfectly up-to-date without having to refresh the page. This would probably be unnecessary for our music venue’s website, but if we wanted to display the events on a billboard outside the building, it might just come in handy.
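For that billboard scenario, keeping the list fresh could be as simple as re-running the function on a timer, assuming processEvents() from above:

// Re-check the event dates every 30 seconds
setInterval(processEvents, 30000);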

Where will you put the logic?

Although one of the core concepts of Jamstack is that we do as much work as we can at build time and serve static HTML, we still get to decide where to put logic.

Where will you put it?

It really depends on what you’re trying to do. Parts of your site that hardly ever change are totally fine to complete at edit time. When you find yourself changing a piece of information again and again, it’s probably a good time to move that into a CMS and pull it in at build time. Features that are time-sensitive (like the event examples we used here), or that rely on information about the user, probably need to happen further down the lifecycle at the edge or even at runtime.



Make Jamstack Slow? Challenge Accepted.

“Jamstack is slowwwww.” That’s not something you hear often, right? Especially when one of the main selling points of Jamstack is performance. But yeah, it’s true that even a Jamstack site can suffer hits to performance just like any other site.

Don’t think that by choosing Jamstack you no longer have to think about performance. Jamstack can be fast — really fast — but you have to make the right choices. Let’s see if we can spot some of the poor decisions that can lead to a “slow” Jamstack site.

To do that, we’re going to build a really slow Gatsby site. Seems strange, right? Why would we intentionally do that!? It’s the sort of thing where, if we make it, then perhaps we can gain a better understanding of what affects Jamstack performance and how to avoid bottlenecks.

We will use continuous performance testing and Google Lighthouse to audit every change. This will highlight the importance of testing every code change. Our site will start with a top Lighthouse performance score of 100. From there, we will make changes until it scores a mere 17. It is easier to do than you might think!

Let’s get started!

Creating our Jamstack site

We are going to use Gatsby for our test site. Let’s start by installing the Gatsby CLI:

npm install -g gatsby-cli

We can spin up a new Gatsby site using this command:

gatsby new slow-jamstack

Let’s cd into the new slow-jamstack project directory and start the development server:

cd slow-jamstack
gatsby develop

To add Lighthouse to the mix, we need a Gatsby production build. We can use Vercel to host the site, giving Lighthouse a way to run its tests. That requires installing the Vercel command-line tool and logging in:

npm install -g vercel-cli
vercel

This will create the site in Vercel and put it on a live server. Here’s the example I’ve already set up that we’ll use for testing.

We’ve gotta use Chrome to access Lighthouse directly from DevTools and run a performance audit. No surprise here, the default Gatsby site is fast:

Screenshot of a Lighthouse audit showing a score of 100 out of 100.

A score of 100 is the fastest you can get. Let’s see what we can do to slow it down.

Slow CSS

CSS frameworks are great. They can do a lot of heavy lifting for you. When deciding on a CSS framework, use one that is modular or employs CSS-in-JS so that the only CSS that loads is the CSS you need.

But let’s make the bad decision to reach for an entire framework just to style a button component. In fact, let’s even grab the heaviest framework while we’re at it. These are the sizes of some popular frameworks:

Framework CSS Size (gzip)
Bootstrap 68kb (12kb)
Bulma 73kb (10kb)
Foundation 30kb (7kb)
Milligram 10kb (3kb)
Pure 17kb (4kb)
SemanticUI 146kb (20kb)
UIKit 33kb (6kb)

Alright, SemanticUI it is! The “right” way to load this framework would be to use a Sass or Less package, which would allow us to choose the parts of the framework we need. The wrong way would be to load all the CSS and JavaScript files in the <head> of the HTML. That’s what we’ll do with the full SemanticUI stylesheet. Plus, we’re going to link up jQuery because it’s a SemanticUI dependency.

We want these files to load in the head so let’s jump into the html.js file. This is not available in the src directory until we run a command to copy over the default from the cache:

cp .cache/default-html.js src/html.js

That gives us html.js in the src directory. Open it up and add the required stylesheet and scripts:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.css">
<script src="https://code.jquery.com/jquery-3.1.1.js"></script>
<script src="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.js"></script>

Now let’s push the changes straight to our production URL:

vercel --prod

OK, let’s view the audit…

Screenshot of a Lighthouse report showing a score of 66 out of 100.
Zoikes! A 33% reduction!

We have reduced the site’s score down to 66. Remember that we are not even using this framework at the moment. All we have done is load the files in the head, and that reduced the performance score by one-third. Our Time to Interactive (TTI) jumped from a quick 1.9 seconds to a noticeable 4.9 seconds. And look at the possible savings we could get from Lighthouse’s recommendations.

Slow marketing dependencies

Next, we are going to look at marketing tags and how these third-party scripts can affect performance. Let’s pretend we work with a marketing department and they want to start measuring traffic with Google Analytics. They also have a Facebook campaign and want to track it as well. 

They give us the details of the scripts that we need to add to get everything working. First, for Google Analytics:

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-4369823-4"></script>
<script
  dangerouslySetInnerHTML={{ __html: `
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-4369823-4');
  `}}
/>

Then for the Facebook campaign:

<script
  dangerouslySetInnerHTML={{ __html: `
    !function(f,b,e,v,n,t,s)
    {if(f.fbq)return;n=f.fbq=function(){n.callMethod?
    n.callMethod.apply(n,arguments):n.queue.push(arguments)};
    if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
    n.queue=[];t=b.createElement(e);t.async=!0;
    t.src=v;s=b.getElementsByTagName(e)[0];
    s.parentNode.insertBefore(t,s)}(window, document,'script',
    'https://connect.facebook.net/en_US/fbevents.js');
    fbq('init', '3180830148641968');
    fbq('track', 'PageView');
  `}}
/>

<noscript><img height="1" width="1" src="https://www.facebook.com/tr?id=3180830148641968&ev=PageView&noscript=1"/></noscript>

We’ll place these scripts inside html.js, again in the <head> section, before the closing </head> tag.

Just like before, let’s push to Vercel and re-run Lighthouse:

vercel --prod
Screenshot of a Lighthouse audit showing a score of 51 out of 100.

Wow, the site is already down to 51 and all we’ve done is tack on one framework and a couple of measly scripts. Together, they’ve reduced the score by a whopping 49 points, nearly half of where we started.

Slow images

We haven’t added any images to the site yet but we know we absolutely would in a real-life scenario. We are going to add 100 images to the page. Sure, 100 is a lot for a single page but, then again, we know that images are often the biggest culprits of bloated web pages so we might as well let them shine.

We’ll make things a little worse by hotlinking the images directly from https://placeimg.com instead of serving them on our own server.

Let’s crack open index.js and drop this code in, which will loop through 100 instances of images:

const IndexPage = () => {
  const items = []
  // loop through 100 instances, each with a unique (cache-busting) URL
  for (let i = 0; i < 100; i++) {
    const url = `http://placeimg.com/640/360/any?=${i}`
    items.push(<img key={i} alt={i} src={url} />)
  }

  return (
    <Layout>
      {/* ... */}
      {items}
      {/* ... */}
    </Layout>
  )
}

The 100 images are all different and will all load as the page loads, thereby blocking the rendering. OK, let’s push to Vercel and see what’s up.

vercel --prod
Screenshot of a Lighthouse audit showing a score of 17 out of 100.
That score deserves a sad trombone. 🎺

OK, we now have a very slow Jamstack site. The images are blocking the rendering of the page and the TTI is now a whopping 16.5 seconds. We have taken a very fast Jamstack site and dropped it to a Lighthouse score of 17 — a reduction of 83 points!

Now, you may be thinking that you would never make these poor decisions when building an app. But that’s missing the point: every choice we make has an impact on performance. It’s a balance, and performance does not come free. Even on Jamstack sites.

Making Jamstack fast again

You have seen that we cannot ignore client-side performance when using Jamstack. 

So why do people say that Jamstack is fast? Well, the main advantage of Jamstack — or using static site generators in general — is caching. Static files are cached on the edge, reducing Time to First Byte (TTFB).

This is always going to be faster than going to a single-origin web server before generating the page. This is a great feature of Jamstack and gives you a fighting chance to create a page that can hit 100 in Lighthouse. (But, hey, as a side note, remember that great scores aren’t always indicative of an actual user experience.)
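If you want to see that edge effect in numbers, curl can report TTFB directly. This is my own quick check, not part of the original demo, and the URL is a placeholder for your deployment:

curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://your-deployment.vercel.app

A CDN-served static page will typically report a tiny time_starttransfer compared to a page that has to be rendered by an origin server on every request.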


See, I told you we could make Jamstack slow! There are also many other things that can slow it down, but hopefully this drives home the point.
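Going the other direction is just as incremental. As one hedged example, adding native lazy loading to the image loop from earlier stops the dozens of off-screen images from competing with the initial render:

// the same loop as before, but off-screen images now defer loading
items.push(<img key={i} alt={i} src={url} loading="lazy" />)

That one attribute alone would likely claw back a meaningful chunk of the score, since the browser only fetches images as they approach the viewport.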

While we’re talking about performance, here are a few of my favorite performance articles here at CSS-Tricks:



Settling down in a Jamstack world

One of the things I like about Jamstack is that it’s just a philosophy. It’s not particularly prescriptive about how you go about it. To me, the only real requirement is that it’s based on static (CDN-backed) hosting. You can use whatever tooling you like. Those tools, though, tend to be somewhat new, and new sometimes comes with issues. Some pragmatism from Sean C Davis here:

I have two problems with solving problems using the newest, best tool every time a problem arises.

1. It’s simply not productive. Introducing new processes and tools takes time. Mastery and efficiency are built over time. If we’re trying to run a profitable business, we shouldn’t start from scratch every time.

2. We can’t know everything, all the time. With the rapidity at which we’re seeing new tools, there’s simply no way of knowing the best tool for the job because there’s no way to know all the tools available.

The trick is to settle on some tools you’ve proven to work and then keep using them to increase that level of expertise.


5 Myths About Jamstack

Jamstack isn’t necessarily new. The term was officially coined in 2016, but the technologies and architecture it describes have been around well before that. Jamstack has received a massive dose of attention recently, with articles about it appearing in major sites and publications and new Jamstack-focused events, newsletters, podcasts, and more. As someone who follows it closely, I’ve even seen what seems like a significant uptick in discussion about it on Twitter, often from people who are being newly introduced to the concept.

The buzz has also seemed to bring out the criticism. Some of the criticism is fair, and I’ll get to some of that in a bit, but others appear to be based on common myths about the Jamstack that persist, which is what I want to address first. So let’s look at five common myths about Jamstack I’ve encountered and debunk them. As with many myths, they are often based on a kernel of truth but lead to invalid conclusions.

Myth 1: They are just rebranded static sites

Yes, as I covered previously, the term “Jamstack” was arguably a rebranding of what we previously called “static sites.” This was not a rebranding meant to mislead or sell you something that wasn’t fully formed — quite the opposite. The term “static site” had long since lost its ability to describe what people were building. The sites being built using static site generators (SSG) were frequently filled with all sorts of dynamic content and capabilities.

Static sites were seen as largely about blogs and documentation where the UI was primarily fixed. The extent of interaction was perhaps embedded comments and a contact form. Jamstack sites, on the other hand, have things like user authentication, dynamic content, ecommerce, and user-generated content.

A listing of sites built using Jamstack on Jamstack.org

Want proof? Some well-known companies and sites built using the Jamstack include Smashing Magazine, Sphero, Postman, Prima, Impossible Foods and TriNet, just to name a few.

Myth 2: Jamstack sites are fragile

A Medium article with no byline: The issues with JAMStack: You might need a backend

Reading the dependency list for Smashing Magazine reads like the service equivalent of node_modules, including Algolia, GoCommerce, GoTrue, GoTell and a variety of Netlify services to name a few. There is a huge amount of value in knowing what to outsource (and when), but it is amusing to note the complexity that has been introduced in an apparent attempt to ‘get back to basics’. This is to say nothing of the potential fragility in relying on so many disparate third-party services.

Yes, to achieve the dynamic capabilities that differentiate the Jamstack from static sites, Jamstack projects generally rely on a variety of services, both first- and third-party. Some have argued that this makes Jamstack sites particularly vulnerable for two reasons. The first, they say, is that if any one piece fails, the whole site’s functionality collapses. The second is that your infrastructure becomes too dependent on tools and services that you do not own.

Let’s tackle that first argument. The majority of a Jamstack site should be pre-rendered. This means that when a user visits the site, the page and most of its content is delivered as static assets from the CDN. This is what gives Jamstack much of its speed and security. Dynamic functionality — like shopping carts, authentication, user generated content and perhaps search — rely upon a combination of serverless functions and APIs to work.

Broadly speaking, the app will call a serverless function that serves as the back end to connect to the APIs. If, for example, our e-commerce functionality relies on Stripe’s APIs to work and Stripe is down, then, yes, our e-commerce functionality will not work. However, it’s important to note that the site won’t go down. It can handle the issue gracefully by informing the user of the issue. A server-rendered page that relies on the Stripe API for e-commerce would face the identical issue. Assuming the server-rendered page still calls the back end code for payment asynchronously, it would be no more or less fragile than the Jamstack version. On the other hand, if the server-rendering is actually dependent upon the API call, the user may be stuck waiting on a response or receive an error (a situation anyone who uses the web is very familiar with).
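To make that concrete, here is a sketch of what graceful handling can look like in a serverless function. The handler shape follows the common convention used by providers like Netlify, and createCharge is a hypothetical helper wrapping the payment API, not a real library call:

// a hypothetical payments function that degrades gracefully
exports.handler = async (event) => {
  try {
    const charge = await createCharge(JSON.parse(event.body))
    return { statusCode: 200, body: JSON.stringify(charge) }
  } catch (err) {
    // the payment API is down, but the static site itself is still up;
    // report the failure so the UI can inform the user
    return {
      statusCode: 503,
      body: JSON.stringify({ error: "Payments are temporarily unavailable" }),
    }
  }
}

The pre-rendered pages keep serving from the CDN no matter what this function returns; only the checkout flow is affected.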

As for the second argument, it’s really hard to gauge the degree of dependence on third-parties for a Jamstack web app versus a server-rendered app. Many of today’s server-rendered applications still rely on APIs for a significant amount of functionality because it allows for faster development, takes advantage of the specific area of expertise of the provider, can offload responsibility for legal and other compliance issues, and more. In these cases, once again, the server-rendered version would be no more or less dependent than the Jamstack version. Admittedly, if your app relies mostly on internal or homegrown solutions, then this may be different.

Myth 3: Editing content is difficult

Kev Quirk, on Why I Don’t Use A Static Site Generator:

Having to SSH into a Linux box, then editing a post on Vim just seems like a ridiculously high barrier for entry when it comes to writing on the go. The world is mobile first these days, like it or not, so writing on the go should be easy.

This issue feels like a relic of static sites past. To be clear, you do not need to SSH into a Linux box to edit your site content. There is a wide range of headless CMS options, from completely free and open source to commercial offerings that power content for large enterprises. They offer an array of editing capabilities that rival any traditional CMS (something I’ve talked about before). The point is, there is no reason to be manually editing Markdown, YAML or JSON files, even on your blog side project. Not sure how to hook all these pieces up? We’ve got a solution for that too!

One legitimate criticism has been that the headless CMS and build process can cause a disconnect between the content being edited and the change on the site. It can be difficult to preview exactly what the impact of a change is on the live site until it is published or without some complex build previewing process. This is something that is being addressed by the ecosystem. Companies like Stackbit (who I work for) are building tools that make this process seamless.

Animated screenshot. A content editor is shown on a webpage with a realtime preview of changes.
Editing a site using Stackbit

We’re not the only ones working on solving this problem. Other solutions include TinaCMS and Gatsby Preview. I think we are close to it becoming commonplace to have the simplicity of WYSIWYG editing on a tool like Wix running on top of the Jamstack.

Myth 4: SEO is Hard on the Jamstack

Kym Ellis, on What the JAMstack means for marketing:

Ditching the concept of the plugin and opting for a JAMstack site which is “just HTML” doesn’t actually mean you have to give up functionality, or suddenly need to know how to code like a front-end developer to manage a site and its content.

I haven’t seen this one pop up as often in recent years and I think it is mostly legacy criticism of the static site days, where managing SEO-related metadata involved manually editing YAML-based frontmatter. The concern was that doing SEO properly became cumbersome and hard to maintain, particularly if you wanted to inject different metadata for each unique page that was generated or to create structured data like JSON-LD, which can be critical for enhancing your search listing.

The advances in content management for the Jamstack have generally addressed the complexity of maintaining SEO metadata. In addition, because pages are pre-rendered, adding sitemaps and JSON-LD is relatively simple, provided the required metadata exists. Pre-rendering makes it easy to create the resources search engines (read: Google) need to index a site, and, combined with CDNs, it also makes it easier to achieve the performance benchmarks necessary to improve a site’s ranking.
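As a concrete illustration (the values here are placeholders of my own, not from any site mentioned above), the JSON-LD for an article is just a static block that a generator can stamp out at build time from the same metadata it already has:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "My Pre-rendered Post",
  "datePublished": "2020-07-01",
  "author": { "@type": "Person", "name": "Jane Developer" }
}
</script>

Since the page is built ahead of time, this block is present in the HTML the moment a crawler fetches it; no client-side rendering required.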

Basically, Jamstack excels at “technical SEO” while also providing the tools necessary for content editors to supply the keyword and other metadata they require. For a more comprehensive look at Jamstack and SEO, I highly recommend checking out the Jamstack SEO Guide from Bejamas.

Myth 5: Jamstack requires heavy JavaScript frameworks

If you’re trying to sell plain ol’ websites to management who are obsessed with flavour-of-the-month frameworks, a slick website promoting the benefits of “JAMstack” is a really useful thing.

– jdietrich, Hacker News

Lately, it feels as if Jamstack has become synonymous with front-end JavaScript frameworks. It’s true that a lot of the most well-known solutions do depend on a front-end framework, including Gatsby (React), Next.js (React), Nuxt (Vue), VuePress (Vue), Gridsome (Vue) and Scully (Angular). This seems to be compounded by confusion around the “J” in Jamstack. While it stands for JavaScript, this does not mean that Jamstack solutions are all JavaScript-based, nor do they all require npm or JavaScript frameworks.

In fact, many of the most widely used tools are not built in JavaScript, notably Hugo (Go), Jekyll (Ruby), Pelican (Python) and the recently released Bridgetown (Ruby). Meanwhile, tools like Eleventy are built using JavaScript but do not depend on a JavaScript framework. None of these tools prevent the use of a JavaScript framework, but they do not require it.

The point here isn’t to dump on JavaScript frameworks or the tools that use them. These are great tools, used successfully by many developers. JavaScript frameworks can be very powerful tools capable of simplifying some very complex tasks. The point here is simply that the idea that a JavaScript framework is required to use the Jamstack is false — Jamstack comes in 460 flavors!

Where we can improve

So that’s it, right? Jamstack is an ideal world of web development where everything isn’t just perfect, but perfectly easy. Unfortunately, no. There are plenty of legitimate criticisms of Jamstack.

Simplicity

Sebastian De Deyne, with Thoughts (and doubts) after messing around with the JAMstack:

In my experience, the JAMstack (JavaScript, APIs, and Markup) is great until it isn’t. When the day comes that I need to add something dynamic–and that day always comes–I start scratching my head.

Let’s be honest: Getting started with the Jamstack isn’t easy. Sure, diving into building a blog or a simple site using a static site generator may not be terribly difficult. But try building a real site with anything dynamic and things start to get complicated fast.

You are generally presented with a myriad of options for completing the task, making it tough to weigh the pros and cons. One of the best things about Jamstack is that it is not prescriptive, but that can make it seem unapproachable, leaving people with the impression that perhaps it isn’t suited for complex tasks.

Tying services together

When you actually get to the point of building those dynamic features, your site can wind up being dependent on an array of services and APIs. You may be calling a headless CMS for content, a serverless function that calls an API for payment transactions, a service like Algolia for search, and so on. Bringing all those pieces together can be a very complex task. Add to that the fact that each often comes with its own dashboard and its own cadence of API/SDK updates, and things get even more complex.

This is why I think services like Stackbit and tools like RedwoodJS are important, as they bring together disparate pieces of the infrastructure behind a Jamstack site and make those easier to build and manage.

Overusing frameworks

In my opinion, our dependence on JavaScript frameworks for modern front-end development has been getting a much needed skeptical look lately. There are tradeoffs, as this post by Tim Kadlec recently laid out. As I said earlier, you don’t need a JavaScript framework to work in the Jamstack.

However, the impression was created both because so many Jamstack tools rely on JavaScript frameworks and also because much of the way we teach Jamstack has been centered on using frameworks. I understand the reasoning for this — many Jamstack developers are comfortable in JavaScript frameworks and there’s no way to teach every tool, so you pick the one you like. Still, I personally believe the success of Jamstack in the long run depends on its flexibility, which (despite what I said about the simplicity above) means we need to present the diversity of solutions it offers — both with and without JavaScript frameworks.

Where to go from here

Sheesh, you made it! I know I had a lot to say, perhaps more than I even realized when I started writing, so I won’t bore you with a long conclusion other than to say that, obviously, I have presented these myths from the perspective of someone who believes very deeply in the value of the Jamstack, despite its flaws!

If you are looking for a good post about when to and when not to choose Jamstack over server-side rendering, check out Chris Coyier’s recent post Static or Not?.
