Tag: JAMstack

Static vs. Dynamic vs. Jamstack: Where’s The Line?

You’ll often hear developers talking about “static” vs. “dynamic” sites, or you may have heard someone use the term Jamstack. What do these terms mean, and when does a “static” site become either a Jamstack or dynamic site? These questions sound simple, but they’re more nuanced than they appear. Let’s explore these terms to gain a deeper understanding of Jamstack.

Finding the line

What’s the difference between a chair and a stool? Most people will respond that a chair has four legs and back support, whereas a stool has three legs with no back support.

Two brown backed leather chairs with a black frame and legs around a white table with a decorative green succulent plant in a white vase.
Credit: Rumman Amin

Two tall brown barstools with three legs and a brass frame under a natural wood countertop with a decorative green houseplant.
Credit: Rumman Amin

OK, that’s a great starting point, but what about these?

The more stool-like a chair becomes, the fewer people will unequivocally agree that it’s a chair. Eventually, we’ll reach a point where most people agree it’s a stool rather than a chair. It may sound like a silly exercise, but if we want to have a deep appreciation of what it means to be a chair, it’s a valuable one. We find out where the limits of a chair are for most people. We also build an understanding of the gray area beyond. Eventually, we get to the point where even the biggest die-hard chair fans concede and admit there’s a stool in front of them.

As interesting as chairs are, this is an article about website delivery technology. Let’s perform this same exercise for static, dynamic, and Jamstack websites.

At a high level

When you go to a website in your browser, there’s a lot going on behind the scenes:

  1. Your browser performs a DNS lookup to turn the domain name into an IP address.
  2. It requests an HTML file from that IP address.
  3. The webserver sends back the requested file.
  4. As the browser renders the web page, it may come across a reference for an asset, such as a CSS, JavaScript, or image file. The browser then performs a request for this asset.
  5. This cycle continues until the browser has all the files for the web page. It’s not unusual for a single webpage to make 50+ requests.
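As a toy illustration of step 4, here's how asset references might be pulled out of an HTML file. Real browsers parse the full DOM; a regex is enough to show the request cycle at work:

```javascript
// Toy sketch: find the asset URLs in an HTML page that a browser
// would request next.
function assetUrls(html) {
  const urls = [];
  const re = /(?:src|href)="([^"]+\.(?:css|js|png|jpg|svg))"/g;
  let match;
  while ((match = re.exec(html)) !== null) {
    urls.push(match[1]); // the captured URL
  }
  return urls;
}

// assetUrls('<link href="styles.css"><img src="logo.png">')
// → ['styles.css', 'logo.png']
```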

For every request, the response from the webserver is always a static file, even on a dynamic website. You could save these files to a USB drive or email them to a friend, just like any other file on your computer.

When comparing static and dynamic, we’re talking about what the webserver is doing. On a static site, the files the browser requests already exist on the webserver. The webserver sends them back exactly as they are. On a dynamic site, the response gets generated by software. This software might connect to a database to retrieve data, build a layout from template files, and add today’s date to the footer. It does all of this for every request.

That’s the foundational difference between static and dynamic websites.

Where does Jamstack fit in?

Static websites are restrictive. They’re great for informational websites; however, you can’t have any dynamic content or behavior by definition. Jamstack blurs the line between static and dynamic. The idea is to take advantage of all the things that make static websites awesome while enabling dynamic functionality where necessary.

The ‘stack’ in Jamstack is a misnomer. The truth is, Jamstack is not a stack at all. It’s a philosophy that exhibits a striking resemblance to The 5 Pillars of the AWS Well-Architected Framework. The ambiguity in the term has led to extensive community discussion about what it means to be Jamstack.

What is Jamstack?

Jamstack is a superset of static. But to truly understand Jamstack, let’s start with the seeds that led to the coining of the term.

In 2002, the late Aaron Swartz published a blog post titled “Bake, Don’t Fry.” While Aaron didn’t coin “Bake, Don’t Fry,” it’s the first time I can find someone recognizing the benefits of static websites while breaking out of the perceived constraints of the word.

I care about not having to maintain cranky AOLserver, Postgres and Oracle installs. I care about being able to back things up with scp. I care about not having to do any installation or configuration to move my site to a new server. I care about being platform and server independent.

If we trawl through history, we can find similar frustrations that led to Jamstack seeds:

  • Ben and Mena Trott created MovableType because of a [d]issatisfaction with existing blog CMSes — performance, stability.
  • Tom Preston-Werner created Jekyll to move away from complexity:

    I already knew a lot about what I didn’t want. I was tired of complicated blogging engines like WordPress and Mephisto. I wanted to write great posts, not style a zillion template pages, moderate comments all day long, and constantly lag behind the latest software release.

  • Steve Francia created Hugo for performance:

    The past few years this blog has [been] powered by wordpress [sic] and drupal prior to that. Both are are fine pieces of software, but over time I became increasingly disappointed with how they are both optimized for writing content even though significantly most common usage is reading content. Due to the need to load the PHP interpreter on each request it could never be considered fast and consumed a lot of memory on my VPS.

The same themes surface as you look at the origins of many early Jamstack tools:

  • Reduce complexity
  • Improve performance
  • Reduce vendor lock-in
  • Better workflows for developers

In the past 20 years, JavaScript has evolved from a language for adding small interactions to a website to becoming a platform for building rich web applications in the browser. In parallel, we’ve seen a movement of splitting large applications into smaller microservices. These two developments gave rise to a new way of building websites where you could have a static front-end decoupled from a dynamic back-end.

In 2015, Mathias Biilmann wanted to talk about this modern way of building websites but was struggling with the constricting definition of static:

We were in this space of modern static websites. That’s a really bad description of what we’re doing, right? And we kept having that problem that, talking to people about static sites, they would think about something very static. They would think about a brochure or something with no moving parts. A little one-pager or something like that.

To break out of these constraints, he coined the term “Jamstack” to talk about this new approach, and it caught on like wildfire. What was old static technology from the 90s became new again and pushed to new limits. Many developers caught on to the benefits of the Jamstack approach, which helped Jamstack grow into the thriving ecosystem it is today.

Aaron Swartz put it nicely, 13 years before Jamstack was coined: keep a strict separation between input (which needs dynamic code to be processed) and output (which can usually be baked). In other words, decouple the front end from the back end. Prerender content whenever possible. Layer on dynamic functionality where necessary. That’s the crux of Jamstack.

The reasons you might want to build a Jamstack site over a dynamic site come down to the six pillars of Jamstack:

Security

Jamstack sites have fewer moving parts and less surface area for malicious exploitation from outside sources.

Scale

Jamstack sites are static where possible. Static sites can live entirely in a CDN, making them much easier and cheaper to scale.

Performance

Serving a web page from a CDN rather than generating it from a centralized server on-demand improves the page load speed.

Maintainability

Static websites are simple. You need a webserver capable of serving files. With a dynamic site, you might need an entire team to keep a website online and fast.

Portability

Again, a static website is made up of files. As long as you find a webserver capable of serving website files, you can move your site anywhere.

Developer experience

Git workflows are a core part of software development today. With many legacy CMSs, it’s difficult to have Git development workflows. With a Jamstack site, everything is a file, making it seamless to use Git.

Chris touches on some of these points in a deep-dive comparison between Jamstack and WordPress. He also compares the reasons for choosing a Jamstack architecture versus a server-side one in “Static or Not?”.

Let’s use these pillars to evaluate Jamstack use cases.

Where is the edge of static and Jamstack?

Now that we have the basics of static and Jamstack, let’s dive in and see what lies at the edge of each definition. We have four categories each edge case can fall under.

  • Static – This strictly adheres to the definition of static.
  • Basically static – While not precisely static, most people would call it a static site.
  • Jamstack – A static frontend decoupled from a dynamic backend.
  • Dynamic – Renders web pages on-demand.

Many of these use cases can be placed in multiple categories. In this exercise, we’re putting them in the most restrictive category they fit.

JavaScript interaction: Static

Let’s start with an easy one. I have a static site that uses JavaScript to create a slideshow of images.

The HTML page, JavaScript, and images are all static files. All of the HTML manipulation required for the slideshow to function happens in the browser with no external influence.
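A minimal version of that slideshow logic might look like this. The index math is plain JavaScript; the `.slide` markup, `active` class, and 3-second interval are illustrative assumptions:

```javascript
// Minimal slideshow logic: everything runs in the browser against
// static files, no server involvement at all.
function nextIndex(current, total) {
  return (current + 1) % total; // wrap around to the first slide
}

// Browser-only wiring (skipped when run outside a browser):
if (typeof document !== 'undefined') {
  const slides = document.querySelectorAll('.slide');
  let current = 0;
  setInterval(() => {
    slides[current].classList.remove('active');
    current = nextIndex(current, slides.length);
    slides[current].classList.add('active');
  }, 3000);
}
```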

Cookies: Static

I have a static site that adds a banner to the top of the page using JavaScript if a cookie exists. A cookie is just a header. The rest of the files are static.
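A sketch of that check, assuming a hypothetical seen-banner cookie name. The cookie arrives as a string, so it's plain string parsing:

```javascript
// A cookie is just a header the browser stores as a string.
// "seen-banner" is a hypothetical cookie name for illustration.
function hasCookie(cookieString, name) {
  return cookieString
    .split(';')
    .some((part) => part.trim().startsWith(name + '='));
}

// Browser-only wiring (skipped when run outside a browser):
if (typeof document !== 'undefined' && hasCookie(document.cookie, 'seen-banner')) {
  document.body.insertAdjacentHTML(
    'afterbegin',
    '<div class="banner">Welcome back!</div>'
  );
}
```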

External assets: Basically Static

On a web page, we can load images or JavaScript from an external source. This external source may generate these assets dynamically on request. Would that mean we have a dynamic site?

Most people, including myself, would consider this a static site because it basically is. But if we’re strict to the definition, it doesn’t fit the bill. Having any part of the page generated dynamically defiles the sacred harmony of static.

iFrames: Basically Static

An inline frame allows you to embed an HTML page within another HTML page. iFrames are commonly used for embedding Google Maps, Facebook Like buttons, and YouTube videos on a webpage.

Again, most people would still consider this a static site. However, these embeds are almost always from a dynamically-generated source.

Forms: Basically Static

A static site can undoubtedly have a form on it. The dilemma comes when you submit it. If you want to do something with the data, you almost certainly need a dynamic back-end. There are plenty of form submission services you can use as the action for your form.

I can see two ways to argue this:

  1. You’re submitting a form to an external website, and it happens to redirect back afterward. This separation means the definition of static remains intact.
  2. If this external service is a core workflow of your website, the definition of static no longer holds.

In reality, most people would still consider this a static site.

Ajax requests: Jamstack

An Ajax request allows a developer to request data from an external source without reloading the page. We’re in the same boat as the above situations of relying on a third party. It’s possible the endpoint for the Ajax call is a static JSON file, but it’s more likely that it’s dynamically-generated.

The nature of how Ajax data is typically used on a website pushes it past a static website into Jamstack territory. It fits well with Jamstack as you can have a site where you prerender everything you can, then use Ajax to layer on any dynamic functionality or content on the site.
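As a sketch of that layering, assuming a hypothetical /api/comments endpoint and a #comments element in the prerendered page:

```javascript
// Layering dynamic content onto a prerendered page. '/api/comments'
// is a hypothetical endpoint; it could just as well be a static JSON
// file. renderComments is the part we can reason about on its own.
function renderComments(comments) {
  return comments
    .map((c) => `<li>${c.author}: ${c.text}</li>`)
    .join('');
}

// Browser-only wiring (skipped when run outside a browser):
if (typeof document !== 'undefined') {
  fetch('/api/comments')
    .then((res) => res.json())
    .then((comments) => {
      document.querySelector('#comments').innerHTML = renderComments(comments);
    });
}
```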

Embedded eCommerce: Jamstack

There are services that allow you to add eCommerce, even to static websites. Behind the scenes, they’re essentially making Ajax requests to manage items in a shopping cart and collect payment details.

Single page application (SPA): Jamstack

The title alone puts it out of static site contention. A SPA uses Ajax calls to request data. The presentation layer lives entirely in the front end, making it Jamtastic.

Ajax call to a serverless function: Jamstack

Whether the endpoint of an Ajax call is a serverless function like AWS Lambda, a Node.js back-end on a Kubernetes cluster, or a simple PHP back-end doesn’t matter. The key for Jamstack is that the front end is independent of the back end.

Reverse proxy in front of a webserver: Static

Adding a reverse proxy in front of the webserver for a static site must make it dynamic, right? Well, not so fast. While a proxy is software that adds a dynamic element to the network, as long as the file on the server is precisely the file the browser receives, it’s still static.

A webserver, modem, and every piece of network infrastructure in between are running software. If adding a proxy makes a static site dynamic, then nothing is static.

CDN: Static

A CDN is a globally-distributed reverse proxy, so it falls into the same category as a reverse proxy. CDNs often add their own headers. This still doesn’t impact the prestigious static status as the headers aren’t part of the file sitting on the server’s hard drive.

CDN in front of a dynamic site with a 200-year cache expiration time: Dynamic

OK, 200 years is a long expiry time, I’ll give you that. There are two reasons this is neither a static nor Jamstack site:

  1. The first request isn’t cached, so it generates on demand.
  2. CDNs aren’t designed for persistent storage. If, after one week, you’ve only had five hits on your website, the CDN might purge your web page from the cache. It can always retrieve the web page from the origin server, which would dynamically render the response.

WordPress with a static output: Static

Using a WordPress plugin like WP2Static lets you create and manage your website in WordPress and output a static website whenever something changes.

When you do this, the files the browser requests already exist on the webserver, making it a static website—a subtle but important distinction from having a CDN in front of a dynamic site.

Edge computing: Dynamic

Many companies are now offering the ability to run dynamic code at the edge of a CDN. It’s a powerful concept because you can have dynamic functionality without adding latency to the user. You can even use edge computation to manipulate HTML before sending it to the client.

It comes down to how you’re using edge functions. You could use an edge function to add a header to particular requests. I would argue this is still a static site. Push much beyond this, where you’re manipulating the HTML, and you’ve crossed the dynamic boundary.

It’s hard to argue it’s a Jamstack site as it doesn’t adhere to some of the fundamental benefits: scale, maintainability, and portability. Now, you have a piece of your core infrastructure that’s changing HTML on every request, and it will only work on that particular hosting infrastructure. That’s getting pretty far away from the blissful simplicity of a static site.

One of the elegant things about Jamstack is the front end and back end are decoupled. The backend is made up of APIs that output data. They don’t know or care how the data is used. The front end is the presentation layer. It knows where to get dynamic data from and how to render it. When you break this separation of concerns, you’ve crossed into a dynamic world.

Dynamic Persistent Rendering (DPR): Dynamic

DPR is a strategy to reduce long build times on large static site generator (SSG) sites. The idea is the SSG builds a subset of the most popular pages. For the rest of the pages, the SSG builds them on-demand the first time they’re requested and saves them to persistent storage. After the initial request, the page behaves precisely like the rest of the built static pages.
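The DPR flow described above can be sketched as a toy request handler: prebuild the popular pages, build anything else on its first request, and serve from persistent storage after that. `build` stands in for an SSG rendering one page:

```javascript
// Toy sketch of the DPR flow (not any vendor's implementation).
function makeServer(prebuiltPaths, build) {
  const store = new Map(); // stands in for persistent storage
  for (const path of prebuiltPaths) store.set(path, build(path));
  return function serve(path) {
    if (!store.has(path)) {
      store.set(path, build(path)); // built once, on first request
    }
    return store.get(path);
  };
}
```

After the first request for a page, serve returns the stored copy without calling build again, which is what makes DPR feel "more static" than a CDN cache that can evict pages.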

Long build times deter large-scale use cases from choosing Jamstack. If all the SSG tooling were Go-based, we probably wouldn’t need DPR. However, that’s not the direction most Jamstack tooling has taken, and build times can be excruciatingly long on big websites.

DPR is a means to an end and a necessity for Jamstack to grow. While it allows you to use Jamstack workflows on massive websites, ironically, I don’t think you can call a site using DPR a Jamstack site. Running software on-demand to generate a web page certainly sounds dynamicy. After the first request, a page served using DPR is a static page which makes DPR “more static” than putting a CDN in front of a dynamic site. However, it’s still a dynamic site as there isn’t a separation between frontend and backend, and it’s not portable, one of the pillars of a Jamstack site.

Incremental Static Regeneration (ISR): Dynamic

ISR is a similar but subtly different strategy to DPR to reduce long build times on large SSG sites. The difference is you can revalidate individual pages periodically to mimic a dynamic site without doing an entire site build.

Requests to a page without a cached version fall back to a stale version of that page or a generic loading page.

Again, it’s an exciting technology that expands what you can do with Jamstack workflows, but dynamically generating a page on-demand sounds like something a dynamic site would do.

Flat file CMS: Dynamic

A flat file CMS uses text files for content rather than a database. While a flat file CMS removes a dynamic element from the stack, it still dynamically renders the response.

The lines have been drawn

Exploring and debating these edge cases gives us a better understanding of the limits of all of these terms. The point of this exercise isn’t to be dogmatic about creating static or Jamstack websites. It’s to give us a common language to talk about the tradeoffs you make as you cross the boundary from one concept to another.

There’s absolutely nothing wrong with tradeoffs either. Not everything can be a purely static website. In many cases, the trade-offs make sense. For example, let’s say the front end needs to know the country of the visitor. There are two ways to do this:

  1. On page load, perform an Ajax call to query the country from an API. (Jamstack)
  2. Use an edge function to dynamically insert a country code into the HTML on response. (Dynamic)

If having the country code is a nice-to-have and the web page doesn’t need it immediately, then the first approach is a good option. The page can be static and the API call can fail gracefully if it doesn’t work. However, if the country code is required for the page, dynamically adding it using an edge function might make more sense. It’ll be faster as you don’t need to perform a second request/response cycle.
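A sketch of the first, Jamstack-style approach, assuming a hypothetical /api/geo endpoint and #banner element:

```javascript
// Jamstack-style: the page is static, and the country is layered on
// after load. '/api/geo' and '#banner' are hypothetical names.
function countryBanner(country) {
  if (!country) return ''; // fail gracefully: no country, no banner
  return `Showing prices for ${country}`;
}

// Browser-only wiring (skipped when run outside a browser):
if (typeof document !== 'undefined') {
  fetch('/api/geo')
    .then((res) => res.json())
    .then((data) => {
      document.querySelector('#banner').textContent = countryBanner(data.country);
    })
    .catch(() => {
      // nice-to-have only, so ignore failures
    });
}
```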

The key is understanding the problem you’re solving and thinking through the trade-offs you’re making with different approaches. You might end up with the majority of your site Jamstack and a portion dynamic. That’s totally fine and might be necessary for your use case. Typically, the closer you can get to static, the faster, more secure, and more scalable your site will be.

This is only the beginning of the discussion, and I’d love to hear your take. Where would you draw the lines? What do static and Jamstack mean to you? Are you sitting on a chair or stool right now?


The post Static vs. Dynamic vs. Jamstack: Where’s The Line? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Jamstack Community Survey 2021

(This is a sponsored post.)

The folks over at Netlify have opened up the Jamstack Community Survey for 2021. More than 3,000 front-enders like yourself took last year’s survey, which gauged how familiar people are with the term “Jamstack” and which frameworks they use.

This is the survey’s second year which is super exciting because this is where we start to reveal year-over-year trends. Will the percentage of developers who have been using a Jamstack architecture increase from last year’s 71%? Will React still be the most widely used framework, but with one of the lower satisfaction scores? Or will Eleventy still be one of the least used frameworks, but with the highest satisfaction score? Only your answers will tell!

Plus, you can qualify for a limited-edition Jamstack sticker with your response. See Netlify’s announcement for more information.



The post Jamstack Community Survey 2021 appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel

This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.

In other words, lots to learn! By the end, you’ll not only get to dive into Jamstack and serverless concepts, but also hands-on experience with a really neat combination of tech that I think you’ll really like.

Creating a Redwood app

Redwood is a framework for serverless applications that pulls together React (for front-end components), GraphQL (for data) and Prisma (for database queries).

There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.

There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.

We will need to install Yarn to use the Redwood CLI. Once that’s good to go, here’s what to run in a terminal:

yarn create redwood-app ./csstricks

We’ll now cd into our new project directory and start our development server.

cd csstricks
yarn rw dev

Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.

Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.

The Redwood starting page indicates that the front end of our app is ready to go. It also provides nice instructions for how to start creating custom routes for the app.

Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.

Directory structure

Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.

Here’s how the project is structured:

├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Don’t worry too much about what all this means yet; the first thing to notice is that things are split into two main directories: web and api. Yarn workspaces allow each side to have its own path in the codebase.

web contains our front-end code for:

  • Pages
  • Layouts
  • Components

api contains our back-end code for:

  • Function handlers
  • Schema definition language
  • Services for back-end business logic
  • Database client

Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Well, it’s just a personal preference. After Google purchased Firebase they launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than what Fauna offers. At the same time, there are even a handful of community projects that have experimented with Firestore and GraphQL but there isn’t first class GraphQL support from Google.

Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.

index.html

We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.

But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/png" href="/favicon.png" />
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>

  <body>
    <div id="redwood-app"></div>
  </body>
</html>

While Redwood uses Jamstack in its documentation and marketing, it doesn’t do pre-rendering yet (like Next or Gatsby can). It’s still Jamstack, though, in that it ships static files and hits APIs with JavaScript for data.

index.js

index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects the element with the id redwood-app, and ReactDOM.render() renders our application into that root DOM element.

RedwoodProvider

The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Redwood’s Flash system uses the Context API to pass message objects between deeply nested components. It provides a typical message display unit for rendering the messages provided to FlashContext.

FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.

FatalErrorBoundary

The provider itself is then contained within the <FatalErrorBoundary> component which is taking in <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.

import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'
import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)

Routes.js

Router contains all of our routes, and each route is specified with a Route. The Redwood Router attempts to match the current URL to each route, stopping when it finds a match, and then renders only that route. The only exception is the notfound route, which renders a single Route with a notfound prop when no other route matches.

import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes

Pages

Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.

We can also use rw instead of redwood and g instead of generate to save some typing.

yarn rw g page home /

This command performs four separate actions:

  • It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
  • It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
  • It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
  • It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.
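The naming rule from the first step can be sketched as a tiny function. This is an illustration of the convention, not Redwood's actual implementation:

```javascript
// Illustrative only: capitalize the route name and append "Page",
// mirroring how `yarn rw g page home` yields a HomePage component.
function pageComponentName(name) {
  return name.charAt(0).toUpperCase() + name.slice(1) + 'Page';
}

// pageComponentName('home') → 'HomePage'
```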

HomePage

If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:

// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
The HomePage.js file has been set as the main route, /.

We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>
      <p>Taking Fullstack to the Jamstack</p>
    </>
  )
}

export default HomePage

AboutPage

To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.

yarn rw g page about
AboutPage.js is now available at /about.
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage

We’ll make a few edits to the About page like we did with our Home page. That includes taking out the Link and routes imports and deleting <Link to={routes.about()}>About</Link>.

Here’s the end result:

// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage

If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!

const Routes = () => {
  return (
    <Router>
      <Route path="/about" page={AboutPage} name="about" />
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

Layouts

Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.

BlogLayout

You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical to what we’ve been doing so far, except it’s rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.

yarn rw g layout blog
// web/src/layouts/BlogLayout/BlogLayout.js

const BlogLayout = ({ children }) => {
  return <>{children}</>
}

export default BlogLayout

To create links between different pages we’ll need to:

  • Import Link and routes from @redwoodjs/router into BlogLayout.js
  • Create a <Link to={}></Link> component for each link
  • Pass a named route function, such as routes.home(), into the to={} prop for each route
// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>

        <nav>
          <ul>
            <li>
              <Link to={routes.home()}>Home</Link>
            </li>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <p>{children}</p>
      </main>
    </>
  )
}

export default BlogLayout

We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
Hey look, the navigation is taking shape!

If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:

// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage

Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll create our GraphQL schema so we can start working with data.

Fauna schema definition language

To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.

// sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}

Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.

The GraphQL Playground is located in the selected database.
The Fauna shell allows us to write, run and test queries.

It’s very important that Redwood and Fauna agree on the SDL. We can’t simply reuse the original SDL we entered into Fauna, because it is no longer an accurate representation of the types as they exist in our Fauna database.

The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.

Redwood schema definition language

This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.

// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`

Services

The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.

// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
    {
      posts {
        data {
          title
          body
        }
      }
    }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}

At this point, we need to install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:

cd api
yarn add graphql-request graphql

Attach the Fauna authorization token to the request header

So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and using it to query an endpoint set to https://graphql.fauna.com/graphql.

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer ' + process.env.FAUNADB_SECRET,
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}

A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.

Create

We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.

Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript.",
      body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
    }
  }
)
{
  ref: Ref(Collection("Post"), "282083736060690956"),
  ts: 1605274864200000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript.",
    body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
  }
}

Let’s create another one.

Create(
  Collection("Post"),
  {
    data: {
      title: "NextJS is a React framework for building production grade applications that scale.",
      body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
    }
  }
)
{
  ref: Ref(Collection("Post"), "282083760102441484"),
  ts: 1605274887090000,
  data: {
    title: "NextJS is a React framework for building production grade applications that scale.",
    body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
  }
}

And maybe one more just to fill things up.

Create(
  Collection("Post"),
  {
    data: {
      title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
      body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
    }
  }
)
{
  ref: Ref(Collection("Post"), "282083792286384652"),
  ts: 1605274917780000,
  data: {
    title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
    body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
  }
}

Cells

Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.
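Conceptually, a cell just picks which of those four exports to render based on the current query state. Here is a simplified sketch of that decision logic. This is only an illustration of the idea, not Redwood’s actual implementation, and the { loading, error, data } state shape is assumed for the example:

```javascript
// Simplified sketch of how a cell decides what to render. This is an
// illustration of the idea, not Redwood's actual implementation; the
// { loading, error, data } state shape is assumed for the example.
const isDataEmpty = (data) =>
  data == null ||
  Object.values(data).every((v) => Array.isArray(v) && v.length === 0)

const renderCell = ({ loading, error, data }) => {
  if (loading) return 'Loading...'
  if (error) return `Error: ${error.message}`
  if (isDataEmpty(data)) return 'Empty'
  return 'Success'
}
```

So a cell with an in-flight query renders the Loading export, a failed query renders Failure, a query that returns no records renders Empty, and anything else renders Success with the data passed in as props.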

BlogPostsCell

yarn rw generate cell BlogPosts

export const QUERY = gql`
  query BlogPostsQuery {
    blogPosts {
      id
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ blogPosts }) => {
  return JSON.stringify(blogPosts)
}

By default, the query renders its data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes so the query fetches and renders the data we need. So, let’s:

  • Change blogPosts to posts.
  • Change BlogPostsQuery to POSTS.
  • Change the query itself to return the title and body of each post.
  • Map over the data object in the success component.
  • Create a component with the title and body of the posts returned through the data object.

Here’s how that looks:

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return data.map((post) => (
    <>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </>
  ))
}

The POSTS query requests posts, and the response comes back as a data object containing an array of posts. We pull out the data object with object destructuring so we can loop over it and get the actual posts, then use the map() function to iterate over it and render each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.

Import BlogPostsCell to HomePage

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
Check that out! Posts are returned to the app and rendered on the front end.

Vercel

We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this work.

Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.

We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open netlify.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.

I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.

Redwood has a preset build command that works out of the box in Vercel:

Simply select “Redwood” from the preset options and we’re good to go.

We’re pretty far along, but even though the site is now “live” the database isn’t connected:

To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:

Now our application is complete!

We did it! I hope this not only gets you super excited about working with Jamstack and serverless, but also gives you a taste of some new technologies in the process.


The post Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel appeared first on CSS-Tricks.


How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • Prebuilt markup is automatically deployed to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted to a specific use case; we will see such examples of custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • We will build the client side of the app using a Static Site Generator called Gatsbyjs.
  • Finally we will deploy the app on a CDN configured and managed by Netlify CDN.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with a traditional RDBMS, the major difference with Fauna is that it is a relational NoSQL system that provides the capabilities of a legacy RDBMS while remaining flexible, scalable, and performant.

Fauna supports multiple APIs for data-access,

  • GraphQL: An open source data query and manipulation language. If you are new to the GraphQL, you can find more details from here, https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language specific drivers which makes it flexible to use with languages like JavaScript, Java, Go, etc. Find more details of FQL from here.

In this article, we will explain the usage of GraphQL for the ShopNote application.

First things first: sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our needs.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will be defining the GraphQL schema and importing it into the database. A GraphQL schema defines the structure of the data. It defines the data types and the relationship between them. With schema we can also specify what kind of queries are allowed.

At this stage, let us create our project folder. Create a folder named shopnote somewhere on your hard drive. Inside it, create a file named shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, description, update time, and a list of Items. Each Item type has properties like name, urgent, and checked, plus a reference to the shopnote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.
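To picture the one-to-many relationship in plain JavaScript terms, here is a conceptual sketch using plain objects. These are not actual Fauna documents or API calls, just an illustration: each Item holds a reference back to its single ShopNote, while the ShopNote collects many Items.

```javascript
// Conceptual sketch of the ShopNote/Item one-to-many relation
// using plain objects (not actual Fauna documents or API calls).
const shopNote = { name: 'My Shopping List', items: [] }

// Each Item points back to its single note; the note collects many items.
const addItem = (note, name) => {
  const item = { name, urgent: false, checked: false, note }
  note.items.push(item)
  return item
}

const milk = addItem(shopNote, 'Milk - 2 ltrs')
const meat = addItem(shopNote, 'Meat - 1lb')
```

Navigating from milk.note gets you back to the shopnote, and shopNote.items lists both items, which is exactly the bi-directional navigation the @relation directive gives us in GraphQL.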

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL Schema, FaunaDB will automatically create, maintain, and update, the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD Queries/Mutations for each Collection created by the Schema, e.g. createShopNote and allShopNotes, each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scene, Fauna will also help to create the documents automatically. We will see that in a while.

Fauna supports a schema-free object relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each of the data records are inserted into the document. This forms a hierarchy which can be visualized as:

Here the data record can be arrays, objects, or of any other supported types. With the Fauna data model we can create indexes, enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations. 

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the Documents also getting created. We will be able to view and query the records and utilize the power of indexes. You may see the data model structure appearing in your Fauna dashboard like this in a while,

Point to note here, each of the documents is identified by the unique ref attribute. There is also a ts field which returns the timestamp of the recent modification to the document. The data record is part of the data field. This understanding is really important when you interact with collections, documents, records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
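As a rough illustration of that document shape, here is a plain JavaScript stand-in. The ref value below is a simplified placeholder (real FQL responses return a Ref object), and the id is taken from the example responses above:

```javascript
// Rough stand-in for a Fauna document: a unique ref, a ts timestamp,
// and the actual record under the data field. The ref here is a
// simplified placeholder; real FQL responses return a Ref object.
const doc = {
  ref: { collection: 'ShopNote', id: '282083736060690956' },
  ts: 1605274864200000,
  data: { name: 'My Shopping List', description: "Today's list" },
}

// The record we usually care about lives under the data field.
const getRecord = (document) => document.data
```

Keeping this wrapper in mind (ref and ts around the data payload) makes FQL responses much easier to read when you start querying collections directly.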

With all this understanding, let us start using our shopnotes database, which has been created successfully and is ready for use.

Let us try some queries

Even though we have imported the schema and underlying things are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation query to the left panel of the GraphQL playground screen and execute.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name,
        urgent
      }
    }
  }
}

Note that, since Fauna already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once successfully executed, you can see the response of a ShopNote creation at the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, and it is fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items in a shopnote, and also for updating and deleting a shopnote. These queries will be used at a later point in time when we create the serverless functions.

If you are interested to run other queries in the GraphQL editor, you can find them from here,

Create a Server Secret Key

Next, we need to create a secured server key to make sure the access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file with the following content. This is to make sure we do not accidentally commit the .env file to the source code repo. We are also ignoring node_modules as a best practice.

node_modules
.env

We are done with all that had to do with Fauna’s setup. Let us move to the next phase to create serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this,

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it, and simple function files automatically become APIs.

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have nodejs installed. Open a command prompt at the root of the project folder. Use the following command to initialize the project with node dependencies,

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv
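For illustration, here is a toy version of the idea behind dotenv: read KEY=VALUE lines and merge them into an environment object (dotenv merges them onto process.env). This is only a sketch; the real package handles comments, quoting, and many edge cases, so use it rather than this.

```javascript
// Toy illustration of what dotenv does: parse KEY=VALUE lines from a
// .env-style string and merge them into an env object. The real package
// handles comments, quoting, and many edge cases; use it, not this.
const parseEnv = (text) =>
  text.split('\n').reduce((env, line) => {
    const match = line.match(/^([\w.]+)\s*=\s*(.*)$/)
    if (match) env[match[1]] = match[2]
    return env
  }, {})
```

Calling parseEnv('FAUNA_SERVER_SECRET=abc123') yields { FAUNA_SERVER_SECRET: 'abc123' }, which is exactly the value our serverless functions will read from process.env.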

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id,
            name,
            checked,
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This is to tell Netlify about the location of the functions we have written so that it is known at the build time.

Netlify automatically provides the APIs for the functions. The URL to access the API is in this form, /.netlify/functions/get-shopnotes which may not be very user-friendly. We have written a redirect to make it like, /api/get-shopnotes.
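The redirect effectively performs this path rewrite. Sketched here as a plain function for illustration only (rewriteApiPath is a hypothetical helper, not a Netlify API; Netlify applies the rule itself at request time):

```javascript
// Illustration of what the [[redirects]] rule in netlify.toml does:
// requests to /api/<name> are rewritten to /.netlify/functions/<name>.
// rewriteApiPath is a hypothetical helper, not a real Netlify API.
const rewriteApiPath = (path) =>
  path.startsWith('/api/')
    ? '/.netlify/functions/' + path.slice('/api/'.length)
    : path
```

So a request to /api/get-shopnotes is served by the get-shopnotes function, while non-API paths pass through untouched.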

Ok, we are done. Now in command prompt type,

netlify dev

By default the app will run on localhost:8888 to access the serverless function as an API.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

//create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please give your attention to the parameter ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that relation while writing the mutation as well.

We destructure the payload to get the required information, then call the query method to create a ShopNote.

Alright, let’s test it out. You can use postman or any other tools of your choice to test it like an API. Here is the screenshot from postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember, ShopNote and Item are related? So to create an item, we have to specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

//add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties like, name, if it is urgent, the check value and the note the items should be part of. Let’s see how this API can be called using postman,

As you see, we are passing the id of the note while creating an item for it.
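For example, a request body for the add-item function might look like the object below. The note field uses a connect key to link the new item to an existing ShopNote by its _id; note that the item name and the id value here are made-up placeholders for illustration.

```javascript
// Example request body for the add-item function. The note field uses
// a connect key to link the new item to an existing ShopNote by _id.
// The item name and the id below are made-up placeholders.
const addItemBody = {
  name: 'Eggs - 1 dozen',
  urgent: false,
  checked: false,
  note: { connect: '282083736060690956' },
}

// This is what would be sent as the POST body to /api/add-item.
const payload = JSON.stringify(addItemBody)
```

The handler then parses this body, destructures the four fields, and passes them as GraphQL variables to the createItem mutation.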

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. If you are interested, you can look at those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it! Next, let’s build the user interface to show the shop notes and add items to them. To do that, we will use Gatsby.js (aka Gatsby), a super cool, React-based static site generator.

The following section requires basic knowledge of ReactJS. If you are new to it, you can learn it from here. If you are familiar with another user interface library or framework, like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either from a starter project or by initializing it manually. We will build things from scratch to understand them better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

```json
"scripts": {
  "develop": "gatsby develop"
}
```

Gatsby projects need a special configuration file called gatsby-config.js. Create a file named gatsby-config.js at the root of the project folder with the following content:

```js
module.exports = {
  // keep it empty
}
```

Let’s create our first page with Gatsby. Create a folder named src at the root of the project folder, then a subfolder named pages under src. Create a file named index.js under src/pages with the following content:

```jsx
import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
};
```

Let’s run it. We would generally use the gatsby develop command to run the app locally, but since we need to run the client-side application alongside Netlify Functions, we will continue to use the netlify dev command:

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this:

A Gatsby build creates a couple of output folders that you may not want to push to the source code repository. Let’s add a few entries to the .gitignore file so we don’t get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

```
.cache
public
node_modules
*.env
```

At this stage, your project directory structure should match the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, the heading, and the button to create a shopnote.
  • Shopnotes: This component contains the list of shop notes (Note components).
  • Note: An individual note. Each note contains one or more items.
  • Item: An individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required for the user interface to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies:

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

```js
// Requires axios (yarn add axios) and: import axios from "axios";
useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);
```

Finally, let’s change the return section to use the shopnotes data. Here we check whether the data has loaded; if so, we render the Shopnotes component, passing it the data we received from the API.

```jsx
return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);
```

You can find the entire index.js file here. It creates the initial route (/) for the user interface and uses the other components (Shopnotes, Note and Item) to make the UI fully operational. We won’t go to great lengths to explain each of these UI components. Create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content from this CSS file into it. Then add these imports (Bootstrap’s stylesheet and our new index.css) at the top of src/pages/index.js:

```js
import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';
```

That’s all, we’re done! You should have the app up and running with all the shop notes created so far. To keep this article from getting too lengthy, we won’t explain each of the actions on items and notes; you can find all the code in the GitHub repo. At this stage, the directory structure may look like this:

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have already created the API for it. How about building the front end to add a shopnote? I suggest implementing a button in the header that, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!
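If you want a starting point, here is a rough sketch, not a finished implementation. The endpoint name (/api/create-shopnote) and the payload shape are assumptions; match them to your own serverless function and ShopNote schema:

```javascript
// Hypothetical payload builder for the exercise; extend with whatever other
// fields your ShopNote schema requires.
function buildCreateNotePayload(name) {
  return { name };
}

// Hypothetical click handler for a "Create Note" button in the Header
// component. POSTs the payload to an assumed /api/create-shopnote function.
async function handleCreateNote(name) {
  const response = await fetch("/api/create-shopnote", {
    method: "POST",
    body: JSON.stringify(buildCreateNotePayload(name))
  });
  return response.json();
}
```

After a successful response, you would refresh the shopnotes state so the new note appears in the list.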

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You already have a Netlify account, so log in and click the New site from Git button.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details, like the build command and publish directory, as shown in the image below. Then click the button to provide advanced configuration information. In this case, we pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Copy and paste it into the respective fields, then click deploy.

You should see a successful build in a couple of minutes, and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70–80% of the features that once required a custom back end can now be handled either on the front end or by existing APIs and services.
  • Fauna provides the data API for client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify Functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack, giving end users a fast experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.


The post How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna appeared first on CSS-Tricks.

WordPress and Jamstack

I recently moderated a panel at Netlify’s virtual Jamstack Conf that included Netlify CEO Matt Biilman and Automattic founder Matt Mullenweg. The whole thing was built up — at least to some — as a “Jamstack vs. WordPress” showdown.

I have lots of thoughts of my own on this and think I’m more useful as a pundit than a moderator. This is one of my favorite conversations in tech right now! So allow me to blog.

Disclosure: both Automattic and Netlify are active sponsors of this site. I have production sites that use both, and honestly, I’m a fan of both, which is an overarching point I’ll try to make. I also happen to be writing and publishing this on a WordPress site.

History

  1. Richard MacManus published “WordPress Co-Founder Matt Mullenweg Is Not a Fan of JAMstack,” quoting an email conversation between them, including a line from Matt saying, “JAMstack is a regression for the vast majority of the people adopting it.”
  2. Matt Biilmann published a response “On Mullenweg and the Jamstack – Regression or Future?” with a whole section titled “The end of the WordPress era.”
  3. People chimed in along the way. Netlify board member Ohad Eder-Pressman wrote an open letter. Sarah Gooding rounded up some of the activity on WP Tavern (which is owned by Matt Mullenweg). I chimed in as well.
  4. Matt Mullenweg clarified remarks with some new zingers.

The debate was on October 6th at Jamstack Conf Virtual 2020. There is no public video of it (sorry).

The Stack

Comparing Jamstack to WordPress is a bit weird. What is comparable is that both are roads you might travel when building a website. Much of this post will keep that in mind and compare the two that way. They aren’t directly comparable because:

  • Jamstack is a loose descriptor of an architectural philosophy that encourages static files on CDNs and JavaScript-accessed services for any dynamic needs.
  • WordPress is a CMS on the LAMP stack.

Those things are not apples to apples.

If we stick just with the stack for a moment, the comparison would be between:

  • Static Hosting + Services
  • LAMP

An example of Static + Services is using Netlify for hosting (which is static) and using services to do anything dynamic you need to do. Maybe you use Netlify’s own forms and auth functionality and Hasura for data storage.

On a LAMP stack, you have MySQL to store data in, so you aren’t reaching for an outside service there. You also have PHP available. So with those (in addition to open-source software), you have what you need for auth. It doesn’t mean that you never reach for services; you just do so less often as you have more technology at your fingertips from the server you already have.

Jamstack: Static Hosting as the hub, reaching out to services for everything else.

LAMP: Server that does a bunch of work as the hub, reaching out to a few services if needed.

Matt B. called the LAMP stack a “monolith.” Matt M. objected to that term and called it an “integrated approach.” I’m not a computer scientist, but I could see this going either way. Here’s Wikipedia:

[…] a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program.

Defined that way, yes, WordPress does appear to be a monolith, and yet the Wikipedia article continues:

[…] a monolithic application describes a software application that is designed without modularity.

Seen that way, the definition appears to disqualify WordPress as a monolith. WordPress’ hook and plugin architecture is modular. 🤷‍♂️

It would be interesting to hear these two guys dig into the nuance there, but the software is what it is. A self-hosted WordPress site runs on a server with a full stack of technology available to it. It makes sense to ask as much of that server as you can (i.e. integrated). In a Jamstack approach, the server is abstracted from you. Everything else you need to do is split into different services (i.e. not integrated).

The WordPress approach, doesn’t mean that you never reach for outside services. In both stacks, you’d likely use something like Stripe for eCommerce APIs. You might reach for something like Cloudinary for robust media storage and serving. Even WordPress’ Jetpack service (which I use and like) brings a lot of power to a self-hosted WordPress site by behaving like a third-party service and moving things like asset-hosting and search technology off your own servers by moving them over to cloud servers. Both stacks are conglomerations of technologies.

Neither stack is any more of a “house of cards” or failure-prone than the other. The “only as strong as its weakest link” metaphor applies to all websites. If a WordPress plugin ships a borked version or somehow is corrupted on upload, it may screw up my site until I fix it. If my API keys become invalid for my serverless database, my Jamstack site might be hosed until I fix it. If Stripe is down, I’m not selling any products on any kind of site until they are back up.

A missing semicolon can sink any site.

Pricing

WordPress.com has a free plan, and that’s absolutely a place you can build a site. (I have several.) But you don’t really have developer-style access to it until you’re on the Business plan at $25 per month. Self-hosted WordPress itself is open-source and free, but you’re not going to find a place to spin up a self-hosted WordPress site for free. It starts cheap and scales up. You need LAMP hosting to run WordPress. Here’s a look around at fairly inexpensive hosting plans:

There is money involved right off the bat.

Starting free is much more common with Jamstack, then you incur costs at different points. Jamstack being newer, it feels like a market that’s still figuring itself out.

  • Vercel is free until you need team members or features like password-protected sites. A single password-protected site is $150/month. (You can toss basic auth on any server with Apache for no additional cost.)
  • Netlify is very similar, unlocking features on higher plans and offering à la carte per-site features, like analytics ($9/month) and auth (5,000 active users is $99/month).
  • AWS Amplify starts free, but like everything on AWS, your usage is metered on lots of levels, like build minutes, storage, and bandwidth. Their example calculation of a web app that has 10,000 daily active users and is updated twice per month comes to $65.98/month.
  • Azure Static Web Apps hasn’t released pricing yet, but will almost certainly have a free tier or free usage of some kind.

All of this is a good reminder that Netlify isn’t the only one in the Jamstack game. Jamstack just means static hosting plus services.

You can’t make blanket statements like “Jamstack is cheaper.” It’s far too dependent on the site’s usage and needs. With high usage and a bunch of premium services, Jamstack (much like serverless in general) can get super expensive. Netlify says their enterprise pricing starts at $3,000/month, and while that gets you things like auth, forms, and media handling, it doesn’t include a CMS or any data storage, which is likely to kick the price up much higher.

While this WordPress site isn’t enterprise, I can tell you it requires a server in the vicinity of $1,000/month, and that assumes Cloudflare is in front of it to help reduce bandwidth directly to the host, with Jetpack handling things like media hosting and search functionality. Mailchimp sends our newsletter. Wufoo powers our forms. We also have paid plugins, like Advanced Custom Fields Pro and a few WooCommerce add-ons. That’s not all of it. It’s probably a few thousand per month, all told. This isn’t unique to any integrated approach, but it helps illustrate that the cost of a WordPress site can be quite high as well. They don’t publish prices (a common enterprise tactic), but Automattic’s own WordPress VIP hosting service surely starts in the mid-4-figures before you start adding third-party stuff.

Bottom line: there is no sea change in pricing happening here.

Performance

80% of web performance is a front-end concern.

That’s a true story, but it’s also built on the foundation of the server (accounting for the first 20%). The speediest front-end in the world doesn’t feel speedy at all if the first request back from the server takes multiple seconds. You’ve gotta make sure that first request is smoking fast if you want a fast site.

The first rectangle is the first server response, and the second is the entire front-end. Even with a fast front end, it can’t save you from a slow back end.

You know what’s super fast? Global CDNs serving static files. That’s what you want to make happen on any website, regardless of the stack. While that’s the foundation of Jamstack (static CDN-backed hosting), it doesn’t mean WordPress can’t do it.

You take an index.html file with static content, put that on Netlify, and it’s gonna be smoking fast. Maybe your static site generator makes that file (which, it’s worth pointing out, could very well get content from WordPress). There is something very nice about the robustness and steady foundation of that.

By default, WordPress isn’t making static files that are cacheable on a global CDN. WordPress responds to requests from a single origin, runs PHP, which asks the database for stuff before a response is assembled, and then, finally, the page is returned. That can be pretty fast, but it’s far less sturdy than a static file on a global CDN, and it’s far easier to overwhelm with requests.

WordPress hosts know this, and they try to solve the problem at the hosting level. Just look at WP Engine’s approach. Without you doing anything, they use a page cache so that the site can essentially return a static asset rather than needing to run PHP or hit a database. They employ all sorts of other caching as well, including partnering with Cloudflare to do the best possible caching. As I’m writing this, my shoptalkshow.com site literally went down. I wrote to the host, Flywheel, to see what was up. Turns out that when I went in there to turn on a staging site, I flipped a wrong switch and turned off their caching. The site couldn’t handle the traffic and just died. Flipping the caching switch back on instantly solved it. I didn’t have Cloudflare in front of the site, but I should have.

Cloudflare is part of the magic sauce of making WordPress fast. Just putting it in front of your self-hosted WordPress site is going to do a ton of work in making it fast and reliable. One of the missing pieces has been great caching of the HTML itself, which they literally dealt with this month and now that can be cached as well. There is kind of a funny irony in that caching WordPress means caching requests as static HTML and static assets, and serving them from a global CDN, which is essentially what Jamstack is at the end of the day.

Matt M. mentioned that WordPress.com employs global CDNs that kick in at certain levels of traffic. I’m not sure if that’s Cloudflare or not, but I wouldn’t doubt it.

With Cloudflare in front of a WordPress site, I see the same first-response numbers as I do on Netlify sites without Cloudflare (because they do not recommend using Cloudflare in front of Netlify-hosted sites). That’s mid-2-digit millisecond numbers, which is very, very good.

First request on WordPress site css-tricks.com, hosted by Flywheel with Cloudflare in front. Very fast.
First request on my Jamstack site, conferences.css-tricks.com, hosted by Netlify. Very fast.

From that foundation, any discussion about performance becomes front-end specific. Front-end tactics for speed are the same no matter what the server, hosting, or CMS situation is on the back end.

Security

There are far more stories about WordPress sites getting hacked than Jamstack sites. But is it fair to say that WordPress is less secure? WordPress is going on a couple of decades of history and has a couple orders of magnitude more sites built on it than Jamstack does. Security aside, you’re going to get more stories from WordPress with those numbers.

Matt M. brought up that whitehouse.gov is on WordPress, which is obviously a site that needs the highest levels of security. It’s not that WordPress itself is insecure software; it’s what you do with it. Do you have insecure passwords? That’s insecure no matter what platform you’re using. Is the server itself insecure via file permissions or access levels? That’s not exactly the software’s fault, but you may be in that position because of the software. Are you running the latest version of WordPress? Usage is fragmented at best, and the older the version, the less secure it’s going to be. Tricky.

It may be more interesting to think about security vectors. That is, at what points it is possible to get hacked. If you have a static file sitting on static hosting, I think it’s safe to say there are fairly few attack vectors. But still, there are some:

  • Your hosting account could be hacked
  • Your Git repo could be hacked
  • Your Cloudflare account could be hacked
  • Your domain name could be stolen (it happens)

That’s all true of a WordPress site, too, only there are additional attack vectors like:

  • Server-side code: XSS, bad plugins, remote execution, etc.
  • Database vulnerabilities
  • Running an older, outdated version of WordPress
  • The login system is right on the site itself, e.g. bad guys can hammer /wp-login.php

I think it’s fair to say there are more attack vectors on a WordPress site, but there are plenty of vectors on any site. The hosting account of any site is a big vector. Anything that sits in the DNS chain. Any third-party service with a login. Anything with an API key.

Personal experience: this site is on WordPress and has never been hacked, but not for lack of trying. I do feel like I need to think more about security on my WordPress sites than my sites that are only built from static site generators.

Scaling

Scaling either approach costs money. This WordPress site isn’t massively scaled, but it does require decent scaling up from entry-level server requirements. I serve all traffic through Cloudflare, so a peek at the last 30 days tells me I serve 5 TB of bandwidth a month.

On a Netlify Business plan (600 GB of traffic for $99, then $20 per additional 100 GB), that math works out to $979. Remember when I said this site requires a server that costs about $1,000/month? I wrote that before I ran these numbers, so I was super close (go me). Jamstack versus WordPress at the scale of this site is pretty neck-and-neck. All hosts charge for bandwidth and have caps with overage charges. Amplify charges $0.15/GB over a 15 GB monthly cap. Flywheel (my WordPress host) charges based on a monthly visitor cap and, over that, it’s $1 per 1,000.
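That overage math can be sketched as a tiny function. The numbers are the plan details quoted above (600 GB included for $99, $20 per extra 100 GB block), not Netlify’s actual billing code:

```javascript
// Sanity-check the bandwidth math: base price plus a per-block overage charge.
function netlifyBandwidthCost(usageGB, { includedGB = 600, base = 99, perBlock = 20, blockGB = 100 } = {}) {
  // Number of whole-or-partial 100 GB blocks beyond the included allowance.
  const overageBlocks = Math.max(0, Math.ceil((usageGB - includedGB) / blockGB));
  return base + overageBlocks * perBlock;
}

// 5 TB/month = 5,000 GB: 44 extra blocks at $20 on top of the $99 base.
console.log(netlifyBandwidthCost(5000)); // → 979
```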

The story with WordPress scaling is:

  • Use a host that can handle it and that has their own proven caching strategy.
  • CDN everything (which usually means putting Cloudflare in front of it).
  • Ultimately, you’re going to pay for it.

The story with Jamstack scaling is:

  • The host and services are built to scale.
  • You have to think about scaling less in terms of “can this service handle this, or do I have to move?”
  • You have to think about scaling more in terms of the fact that every aspect of every service has pricing you need to keep an eye on.
  • Ultimately, you’re going to pay for it.

I’ve had to move around a bit with my WordPress hosting, finding hosts that are in-line with the current needs of the site. Moving a WordPress site is non-trivial, but it’s far easier than moving to another CMS. For example, if you build a Jamstack site on a headless CMS that becomes too pricey, the cost of moving is a bigger job than switching hosts.

I like what Dave Rupert wrote the other day (in a Slack conversation) about comparing performance between the two:

Jamstack: Use whatever thing to build your thing, there’s addons to help you, and use our thing to deploy it out to a CDN so it won’t fall over.

WordPress: Use our thing to build your thing, there’s addons to help you, and you have to use certain hosts to get it to not fall over.

There are other kinds of “scaling” as well. I think of something like the number of users. That’s something that all sorts of services use for pricing tiers, which is an understandable metric. But it’s free in WordPress: you can have as many users with as many nuanced permissions as you like. That’s just the CMS, though, so add-on services might still charge you by the head. Vercel and Netlify charge by the head for team accounts. Contentful (a popular headless CMS) starts at $489/month for teams. Even GitHub’s Team tier is $4 per user if you need anything the free account can’t do.

Splitting the Front and Back

This is one of the big things that gets people excited about building with Jamstack. If all of my site’s functionality and content are behind APIs, that frees up the front end to build however it wants to.

  • Wanna build an all-static site? OK, hit that API during the build process and do that.
  • Wanna build a client-rendered site with React or Vue or whatever? Fine, hit the API client-side.
  • Wanna split the middle, pre-rendering some, client-rendering some, and server-rendering some? Cool, it’s an API, you can hit it however you want.

That flexibility is neat on green-field builds, but people are just as excited about theoretical future flexibility. If all functionality and content is API-driven, you’ve entirely split the front and back, meaning you can change either in the future with more flexibility.

  • As long as your APIs keep spitting out what the front end expects, you can re-architect the back end without troubling the front end.
  • As long as you’re getting the data you need, you can re-architect the front end without troubling the back end.

This kind of split feels “future safe” for sites at a certain size and scale. I can’t quite put my finger on what those scale numbers are, but they are there.

If you’ve ever done any major site re-architecture just to accommodate one side or the other, moving to a system where you’ve split the back end and front end surely feels like a smart move.

Once we’ve split, as long as the expectations are maintained, back (B) and front (F) are free to evolve independently.

You can split a WordPress site (we’ll get to that in the “Using Both” section), but by default, WordPress is very much an integrated approach where the front end is built from themes in PHP using very WordPress-specific APIs. Not split at all.

Developer Experience

Jamstack has done a good job of heavily prioritizing developer experience (DX). I’ve heard someone call it “a local optimum,” meaning Jamstack is designed around local development (and local developer) experience.

  • You’re expected to work locally. You work in your own comfortable (local, fast, customized) development environment.
  • Git is a first-class citizen. You push to your production branch (e.g. master or main), then your build process runs, and your site is deployed. You even get a preview URL of what the production site will be for every pull request, which is an impressively great feature.
  • Use whatever tooling you like. You wanna pre-build a site in Hugo? Go for it. You learned create-react-app in school? Use that. Wanna experiment with the cool new framework du jour? Have at it. There is a lot of freedom to build however you want, leveraging the fact that you can run a build and deploy whatever folder in your repo you want.
  • What you don’t have to do is important, too. You don’t have to deal with HTTPS, you don’t have to deal with caching, you don’t have to worry about file permissions, you don’t have to configure a CDN. Even advanced developers appreciate having to do less.

It’s not that WordPress doesn’t consider developer experience (for example, they have a CLI and it can do helpful things, like scaffold blocks), but the DX doesn’t feel as to-the-core of the project to me.

  • Running WordPress locally is tricky, requiring you to run a (X)AMP stack somehow, which involves notoriously finicky third-party software. Thank god for Local by Flywheel. There is some guidance but it doesn’t feel like a priority.
  • What should go in Git? To this day, I don’t really know, but I’ve largely settled on the entire /wp-content folder. It feels weird to me there is no guidance or obvious best practices.
  • You’re totally on your own for deployment. Even WordPress-specific hosts don’t really nail it here. It’s largely just: here’s your SFTP credentials.
  • Even if you have a nice local development and deployment pipeline set up (I’m happy with mine), that doesn’t really help with moving the database around, so you’re on your own there as well.

These are all solvable things, and the WordPress community is so big that you’ll find plenty of information on it, but I think it’s fair to say that WordPress doesn’t have DX down to the core. It’s a little wild-west-y even after all these years.

In fact, I’ve found that because the encouragement of a healthy local development environment is so sidelined, a lot of people just don’t have one at all. This is anecdotal, but now twice in as many years I have found myself involved in other people’s sites that work entirely production-only. That would be one thing if they were very simple sites with largely default behavior, but these have been anything but. They’re very complicated (much more so than this site), involving public user logins, paid memberships and permissions, page builders, custom shortcodes, custom CSS, and just a heck of a lot of moving parts. It scared me to death. I didn’t want to touch anything. They were live-editing PHP to make things work — cowboy coding, as people jokingly call it. One syntax error and the site is hosed, maybe even the very page you’re looking at.

The fact that WordPress powers such a huge swath of the web without particularly good DX is very interesting. There is no Jamstack without DX. It’s an entirely developer-focused thing. With WordPress, there probably isn’t a developer at all on most sites. It’s installed (or just activated, in the case of WordPress.com) and the site owner takes it from there. The site owner is like a developer in that they have lots of power, but perhaps doesn’t write any code at all.

To that end, I’d say WordPress has far more focus on UX than DX, which is a huge part of all this…

CMS and End User UX

WordPress is a damn fine CMS. Even if you don’t like it, there are a hell of a lot of people that do, and the numbers speak for themselves. What you get, when you decide to build a site with WordPress, is a heaping helping of ability to build just about any kind of site you want. It’s unlikely that you’ll have that oops, painted myself into a corner here situation with WordPress.

That’s a big deal. Jenn put her finger on this, noting that the people who use WordPress are a bigger story than a developer’s needs.

WordPress can do an absolute ton of things:

  • Blog (or be any type of content-driven CMS-style site)…
    • With content previews, which is possible-but-tricky on Jamstack
  • Deal with users/permissions…
  • eCommerce
  • Process forms
  • Handle plugins to the moon and back

Jamstack can absolutely do all these things too, but now it is Jamstack that is in Wild West territory. When you look at tutorials about how to store data, they often involve explaining how to write individual CRUD functions for a cloud database. That’s down to the metal stuff which can be very powerful, but it’s a far cry from clicking a few buttons, which is what WordPress feels like a lot of the time.

I bet I could probably cobble together a basic Jamstack eCommerce setup with Stripe APIs, which is very cool. But then I’d get nervous when I need to start thinking about inventory management, shipping zones, product variations, and who knows what else that gets complicated in eCommerce-land, making me wish I had something super robust that did it all for me.

Sometimes, we developers are building sites just for us (I do more than my fair share of that), but I’d say developers are mostly building sites for other people. So the most important question is: am I building something that is empowering for the people I’m building it for?

You can pull off a good site manager experience no matter what, but WordPress has surely proven that it delivers in that department without asking terribly much in terms of custom development.

Jamstack has some tricks that I wish I could pull off on WordPress, though. Here's a big one for me: user-submitted content and updates. I literally have three websites now that benefit from this. A site about conferences, a site about serverless, and an upcoming site about coding fonts. WordPress could have absolutely done a great job at all three of those sites. But, what I really want is for people to be able to update and submit content in a way that I can be like: Yep, looks good, merge. By having gone with a Jamstack approach, the content is in public GitHub repos, and anyone can participate.

I think that’s super great. It doesn’t even necessarily require someone from the public knowing or understanding Git or GitHub, as Netlify CMS has this concept of Open Authoring, which keeps the whole contribution experience in the browser with UI for the editing.

Using Both

This is a big one that I see brought up a lot. Even Netlify themselves say “There is no Versus.”

Here’s the deal:

  • The “A” in “Jam” means APIs. Use APIs to build your site either at build time or client-side.
  • WordPress sites, by default, have a REST API (and can have a GraphQL API as well).
  • So, hit that API for CMS data on your Jamstack site.

Yep, totally. This works and people do it. I think it’s pretty cool.
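As a concrete sketch of that last bullet: WordPress exposes posts at `/wp-json/wp/v2/posts` by default, and a build script can fetch and flatten them into whatever shape a template wants. The `simplifyPosts` helper and the site URL here are my own inventions for illustration; only the REST route is standard WordPress.

```javascript
// Sketch: pull posts from a WordPress REST API at build time.
// /wp-json/wp/v2/posts is WordPress's default posts route; the rest is hypothetical.

// Flatten the verbose WP post shape down to just what a template needs.
function simplifyPosts(posts) {
  return posts.map((post) => ({
    id: post.id,
    title: post.title.rendered,
    link: post.link,
  }));
}

// Fetch posts from a (placeholder) WordPress site and simplify them.
async function getPosts(siteUrl) {
  const res = await fetch(`${siteUrl}/wp-json/wp/v2/posts`);
  return simplifyPosts(await res.json());
}
```

A static site generator could call something like `getPosts()` during the build and feed the result straight into templates.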

But…

  • Running a WordPress site somewhere in addition to your Jamstack site means… you’re running a WordPress site in addition to your Jamstack site. There is cost and technical debt to that.
  • You often aren’t getting all the value of WordPress. Hitting an API for data might be all you need, but it’s a very different approach to building a site than building a WordPress theme, and you’re leaving the rest of WordPress behind. I think of situations like this: you find a neat plugin that adds a fancy Gutenberg block to your site. That’ll “just work” on a WordPress site, but it likely has some special front-end behavior that won’t work if all you’re doing is sucking the HTML from an API. It probably enqueues some additional scripts and styles that you’ll be on your own to incorporate wherever your front end is hosted, and on your own to maintain as it updates.

Here are some players that each have a unique approach to “using both”:

  • Frontity: A React framework for WordPress. Looks like you can run it with a Node server so the pages are server-side rendered, or as a static site generator. Either way, WordPress is the CMS but you’re building the front end with React and getting data from the REST API.
  • WP2Static: A WordPress plugin that builds a static version of your site and can auto-deploy it when changes are made.
  • Strattic: They host the dynamic WordPress site for you (which they refer to as “staging”) and you work with WordPress normally there. Then you choose to deploy, and they also host a static version of your site for you.

There are loads of other ways to integrate both. Here’s our own Geoff and Sarah talking about using WordPress and Jamstack together by using Vue/Nuxt with the REST API and hosting on Netlify.

Using Neither

Just in case this isn’t clear, there are absolutely loads of ways to build websites. If you’re building a Ruby on Rails site, that’s not Jamstack or WordPress. You could argue it’s more like a WordPress site in that it requires a server and you’ll be using that server to do as much as you can. You could also argue it’s more like Jamstack in that, even though it’s not static hosting, it encourages using APIs and piecing together services.

The web is a big place, gang, and this isn’t a zero-sum game. I fully expect WordPress to continue to grow and Jamstack to continue to grow because the web itself is growing. Even if we’re only considering the percentage of market share, I’d still bet that both will grow, pushing whatever else into smaller slices.

Choosing

I’m not even going to go here. Not because I’m avoiding playing favorites, but because it isn’t necessary. I don’t see developers out there biting their fingernails trying to decide between a WordPress or Jamstack approach to building a website. We’re at the point where the technologies are well-understood enough that the process goes like:

  1. Put adult pants on
  2. Evaluate needs and outcomes
  3. Pick technologies

The post WordPress and Jamstack appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks


Jamstack Conf

Here’s an important detail: it’s free!

Jamstack Conf Virtual is coming up October 6th and 7th, 2020. The sessions are on October 6th. That’s the free part (register here). Then on October 7th there are a variety of workshops (they all look great to me) that are $100 USD each. That’s the classic conference one-two punch. Sessions are for getting a broad sense of what’s happening and will very likely open your eyes to some new concepts; workshops are for deep learning and walking away with some new skills.

The speaker lineup is top-notch!

I’ve been to several Jamstack Confs myself, and I’m a fan. Some of my favorite conferences ever are ones focused on a technological idea at the rise of its relevance. That’s exactly what’s happening here and I’m excited to check it out. You can even watch videos from the 2019 conference to see just how awesome of an experience it is.

Direct Link to ArticlePermalink


The post Jamstack Conf appeared first on CSS-Tricks.


State of Jamstack 2020: Data Deep Dive

(This is a sponsored post.)

The Jamstack, a modern approach to building websites and apps, delivers better performance, higher security, lower cost of scaling, and a better developer experience. But how popular is it among developers worldwide, and what do they love and hate about it?

We at Kentico Kontent decided to take a closer look at the current state of Jamstackʼs adoption and use. Surveying more than 500 developers in four countries, we wanted to find out how long they’ve been working with the Jamstack, what they’re using this architecture for, where they typically deploy and host their projects, and more.

Based on the findings, we created The State of Jamstack 2020 Report that provides an overview of the results and comments on the most interesting facts. Now we’ve released something just as exciting:

Our free interactive data visualization page lets you dive into the data from our survey and discover how the web developers’ answers varied according to age, gender, primary programming language, and other factors.

Do you know where the majority of US developers working for large companies store their content? Or what’s their favorite static site generator? The answers may surprise you—click below to find out:

Direct Link to ArticlePermalink


The post State of Jamstack 2020: Data Deep Dive appeared first on CSS-Tricks.


Going Jamstack with React, Serverless, and Airtable

The best way to learn is to build. Let’s learn about this hot new buzzword, Jamstack, by building a site with React, Netlify (Serverless) Functions, and Airtable. One of the ingredients of Jamstack is static hosting, but that doesn’t mean everything on the site has to be static. In fact, we’re going to build an app with full-on CRUD capability, just like a tutorial for any web technology with more traditional server-side access might.

Why these technologies, you ask?

You might already know this, but the “JAM” in Jamstack stands for JavaScript, APIs, and Markup. These technologies individually are not new, so the Jamstack is really just a new and creative way to combine them. You can read more about it over at the Jamstack site.

One of the most important benefits of Jamstack is ease of deployment and hosting, which heavily influence the technologies we are using. By incorporating Netlify Functions (for backend CRUD operations with Airtable), we will be able to deploy our full-stack application to Netlify. The simplicity of this process is the beauty of the Jamstack.

As far as the database goes, I chose Airtable because I wanted something that was easy to get started with. I also didn’t want to get bogged down in technical database details, so Airtable fits perfectly. Here are a few of the benefits of Airtable:

  1. You don’t have to deploy or host a database yourself
  2. It comes with an Excel-like GUI for viewing and editing data
  3. There’s a nice JavaScript SDK

What we’re building

For context going forward, we are going to build an app that you can use to track online courses you want to take. Personally, I take lots of online courses, and sometimes it’s hard to keep up with the ones in my backlog. This app will let us track those courses, similar to a Netflix queue.

 

One of the reasons I take lots of online courses is because I make courses. In fact, I have a new one available where you can learn how to build secure and production-ready Jamstack applications using React and Netlify (Serverless) Functions. We’ll cover authentication, data storage in Airtable, Styled Components, Continuous Integration with Netlify, and more! Check it out  →

Airtable setup

Let me start by clarifying that Airtable calls their databases “bases.” So, to get started with Airtable, we’ll need to do a couple of things.

  1. Sign up for a free account
  2. Create a new “base”
  3. Define a new table for storing courses

Next, let’s create a new database. We’ll log into Airtable, click on “Add a Base” and choose the “Start From Scratch” option. I named my new base “JAMstack Demos” so that I can use it for different projects in the future.

Next, let’s click on the base to open it.

You’ll notice that this looks very similar to an Excel or Google Sheets document. This is really nice for being able to work with data right inside of the dashboard. There are a few columns already created, but we’ll add our own. Here are the columns we need and their types:

  1. name (single line text)
  2. link (single line text)
  3. tags (multiple select)
  4. purchased (checkbox)

We should add a few tags to the tags column while we’re at it. I added “node,” “react,” “jamstack,” and “javascript” as a start. Feel free to add any tags that make sense for the types of classes you might be interested in.

I also added a few rows of data in the name column based on my favorite online courses:

  1. Build 20 React Apps
  2. Advanced React Security Patterns
  3. React and Serverless

The last thing to do is rename the table itself. It’s called “Table 1” by default. I renamed it to “courses” instead.

Locating Airtable credentials

Before we get into writing code, there are a couple of pieces of information we need to get from Airtable. The first is your API Key. The easiest way to get this is to go to your account page and look in the “Overview” section.

Next, we need the ID of the base we just created. I would recommend heading to the Airtable API page because you’ll see a list of your bases. Click on the base you just created, and you should see the base ID listed. The documentation for the Airtable API is really handy and has more detailed instructions for finding the ID of a base.

Lastly, we need the table’s name. Again, I named mine “courses” but use whatever you named yours if it’s different.

Project setup

To help speed things along, I’ve created a starter project for us in the main repository. You’ll need to do a few things to follow along from here:

  1. Fork the repository by clicking the fork button
  2. Clone the new repository locally
  3. Check out the starter branch with git checkout starter

There are lots of files already there. The majority of the files come from a standard create-react-app application with a few exceptions. There is also a functions directory which will host all of our serverless functions. Lastly, there’s a netlify.toml configuration file that tells Netlify where our serverless functions live. Also in this config is a redirect that simplifies the path we use to call our functions. More on this soon.

The last piece of the setup is to incorporate environment variables that we can use in our serverless functions. To do this, install the dotenv package.

npm install dotenv

Then, create a .env file in the root of the repository with the following. Make sure to use your own API key, base ID, and table name that you found earlier.

AIRTABLE_API_KEY=<YOUR_API_KEY>
AIRTABLE_BASE_ID=<YOUR_BASE_ID>
AIRTABLE_TABLE_NAME=<YOUR_TABLE_NAME>

Now let’s write some code!

Setting up serverless functions

To create serverless functions with Netlify, we need to create a JavaScript file inside of our /functions directory. There are already some files included in this starter directory. Let’s look in the courses.js file first.

const formattedReturn = require('./formattedReturn');
const getCourses = require('./getCourses');
const createCourse = require('./createCourse');
const deleteCourse = require('./deleteCourse');
const updateCourse = require('./updateCourse');

exports.handler = async (event) => {
  return formattedReturn(200, 'Hello World');
};

The core part of a serverless function is the exports.handler function. This is where we handle the incoming request and respond to it. In this case, we are accepting an event parameter which we will use in just a moment.

We are returning a call inside the handler to the formattedReturn function, which makes it a bit simpler to return a status and body data. Here’s what that function looks like for reference.

module.exports = (statusCode, body) => {
  return {
    statusCode,
    body: JSON.stringify(body),
  };
};

Notice also that we are importing several helper functions to handle the interaction with Airtable. We can decide which one of these to call based on the HTTP method of the incoming request.

  • HTTP GET → getCourses
  • HTTP POST → createCourse
  • HTTP PUT → updateCourse
  • HTTP DELETE → deleteCourse

Let’s update this function to call the appropriate helper function based on the HTTP method in the event parameter. If the request doesn’t match one of the methods we are expecting, we can return a 405 status code (method not allowed).

exports.handler = async (event) => {
  if (event.httpMethod === 'GET') {
    return await getCourses(event);
  } else if (event.httpMethod === 'POST') {
    return await createCourse(event);
  } else if (event.httpMethod === 'PUT') {
    return await updateCourse(event);
  } else if (event.httpMethod === 'DELETE') {
    return await deleteCourse(event);
  } else {
    return formattedReturn(405, {});
  }
};
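The if/else chain works fine; an equivalent design choice is a lookup map keyed by HTTP method. Here’s a self-contained sketch of that idea, with stub handlers standing in for the real Airtable helpers:

```javascript
// Sketch: dispatch by HTTP method via a lookup map instead of an if/else chain.
// The handlers below are stubs standing in for getCourses, createCourse, etc.
const formattedReturn = (statusCode, body) => ({
  statusCode,
  body: JSON.stringify(body),
});

const handlers = {
  GET: (event) => formattedReturn(200, { handled: 'GET' }),
  POST: (event) => formattedReturn(200, { handled: 'POST' }),
  PUT: (event) => formattedReturn(200, { handled: 'PUT' }),
  DELETE: (event) => formattedReturn(200, { handled: 'DELETE' }),
};

const handler = (event) => {
  const fn = handlers[event.httpMethod];
  // 405: method not allowed
  return fn ? fn(event) : formattedReturn(405, {});
};
```

Adding a new method then means adding one entry to the map rather than another branch.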

Updating the Airtable configuration file

Since we are going to be interacting with Airtable in each of the different helper files, let’s configure it once and reuse it. Open the airtable.js file.

In this file, we want to get a reference to the courses table we created earlier. To do that, we create a reference to our Airtable base using the API key and the base ID. Then, we use the base to get a reference to the table and export it.

require('dotenv').config();
var Airtable = require('airtable');
var base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(
  process.env.AIRTABLE_BASE_ID
);
const table = base(process.env.AIRTABLE_TABLE_NAME);
module.exports = { table };

Getting courses

With the Airtable config in place, we can now open up the getCourses.js file and retrieve courses from our table by calling table.select().firstPage(). The Airtable API uses pagination so, in this case, we are specifying that we want the first page of records (which is 100 records by default).

const courses = await table.select().firstPage();
return formattedReturn(200, courses);

Just like with any async/await call, we need to handle errors. Let’s surround this snippet with a try/catch.

try {
  const courses = await table.select().firstPage();
  return formattedReturn(200, courses);
} catch (err) {
  console.error(err);
  return formattedReturn(500, {});
}

Airtable returns a lot of extra information in its records. I prefer to simplify these records down to just the record ID and the values for each of the table columns we created above. Those values are found in the fields property. To do this, I used an Array map to format the data the way I want.

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  try {
    const courses = await table.select().firstPage();
    const formattedCourses = courses.map((course) => ({
      id: course.id,
      ...course.fields,
    }));
    return formattedReturn(200, formattedCourses);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

How do we test this out? Well, the netlify-cli provides us a netlify dev command to run our serverless functions (and our front-end) locally. First, install the CLI:

npm install -g netlify-cli

Then, run the netlify dev command inside of the directory.

This beautiful command does a few things for us:

  • Runs the serverless functions
  • Runs a web server for your site
  • Creates a proxy for the front end and serverless functions to talk to each other on port 8888.

Let’s open up the following URL to see if this works:

We are able to use /api/* for our API because of the redirect configuration in the netlify.toml file.
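For reference, that redirect typically looks something like this in netlify.toml (this is a sketch of Netlify’s standard redirect syntax; the starter project’s file may differ slightly):

```toml
[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
```

The `:splat` placeholder forwards whatever followed `/api/` on to the matching function.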

If successful, we should see our data displayed in the browser.

Creating courses

Let’s add the functionality to create a course by opening up the createCourse.js file. We need to grab the properties from the incoming POST body and use them to create a new record by calling table.create().

The incoming event.body comes in as a regular string, which means we need to parse it to get a JavaScript object.

const fields = JSON.parse(event.body);

Then, we use those fields to create a new course. Notice that the create() function accepts an array which allows us to create multiple records at once.

const createdCourse = await table.create([{ fields }]);

Then, we can return the createdCourse:

return formattedReturn(200, createdCourse);

And, of course, we should wrap things with a try/catch:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const fields = JSON.parse(event.body);
  try {
    const createdCourse = await table.create([{ fields }]);
    return formattedReturn(200, createdCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Since we can’t perform a POST, PUT, or DELETE directly in the browser web address (like we did for the GET), we need to use a separate tool for testing our endpoints from now on. I prefer Postman, but I’ve heard good things about Insomnia as well.

Inside of Postman, I need the following configuration.

  • url: localhost:8888/api/courses
  • method: POST
  • body: JSON object with name, link, and tags

After running the request, we should see the new course record is returned.

We can also check the Airtable GUI to see the new record.

Tip: Copy and paste the ID from the new record to use in the next two functions.

Updating courses

Now, let’s turn to updating an existing course. From the incoming request body, we need the id of the record as well as the other field values.

We can specifically grab the id value using object destructuring, like so:

const {id} = JSON.parse(event.body);

Then, we can use the spread operator to grab the rest of the values and assign them to a variable called fields:

const {id, ...fields} = JSON.parse(event.body);
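To make that destructuring concrete, here’s what it yields for a sample request body (the values are made up):

```javascript
// A sample body a client might send for an update (values made up).
const body = '{"id":"rec123","name":"React and Serverless","purchased":true}';

// Pull the record id out; every remaining property lands in `fields`.
const { id, ...fields } = JSON.parse(body);

// id is now "rec123"
// fields is now { name: "React and Serverless", purchased: true }
```

That `{ id, fields }` shape is exactly what Airtable’s update call expects next.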

From there, we call the update() function which takes an array of objects (each with an id and fields property) to be updated:

const updatedCourse = await table.update([{id, fields}]);

Here’s the full file with all that together:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id, ...fields } = JSON.parse(event.body);
  try {
    const updatedCourse = await table.update([{ id, fields }]);
    return formattedReturn(200, updatedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

To test this out, we’ll turn back to Postman for the PUT request:

  • url: localhost:8888/api/courses
  • method: PUT
  • body: JSON object with id (the id from the course we just created) and the fields we want to update (name, link, and tags)

I decided to append “Updated!!!” to the name of a course once it’s been updated.

We can also see the change in the Airtable GUI.

Deleting courses

Lastly, we need to add delete functionality. Open the deleteCourse.js file. We will need to get the id from the request body and use it to call the destroy() function.

const { id } = JSON.parse(event.body);
const deletedCourse = await table.destroy(id);

The final file looks like this:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id } = JSON.parse(event.body);
  try {
    const deletedCourse = await table.destroy(id);
    return formattedReturn(200, deletedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Here’s the configuration for the DELETE request in Postman.

  • url: localhost:8888/api/courses
  • method: DELETE
  • body: JSON object with an id (the same id from the course we just updated)

And, of course, we can double-check that the record was removed by looking at the Airtable GUI.

Displaying a list of courses in React

Whew, we have built our entire back end! Now, let’s move on to the front end. The majority of the code is already written. We just need to write the parts that interact with our serverless functions. Let’s start by displaying a list of courses.

Open the App.js file and find the loadCourses function. Inside, we need to make a call to our serverless function to retrieve the list of courses. For this app, we are going to make an HTTP request using fetch, which is built right in.

Thanks to the netlify dev command, we can make our request using a relative path to the endpoint. The beautiful thing is that this means we don’t need to make any changes after deploying our application!

const res = await fetch('/api/courses');
const courses = await res.json();

Then, store the list of courses in the courses state variable.

setCourses(courses)

Put it all together and wrap it with a try/catch:

const loadCourses = async () => {
  try {
    const res = await fetch('/api/courses');
    const courses = await res.json();
    setCourses(courses);
  } catch (error) {
    console.error(error);
  }
};

Open up localhost:8888 in the browser and we should see our list of courses.

Adding courses in React

Now that we have the ability to view our courses, we need the functionality to create new courses. Open up the CourseForm.js file and look for the submitCourse function. Here, we’ll need to make a POST request to the API and send the inputs from the form in the body.

The JavaScript Fetch API makes GET requests by default, so to send a POST, we need to pass a configuration object with the request. This options object will have these two properties.

  1. method → POST
  2. body → a stringified version of the input data

await fetch('/api/courses', {
  method: 'POST',
  body: JSON.stringify({
    name,
    link,
    tags,
  }),
});

Then, surround the call with try/catch and the entire function looks like this:

const submitCourse = async (e) => {
  e.preventDefault();
  try {
    await fetch('/api/courses', {
      method: 'POST',
      body: JSON.stringify({
        name,
        link,
        tags,
      }),
    });
    resetForm();
    courseAdded();
  } catch (err) {
    console.error(err);
  }
};

Test this out in the browser. Fill in the form and submit it.

After submitting the form, the form should be reset, and the list of courses should update with the newly added course.

Updating purchased courses in React

The list of courses is split into two different sections: one with courses that have been purchased and one with courses that haven’t been purchased. We can add the functionality to mark a course “purchased” so it appears in the right section. To do this, we’ll send a PUT request to the API.

Open the Course.js file and look for the markCoursePurchased function. In here, we’ll make the PUT request and include both the id of the course as well as the properties of the course with the purchased property set to true. We can do this by passing in all of the properties of the course with the spread operator and then overriding the purchased property to be true.

const markCoursePurchased = async () => {
  try {
    await fetch('/api/courses', {
      method: 'PUT',
      body: JSON.stringify({ ...course, purchased: true }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the button to mark one of the courses as purchased and the list of courses should update to display the course in the purchased section.

Deleting courses in React

And, following with our CRUD model, we will add the ability to delete courses. To do this, locate the deleteCourse function in the Course.js file we just edited. We will need to make a DELETE request to the API and pass along the id of the course we want to delete.

const deleteCourse = async () => {
  try {
    await fetch('/api/courses', {
      method: 'DELETE',
      body: JSON.stringify({ id: course.id }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the “Delete” button next to the course and the course should disappear from the list. We can also verify it is gone completely by checking the Airtable dashboard.

Deploying to Netlify

Now that we have all of the CRUD functionality we need on the front and back end, it’s time to deploy this thing to Netlify. Hopefully, you’re as excited as I am about how easy this is. Just make sure everything is pushed up to GitHub before we move into deployment.

If you don’t have a Netlify account, you’ll need to create one (like Airtable, it’s free). Then, in the dashboard, click the “New site from Git” option. Select GitHub, authenticate, then select the project repo.

Next, we need to tell Netlify which branch to deploy from. We have two options here.

  1. Use the starter branch that we’ve been working in
  2. Choose the master branch with the final version of the code

For now, I would choose the starter branch to ensure that the code works. Then, we need to choose a command that builds the app and the publish directory that serves it.

  1. Build command: npm run build
  2. Publish directory: build

Netlify recently shipped an update that treats React warnings as errors during the build process, which may cause the build to fail. I have updated the build command to CI= npm run build to account for this.

Lastly, click on the “Show Advanced” button, and add the environment variables. These should be exactly as they were in the local .env that we created.

The site should automatically start building.

We can click on the “Deploys” tab in the Netlify dashboard and track the build progress, although it does go pretty fast. When it is complete, our shiny new app is deployed for the world to see!

Welcome to the Jamstack!

The Jamstack is a fun new place to be. I love it because it makes building and hosting fully-functional, full-stack applications like this pretty trivial. I love that Jamstack makes us mighty, all-powerful front-end developers!

I hope you see the same power and ease with the combination of technology we used here. Again, Jamstack doesn’t require that we use Airtable, React or Netlify, but we can, and they’re all freely available and easy to set up. Check out Chris’ serverless site for a whole slew of other services, resources, and ideas for working in the Jamstack. And feel free to drop questions and feedback in the comments here!


The post Going Jamstack with React, Serverless, and Airtable appeared first on CSS-Tricks.


Where Does Logic Go on Jamstack Sites?

Here’s something I had to get my head wrapped around when I started building Jamstack sites. There are these different stages your site goes through where you can put logic.

Let’s look at a specific example so you can see what I mean. Say you’re making a website for a music venue. The most important part of the site is a list of events, some in the past and some upcoming. You want to make sure to label them as such or design that to be very clear. That is date-based logic. How do you do that? Where does that logic live?

There are at least four places to consider when it comes to Jamstack.

Option 1: Write it into the HTML ourselves

Literally sit down and write an HTML file that represents all of the events. We’d look at the date of each event, decide whether it’s in the past or the future, and write different content for either case. Commit and deploy that file.

<h1>Upcoming Event: Bill's Banjo Night</h1>
<h1>Past Event: 70s Classics with Jill</h1>

This would totally work! But the downside is that we’d have to update that HTML file all the time — once Bill’s Banjo Night is over, we have to open our code editor, change “Upcoming” to “Past” and re-upload the file.

Option 2: Write structured data and do logic at build time

Instead of writing all the HTML by hand, we create a Markdown file to represent each event. Important information like the date and title is in there as structured data. That’s just one option. The point is we have access to this data directly. It could be a headless CMS or something like that as well.

Then we set up a static site generator, like Eleventy, that reads all the Markdown files (or pulls the information down from your CMS) and builds them into HTML files. The neat thing is that we can run any logic we want during the build process. Do fancy math, hit APIs, run a spell-check… the sky is the limit.

For our music venue site, we might represent events as Markdown files like this:

---
title: Bill's Banjo Night
date: 2020-09-02
---

The event description goes here!

Then, we run a little bit of logic during the build process by writing a template like this:

{% if event.date > now %}
  <h1>Upcoming Event: {{event.title}}</h1>
{% else %}
  <h1>Past Event: {{event.title}}</h1>
{% endif %}

Now, each time the build process runs, it looks at the date of the event, decides if it’s in the past or the future and produces different HTML based on that information. No more changing HTML by hand!
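In Eleventy specifically, that comparison could live in a filter. Here’s a minimal sketch; the filter name isUpcoming is my own invention, and the registration step is shown in comments:

```javascript
// Sketch: a build-time filter that decides "upcoming vs. past".
// Note the `now` it captures defaults to the time the build runs.
function isUpcoming(date, now = new Date()) {
  return new Date(date) > now;
}

// In .eleventy.js you'd register it like:
// module.exports = (eleventyConfig) => {
//   eleventyConfig.addFilter("isUpcoming", isUpcoming);
// };
```

A template could then branch with something like {% if event.date | isUpcoming %} — but, as discussed next, the answer is frozen at build time.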

The problem with this approach is that the date comparison only happens one time, during the build process. The now variable in the example above is going to refer to the date and time the build happens to run. And once we’ve uploaded the HTML files that build produced, those won’t change until we run the build again. This means that once an event at our music venue is over, we’d have to re-run the build to make sure the website reflects that.

Now, we could automate the rebuild so it happens once a day, or heck, even once an hour. That’s literally what the CSS-Tricks conferences site does via Zapier.

The conferences site is deployed daily using a Zapier automation that triggers a Netlify deploy, ensuring information is current.

But this could rack up build minutes if you’re using a service like Netlify, and there might still be edge cases where someone gets an outdated version of the site.

Option 3: Do logic at the edge

Edge workers are a way of running code at the CDN level whenever a request comes in. They’re not widely available at the time of this writing but, once they are, we could write our date comparison like this:

// THIS DOES NOT WORK
import eventsList from "./eventsList.json"

function onRequest(request) {
  const now = new Date();
  eventsList.forEach(event => {
    if (event.date > now) {
      event.upcoming = true;
    }
  })
  const props = {
    events: eventsList,
  }
  request.respondWith(200, render(props), {})
}

The render() function would take our processed list of events and turn it into HTML, perhaps by injecting it into a pre-rendered template. The big promise of edge workers is that they’re extremely fast, so we could run this logic server-side while still enjoying the performance benefits of a CDN.
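As a rough illustration, that `render()` function could look something like this — a hypothetical sketch that assumes each event has already been flagged with `upcoming` by the code above:

```javascript
// Minimal sketch of render(): turn the processed event list into HTML
function render(props) {
  const items = props.events
    .map(event => {
      const label = event.upcoming ? "Upcoming Event" : "Past Event";
      return `<h1>${label}: ${event.title}</h1>`;
    })
    .join("\n");
  // Inject the list into a bare-bones page shell
  return `<!doctype html>\n<body>\n${items}\n</body>`;
}

console.log(render({
  events: [
    { title: "Bill's Banjo Night", upcoming: true },
  ],
}));
```

In practice you’d probably inject the list into a pre-rendered template rather than building the whole page in string literals, but the shape of the work is the same.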

And because the edge worker runs every time someone requests the website, we can be sure that they’re going to get an up-to-date version of it.

Option 4: Do logic at run time

Finally, we could pass our structured data to the front end directly, for example, in the form of data attributes. Then, we write JavaScript that performs whatever logic we need on the user’s device and manipulates the DOM on the fly.

For our music venue site, we might write a template like this:

```
<h1 data-date="{{event.date}}">{{event.title}}</h1>
```

Then, we do our date comparison in JavaScript after the page is loaded:

```javascript
function processEvents() {
  const now = new Date()
  // Grab every element that carries an event date
  const events = document.querySelectorAll('[data-date]')
  events.forEach(event => {
    const eventDate = new Date(event.getAttribute('data-date'))
    if (eventDate > now) {
      event.classList.add('upcoming')
    } else {
      event.classList.add('past')
    }
  })
}
```

The now variable reflects the time on the user’s device, so we can be pretty sure the list of events will be up-to-date. Because we’re running this code on the user’s device, we could even get fancy and do things like adjust the way the date is displayed based on the user’s language or timezone.
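The built-in `Intl.DateTimeFormat` API is one way to do that localized formatting. Here’s a small sketch — omitting the `locale` argument falls back to the runtime’s default locale, which in a browser reflects the user’s own language setting:

```javascript
// Format an event's date for a given (or the user's default) language.
function formatEventDate(isoDate, locale) {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: "full",
    timeZone: "UTC", // pinned for a predictable example; omit to use the user's zone
  }).format(new Date(isoDate));
}

console.log(formatEventDate("2020-09-02", "en-US"));
console.log(formatEventDate("2020-09-02", "de-DE"));
```

The same ISO date string comes out as “Wednesday, September 2, 2020” for one visitor and a German-formatted equivalent for another — no extra build step required.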

And unlike the previous points in the lifecycle, run time lasts as long as the user has our website open. So, if we wanted to, we could run processEvents() every few seconds and our list would stay perfectly up-to-date without having to refresh the page. This would probably be unnecessary for our music venue’s website, but if we wanted to display the events on a billboard outside the building, it might just come in handy.

Where will you put the logic?

Although one of the core concepts of Jamstack is that we do as much work as we can at build time and serve static HTML, we still get to decide where to put logic.

Where will you put it?

It really depends on what you’re trying to do. Parts of your site that hardly ever change are totally fine to complete at edit time. When you find yourself changing a piece of information again and again, it’s probably a good time to move that into a CMS and pull it in at build time. Features that are time-sensitive (like the event examples we used here), or that rely on information about the user, probably need to happen further down the lifecycle at the edge or even at runtime.


The post Where Does Logic Go on Jamstack Sites? appeared first on CSS-Tricks.
