Tag: JAMstack

Netlify Identity, a Key Aspect to Jamstack Development

(This is a sponsored post.)

Netlify is amazing at static file hosting, but it’s really so much more than that. You can build any sort of website, even highly dynamic apps, with the Jamstack approach and static file hosting at the core.

Say you want to build a TODO app with users. Those users will need to sign up and log in. Can’t do that with a static site, right? You can, actually. Netlify helps with Netlify Identity, a robust offering they’ve had for years. Enabling it is just a few clicks in the admin UI, and they even provide auth widgets so you have to build precious little to get this working.
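On the client, wiring up the widget takes only a few lines, and handing the resulting token to your backend is plain string handling. A minimal sketch, assuming the `netlify-identity-widget` package's commonly used API; the `authHeaders` helper name is mine:

```javascript
// Sketch: client-side login via Netlify Identity, then passing the user's
// token to your own backend. The widget calls are shown as comments since
// they need a browser environment:
//
//   import netlifyIdentity from "netlify-identity-widget";
//   netlifyIdentity.init();
//   netlifyIdentity.open(); // shows the hosted login/signup modal
//
// Once logged in, the user object carries a token you attach to requests:
function authHeaders(user) {
  if (!user || !user.token) return {};
  return { Authorization: "Bearer " + user.token.access_token };
}
```

From your page you'd then call something like `fetch("/.netlify/functions/todos", { headers: authHeaders(user) })`.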

Showing a login widget powered by Netlify Identity.

Now you’ve got a website with authentication, great! But how do you keep going with your TODO app? You’ll need some kind of cloud storage for the data on your users’ lists. For that, you’ll have to reach outside of Netlify to find a cloud storage provider you like. Netlify has had a first-class integration with Fauna for years, so that’s a good choice.

You’ll need to communicate with Fauna, of course, and since this is a static site, that communication happens in JavaScript. Fortunately, your client-side JavaScript can talk to your own server-side JavaScript through Netlify Functions. That’s right, Netlify helps you build and deploy Lambda functions. This means the Lambda functions can do the communicating with Fauna, keeping your API keys safe.

Those are the building blocks. This is a well-worn approach, and really at the heart of Jamstack. Need a head start? Netlify has templates for this kind of thing. Here are some examples with this approach in mind: netlify-fauna-todo-app and netlify-faunadb-example. We even have a tutorial that covers that. And there’s a one-minute video demo:

There you have it, a website that is every bit as dynamic as something you’d build with a traditional server. Only now, you’re building with Netlify, meaning you get so many other advantages, like deploying from commits to a Git repository, build previews, and every other amazing feature Netlify offers.

Netlify Identity, a Key Aspect to Jamstack Development originally published on CSS-Tricks. You should get the newsletter and become a supporter.



Jamstack TV

That’s the name of Netlify’s YouTube Channel. Love that. I linked up Rich’s talk the other day, which was a part of this past JamstackConf, but now all the talks are up on there. Rich got to talk about Svelte, but there are talks on Astro, RedwoodJS, Eleventy, Vue… all the cool kids really.

I’m subscribed, of course. I’ve been on a real YouTube bender, spending most of my non-active screen time there. I’d say half of what I watch is tech content (like JamstackTV) and the rest is watching people play video games or puzzles and watching people talk about where they live or have traveled (preferably with an ASMR-friendly voice). I find if you subscribe to enough stuff you actually like, the algorithm really kicks in and finds you more stuff you’re very likely to enjoy. We’ve got a fairly active channel ourselves!




The Semantics of Jamstack

The past year has seen a healthy debate around the term ‘Jamstack’ as the definition gets stretched to include new use cases. I recently posted my take on a Jamstack definition in “Static vs. Dynamic vs. Jamstack: Where’s The Line?” In this article, I want to look at the evolution of Jamstack and what that means for the term.

Developers were creating static sites long before the term Jamstack was coined. The most common use case was an easy way for developers to host their blog or open-source documentation on GitHub Pages. Some pioneers were pushing larger, commercial projects on static sites, but it certainly wasn’t as common as it is today.

Static sites had been perceived as limited — old technology from the 90s. Why would forward-looking companies want to use this ancient way of building websites? Phil Hawksworth hit it right on the button in his IJS talk about static being a loaded term:

Are we talking about a static experience, or are we talking about a static architecture?

The potential for ambiguity is incredibly confusing, especially to non-developers. The static websites of the 90s are not the same as modern static sites. There are new technologies that warrant giving static sites a second look:

  • JavaScript has evolved into a powerful language for building applications in the browser. Gmail launched in 2004 and was the first mainstream application to make heavy use of Ajax.
  • Static Site Generators (SSGs) have brought many dynamic site advantages such as layouts and separating content from code. Jekyll was the first widely popular SSG, launching in 2008.
  • CDNs used to be a technology that only large enterprises could afford. When AWS CloudFront launched in 2008, you could set up a CDN in minutes, and at a small scale, it would only cost you a few dollars.
  • Git workflows, and the CI/CD tools built around them, have made error-prone FTP deployments a thing of the past.
  • Ecosystem — there’s a growing number of standalone tools you can drop into a static site to enable extra functionality, from search and eCommerce to databases, comments and more.
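The core move an SSG makes is small enough to sketch: separate content from a shared layout and bake every page to a file at build time. The template syntax below is an invented stand-in, not any particular SSG's:

```javascript
// Toy static site generation: content objects plus one layout in, HTML out.
function renderPage(layout, page) {
  return layout
    .replace("{{title}}", page.title)
    .replace("{{content}}", page.content);
}

const layout =
  "<html><head><title>{{title}}</title></head><body>{{content}}</body></html>";

const pages = [
  { title: "Home", content: "<h1>Welcome</h1>" },
  { title: "About", content: "<h1>About us</h1>" },
];

// A real build step would write each entry to disk; here we just collect them.
const site = pages.map(p => ({
  path: p.title.toLowerCase() + ".html",
  html: renderPage(layout, p),
}));
```

Everything dynamic (layouts, loops over content) happens once at build time, and the output is plain files a webserver can hand back untouched.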

Jamstack helped change people’s perception of static websites. And the shifting winds on static were what led Matt Biilmann to coin the term Jamstack itself in 2016. Suddenly, we had a word that captured the benefits of modern static sites without the baggage of static. Cassidy Williams does a fantastic job of capturing the essence of Jamstack:

Jamstack is about building web applications in the same style as mobile applications: the UI is compiled, and then data is pulled in as needed.

Jamstack struck a chord with many WordPress developers in particular. Coming from the world of intricate theming and plugin APIs, the simplicity and control you got with an SSG was refreshing. The movement had begun, and a community formed around Jamstack’s decoupled approach.

As Jamstack grew in popularity, so did the size and complexity of projects. We saw the principles of Jamstack move beyond websites, and as they made their way into web applications, we soon reached the technical limits of what a static site was capable of doing. Platforms added new features and workflows to expand Jamstack principles, allowing larger and more complex applications to take a Jamstack approach.

I’m excited to take part in this evolution with CloudCannon. We’re seeing a significant shift in how developers build software for the web. There’s a flourishing ecosystem of specialty tools and platforms enabling front-end developers to do more, and for sophisticated applications to live at the edge.

My concern is we can’t agree on what Jamstack actually means. We have succinct definitions that paint a clear boundary of what is and isn’t Jamstack. Many of my favorites are in this article. We’re seeing the term Jamstack used for more and more dynamic behavior. Some of the community is on board with this usage, and some aren’t. Ambiguity and perception were the original reasons for coining the term, and we’re at risk of coming full circle here.

It’s a difficult problem the Jamstack community faces because there is so much crossover between the original meaning of “Jamstack” and the new, evolved, more dynamic usage of the word. I’m conflicted myself because I love the idea of applying Jamstack principles to more dynamic approaches. I briefly considered using “Jamstack” to describe the static usage and “Jamstack++” for the more dynamic usage, but I quickly realized that would probably create more confusion than it solves.

Matt Biilmann nailed it with Netlify’s announcement of Distributed Persistent Rendering (DPR):

For any technology, the hardest part is not establishing simplicity, but protecting it over time.

This perfectly captures my thoughts. It’s reassuring to know I’m not limited if I build a new project with a Jamstack approach. If my site gets enormous or I need dynamic behavior, I have options. Without these options, Jamstack would be seen as a limited technology for small use cases. On the other hand, the more we emphasize these more dynamic solutions, the further we get from the elegant simplicity that created the Jamstack movement in the first place.

DPR is an exciting new technology. It’s an elegant solution to the painful limitation of prebuilding large sites. For a 100k page site, would I make the tradeoff of prebuilding a subset of those pages and have the rest build on demand the first time they’re requested, thus reducing build time significantly? Heck yes, I would! That’s a tradeoff that makes sense.

I’ve been doing a lot of thinking about where DPR fits into the Jamstack picture, mostly because it’s right on the edge. Whether you include it or exclude it from the Jamstack umbrella has rippling ramifications.

Sean Davis has a Jamstack definition I’m a fan of:

Jamstack is an architecture for atomically building and delivering precompiled, decoupled front-end web projects from the edge.

This resonates with what I believe Jamstack is all about. If we’re to include DPR in this definition, it needs some work:

Jamstack is an architecture for atomically building and delivering precompiled (or on-demand generated webpages, but only if it’s the first request and then it’s persisted), decoupled front-end web projects from the edge.

The official Jamstack definition works better for DPR:

Jamstack is the new standard architecture for the web. Using Git workflows and modern build tools, pre-rendered content is served to a CDN and made dynamic through APIs and serverless functions.

DPR either delivers content using a serverless function or as a static file through a CDN, so it fits the definition.

It’s interesting to see how the definition has changed over time. Before late 2020, the official Jamstack definition, posted directly on Jamstack.org at the time, was as follows:

Fast and secure sites and apps delivered by pre-rendering files and serving them directly from a CDN, removing the requirement to manage or run web servers.

Technology evolves over time, as does language, so it’s great to see the definition tweaked to keep up with the times. Introducing “serverless” into the definition makes sense on one hand as the technology is becoming more approachable to front-end developers, who are the predominant Jamstack audience. On the other hand, it goes against the core Jamstack principles of pre-rendering and decoupling. Do we need to update these core principles too?

I’m still processing all of these thoughts and ideas myself. I’m a big fan of Jamstack; it has served as a major catalyst for the growth of static site generators and given us a language to talk about the approach of pre-rendering and decoupling websites. Looking ahead, I can see five directions Jamstack can go:

  1. Jamstack is cemented in its original meaning, prerendering and decoupling. The more dynamic, Jamstack-inspired approaches get their own name.
  2. Jamstack evolves and the definition and principles are expanded. As a result, the meaning likely becomes more ambiguous.
  3. Jamstack is the name of the community. It’s a set of guidelines and there are no hard and fast rules.
  4. Jamstack has lifted the baggage of static, and we can talk about static vs. hybrid vs. dynamic websites.
  5. Jamstack becomes mainstream enough that we can simply call it modern web development.

There are people in all of these camps pulling in different directions, leading to much confusion and discussion in the community. What is clear is that this community of people is deeply passionate about this approach to building websites. Frankly, I couldn’t be more excited about the innovations happening in this space. What we need is consensus and a path forward. Without this, I believe, for better or worse, we’re going to end up with a mix of options 3, 4 and 5.




Jamstack Developers’ Favorite Frameworks of 2021

Which new framework should I learn this year? Is it time to ditch my CMS? What tools should I pick up if I want to scale my site to an audience of millions? The 2021 Jamstack Community Survey is here with answers to those questions and more. 

For the past two years, Netlify has conducted the Jamstack Community Survey to better understand our growing group of developers—the insights inform our services, and they also help developers learn from one another. Our survey data provides a sense of best practices as well as an idea of what else is happening in the community.

What we’re seeing this year: it’s never been a better time to be a developer in the Jamstack community! Jamstack has gone mainstream and the ecosystem is thriving. Jamstack is becoming the default choice for web developers at all stages of their careers across different geographies and touching all industries, and the community is only getting bigger. We also saw a huge rise in the percentage of students in our community over the last year, a great sign for a growing ecosystem.

In 2021, Netlify received more than 7,000 responses to the Jamstack Community Survey. This is more than double the number of responses we received in 2020, confirming the growth of the Jamstack community. 

Here are a few of the highlights from our more technical findings…

Jamstack developers work differently at scale.

32% of Jamstack developers are building sites for audiences of millions of users, but the tools they use and their development priorities are different: for instance, they are more likely to specialize in front-end or back-end work, and they are more likely to consider mobile devices a key target.

JavaScript dominates programming languages for the web—but TypeScript is giving it a run for its money.

For 55% of developers, JavaScript is their primary language. But TypeScript is coming from behind with a growing share.

A plot chart with colored dots representing different languages. The y-axis is satisfaction, the x-axis is usage. JavaScript is the most used and halfway up the satisfaction axis. TypeScript is at the top of the satisfaction axis and halfway along the usage axis.

Figma is almost the only design tool that matters.

When it comes to design tools, more than 60% of survey respondents use Figma and are happier with it than the users of any other design tool we asked about.

A plot chart with colored dots representing different design apps. The y-axis is satisfaction, the x-axis is usage. Figma is at the upper-right corner of the chart while everything else is clustered toward the bottom left.

React still reigns supreme for frameworks.

React continues to dominate the major frameworks category in usage and satisfaction, and Next.js continues to grow alongside it. But we also saw growth and higher satisfaction from a challenger framework, Vue.

A plot chart with colored dots representing different frameworks. The y-axis is satisfaction, the x-axis is usage. React is at the far right, but halfway up the satisfaction axis. Express is at the top of the satisfaction axis but between 10-20% usage.

WordPress leads in CMS usage.

WordPress remains the clear leader as a content management system, but it’s not well-liked as a standalone solution. When used in a headless configuration, users reported much higher satisfaction. This was a breakout year for other headless CMSs like Sanity and Strapi.

A plot chart with colored dots representing different content management systems. The y-axis is satisfaction, the x-axis is usage. WordPress is all the way at the bottom right corner of the chart, showing high usage but low satisfaction. Sanity has the highest satisfaction, but is between 10-15% usage.

And that’s just a taste of what we learned. To view the complete findings of the 2021 Jamstack Community Survey, visit our survey website.




Static vs. Dynamic vs. Jamstack: Where’s The Line?

You’ll often hear developers talking about “static” vs. “dynamic” sites, or you may have heard someone use the term Jamstack. What do these terms mean, and when does a “static” site become either a Jamstack or dynamic site? These questions sound simple, but they’re more nuanced than they appear. Let’s explore these terms to gain a deeper understanding of Jamstack.

Finding the line

What’s the difference between a chair and a stool? Most people will respond that a chair has four legs and back support, whereas a stool has three legs with no back support.

Two brown backed leather chairs with a black frame and legs around a white table with a decorative green succulent plant in a white vase.
Credit: Rumman Amin

Two tall brown barstools with three legs and a brass frame under a natural wood countertop with a decorative green houseplant.
Credit: Rumman Amin

OK, that’s a great starting point, but what about these?

The more stool-like a chair becomes, the fewer people will unequivocally agree that it’s a chair. Eventually, we’ll reach a point where most people agree it’s a stool rather than a chair. It may sound like a silly exercise, but if we want to have a deep appreciation of what it means to be a chair, it’s a valuable one. We find out where the limits of a chair are for most people. We also build an understanding of the gray area beyond. Eventually, we get to the point where even the biggest die-hard chair fans concede and admit there’s a stool in front of them.

As interesting as chairs are, this is an article about website delivery technology. Let’s perform this same exercise for static, dynamic, and Jamstack websites.

At a high level

When you go to a website in your browser, there’s a lot going on behind the scenes:

  1. Your browser performs a DNS lookup to turn the domain name into an IP address.
  2. It requests an HTML file from that IP address.
  3. The webserver sends back the requested file.
  4. As the browser renders the web page, it may come across a reference for an asset, such as a CSS, JavaScript, or image file. The browser then performs a request for this asset.
  5. This cycle continues until the browser has all the files for the web page. It’s not unusual for a single webpage to make 50+ requests.

For every request, the response from the webserver is always a static file, even on a dynamic website. You could save these files to a USB drive or email them to a friend, just like any other file on your computer.

When comparing static and dynamic, we’re talking about what the webserver is doing. On a static site, the files the browser requests already exist on the webserver. The webserver sends them back exactly as they are. On a dynamic site, the response gets generated by software. This software might connect to a database to retrieve data, build a layout from template files, and add today’s date to the footer. It does all of this for every request.

That’s the foundational difference between static and dynamic websites.

Where does Jamstack fit in?

Static websites are restrictive. They’re great for informational websites; however, you can’t have any dynamic content or behavior by definition. Jamstack blurs the line between static and dynamic. The idea is to take advantage of all the things that make static websites awesome while enabling dynamic functionality where necessary.

The ‘stack’ in Jamstack is a misnomer. The truth is, Jamstack is not a stack at all. It’s a philosophy that exhibits a striking resemblance to The 5 Pillars of the AWS Well-Architected Framework. The ambiguity in the term has led to extensive community discussion about what it means to be Jamstack.

What is Jamstack?

Jamstack is a superset of static. But to truly understand Jamstack, let’s start with the seeds that led to the coining of the term.

In 2002, the late Aaron Swartz published a blog post titled “Bake, Don’t Fry.” While Aaron didn’t coin “Bake, Don’t Fry,” it’s the first time I can find someone recognizing the benefits of static websites while breaking out of the perceived constraints of the word.

I care about not having to maintain cranky AOLserver, Postgres and Oracle installs. I care about being able to back things up with scp. I care about not having to do any installation or configuration to move my site to a new server. I care about being platform and server independent.

If we trawl through history, we can find similar frustrations that led to Jamstack seeds:

  • Ben and Mena Trott created MovableType because of a [d]issatisfaction with existing blog CMSes — performance, stability.
  • Tom Preston-Werner created Jekyll to move away from complexity:

    I already knew a lot about what I didn’t want. I was tired of complicated blogging engines like WordPress and Mephisto. I wanted to write great posts, not style a zillion template pages, moderate comments all day long, and constantly lag behind the latest software release.

  • Steve Francia created Hugo for performance:

    The past few years this blog has [been] powered by wordpress [sic] and drupal prior to that. Both are are fine pieces of software, but over time I became increasingly disappointed with how they are both optimized for writing content even though significantly most common usage is reading content. Due to the need to load the PHP interpreter on each request it could never be considered fast and consumed a lot of memory on my VPS.

The same themes surface as you look at the origins of many early Jamstack tools:

  • Reduce complexity
  • Improve performance
  • Reduce vendor lock-in
  • Better workflows for developers

In the past 20 years, JavaScript has evolved from a language for adding small interactions to a website to becoming a platform for building rich web applications in the browser. In parallel, we’ve seen a movement of splitting large applications into smaller microservices. These two developments gave rise to a new way of building websites where you could have a static front-end decoupled from a dynamic back-end.

In 2015, Mathias Biilmann wanted to talk about this modern way of building websites but was struggling with the constricting definition of static:

We were in this space of modern static websites. That’s a really bad description of what we’re doing, right? And we kept having that problem that, talking to people about static sites, they would think about something very static. They would think about a brochure or something with no moving parts. A little one-pager or something like that.

To break out of these constraints, he coined the term “Jamstack” to talk about this new approach, and it caught on like wildfire. What was old static technology from the 90s became new again and pushed to new limits. Many developers caught on to the benefits of the Jamstack approach, which helped Jamstack grow into the thriving ecosystem it is today.

Aaron Swartz put it nicely, 13 years before Jamstack was coined: keep a strict separation between input (which needs dynamic code to be processed) and output (which can usually be baked). In other words, decouple the front end from the back end. Prerender content whenever possible. Layer on dynamic functionality where necessary. That’s the crux of Jamstack.

The reasons you might want to build a Jamstack site over a dynamic site come down to the six pillars of Jamstack:


Security

Jamstack sites have fewer moving parts and less surface area for malicious exploitation from outside sources.


Scale

Jamstack sites are static where possible. Static sites can live entirely in a CDN, making them much easier and cheaper to scale.


Performance

Serving a web page from a CDN rather than generating it from a centralized server on-demand improves the page load speed.


Maintainability

Static websites are simple. You need a webserver capable of serving files. With a dynamic site, you might need an entire team to keep a website online and fast.


Portability

Again, a static website is made up of files. As long as you find a webserver capable of serving website files, you can move your site anywhere.

Developer experience

Git workflows are a core part of software development today. With many legacy CMSs, it’s difficult to have Git development workflows. With a Jamstack site, everything is a file, making it seamless to use Git.

Chris touches on some of these points in a deep-dive comparison between Jamstack and WordPress. He also compares the reasons for choosing a Jamstack architecture versus a server-side one in “Static or Not?”.

Let’s use these pillars to evaluate Jamstack use cases.

Where is the edge of static and Jamstack?

Now that we have the basics of static and Jamstack, let’s dive in and see what lies at the edge of each definition. We have four categories each edge case can fall under.

  • Static – This strictly adheres to the definition of static.
  • Basically static – While not precisely static, most people would call it a static site.
  • Jamstack – A static frontend decoupled from a dynamic backend.
  • Dynamic – Renders web pages on-demand.

Many of these use cases can be placed in multiple categories. In this exercise, we’re putting them in the most restrictive category they fit.

JavaScript interaction: Static

Let’s start with an easy one. I have a static site that uses JavaScript to create a slideshow of images.

The HTML page, JavaScript, and images are all static files. All of the HTML manipulation required for the slideshow to function happens in the browser with no external influence.
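The slideshow's core is ordinary client-side state, no server involved. A tiny sketch; the function name is mine:

```javascript
// Advance the slideshow index, wrapping from the last image back to the first.
function nextSlide(current, total) {
  return (current + 1) % total;
}
// In the browser, a timer or click handler would call this and swap the
// visible <img> src from a static list of image URLs.
```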

Cookies: Static

I have a static site that adds a banner to the top of the page using JavaScript if a cookie exists. A cookie is just a header. The rest of the files are static.
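The check itself is plain string handling on that header; the cookie name below is an example:

```javascript
// Does a cookie string (e.g. document.cookie) contain a cookie by this name?
function hasCookie(cookieString, name) {
  return cookieString
    .split(";")
    .some(part => part.trim().startsWith(name + "="));
}
// In the browser: if (hasCookie(document.cookie, "returning")) showBanner();
```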

External assets: Basically Static

On a web page, we can load images or JavaScript from an external source. This external source may generate these assets dynamically on request. Would that mean we have a dynamic site?

Most people, including myself, would consider this a static site because it basically is. But if we’re strict to the definition, it doesn’t fit the bill. Having any part of the page generated dynamically defiles the sacred harmony of static.

iFrames: Basically Static

An inline frame allows you to embed an HTML page within another HTML page. iFrames are commonly used for embedding Google Maps, Facebook Like buttons, and YouTube videos on a webpage.

Again, most people would still consider this a static site. However, these embeds are almost always from a dynamically-generated source.

Forms: Basically Static

A static site can undoubtedly have a form on it. The dilemma comes when you submit it. If you want to do something with the data, you almost certainly need a dynamic back-end. There are plenty of form submission services you can use as the action for your form.

I can see two ways to argue this:

  1. You’re submitting a form to an external website, and it happens to redirect back afterward. This separation means the definition of static remains intact.
  2. This external service is a core workflow on your website, so the definition of static no longer works.

In reality, most people would still consider this a static site.

Ajax requests: Jamstack

An Ajax request allows a developer to request data from an external source without reloading the page. We’re in the same boat as the above situations of relying on a third party. It’s possible the endpoint for the Ajax call is a static JSON file, but it’s more likely that it’s dynamically-generated.

The nature of how Ajax data is typically used on a website pushes it past a static website into Jamstack territory. It fits well with Jamstack as you can have a site where you prerender everything you can, then use Ajax to layer on any dynamic functionality or content on the site.
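A sketch of that layering, with the fetcher injected so the endpoint (here an assumed `/api/todos`) can be anything from a static JSON file to a serverless function:

```javascript
// Prerendered shell + Ajax data: fetch the dynamic part and render it in.
async function hydrateTodos(fetchImpl) {
  const res = await fetchImpl("/api/todos"); // static JSON or dynamic endpoint
  const todos = await res.json();
  return todos.map(t => "<li>" + t.text + "</li>").join("");
}
// In the browser: hydrateTodos(fetch).then(html => list.innerHTML = html);
```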

Embedded eCommerce: Jamstack

There are services that allow you to add eCommerce, even to static websites. Behind the scenes, they’re essentially making Ajax requests to manage items in a shopping cart and collect payment details.

Single page application (SPA): Jamstack

The title alone puts it out of static site contention. A SPA uses Ajax calls to request data. The presentation layer lives entirely in the front end, making it Jamtastic.

Ajax call to a serverless function: Jamstack

Whether the endpoint of an Ajax call is serverless with something like AWS Lambda, goes to your Kubernetes clustered Node.js back-end, or a simple PHP back-end, it doesn’t matter. The key for Jamstack is the front end is independent of the back end.

Reverse proxy in front of a webserver: Static

Adding a reverse proxy in front of the webserver for a static site must make it dynamic, right? Well, not so fast. While a proxy is software that adds a dynamic element to the network, as long as the file on the server is precisely the file the browser receives, it’s still static.

A webserver, modem, and every piece of network infrastructure in between are running software. If adding a proxy makes a static site dynamic, then nothing is static.

CDN: Static

A CDN is a globally-distributed reverse proxy, so it falls into the same category as a reverse proxy. CDNs often add their own headers. This still doesn’t impact the prestigious static status as the headers aren’t part of the file sitting on the server’s hard drive.

CDN in front of a dynamic site with a 200-year cache expiration time: Dynamic

OK, 200 years is a long expiry time, I’ll give you that. There are two reasons this is neither a static nor Jamstack site:

  1. The first request isn’t cached, so it generates on demand.
  2. CDNs aren’t designed for persistent storage. If, after one week, you’ve only had five hits on your website, the CDN might purge your web page from the cache. It can always retrieve the web page from the origin server, which would dynamically render the response.

WordPress with a static output: Static

Using a WordPress plugin like WP2Static lets you create and manage your website in WordPress and output a static website whenever something changes.

When you do this, the files the browser requests already exist on the webserver, making it a static website—a subtle but important distinction from having a CDN in front of a dynamic site.

Edge computing: Dynamic

Many companies are now offering the ability to run dynamic code at the edge of a CDN. It’s a powerful concept because you can have dynamic functionality without adding latency to the user. You can even use edge computation to manipulate HTML before sending it to the client.

It comes down to how you’re using edge functions. You could use an edge function to add a header to particular requests. I would argue this is still a static site. Push much beyond this, where you’re manipulating the HTML, and you’ve crossed the dynamic boundary.

It’s hard to argue it’s a Jamstack site as it doesn’t adhere to some of the fundamental benefits: scale, maintainability, and portability. Now, you have a piece of your core infrastructure that’s changing HTML on every request, and it will only work on that particular hosting infrastructure. That’s getting pretty far away from the blissful simplicity of a static site.

One of the elegant things about Jamstack is the front end and back end are decoupled. The backend is made up of APIs that output data. They don’t know or care how the data is used. The front end is the presentation layer. It knows where to get dynamic data from and how to render it. When you break this separation of concerns, you’ve crossed into a dynamic world.

Distributed Persistent Rendering (DPR): Dynamic

DPR is a strategy to reduce long build times on large static site generator (SSG) sites. The idea is the SSG builds a subset of the most popular pages. For the rest of the pages, the SSG builds them on-demand the first time they’re requested and saves them to persistent storage. After the initial request, the page behaves precisely like the rest of the built static pages.
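The mechanics can be sketched with an in-memory stand-in for the persistent store; the names here are mine, not Netlify's API:

```javascript
// DPR in miniature: build a page the first time it's requested, persist it,
// and serve the stored copy for every request after that.
function createDprServer(buildPage, store = new Map()) {
  let builds = 0;
  return {
    request(urlPath) {
      if (!store.has(urlPath)) {
        store.set(urlPath, buildPage(urlPath)); // built on demand, once
        builds += 1;
      }
      return store.get(urlPath); // after that, effectively a static read
    },
    get buildCount() {
      return builds;
    },
  };
}
```

The key property is that `buildPage` runs at most once per path, unlike a CDN cache, which may evict and rebuild.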

Long build times limit large-scale use cases from choosing Jamstack. If all the SSG tooling were GoLang-based, we probably wouldn’t need DPR. However, that’s not the direction most Jamstack tooling has taken, and build performance can be excruciatingly long on big websites.

DPR is a means to an end and a necessity for Jamstack to grow. While it allows you to use Jamstack workflows on massive websites, ironically, I don’t think you can call a site using DPR a Jamstack site. Running software on-demand to generate a web page certainly sounds dynamicy. After the first request, a page served using DPR is a static page, which makes DPR “more static” than putting a CDN in front of a dynamic site. However, it’s still a dynamic site, as there isn’t a separation between front end and back end, and it lacks portability, one of the pillars of a Jamstack site.

Incremental Static Regeneration (ISR) Dynamic

ISR is a similar but subtly different strategy to DPR to reduce long build times on large SSG sites. The difference is you can revalidate individual pages periodically to mimic a dynamic site without doing an entire site build.

Requests to a page without a cached version fall back to a stale version of that page or a generic loading page.
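In Next.js, for example, ISR is opted into per page by returning a `revalidate` value from `getStaticProps`. A minimal sketch (the data here is a stand-in):

```javascript
// ISR sketch in the style of Next.js: `revalidate` asks the platform to
// re-generate this page in the background at most once per interval.
async function getStaticProps() {
  // Stand-in data; a real page would fetch from a CMS or API here.
  const posts = [{ title: 'Hello, ISR' }];
  return {
    props: { posts },
    revalidate: 60, // seconds between background regenerations
  };
}

// In an actual Next.js page, this function would be exported
// alongside the page component.
```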

Again, it’s an exciting technology that expands what you can do with Jamstack workflows, but dynamically generating a page on-demand sounds like something a dynamic site would do.

Flat file CMS Dynamic

A flat file CMS uses text files for content rather than a database. While flat file CMSs remove a dynamic element from the stack, it’s still dynamically rendering the response.

The lines have been drawn

Exploring and debating these edge cases gives us a better understanding of the limits of all of these terms. The point of this exercise isn’t to be dogmatic about creating static or Jamstack websites. It’s to give us a common language to talk about the tradeoffs you make as you cross the boundary from one concept to another.

There’s absolutely nothing wrong with tradeoffs either. Not everything can be a purely static website. In many cases, the trade-offs make sense. For example, let’s say the front end needs to know the country of the visitor. There are two ways to do this:

  1. On page load, perform an Ajax call to query the country from an API. (Jamstack)
  2. Use an edge function to dynamically insert a country code into the HTML on response. (Dynamic)

If having the country code is a nice-to-have and the web page doesn’t need it immediately, then the first approach is a good option. The page can be static and the API call can fail gracefully if it doesn’t work. However, if the country code is required for the page, dynamically adding it using an edge function might make more sense. It’ll be faster as you don’t need to perform a second request/response cycle.
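Here's a sketch of the first approach. The endpoint path and element id are hypothetical names for this example; the important shape is a static page that enhances itself after load and fails gracefully.

```javascript
// Pure helper: render a message whether or not the lookup succeeded.
function renderGreeting(country) {
  return country ? `Prices shown for ${country}` : 'Prices shown in USD';
}

// Approach 1: the page ships static, then asks an API for the country.
// `/api/country` and `#greeting` are hypothetical names for this sketch.
async function enhanceWithCountry() {
  try {
    const res = await fetch('/api/country');
    const { country } = await res.json();
    document.querySelector('#greeting').textContent = renderGreeting(country);
  } catch {
    // Fail gracefully: the static fallback text stays in place.
  }
}
```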

The key is understanding the problem you’re solving and thinking through the trade-offs you’re making with different approaches. You might end up with the majority of your site Jamstack and a portion dynamic. That’s totally fine and might be necessary for your use case. Typically, the closer you can get to static, the faster, more secure, and more scalable your site will be.

This is only the beginning of the discussion, and I’d love to hear your take. Where would you draw the lines? What do static and Jamstack mean to you? Are you sitting on a chair or stool right now?

The post Static vs. Dynamic vs. Jamstack: Where’s The Line? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.



Jamstack Community Survey 2021

(This is a sponsored post.)

The folks over at Netlify have opened up the Jamstack Community Survey for 2021. More than 3,000 front-enders like yourself took last year’s survey, which gauged how familiar people are with the term “Jamstack” and which frameworks they use.

This is the survey’s second year which is super exciting because this is where we start to reveal year-over-year trends. Will the percentage of developers who have been using a Jamstack architecture increase from last year’s 71%? Will React still be the most widely used framework, but with one of the lower satisfaction scores? Or will Eleventy still be one of the least used frameworks, but with the highest satisfaction score? Only your answers will tell!

Plus, you can qualify for a limited-edition Jamstack sticker with your response. See Netlify’s announcement for more information.






Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel

This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.

In other words, lots to learn! By the end, you’ll not only dive into Jamstack and serverless concepts, but also get hands-on experience with a really neat combination of tech that I think you’ll really like.

Creating a Redwood app

Redwood is a framework for serverless applications that pulls together React (for front-end component), GraphQL (for data) and Prisma (for database queries).

There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.

There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.

We will need to install Yarn to use the Redwood CLI to get going. Once that’s good to go, here’s what to run in a terminal:

yarn create redwood-app ./csstricks

We’ll now cd into our new project directory and start our development server.

```shell
cd csstricks
yarn rw dev
```

Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.

Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.

The Redwood starting page indicates that the front end of our app is ready to go. It also provides a nice instruction for how to start creating custom routes for the app.

Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.

Directory structure

Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.

Here’s how the project is structured:

```
├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js
```

Don’t worry too much about what all this means yet; the first thing to notice is that things are split into two main directories: web and api. Yarn workspaces allow each side to have its own path in the codebase.
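The exact fields can differ by Redwood version, but the root package.json declares the two sides as Yarn workspaces along these lines (a sketch, not the literal generated file):

```json
{
  "private": true,
  "workspaces": {
    "packages": ["api", "web"]
  }
}
```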

web contains our front-end code for:

  • Pages
  • Layouts
  • Components

api contains our back-end code for:

  • Function handlers
  • Schema definition language
  • Services for back-end business logic
  • Database client

Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Partly, it’s personal preference. After Google purchased Firebase, it launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than Fauna offers. At the same time, while a handful of community projects have experimented with combining Firestore and GraphQL, there isn’t first-class GraphQL support from Google, and GraphQL is central to how we’ll fetch data in this project.

Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.


We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.

But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/png" href="/favicon.png" />
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>

  <body>
    <div id="redwood-app"></div> <!-- the "root" DOM node -->
  </body>
</html>
```

While Redwood uses the term Jamstack in its documentation and marketing, it doesn’t do pre-rendering yet (like Next or Gatsby can). It’s still Jamstack, though, in the sense that it ships static files and hits APIs with JavaScript for data.


index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects an element with an id containing redwood-app, and ReactDOM.render() renders our application into the root DOM element.


The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Redwood’s Flash message system uses the Context API for passing message objects between deeply nested components. It provides a typical message display unit for rendering the messages provided to FlashContext.

FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.


The provider itself is then contained within the <FatalErrorBoundary> component which is taking in <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.

```jsx
import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'
import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)
```


The Router component contains all of our routes, and each route is specified with a Route. The Redwood router attempts to match the current URL to each route, stopping when it finds a match, and then renders only that route. The only exception is the notfound route, which renders a single Route with a notfound prop when no other route matches.

```jsx
import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes
```


Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.

We can also use rw instead of redwood and g instead of generate to save some typing.

yarn rw g page home /

This command performs four separate actions:

  • It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
  • It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
  • It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
  • It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.


If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:

```jsx
// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
```
The HomePage.js file has been set as the main route, /.

We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:

```jsx
// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>
      <p>Taking Fullstack to the Jamstack</p>
    </>
  )
}

export default HomePage
```


To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.

yarn rw g page about
AboutPage.js is now available at /about.
```jsx
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage
```

We’ll make a few edits to the About page like we did with our Home page. That includes taking out the <Link> and routes imports and deleting <Link to={routes.about()}>About</Link>.

Here’s the end result:

```jsx
// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage
```

If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!

```jsx
const Routes = () => {
  return (
    <Router>
      <Route path="/about" page={AboutPage} name="about" />
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}
```


Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.


You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical to what we’ve been doing so far, except with rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.

yarn rw g layout blog
```jsx
// web/src/layouts/BlogLayout/BlogLayout.js

const BlogLayout = ({ children }) => {
  return <>{children}</>
}

export default BlogLayout
```

To create links between different pages we’ll need to:

  • Import Link and routes from @redwoodjs/router into BlogLayout.js
  • Create a <Link to={}></Link> component for each link
  • Pass a named route function, such as routes.home(), into the to={} prop for each route
```jsx
// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>

        <nav>
          <ul>
            <li>
              <Link to={routes.home()}>Home</Link>
            </li>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <p>{children}</p>
      </main>
    </>
  )
}

export default BlogLayout
```

We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.

```jsx
// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
```
Hey look, the navigation is taking shape!

If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:

```jsx
// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage
```

Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll create our GraphQL schema so we can start working with data.

Fauna schema definition language

To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.

```graphql
# sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}
```

Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.

The GraphQL Playground is located in the selected database.
The Fauna shell allows us to write, run and test queries.

It’s very important that Redwood and Fauna agree on the SDL, so we cannot use the original SDL that was entered into Fauna because that is no longer an accurate representation of the types as they exist on our Fauna database.

The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.

Redwood schema definition language

This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.

```javascript
// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`
```


The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.

```javascript
// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
    {
      posts {
        data {
          title
          body
        }
      }
    }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}
```

At this point, we can install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:

```shell
cd api
yarn add graphql-request graphql
```

Attach the Fauna authorization token to the request header

So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and using it to query an endpoint that is set to https://graphql.fauna.com/graphql.

```javascript
// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer ' + process.env.FAUNADB_SECRET
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}
```

A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.


We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.

```
Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript.",
      body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
    }
  }
)
```

```
{
  ref: Ref(Collection("Post"), "282083736060690956"),
  ts: 1605274864200000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript.",
    body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
  }
}
```

Let’s create another one.

```
Create(
  Collection("Post"),
  {
    data: {
      title: "NextJS is a React framework for building production grade applications that scale.",
      body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
    }
  }
)
```

```
{
  ref: Ref(Collection("Post"), "282083760102441484"),
  ts: 1605274887090000,
  data: {
    title: "NextJS is a React framework for building production grade applications that scale.",
    body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
  }
}
```

And maybe one more just to fill things up.

```
Create(
  Collection("Post"),
  {
    data: {
      title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
      body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
    }
  }
)
```

```
{
  ref: Ref(Collection("Post"), "282083792286384652"),
  ts: 1605274917780000,
  data: {
    title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
    body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
  }
}
```


Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.


```shell
yarn rw generate cell BlogPosts
```

```jsx
export const QUERY = gql`
  query BlogPostsQuery {
    blogPosts {
      id
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ blogPosts }) => {
  return JSON.stringify(blogPosts)
}
```

By default we have the query render the data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes so the query returns, and the cell renders, the data we need. So, let’s:

  • Change blogPosts to posts.
  • Change BlogPostsQuery to POSTS.
  • Change the query itself to return the title and body of each post.
  • Map over the data object in the success component.
  • Create a component with the title and body of the posts returned through the data object.

Here’s how that looks:

```jsx
// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return data.map(post => (
    <>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </>
  ))
}
```

The POSTS query is sending a query for posts, and when it’s queried, we get back a data object containing an array of posts. We need to pull out the data object so we can loop over it and get the actual posts. We do this with object destructuring to get the data object and then we use the map() function to map over the data object and pull out each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.

Import BlogPostsCell to HomePage

```jsx
// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
```
Check that out! Posts are returned to the app and rendered on the front end.


We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this.

Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.

We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open redwood.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.

I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.

Redwood has a preset build command that works out of the box in Vercel:

Simply select “Redwood” from the preset options and we’re good to go.

We’re pretty far along, but even though the site is now “live” the database isn’t connected:

To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:

Now our application is complete!

We did it! I hope this not only gets you super excited about working with Jamstack and serverless, but got a taste of some new technologies in the process.




How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • Automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted to a specific use case. We will see such examples of custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of already existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • We will build the client side of the app using a Static Site Generator called Gatsbyjs.
  • Finally we will deploy the app on a CDN configured and managed by Netlify CDN.
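Since the serverless functions in the list above double as our API, it helps to see their basic shape up front. This is a minimal sketch of a Netlify function; the file name and the response data are illustrative assumptions, not the app's real code.

```javascript
// Minimal Netlify function sketch (a hypothetical file like
// functions/get-shopnotes.js). Netlify serves the exported handler at
// /.netlify/functions/get-shopnotes.
const handler = async (event, context) => {
  // The real version will query Fauna here; this returns stand-in data.
  const notes = [{ name: 'Groceries', items: [] }];
  return {
    statusCode: 200,
    body: JSON.stringify({ notes }),
  };
};

// In the actual function file: exports.handler = handler
```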

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,


We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with a traditional RDBMS, the major difference is that Fauna is a relational NoSQL system that provides the capabilities of a legacy RDBMS while remaining very flexible, without compromising scalability and performance.

Fauna supports multiple APIs for data access:

  • GraphQL: An open-source data query and manipulation language. If you are new to GraphQL, you can find more details at https://graphql.org/.
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language-specific drivers, which makes it easy to use with languages like JavaScript, Java, Go, etc. Find more details on FQL here.

In this article, we will use GraphQL for the shopnote application.

First things first, sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our usage.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will define the GraphQL schema and import it into the database. A GraphQL schema defines the structure of the data: it defines the data types and the relationships between them. With the schema, we can also specify what kinds of queries are allowed.

At this stage, let us create our project folder. Create a project folder somewhere on your hard drive with the name shopnote. Inside it, create a file with the name shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, description, update time, and a list of Items. Each Item type has properties like name, urgent, and checked, along with the shopnote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote and allShopNotes, each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also create documents automatically. We will see that in a while.

Fauna supports a schema-free object-relational data model. A database in Fauna may contain a group of collections, and a collection may contain one or more documents. Each data record is inserted into a document. This forms a hierarchy which can be visualized as:

Here the data records can be arrays, objects, or any other supported types. With the Fauna data model, we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see documents getting created as well. We will be able to view and query the records and utilize the power of indexes. You may see the data model structure appear in your Fauna dashboard like this in a while:

A point to note here: each document is identified by a unique ref attribute. There is also a ts field which returns the timestamp of the most recent modification to the document. The data record is part of the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
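To make that concrete, here is a minimal sketch (with made-up values) of how a document's ref, ts, and data fields fit together:

```javascript
// Illustrative shape of a Fauna document. The field names (ref, ts, data)
// are the ones described above; all values here are made up for demonstration.
const shopNoteDoc = {
  ref: { collection: "ShopNote", id: "264544679530922517" }, // unique reference to the document
  ts: 1588698074380000,                                      // timestamp of the last modification
  data: {                                                    // the actual data record
    name: "My Shopping List",
    description: "This is my today's list to buy from Tom's shop"
  }
};

// The record itself lives under the data field:
console.log(shopNoteDoc.data.name); // "My Shopping List"
```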

With all of this understanding, let us start using our shopnotes database, which is now created and ready for use.

Let us try some queries

Even though we have imported the schema and the underlying pieces are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation into the left panel of the GraphQL playground screen and execute it.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1 lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name
        urgent
      }
    }
  }
}

Note that, as Fauna already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once it is successfully executed, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, and it is fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items in a shopnote, as well as updating and deleting a shopnote itself. These queries will be used later when we create the serverless functions.
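For instance, an update could use the updateShopNote mutation that Fauna generates by convention for each type. The sketch below stores the mutation as a string, the same style the serverless functions will use later; the field list is illustrative:

```javascript
// A sketch of an update mutation (assumes Fauna's generated update<Type>
// convention). The $id variable stands in for a real document id.
const UPDATE_SHOPNOTE = `
  mutation($id: ID!, $name: String!, $description: String!) {
    updateShopNote(id: $id, data: { name: $name, description: $description }) {
      _id
      name
      description
    }
  }
`;
```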

If you are interested in running other queries in the GraphQL editor, you can find them here.

Create a Server Secret Key

Next, we need to create a secure server key to make sure that access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in place of <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>
Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

node_modules
.env
We are done with everything related to Fauna's setup. Let us move to the next phase: creating serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this:

Set up Netlify Serverless Functions

Netlify is a great platform for creating hassle-free serverless functions. These functions can interact with databases, file systems, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it; simple functions automatically become APIs.

First, create an account with Netlify. This is free and, just like the FaunaDB free tier, very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies:

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

or,

npm i axios dotenv

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id
            name
            checked
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to log in and authorize access to your Netlify account. Click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This tells Netlify about the location of the functions we have written so that it is known at build time.

Netlify automatically provides the APIs for the functions. The URL to access the API is of the form /.netlify/functions/get-shopnotes, which may not be very user-friendly, so we have written a redirect to make it /api/get-shopnotes.
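To illustrate what the redirect does, here is a tiny sketch of the :splat substitution. This is not Netlify's actual implementation, just the mapping it performs: whatever matches the * in from is substituted for :splat in to.

```javascript
// Sketch of the /api/* -> /.netlify/functions/:splat redirect rule.
// The capture group plays the role of the `*` wildcard.
function applyRedirect(path) {
  const match = path.match(/^\/api\/(.*)$/);
  return match ? `/.netlify/functions/${match[1]}` : path;
}

console.log(applyRedirect("/api/get-shopnotes")); // "/.netlify/functions/get-shopnotes"
```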

Ok, we are done. Now in command prompt type,

netlify dev

By default the app will run on localhost:8888 to access the serverless function as an API.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:
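The JSON you see should follow the shape the handler builds, { shopnotes: data.allShopNotes.data }. Here is an illustrative response with made-up values:

```javascript
// Illustrative response body from /api/get-shopnotes. The structure mirrors
// what the handler returns; every value below is made up for demonstration.
const response = {
  shopnotes: [
    {
      _id: "264544679530922517",
      name: "My Shopping List",
      description: "This is my today's list to buy from Tom's shop",
      updatedAt: "2020-11-10T09:48:40Z",
      items: {
        data: [
          { _id: "264544679530923001", name: "Butter - 1 pk", checked: false, urgent: true }
        ]
      }
    }
  ]
};

console.log(response.shopnotes.length); // 1
```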

Congratulations! You have your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

// create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please pay attention to the parameter type, ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that relation while writing the mutation as well.

We have destructured the payload to get the required information. Once we have it, we simply call the query method to create a ShopNote.
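For reference, a request payload for this function might look like the following sketch. All values are made up; updatedAt uses an ISO timestamp for Fauna's Time type, and the nested create array is the shape Fauna's generated relation input expects for the related Items:

```javascript
// Hypothetical request payload for /api/create-shopnote.
const payload = {
  name: "Weekend list",
  description: "Things to pick up on Saturday",
  updatedAt: new Date().toISOString(),
  items: {
    create: [
      { name: "Eggs - 12 pk", urgent: false, checked: false },
      { name: "Bread", urgent: true, checked: false }
    ]
  }
};

// Netlify hands the function the raw body as a string,
// so the handler parses it with JSON.parse(event.body):
const event = { body: JSON.stringify(payload) };
const { name, items } = JSON.parse(event.body);
console.log(name);                // "Weekend list"
console.log(items.create.length); // 2
```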

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is the screenshot from Postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember, ShopNote and Item are related? So to create an item, we have to specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

// add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties: its name, whether it is urgent, its checked value, and the note the item should be part of. Let’s see how this API can be called using Postman:

As you see, we are passing the id of the note while creating an item for it.
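For reference, here is a sketch of such a payload. The connect key inside note follows Fauna's GraphQL convention for linking to an existing document by its id; the id below is made up:

```javascript
// Hypothetical payload for /api/add-item. `note.connect` (a Fauna GraphQL
// relation-input convention) points at an existing ShopNote by _id.
const payload = {
  name: "Coffee - 1 lb",
  urgent: true,
  checked: false,
  note: { connect: "264544679530922517" } // made-up ShopNote id
};

console.log(payload.note.connect); // "264544679530922517"
```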

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. In case you are interested, you can look at those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions using Netlify Functions, and tested those functions/routes.

Congratulations, you did it. Next, let us build some user interfaces to show the shop notes and add items to it. To do that, we will use Gatsby.js (aka, Gatsby) which is a super cool, React-based static site generator.

The following section requires basic knowledge of React. If you are new to it, you can learn it from here. If you are familiar with any other user interface technology, like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called gatsby-config.js. Create it at the root of the project folder with the following content:

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
}

Let’s run it. We would generally use the command gatsby develop to run the app locally, but since we have to run the client-side application along with the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, heading, and the button to create a shopnote.
  • Shopnotes: This component contains the list of shop notes (Note components).
  • Note: An individual note. Each note contains one or more items.
  • Item: An individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required to make the user interface functional and look better. Open a command prompt at the root of the project folder and install these dependencies:

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the shop notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we are checking if the data is loaded. If so, render the Shopnotes component by passing the data we have received using the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file code here. The index.js file creates the initial route (/) for the user interface. It uses other components like Shopnotes, Note, and Item to make the UI fully operational. We will not go to great lengths to understand each of these UI components. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content from this CSS file into it. Then import both stylesheets in index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. To keep the article from getting too lengthy, we are not getting into the explanation of each of the actions on items and notes; you can find all the code in the GitHub repo. At this stage, the directory structure may look like this:

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already. Please login and click on the button, New site from Git.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details, like the build command and publish directory, as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Copy and paste it into the respective fields, then click on Deploy.

You should see the build succeed in a couple of minutes, and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70% to 80% of the features that once required a custom back end can now be handled either on the front end or by existing APIs and services.
  • Fauna provides the data API for client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack, providing a fast end-user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.

The post How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna appeared first on CSS-Tricks.


WordPress and Jamstack

I recently moderated a panel at Netlify’s virtual Jamstack Conf that included Netlify CEO Matt Biilman and Automattic founder Matt Mullenweg. The whole thing was built up — at least to some — as a “Jamstack vs. WordPress” showdown.

I have lots of thoughts of my own on this and think I’m more useful as a pundit than a moderator. This is one of my favorite conversations in tech right now! So allow me to blog.

Disclosure: both Automattic and Netlify are active sponsors of this site. I have production sites that use both, and honestly, I’m a fan of both, which is an overarching point I’ll try to make. I also happen to be writing and publishing this on a WordPress site.


  1. Richard MacManus published “WordPress Co-Founder Matt Mullenweg Is Not a Fan of JAMstack” with quotes from an email conversation between them, including a line from Matt saying, “JAMstack is a regression for the vast majority of the people adopting it.”
  2. Matt Biilmann published a response “On Mullenweg and the Jamstack – Regression or Future?” with a whole section titled “The end of the WordPress era.”
  3. People chimed in along the way. Netlify board member Ohad Eder-Pressman wrote an open letter. Sarah Gooding rounded up some of the activity on WP Tavern (which is owned by Matt Mullenweg). I chimed in as well.
  4. Matt Mullenweg clarified remarks with some new zingers.

The debate was on October 6th at Jamstack Conf Virtual 2020. There is no public video of it (sorry).

The Stack

Comparing Jamstack to WordPress is a bit weird. What is comparable is the fact that they are both roads you might travel when building a website. Much of this post will keep that in mind and compare the two that way. Why they aren’t directly comparable is because:

  • Jamstack is a loose descriptor of an architectural philosophy that encourages static files on CDNs and JavaScript-accessed services for any dynamic needs.
  • WordPress is a CMS on the LAMP stack.

Those things are not apples to apples.

If we stick just with the stack for a moment, the comparison would be between:

  • Static Hosting + Services
  • LAMP

An example of Static + Services is using Netlify for hosting (which is static) and using services to do anything dynamic you need to do. Maybe you use Netlify’s own forms and auth functionality and Hasura for data storage.

On a LAMP stack, you have MySQL to store data in, so you aren’t reaching for an outside service there. You also have PHP available. So with those (in addition to open-source software), you have what you need for auth. It doesn’t mean that you never reach for services; you just do so less often as you have more technology at your fingertips from the server you already have.

Jamstack: Static Hosting as the hub, reaching out to services for everything else.

LAMP: Server that does a bunch of work as the hub, reaching out to a few services if needed.

Matt B. called the LAMP stack a “monolith.” Matt M. objected to that term and called it an “integrated approach.” I’m not a computer scientist, but I could see this going either way. Here’s Wikipedia:

[…] a monolithic application describes a single-tiered software application in which the user interface and data access code are combined into a single program.

Defined that way, yes, WordPress does appear to be a monolith, and yet the Wikipedia article continues:

[…] a monolithic application describes a software application that is designed without modularity.

Seen that way, WordPress appears not to qualify as a monolith: WordPress’ hook and plugin architecture is modular. 🤷‍♂️

It would be interesting to hear these two guys dig into the nuance there, but the software is what it is. A self-hosted WordPress site runs on a server with a full stack of technology available to it. It makes sense to ask as much of that server as you can (i.e. integrated). In a Jamstack approach, the server is abstracted from you. Everything else you need to do is split into different services (i.e. not integrated).

The WordPress approach, doesn’t mean that you never reach for outside services. In both stacks, you’d likely use something like Stripe for eCommerce APIs. You might reach for something like Cloudinary for robust media storage and serving. Even WordPress’ Jetpack service (which I use and like) brings a lot of power to a self-hosted WordPress site by behaving like a third-party service and moving things like asset-hosting and search technology off your own servers by moving them over to cloud servers. Both stacks are conglomerations of technologies.

Neither stack is any more of a “house of cards” or failure-prone than the other. All websites can have that “only as strong as its weakest link” metaphor applied to them. If a WordPress plugin ships a borked version or somehow is corrupted on upload, it may screw up my site until I fix it. If my API keys become invalid for my serverless database, my Jamstack site might be hosed until I fix it. If Stripe is down, I’m not selling any products on any kind of site until they are back up.

A missing semicolon can sink any site.


WordPress.com has a free plan, and that’s absolutely a place you can build a site. (I have several.) But you don’t really have developer-style access to it until you’re on the business plan at $25 per month. Self-hosted WordPress itself is open-source and free, but you’re not going to find a place to spin up a self-hosted WordPress site for free. It starts cheap and scales up. You need LAMP hosting to run WordPress. Here’s a look around at fairly inexpensive hosting plans:

There is money involved right off the bat.

Starting free is much more common with Jamstack, then you incur costs at different points. Jamstack being newer, it feels like a market that’s still figuring itself out.

  • Vercel is free until you need team members or features like password-protected sites. A single password-protected site is $150/month. You can toss basic auth on any server with Apache for no additional cost.
  • Netlify is very similar, unlocking features on higher plans and offering à la carte per-site features, like analytics ($9/month) and auth (5,000 active users is $99/month).
  • AWS Amplify starts free, but like everything on AWS, your usage is metered on lots of levels, like build minutes, storage, and bandwidth. They have an example calculation of a web app that has 10,000 daily active users and is updated two times per month, which costs $65.98/month.
  • Azure Static Web Apps hasn’t released pricing yet, but will almost certainly have a free tier or free usage of some kind.

All of this is a good reminder that Netlify isn’t the only one in the Jamstack game. Jamstack just means static hosting plus services.

You can’t make blanket statements like “Jamstack is cheaper.” It’s far too dependent on the site’s usage and needs. With high usage and a bunch of premium services, Jamstack (much like serverless in general) can get super expensive. Netlify’s enterprise pricing starts at $3,000/month, and while you get things like auth, forms, and media handling, you don’t get a CMS or any data storage, which is likely to kick the price up much higher.

While this WordPress site isn’t enterprise, I can tell you it requires a server in the vicinity of $1,000/month, and that assumes Cloudflare is in front of it to help reduce bandwidth directly to the host, and Jetpack handling things like media hosting and search functionality. Mailchimp sends our newsletter. Wufoo powers our forms. We also have paid plugins, like Advanced Custom Fields Pro and a few WooCommerce add-ons. That’s not all of it. It’s probably a few thousand per month, all told. This isn’t unique to any integrated approach, but it helps illustrate that the cost of a WordPress site can be quite high as well. They don’t publish prices (a common enterprise tactic), but Automattic’s own WordPress VIP hosting service surely starts at mid-four figures before you start adding third-party stuff.

Bottom line: there is no sea change in pricing happening here.


80% of web performance is a front-end concern.

That’s a true story, but it’s also built on the foundation of the server (accounting for the first 20%). The speediest front-end in the world doesn’t feel speedy at all if the first request back from the server takes multiple seconds. You’ve gotta make sure that first request is smoking fast if you want a fast site.

The first rectangle is the first server response, and the second is the entire front-end. Even with a fast front end, it can’t save you from a slow back end.

You know what’s super fast? Global CDNs serving static files. That’s what you want to make happen on any website, regardless of the stack. While that’s the foundation of Jamstack (static CDN-backed hosting), it doesn’t mean WordPress can’t do it.

You take an index.html file with static content, put that on Netlify, and it’s gonna be smoking fast. Maybe your static site generator makes that file (which, it’s worth pointing out, could very well get content from WordPress). There is something very nice about the robustness and steady foundation of that.

By default, WordPress isn’t making static files that are cacheable on a global CDN. WordPress responds to requests from a single origin: it runs PHP, which queries the database, assembles a response, and finally returns the page. That can be pretty fast, but it’s far less sturdy than a static file on a global CDN and far easier to overwhelm with requests.

WordPress hosts know this, and they try to solve the problem at the hosting level. Just look at WP Engine’s approach. Without you doing anything, they use a page cache so that the site can essentially return a static asset rather than needing to run PHP or hit a database. They employ all sorts of other caching as well, including partnering with Cloudflare to do the best possible caching. As I’m writing this, my shoptalkshow.com site literally went down. I wrote to the host, Flywheel, to see what was up. Turns out that when I went in there to turn on a staging site, I flipped a wrong switch and turned off their caching. The site couldn’t handle the traffic and just died. Flipping the caching switch back on instantly solved it. I didn’t have Cloudflare in front of the site, but I should have.

Cloudflare is part of the magic sauce of making WordPress fast. Just putting it in front of your self-hosted WordPress site is going to do a ton of work in making it fast and reliable. One of the missing pieces has been great caching of the HTML itself, which they literally dealt with this month and now that can be cached as well. There is kind of a funny irony in that caching WordPress means caching requests as static HTML and static assets, and serving them from a global CDN, which is essentially what Jamstack is at the end of the day.

Matt M. mentioned that WordPress.com employs global CDNs that kick in at certain levels of traffic. I’m not sure if that’s Cloudflare or not, but I wouldn’t doubt it.

With Cloudflare in front of a WordPress site, I see the same first-response numbers as I do on Netlify sites without Cloudflare (they do not recommend using Cloudflare in front of Netlify-hosted sites). That’s mid-double-digit millisecond numbers, which is very, very good.

First request on WordPress site css-tricks.com, hosted by Flywheel with Cloudflare in front. Very fast.
First request on my Jamstack site, conferences.css-tricks.com, hosted by Netlify. Very fast.

From that foundation, any discussion about performance becomes front-end specific. Front-end tactics for speed are the same no matter what the server, hosting, or CMS situation is on the back end.


Security

There are far more stories about WordPress sites getting hacked than Jamstack sites. But is it fair to say that WordPress is less secure? WordPress is going on a couple of decades of history and has a couple orders of magnitude more sites built on it than Jamstack does. Security aside, you’re going to get more stories from WordPress with those numbers.

Matt M brought up that whitehouse.gov is on WordPress, which is obviously a site that needs the highest levels of security. It’s not that WordPress itself is insecure software. It’s what you do with it. Do you have insecure passwords? That’s insecure no matter what platform you’re using. Is the server itself insecure via file permissions or access levels? That’s not exactly the software’s fault, but you may be in that position because of the software. Are you running the latest version of WordPress? Usage is fragmented, at best, and the older the version, the less secure it’s going to be. Tricky.

It may be more interesting to think about security vectors. That is, at what points it is possible to get hacked. If you have a static file sitting on static hosting, I think it’s safe to say there are fairly few attack vectors. But still, there are some:

  • Your hosting account could be hacked
  • Your Git repo could be hacked
  • Your Cloudflare account could be hacked
  • Your domain name could be stolen (it happens)

That’s all true of a WordPress site, too, only there are additional attack vectors like:

  • Server-side code: XSS, bad plugins, remote execution, etc.
  • Database vulnerabilities
  • Running an older, outdated version of WordPress
  • The login system is right on the site itself, e.g. bad guys can hammer /wp-login.php

I think it’s fair to say there are more attack vectors on a WordPress site, but there are plenty of vectors on any site. The hosting account of any site is a big vector. Anything that sits in the DNS chain. Any third-party service with a login. Anything with an API key.

Personal experience: this site is on WordPress and has never been hacked, but not for lack of trying. I do feel like I need to think more about security on my WordPress sites than my sites that are only built from static site generators.


Scaling

Scaling either approach costs money. This WordPress site isn’t massively scaled, but it does require some decent scaling up from entry-level server requirements. I serve all traffic through Cloudflare, and a peek at the last 30 days tells me I serve 5 TB of bandwidth a month.

On a Netlify Business plan (600 GB of traffic for $99, then $20 per additional 100 GB), that math works out to $979. Remember when I said this site requires a server that costs about $1,000/month? I wrote that before I ran these numbers, so I was super close (go me). Jamstack versus WordPress at the scale of this site is pretty neck-and-neck. All hosts are going to charge for bandwidth and have caps with overage charges. Amplify charges $0.15/GB over a 15 GB monthly cap. Flywheel (my WordPress host) charges based on a monthly visitor cap and, over that, it’s $1 per 1,000 visitors.
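To sanity-check that overage math, here’s a tiny sketch using the plan figures quoted above (these are the numbers from this comparison, not an official rate card):

```javascript
// Rough Netlify Business-plan bandwidth cost, per the figures quoted above:
// $99 base covers 600 GB, then $20 per additional 100 GB (rounded up).
function netlifyBandwidthCost(usageGB) {
  const base = 99;
  const includedGB = 600;
  if (usageGB <= includedGB) return base;
  const overageUnits = Math.ceil((usageGB - includedGB) / 100);
  return base + overageUnits * 20;
}

console.log(netlifyBandwidthCost(5000)); // 5 TB/month → 979
```

At 5 TB/month, that’s 44 overage units on top of the base plan, which is where the $979 figure comes from.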

The story with WordPress scaling is:

  • Use a host that can handle it and that has their own proven caching strategy.
  • CDN everything (which usually means putting Cloudflare in front of it).
  • Ultimately, you’re going to pay for it.

The story with Jamstack scaling is:

  • The host and services are built to scale.
  • You have to think about scaling less in terms of “can this service handle this, or do I have to move?”
  • You have to think about scaling more in terms of the fact that every aspect of every service will have pricing you need to keep an eye on.
  • Ultimately, you’re going to pay for it.

I’ve had to move around a bit with my WordPress hosting, finding hosts that are in-line with the current needs of the site. Moving a WordPress site is non-trivial, but it’s far easier than moving to another CMS. For example, if you build a Jamstack site on a headless CMS that becomes too pricey, the cost of moving is a bigger job than switching hosts.

I like what Dave Rupert wrote the other day (in a Slack conversation) comparing the two:

Jamstack: Use whatever thing to build your thing, there’s addons to help you, and use our thing to deploy it out to a CDN so it won’t fall over.

WordPress: Use our thing to build your thing, there’s addons to help you, and you have to use certain hosts to get it to not fall over.

There are other kinds of “scaling” as well. I think of something like number of users. That’s something that all sorts of services use for pricing tiers, which is an understandable metric. But that’s free in WordPress. You can have as many users with as many nuanced permissions as you like. That’s just the CMS, so adding on other services might still charge you by the head. Vercel or Netlify charge you by the head for team accounts. Contentful (a popular headless CMS) starts at $489/month for teams. Even GitHub’s Team tier is $4 per user if you need anything the free account can’t do.

Splitting the Front and Back

This is one of the big things that gets people excited about building with Jamstack. If all of my site’s functionality and content are behind APIs, that frees up the front end to build however it wants to.

  • Wanna build an all-static site? OK, hit that API during the build process and do that.
  • Wanna build a client-rendered site with React or Vue or whatever? Fine, hit the API client-side.
  • Wanna split the middle, pre-rendering some, client-rendering some, and server-rendering some? Cool, it’s an API, you can hit it however you want.

That flexibility is neat on green-field builds, but people are just as excited about theoretical future flexibility. If all functionality and content is API-driven, you’ve entirely split the front and back, meaning you can change either in the future with more flexibility.

  • As long as your APIs keep spitting out what the front end expects, you can re-architect the back end without troubling the front end.
  • As long as you’re getting the data you need, you can re-architect the front end without troubling the back end.

This kind of split feels “future safe” for sites at a certain size and scale. I can’t quite put my finger on what those scale numbers are, but they are there.

If you’ve ever done any major site re-architecture just to accommodate one side or the other, moving to a system where you’ve split the back end and front end surely feels like a smart move.

Once we’ve split, as long as the expectations are maintained, back (B) and front (F) are free to evolve independently.

You can split a WordPress site (we’ll get to that in the “Using Both” section), but by default, WordPress is very much an integrated approach where the front end is built from themes in PHP using very WordPress-specific APIs. Not split at all.

Developer Experience

Jamstack has done a good job of heavily prioritizing developer experience (DX). I’ve heard someone call it “a local optimum,” meaning Jamstack is designed around local development (and local developer) experience.

  • You’re expected to work locally. You work in your own comfortable (local, fast, customized) development environment.
  • Git is a first-class citizen. You push to your production branch (e.g. master or main), then your build process runs, and your site is deployed. You even get a preview URL of what the production site will be for every pull request, which is an impressively great feature.
  • Use whatever tooling you like. You wanna pre-build a site in Hugo? Go for it. You learned create-react-app in school? Use that. Wanna experiment with the cool new framework du jour? Have at it. There is a lot of freedom to build however you want, leveraging the fact that you can run a build and deploy whatever folder in your repo you want.
  • What you don’t have to do is important, too. You don’t have to deal with HTTPS, you don’t have to deal with caching, you don’t have to worry about file permissions, you don’t have to configure a CDN. Even advanced developers appreciate having to do less.
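For a sense of how little configuration that freedom takes, here’s a minimal netlify.toml sketch, assuming a Hugo site that outputs to a public/ folder (the generator and folder names are just an example):

```toml
# Minimal Netlify build config: run the static site generator,
# then deploy the folder it outputs. HTTPS, the CDN, and caching
# are handled by the platform.
[build]
  command = "hugo"
  publish = "public"
```

Swap the command for whatever your tooling is and the rest of the pipeline stays the same.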

It’s not that WordPress doesn’t consider developer experience (for example, they have a CLI and it can do helpful things, like scaffold blocks), but the DX doesn’t feel as to-the-core of the project to me.

  • Running WordPress locally is tricky, requiring you to run a (X)AMP stack somehow, which involves notoriously finicky third-party software. Thank god for Local by Flywheel. There is some guidance but it doesn’t feel like a priority.
  • What should go in Git? To this day, I don’t really know, but I’ve largely settled on the entire /wp-content folder. It feels weird to me there is no guidance or obvious best practices.
  • You’re totally on your own for deployment. Even WordPress-specific hosts don’t really nail it here. It’s largely just: here’s your SFTP credentials.
  • Even if you have a nice local development and deployment pipeline set up (I’m happy with mine), that doesn’t really help with moving the database around, so you’re on your own there as well.

These are all solvable things, and the WordPress community is so big that you’ll find plenty of information on it, but I think it’s fair to say that WordPress doesn’t have DX down to the core. It’s a little wild-west-y even after all these years.

In fact, I’ve found that because the encouragement of a healthy local development environment is so sidelined, a lot of people just don’t have one at all. This is anecdotal, but twice now in as many years I’ve found myself involved in other people’s sites that work entirely production-only. That would be one thing if they were very simple sites with largely default behavior, but these have been anything but. They’re very complicated (much more so than this site) involving public user logins, paid memberships and permissions, page builders, custom shortcodes, custom CSS, and just a heck of a lot of moving parts. It scared me to death. I didn’t want to touch anything. They were live-editing PHP to make things work — cowboy coding, as people jokingly call that. One syntax error and the site is hosed, maybe even the very page you’re looking at.

The fact that WordPress powers such a huge swath of the web without particularly good DX is very interesting. There is no Jamstack without DX. It’s an entirely developer-focused thing. With WordPress, there probably isn’t a developer at all on most sites. It’s installed (or just activated, in the case of WordPress.com) and the site owner takes it from there. The site owner is like a developer in that they have lots of power, but perhaps doesn’t write any code at all.

To that end, I’d say WordPress has far more focus on UX than DX, which is a huge part of all this…

CMS and End User UX

WordPress is a damn fine CMS. Even if you don’t like it, there are a hell of a lot of people that do, and the numbers speak for themselves. What you get, when you decide to build a site with WordPress, is a heaping helping of ability to build just about any kind of site you want. It’s unlikely that you’ll have that oops, painted myself into a corner here situation with WordPress.

That’s a big deal. Jenn put her finger on this, noting that the people who use WordPress are a bigger story than a developer’s needs.

WordPress can do an absolute ton of things:

  • Blog (or be any type of content-driven CMS-style site)…
    • With content previews, which is possible-but-tricky on Jamstack
  • Deal with users/permissions…
  • eCommerce
  • Process forms
  • Handle plugins to the moon and back

Jamstack can absolutely do all these things too, but now it is Jamstack that is in Wild West territory. When you look at tutorials about how to store data, they often involve explaining how to write individual CRUD functions for a cloud database. That’s down to the metal stuff which can be very powerful, but it’s a far cry from clicking a few buttons, which is what WordPress feels like a lot of the time.

I bet I could probably cobble together a basic Jamstack eCommerce setup with Stripe APIs, which is very cool. But then I’d get nervous when I need to start thinking about inventory management, shipping zones, product variations, and who knows what else that gets complicated in eCommerce-land, making me wish I had something super robust that did it all for me.

Sometimes, we developers are building sites just for us (I do more than my fair share of that), but I’d say developers are mostly building sites for other people. So the most important question is: am I building something that is empowering for the people I’m building it for?

You can pull off a good site manager experience no matter what, but WordPress has surely proven that it delivers in that department without asking terribly much in terms of custom development.

Jamstack has some tricks that I wish I would pull off on WordPress, though. Here’s a big one for me: user-submitted content and updates. I literally have three websites now that benefit from this. A site about conferences, a site about serverless, and an upcoming site about coding fonts. WordPress could have absolutely done a great job at all three of those sites. But, what I really want is for people to be able to update and submit content in a way that I can be like: Yep, looks good, merge. By having gone with a Jamstack approach, the content is in public GitHub repos, and anyone can participate.

I think that’s super great. It doesn’t even necessarily require someone from the public knowing or understanding Git or GitHub, as Netlify CMS has this concept of Open Authoring, which keeps the whole contribution experience in the browser with UI for the editing.

Using Both

This is a big one that I see brought up a lot. Even Netlify themselves say “There is no Versus.”

Here’s the deal:

  • The “A” in “Jam” means APIs. Use APIs to build your site either at build time or client-side.
  • WordPress sites, by default, have a REST API (and can have a GraphQL API as well).
  • So, hit that API for CMS data on your Jamstack site.

Yep, totally. This works and people do it. I think it’s pretty cool.
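To make that concrete, here’s a small sketch of turning WordPress REST API data into markup during a Jamstack build. The /wp-json/wp/v2/posts route and the title.rendered and link fields are standard WordPress REST API shapes; the site URL is a placeholder:

```javascript
// Every stock WordPress site exposes its posts as JSON at
// /wp-json/wp/v2/posts. A Jamstack build step can fetch that JSON
// and render static HTML from it. The transform is a pure function,
// so it works on any array of WP REST API post objects.
function postsToHtml(posts) {
  return posts
    .map((p) => `<li><a href="${p.link}">${p.title.rendered}</a></li>`)
    .join("\n");
}

// At build time (Node 18+ has a global fetch), something like:
//   const res = await fetch("https://example.com/wp-json/wp/v2/posts");
//   const html = `<ul>\n${postsToHtml(await res.json())}\n</ul>`;
```

Client-side rendering works the same way; the only difference is when the fetch happens.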


  • Running a WordPress site somewhere in addition to your Jamstack site means… you’re running a WordPress site in addition to your Jamstack site. There is cost and technical debt to that.
  • You often aren’t getting all the value of WordPress. Hitting an API for data might be all you need, but this is a very different approach to building a site than building a WordPress theme. You’re getting none of the other value of WordPress. I think of situations like this: you find a neat plugin that adds a fancy Gutenberg block to your site. That’ll “just work” on a WordPress site, but it likely has some special front-end behavior that won’t work if all you’re doing is sucking the HTML from an API. It probably enqueues some additional scripts and styles that you’ll be on your own to figure out how to incorporate where your front end is hosted, and on your own to maintain updates.

Here are some players that all have a unique approach to “using both”:

  • Frontity: A React framework for WordPress. Looks like you can run it with a Node server so the pages are server-side rendered, or as a static site generator. Either way, WordPress is the CMS but you’re building the front end with React and getting data from the REST API.
  • WP2Static: A WordPress plugin that builds a static version of your site and can auto-deploy it when changes are made.
  • Strattic: They host the dynamic WordPress site for you (which they refer to as “staging”) and you work with WordPress normally there. Then you choose to deploy, and they also host a static version of your site for you.

There are loads of other ways to integrate both. Here’s our own Geoff and Sarah talking about using WordPress and Jamstack together by using Vue/Nuxt with the REST API and hosting on Netlify.

Using Neither

Just in case this isn’t clear, there are absolutely loads of ways to build websites. If you’re building a Ruby on Rails site, that’s not Jamstack or WordPress. You could argue it’s more like a WordPress site in that it requires a server and you’ll be using that server to do as much as you can. You could also argue it’s more like Jamstack in that, even though it’s not static hosting, it encourages using APIs and piecing together services.

The web is a big place, gang, and this isn’t a zero-sum game. I fully expect WordPress to continue to grow and Jamstack to continue to grow because the web itself is growing. Even if we’re only considering the percentage of market share, I’d still bet that both will grow, pushing whatever else into smaller slices.


I’m not even going to go there. Not because I’m avoiding playing favorites, but because it isn’t necessary. I don’t see developers out there biting their fingernails trying to decide between a WordPress or Jamstack approach to building a website. We’re at the point where the technologies are well understood enough that the process goes like this:

  1. Put adult pants on
  2. Evaluate needs and outcomes
  3. Pick technologies

The post WordPress and Jamstack appeared first on CSS-Tricks.
