👋 The demos in this article experiment with a non-standard bug related to CSS gradients and sub-pixel rendering. Their behavior may change at any time in the future. They’re also heavy as heck. We’re serving them async where you click to load, but still want to give you a heads-up in case your laptop fan starts spinning.
Do you remember that static noise on old TVs with no signal? Or when the signal is bad and the picture is distorted? In case the concept of a TV signal predates you, here’s a GIF that shows exactly what I mean.
Yes, we are going to do something like this using only CSS. Here is what we’re making:
Before we start digging into the code, I want to say that there are better ways to create a static noise effect than the method I am going to show you. We can use SVG, <canvas>, the filter property, etc. In fact, Jimmy Chion wrote a good article showing how to do it with SVG.
What I will be doing here is kind of a CSS experiment to explore some tricks leveraging a bug with gradients. You can use it on your side projects for fun but using SVG is cleaner and more suitable for a real project. Plus, the effect behaves differently across browsers, so if you’re checking these out, it’s best to view them in Chrome, Edge, or Firefox.
Let’s make some noise!
To make this noise effect we are going to use… gradients! No, there is no secret ingredient or new property that makes it happen. We are going to use stuff that’s already in our CSS toolbox!
The “trick” relies on the fact that gradients are bad at anti-aliasing. You know those kinds of jagged edges we get when using hard color stops? Yes, I talk about them in most of my articles because they are a bit annoying and we always need to add or remove a few pixels to smooth things out:
As you can see, the second circle renders better than the first one because there is a tiny difference (0.5%) between the two colors in the gradient, rather than a straight-up hard color stop at whole number values like the first circle.
Here’s another look, this time using a conic-gradient where the result is more obvious:
An interesting idea struck me while I was making these demos. Instead of fixing the distortion all the time, why not try to do the opposite? I had no idea what would happen, but it was a fun surprise! I took the conic gradient values and started to decrease them to make the poor anti-aliasing results look even worse.
Do you see how bad the last one is? It’s kind of scrambled in the middle and nothing is smooth. Let’s make it full-screen with smaller values:
I suppose you see where this is going. We get a strange distorted visual when we use very small decimal values for the hard color stops in a gradient. Our noise is born!
We are still far from the grainy noise we want because we can still see the actual conic gradient. But we can decrease the values to very, very small ones — like 0.0001% — and suddenly there’s no more gradient but pure graininess:
Tada! We have a noise effect and all it takes is one CSS gradient. I bet if I were to show this to you before explaining it, you’d never realize you’re looking at a gradient. You have to look very carefully at the center of the gradient to see it.
We can increase the randomness by making the size of the gradient very big while adjusting its position:
The gradient is applied to a fixed 3000px square and placed at the 60% 60% coordinates. We can hardly notice its center in this case. The same can be done with a radial gradient as well:
And to make things even more random (and closer to a real noise effect) we can combine both gradients and use background-blend-mode to smooth things out:
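Pulling those last few steps together, the recipe looks roughly like this. Every value here is illustrative, and the blend mode is only one of several that work, so expect to tweak things per browser:

.noise {
  /* two huge gradient layers with sub-pixel hard stops; the anti-aliasing
     artifacts are what read as grain (all values are illustrative) */
  background:
    repeating-conic-gradient(#000 0.0001%, #fff 0.0002%) 60% 60% / 3000px 3000px,
    repeating-radial-gradient(#000 0.0001%, #fff 0.0002%) 40% 40% / 3000px 3000px;
  /* blending the two layers hides any visible gradient center */
  background-blend-mode: difference;
}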
Our noise effect is perfect! Even if we look closely at each example, there’s no trace of either gradient in there, but rather beautiful grainy static noise. We just turned that anti-aliasing bug into a slick feature!
Now that we have this, let’s see a few interesting examples where we might use it.
Animated no TV signal
Getting back to the demo we started with:
If you check the code, you will see that I am using a CSS animation on one of the gradients. It’s really as simple as that! All we’re doing is moving the conic gradient’s position at a lightning fast duration (.1s) and this is what we get!
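Conceptually, the animation is nothing more than a background-position jump on a very short loop (the positions below are arbitrary):

.tv-noise {
  /* same two-gradient noise background as above, then... */
  animation: tv-flicker .1s linear infinite alternate;
}

@keyframes tv-flicker {
  /* jump the gradient layers to a new spot every loop so the grain flickers */
  100% {
    background-position: 30% 30%, 60% 60%;
  }
}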
I used this same technique on a one-div CSS art challenge:
Grainy image filter
Another idea is to apply the noise to an image to get an old-time-y look. Hover each image to see them without the noise.
I am using only one gradient on a pseudo-element and blending it with the image, thanks to mix-blend-mode: overlay.
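In other words, something along these lines, where the pseudo-element carries one noise gradient and blends with the image underneath (the selector and values are placeholders):

.grainy-photo {
  position: relative;
}

.grainy-photo::before {
  content: "";
  position: absolute;
  inset: 0;
  /* one noise gradient layered on top of the image */
  background: repeating-conic-gradient(#000 0.0001%, #fff 0.0002%) 60% 60% / 3000px 3000px;
  /* blend the grain into the photo */
  mix-blend-mode: overlay;
}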
We can get an even funnier effect if we use the CSS filter property
And if we add a mask to the mix, we can make even more effects!
Grainy text treatment
We can apply this same effect to text, too. Again, all we need is a couple of chained gradients on a background-image and then blend the backgrounds. The only difference is that we’re also reaching for background-clip so the effect is only applied to the bounds of each character.
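A sketch of that idea, with the same caveat that the gradient values are illustrative:

.grainy-text {
  /* a couple of noise gradients blended together... */
  background:
    repeating-conic-gradient(#000 0.0001%, #fff 0.0002%) 60% 60% / 3000px 3000px,
    repeating-radial-gradient(#000 0.0001%, #fff 0.0002%) 40% 40% / 3000px 3000px;
  background-blend-mode: difference;
  /* ...clipped to the text so only the characters get the grain */
  -webkit-background-clip: text;
  background-clip: text;
  color: transparent;
}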
Generative art
If you keep playing with the gradient values, you may get more surprising results than a simple noise effect. We can get some random shapes that look a lot like generative art!
Of course, we are far from real generative art, which requires a lot of work. But it’s still satisfying to see what can be achieved with something that is technically considered a bug!
I hope you enjoyed this little CSS experiment. We didn’t exactly learn something “new,” but we took a little quirk with gradients and turned it into something fun. I’ll say it again: this isn’t something I would consider using on a real project because who knows if or when this anti-aliasing behavior will be addressed. Instead, it was a very random, and pleasant, surprise I stumbled into. It’s also not that easy to control, and it behaves inconsistently across browsers.
That said, I am curious to see what you can do with it! You can play with the values, combine different layers, use a filter or mix-blend-mode, or whatever, and you will surely get something really cool. Share your creations in the comment section — there are no prizes, but we can get a nice collection going!
So, you’ve decided to build a blog with Next.js. Like any dev blogger, you’d like to have code snippets in your posts that are formatted nicely with syntax highlighting. Perhaps you also want to display line numbers in the snippets, and maybe even have the ability to call out certain lines of code.
This post will show you how to get that set up, as well as some tips and tricks for getting these other features working. Some of it is trickier than you might expect.
Prerequisites
We’re using the Next.js blog starter as the base for our project, but the same principles should apply to other frameworks. That repo has clear (and simple) getting started instructions. Scaffold the blog, and let’s go!
Another thing we’re using here is Prism.js, a popular syntax highlighting library that’s even used right here on CSS-Tricks. The Next.js blog starter uses Remark to convert Markdown into markup, so we’ll use the remark-prism plugin to format our code snippets.
Basic Prism.js integration
Let’s start by integrating Prism.js into our Next.js starter. Since we already know we’re using the remark-prism plugin, the first thing to do is install it with your favorite package manager:
npm i remark-prism
Now go into the markdownToHtml file, in the /lib folder, and switch on remark-prism:
import remarkPrism from "remark-prism";

// later ...

.use(remarkPrism, { plugins: ["line-numbers"] })
Depending on which version of remark-html you’re using, you might also need to change its usage to .use(html, { sanitize: false }).
The whole module should now look like this:
import { remark } from "remark";
import html from "remark-html";
import remarkPrism from "remark-prism";

export default async function markdownToHtml(markdown) {
  const result = await remark()
    .use(html, { sanitize: false })
    .use(remarkPrism, { plugins: ["line-numbers"] })
    .process(markdown);

  return result.toString();
}
Adding Prism.js styles and theme
Now let’s import the CSS that Prism.js needs to style the code snippets. In the pages/_app.js file, import the main Prism.js stylesheet, and the stylesheet for whichever theme you’d like to use. I’m using Prism.js’s “Tomorrow Night” theme, so my imports look like this:
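Assuming the standard file layout of the prismjs npm package (and a styles folder of your own for the overrides), the imports look something like this:

// pages/_app.js
import "prismjs/themes/prism-tomorrow.css";
import "prismjs/plugins/line-numbers/prism-line-numbers.css";
// our own overrides file; the path is whatever you choose
import "../styles/prism-overrides.css";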
Notice I’ve also started a prism-overrides.css stylesheet where we can tweak some defaults. This will become useful later. For now, it can remain empty.
And with that, we now have some basic styles. The following code in Markdown:
```js
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```
…should format nicely:
Adding line numbers
You might have noticed that the code snippet we generated does not display line numbers, even though we enabled the line-numbers plugin when we set up remark-prism. The solution is hidden in plain sight in the remark-prism README:
Don’t forget to include the appropriate css in your stylesheets.
In other words, we need to force a .line-numbers CSS class onto the generated <pre> tag, which we can do like this:
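With remark-prism, one way to do that is to add the class right in the Markdown code fence, using the same attribute syntax we’ll lean on again later for highlighting:

```js[class="line-numbers"]
class Shape {
  // ...
}
```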
And with that, we now have line numbers!
Note that, based on the version of Prism.js I have and the “Tomorrow Night” theme I chose, I needed to add this to the prism-overrides.css file we started above:
You may not need that, but there you have it. We have line numbers!
Highlighting lines
Our next feature will be a bit more work. This is where we want the ability to highlight, or call out certain lines of code in the snippet.
There’s a Prism.js line-highlight plugin; unfortunately, it is not integrated with remark-prism. The plugin works by analyzing the formatted code’s position in the DOM, and manually highlights lines based on that information. That’s impossible with the remark-prism plugin since there is no DOM at the time the plugin runs. This is, after all, static site generation. Next.js is running our Markdown through a build step and generating HTML to render the blog. All of this Prism.js code runs during this static site generation, when there is no DOM.
But fear not! There’s a fun workaround that fits right in with CSS-Tricks: we can use plain CSS (and a dash of JavaScript) to highlight lines of code.
Let me be clear that this is a non-trivial amount of work. If you don’t need line highlighting, then feel free to skip to the next section. But if nothing else, it can be a fun demonstration of what’s possible.
Our base CSS
Let’s start by adding the following CSS to our prism-overrides.css stylesheet:
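Here’s a sketch of that CSS. The selectors are assumptions based on the markup the line-numbers plugin generates; the two custom properties and the ::after are the important parts, as described next:

pre[data-line] .line-numbers-rows > span {
  /* empty by default; set from JavaScript for highlighted lines */
  --highlight-background: ;
  --highlight-width: ;

  /* anchor for the absolutely positioned highlight */
  position: relative;
}

pre[data-line] .line-numbers-rows > span::after {
  content: "";
  position: absolute;
  top: 0;
  left: 0;
  height: 100%;
  /* the highlight itself, driven by the custom properties */
  background: var(--highlight-background);
  width: var(--highlight-width);
  pointer-events: none;
}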
We’re defining some CSS custom properties up front: a background color and a highlight width. We’re setting them to empty values for now. Later, though, we’ll set meaningful values in JavaScript for the lines we want highlighted.
We’re then setting the line number <span> to position: relative, so that we can add a ::after pseudo element with absolute positioning. It’s this pseudo element that we’ll use to highlight our lines.
Declaring the highlighted lines
Now, let’s manually add a data attribute to the <pre> tag that’s generated, then read that in code, and use JavaScript to tweak the styles above to highlight specific lines of code. We can do this the same way that we added line numbers before:
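That is, we add a data-line attribute to the code fence in our Markdown:

```js[class="line-numbers"][data-line="3,8-10"]
class Shape {
  // ...
}
```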
This will cause our <pre> element to be rendered with a data-line="3,8-10" attribute, where line 3 and lines 8-10 are highlighted in the code snippet. We can comma-separate line numbers, or provide ranges.
Let’s look at how we can parse that in JavaScript, and get highlighting working.
Reading the highlighted lines
Head over to components/post-body.tsx. If this file is JavaScript for you, feel free to either convert it to TypeScript (.tsx), or just ignore all my typings.
First, we’ll need some imports:
import { useEffect, useRef } from "react";
And we need to add a ref to this component:
const rootRef = useRef<HTMLDivElement>(null);
Then, we apply it to the root element:
<div ref={rootRef} className="max-w-2xl mx-auto">
The next piece of code is a little long, but it’s not doing anything crazy. I’ll show it, then walk through it.
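Here’s a sketch of that effect, reconstructed from the walkthrough below (highlightCode is defined in the next section), so treat it as an approximation rather than the exact original:

useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    // make sure there's a <code> element under the <pre>
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    // the line numbers container and the data-line attribute we declared
    const lineNumbersContainer = pre.querySelector(".line-numbers-rows");
    const highlightRanges = pre.dataset.line;

    if (!highlightRanges || !lineNumbersContainer) {
      continue;
    }

    const runHighlight = () =>
      highlightCode(pre, highlightRanges, lineNumbersContainer);
    runHighlight();

    // re-run the highlight whenever the post changes size (e.g., window resize)
    const ro = new ResizeObserver(() => runHighlight());
    ro.observe(pre);
    cleanup.push(() => ro.disconnect());
  }

  return () => cleanup.forEach(fn => fn());
}, []);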
We’re running an effect once, when the content has all been rendered to the screen. We’re using querySelectorAll to grab all the <pre> elements under this root element; in other words, whatever blog post the user is viewing.
For each one, we make sure there’s a <code> element under it, and we check for both the line numbers container and the data-line attribute. That’s what dataset.line checks. See the docs for more info.
If we make it past the second continue, then highlightRanges is the set of highlights we declared earlier, which, in our case, is "3,8-10", and lineNumbersContainer is the container with the .line-numbers-rows CSS class.
Lastly, we declare a runHighlight function that calls a highlightCode function that I’m about to show you. Then, we set up a ResizeObserver to run that same function anytime our blog post changes size, i.e., if the user resizes the browser window.
The highlightCode function
Finally, let’s see our highlightCode function:
function highlightCode(pre, highlightRanges, lineNumberRowsContainer) {
  const ranges = highlightRanges.split(",").filter(val => val);
  const preWidth = pre.scrollWidth;

  for (const range of ranges) {
    let [start, end] = range.split("-");
    if (!start || !end) {
      start = range;
      end = range;
    }

    for (let i = +start; i <= +end; i++) {
      const lineNumberSpan: HTMLSpanElement = lineNumberRowsContainer.querySelector(
        `span:nth-child(${i})`
      );
      lineNumberSpan.style.setProperty(
        "--highlight-background",
        "rgb(100 100 100 / 0.5)"
      );
      lineNumberSpan.style.setProperty("--highlight-width", `${preWidth}px`);
    }
  }
}
We get each range and read the width of the <pre> element. Then we loop through each range, find the relevant line number <span>, and set the CSS custom property values for them. We set whatever highlight color we want, and we set the width to the total scrollWidth value of the <pre> element. I kept it simple and used "rgb(100 100 100 / 0.5)" but feel free to use whatever color you think looks best for your blog.
Here’s what it looks like:
Line highlighting without line numbers
You may have noticed that all of this so far depends on line numbers being present. But what if we want to highlight lines, but without line numbers?
One way to implement this would be to keep everything the same and add a new option to simply hide those line numbers with CSS. First, we’ll add a new CSS class, .hide-numbers:
```js[class="line-numbers"][class="hide-numbers"][data-line="3,8-10"]
class Shape {
  draw() {
    console.log("Uhhh maybe override me");
  }
}

class Circle {
  draw() {
    console.log("I'm a circle! :D");
  }
}
```
Now let’s add CSS rules to hide the line numbers when the .hide-numbers class is applied:
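Based on the walkthrough below, those rules look roughly like this; the selectors and the exact 1em/2.8em values depend on your theme and plugins, so treat them as a starting point:

/* 1. revert the extra left padding the line-numbers plugin adds */
pre.hide-numbers {
  padding-left: 1em;
}

/* 2. squish the line numbers container to no width */
pre.hide-numbers .line-numbers-rows {
  width: 0;
}

/* 3. erase the generated line numbers themselves */
pre.hide-numbers .line-numbers-rows > span::before {
  content: "";
}

/* 4. shift the now-empty spans back so the highlights line up */
pre.hide-numbers .line-numbers-rows > span {
  left: 2.8em;
}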
The first rule undoes the shift to the right from our base code in order to make room for the line numbers. By default, the padding of the Prism.js theme I chose is 1em. The line-numbers plugin increases it to 3.8em, then inserts the line numbers with absolute positioning. What we did reverts the padding back to the 1em default.
The second rule takes the container of line numbers, and squishes it to have no width. The third rule erases all of the line numbers themselves (they’re generated with ::before pseudo elements).
The last rule simply shifts the now-empty line number <span> elements back to where they would have been so that the highlighting can be positioned how we want it. Again, for my theme, the line-numbers plugin normally adds 3.8em worth of left padding, which we reverted back to the default 1em. These new styles add the other 2.8em so things are back to where they should be, but with the line numbers hidden. If you’re using different plugins, you might need slightly different values.
Here’s what the result looks like:
Copy-to-Clipboard feature
Before we wrap up, let’s add one finishing touch: a button allowing our dear reader to copy the code from our snippet. It’s a nice little enhancement that spares people from having to manually select and copy the code snippets.
It’s actually somewhat straightforward. There’s a navigator.clipboard.writeText API for this. We pass that method the text we’d like to copy, and that’s that. We can inject a button next to every one of our <code> elements to send the code’s text to that API call to copy it. We’re already messing with those <code> elements in order to highlight lines, so let’s integrate our copy-to-clipboard button in the same place.
First, from the useEffect code above, let’s add one line:
useEffect(() => {
  const allPres = rootRef.current.querySelectorAll("pre");
  const cleanup: (() => void)[] = [];

  for (const pre of allPres) {
    const code = pre.firstElementChild;
    if (!code || !/code/i.test(code.tagName)) {
      continue;
    }

    pre.appendChild(createCopyButton(code));
Note the last line. We’re going to append our button right into the DOM underneath our <pre> element, which is already position: relative, allowing us to position the button more easily.
Let’s see what the createCopyButton function looks like:
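Here’s a sketch of createCopyButton, following the description below. The CSS class name and the three-second timeout are placeholder choices:

function createCopyButton(codeEl: Element) {
  const button = document.createElement("button");
  button.classList.add("prism-copy-button"); // placeholder class name
  button.textContent = "Copy";

  button.onclick = function () {
    if (button.textContent === "Copied") {
      return;
    }
    // copy only the rendered text, not the markup Prism adds
    navigator.clipboard.writeText(codeEl.textContent ?? "");
    button.textContent = "Copied";
    button.disabled = true;
    setTimeout(() => {
      button.textContent = "Copy";
      button.disabled = false;
    }, 3000);
  };

  return button;
}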
Lots of code, but it’s mostly boilerplate. We create our button then give it a CSS class and some text. And then, of course, we create a click handler to do the copying. After the copy is done, we change the button’s text and disable it for a few seconds to help give the user feedback that it worked.
We’re passing codeEl.textContent rather than innerHTML since we want only the actual text that’s rendered, rather than all the markup Prism.js adds in order to format our code nicely.
Now let’s see how we might style this button. I’m no designer, but this is what I came up with:
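Since the <pre> is position: relative, a minimal placeholder treatment might pin the button to a corner, something like:

.prism-copy-button {
  /* placeholder styling; pin the button to the top-right of the <pre> */
  position: absolute;
  top: 8px;
  right: 8px;
  padding: 4px 10px;
  border-radius: 4px;
  cursor: pointer;
}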
And it works! It copies our code, and even preserves the formatting (i.e. new lines and indentation)!
Wrapping up
I hope this has been useful to you. Prism.js is a wonderful library, but it wasn’t originally written for static sites. This post walked you through some tips and tricks for bridging that gap, and getting it to work well with a Next.js site.
I got this question from a listener the other day. Fair question, I’d say. The word “Static” in “Static Site Generator” is at odds with the word “Dynamic.” It seems to imply that the website created with a Static Site Generator (SSG) is locked in stone, only to be changed when it is run again at some future date. That’s a somewhat unfortunate implication, if entirely understandable.
“Static Sites,” in actuality, can be as dynamic as any other site because of one¹ thing baked right in and available to any website: JavaScript. JavaScript can, say, hit an API and update the otherwise statically generated markup of a page. Just think. JavaScript. APIs. Markup… J… A… M… Jamstack!
Part of the trick to understanding this Jamstack world (aka Static Sites that do Dynamic Things) is just looking at what Netlify offers. Netlify literally only offers static hosting. No server-side languages (think Ruby, PHP, or Python) serving up individual pages on Netlify. So SSGs and Netlify go hand in hand. But let’s go through it as a list:
Netlify runs your build process for you. Which very likely includes an SSG. The point of which is largely that you keep your built site files out of your version control system, which would otherwise be a wasteful mess.
Netlify processes your forms. No need to run an always-on server just for this dynamic feature.
Netlify offers authentication. That’s right, reader: auth is a first-class citizen of the platform.
Netlify runs your server-side code in the form of cloud functions. Static hosting doesn’t mean you can’t do server-side things. It means you do server-side things with modern, cheap, secure, focused, fast, powerful cloud functions.
Netlify can build pages on-demand. Meaning you don’t have to pre-build all your pages if you don’t want to.
That’s just some of the feature set. Here’s a fun blog post from a little while ago with some of the staff’s favorite features, many of which aren’t in the list above. Jamstack is starting to literally mean that indeed dynamic things are happening to a static site.
I hope that answers the question for this particular reader and anyone else with the same confusion. SSGs can produce entirely static websites with zero dynamic data and that offer no special logged-in experience. But they can also be every bit as dynamic as any other site, just built from a more solid, static foundation.
¹ Well, let’s say two things. Dynamic things can be done via edge handlers as well, without the need for client-side JavaScript.
Jamstack has been in the website world for years. Static Site Generators (SSGs) — which often have content that lives right within a GitHub repo itself — are a big part of that story. That opens up the idea of having contributors that can open pull requests to add, change, or edit content. Very useful!
When we need to build content-based sites like this, it’s common to think about what database to use. Keeping content in a database is a time-honored good idea. But it’s not the only approach! SSGs can be a great alternative because…
They are cheap and easy to deploy. SSGs are usually free, making them great for an MVP or a proof of concept.
They have great security. There is nothing to hack through the browser, as all the site contains is often just static files.
You’re ready to scale. The host you’re already on can handle it.
There is another advantage for us when it comes to a content site. The content of the site itself can be written in static files right in the repo. That means that adding and updating content can happen right from pull requests on GitHub, for example. Even for the non-technically inclined, it opens the door to things like Netlify CMS and the concept of open authoring, allowing for community contributions.
But let’s go the super lo-fi route and embrace the idea of pull requests for content, using nothing more than basic HTML.
The challenge
How people contribute by adding or updating a resource isn’t always perfectly straightforward. People need to understand how to fork your repository, how and where to add their content, content formatting standards, required fields, and all sorts of stuff. They might even need to “spin up” the site themselves locally to ensure the content looks right.
People who seriously want to help our site sometimes will back off because the process of contributing is a technological hurdle and learning curve — which is sad.
You know what anybody can do? Use a <form>
Just like a normal website, the easy way for people to submit content is to fill out a form with the content they want and submit it.
What if we could give users a way to contribute content to our sites with nothing more than an HTML <form> designed to take exactly the content we need? And instead of the form posting to a database, it opens a pull request against our static site’s repository? There is a trick!
The trick: Create a GitHub pull request with query parameters
Here’s a little known trick: We can pre-fill a pull request against our repository by adding query parameters to a special GitHub URL. This comes right from the GitHub docs themselves.
Let’s reverse engineer this.
If we know we can pre-fill a link, then we need to generate the link. We’re trying to make this easy, remember. To generate this dynamic data-filled link, we’ll use a touch of JavaScript.
So now, how do we generate this link after the user submits the form?
Demo time!
Let’s take the Serverless site from CSS-Tricks as an example. Currently, the only way to add a new resource is by forking the repo on GitHub and adding a new Markdown file. But let’s see how we can do it with a form instead of jumping through those hoops.
The Serverless site itself has many categories (e.g. for forms) we can contribute to. For the sake of simplicity, let’s focus on the “Resources” category. People can add articles about things related to Serverless or Jamstack from there.
All of the resource files are in this folder in the repo.
Just picking a random file from there to explore the structure…
---
title: "How to deploy a custom domain with the Amplify Console"
url: "https://read.acloud.guru/how-to-deploy-a-custom-domain-with-the-amplify-console-a884b6a3c0fc"
author: "Nader Dabit"
tags: ["hosting", "amplify"]
---

In this tutorial, we’ll learn how to add a custom domain to an Amplify Console deployment in just a couple of minutes.
Looking over that content, our form must have these fields: title, URL, author, and tags.
I’m using Bulma for styling, so the class names in use here are from that.
Now we write a JavaScript function that transforms a user’s input into a friendly URL that we can combine as GitHub query parameters on our pull request. Here is the step by step (a sketch of the whole flow follows the list):
Get the user’s input about the content they want to add
Generate a string from all that content
Encode the string so it can safely be passed along as a URL query parameter
Attach the encoded string to a complete URL pointing to GitHub’s page for new pull requests
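Here’s a minimal sketch of those four steps. The repo, branch, folder, and field names are placeholders; the pre-filling relies on GitHub’s filename and value query parameters on its new-file URL:

// hypothetical form handler: build a pre-filled "create new file" URL on GitHub
function buildPullRequestUrl({ title, url, author, tags }) {
  // steps 1 & 2: gather the input and build the Markdown file's contents
  const content = `---
title: "${title}"
url: "${url}"
author: "${author}"
tags: [${tags.map(t => `"${t}"`).join(", ")}]
---
`;

  // step 3: encode the pieces so they survive being put in a URL
  const filename = encodeURIComponent(`resources/${slugify(title)}.md`); // placeholder folder
  const value = encodeURIComponent(content);

  // step 4: point at GitHub's "new file" page for the repo, pre-filled
  return `https://github.com/your-org/your-repo/new/main?filename=${filename}&value=${value}`;
}

// tiny helper to turn a title into a file name (illustrative)
function slugify(str) {
  return str.toLowerCase().trim().replace(/[^a-z0-9]+/g, "-");
}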
Here is the Pen:
After pressing the Submit button, the user is taken right to GitHub with an open pull request for this new file in the right location.
Quick caveat: Users still need a GitHub account to contribute. But this is still much easier than having to know how to fork a repo and create a pull request from that fork.
Other benefits of this approach
Well, for one, this is a form that lives on our site. We can style it however we want. That sort of control is always nice to have.
Secondly, since we’ve already written the JavaScript, we can use the same basic idea to talk with other services or APIs in order to process the input first. For example, if we need information from a website (like the title, meta description, or favicon) we can fetch this information just by providing the URL.
Taking things further
Let’s have a play with that second point above. We could simply pre-fill our form by fetching information from the URL provided for the user rather than having them have to enter it by hand.
With that in mind, let’s now only ask the user for two inputs (rather than four) — just the URL and tags.
How does this work? We can fetch meta information from a website with JavaScript just by having the URL. There are many APIs that fetch information from a website, but you might try the one that I built for this project. Try hitting any URL like this:
The demo above uses that as an API to pre-fill data based on the URL the user provides. Easier for the user!
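Conceptually, the pre-fill step looks something like this, with a stand-in metadata endpoint (swap in whichever API you use):

// hypothetical: fetch a page's metadata and pre-fill the form fields
async function prefillFromUrl(url) {
  // stand-in endpoint; replace with your own metadata-fetching API
  const response = await fetch(`https://example.com/api/metadata?url=${encodeURIComponent(url)}`);
  const meta = await response.json();

  // assumes the form has inputs with these ids
  document.querySelector("#title").value = meta.title ?? "";
  document.querySelector("#author").value = meta.author ?? "";
}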
Wrapping up
You could think of this as a very minimal CMS for any kind of Static Site Generator. All you need to do is customize the form and update the pre-filled query parameters to match the data formats you need.
How will you use this sort of thing? The four sites we saw at the very beginning are good examples. But there are so many other times where you might need to do something with a user submission, and this might be a low-lift way to do it.
Just ran across îles, a new static site generator mostly centered around Vue. The world has no particular shortage of static site generators, but it’s interesting to see what this “next generation” of SSGs seem to focus on or try to solve.
îles looks to take a heaping spoonful of inspiration from Astro. If we consider them together, along with other emerging and quickly-evolving SSGs, there are some similarities:
Ship zero JavaScript by default. Interactive bits are opt-in — that’s what the islands metaphor is all about. Astro and îles do it at the per-component level and SvelteKit prefers it at the page level.
Additional fanciness around controls for when hydration happens, like “when the browser is idle,” or “when the component is visible.”
Use a fast build tool, like Vite, which uses the Go-based esbuild under the hood. Or the Rust-based swc in the case of Next 12.
Support multiple JavaScript frameworks for componentry. Astro and îles do this out of the box, and another example is how Slinkity brings that to Eleventy.
File-system based routing.
Assumption that Markdown is used for content.
When you compare these to first-cohort SSGs, like Jekyll, I get a few feelings:
These really aren’t that much different. The feature set is largely the same.
The biggest change is probably that far more of them are JavaScript library based. Turns out JavaScript libraries are really what people wanted out of HTML preprocessors, perhaps because of the strong focus on components.
They are incrementally better. They are faster, the live reloading is better, the common needs have been ironed out.
If you run or have recently switched to a static site generator, you might find yourself writing a lot of Markdown. And the more you write it, the more you want the tooling experience to disappear so that the content takes focus.
I’m going to give you some options (including my favorite), but more importantly, I’ll walk though features that these apps offer that are especially relevant when choosing. Here are some key considerations for Markdown editing apps to help the words flow.
Consideration #1: Separate writing and reading modes
UX principles tell us that modes are problematic. But perhaps there is an exception for text editing software. From vi(m) to Google Docs, separate modes for writing and reading seem to appeal to writers. Similarly, many Markdown editors have separate modes or views for writing, editing and reading.
I happen to like Markdown editors that provide a side-by-side or paned design where I can see both at once. Writing Markdown is not the same as writing code. What it looks like matters, and having a preview can give you a feel for that. It’s kind of like static site generators that auto-refresh so that you can see your changes as you make them.
Having both edit and preview mode active at once can help you feel more connected to the finished product.
In contrast, I’m not a fan of the one-mode-to-rule-them-all design where Markdown formatting automatically converts to styled text, hiding the formatted code (implemented in some form by Dropbox Paper, Typora, Ulysses, and Bear). I can’t stand the work of futzing with the app to change a heading level, for example. Do I click it, double-click, triple-click? What if I’m just using the keyboard?
It’s not so much that these features aren’t useful, it’s that they break my flow.
I want to see all the Markdown that I’ve written, even if the end user won’t. That’s one thing that I do want a Markdown editor to borrow from code editors.
Consideration #2: Good themes
Some Markdown editors allow full customization of editor themes, while others ship with nice ones out of the box. Regardless, I think a good editor should have just the right amount of styling to differentiate plain text from formatted text, but not so much that it distracts you from being able to read it and focus on the content. Even with the preview pane open, I typically spend most of my time looking at the editing view.
Different colors for each style
Since most of the text in the editor isn’t going to be rendered as it would in the browser, it’s nice to quickly see which text you’ve formatted using Markdown. This helps you determine, for example, whether a URL is actually written out in the text or is used inside a hyperlink. So, I like to have a different color for each Markdown style (headings, links, bold, italic, quotes, images, code, bullets, etc.)
Colors tell you which text has Markdown formatting applied.
Apply bold and italics styles too
I prefer to use asterisks for Markdown formatting everywhere I’m able to (e.g., bold, italics, and unordered lists), so I find it helpful to have extra styling beyond color to distinguish bold, italic, and bold+italic. When skimming, it can be hard to tell **this is important** and *this is important* apart by color alone, whereas having the bold and italic styles actually applied in the editor makes them easy to separate. It also helps me see if I’ve accidentally mismatched asterisks (e.g., **is this important?*).
Different font sizes for each heading level
This might be a bit controversial and may split the audience. Code editors don’t show different font sizes within a file. Colors and styles, sure, but not sizes. But, for me, it helps.
When writing, hierarchy is the key to organization. With different font sizes for each heading, you can see the outline of whatever you’re writing just by skimming through it.
Seeing the headings in different font sizes is a subtle way to help you visualize the structure. This is especially helpful for long content.
Shortcuts and smart keyboard behaviors
I expect all the standard shortcuts that work in a text editor to work. CTRL/CMD + B for bold, I for italic, etc., as well as some that are nice-to-have when writing articles, in particular CTRL/CMD + (number) for headings. CTRL/CMD + 1 for H1, etc.
Making something into a heading should be a simple shortcut.
But there are also some keyboard behaviors I like that are borrowed from code editors. For example, if I select some text and press [ or ( it won’t overwrite that text, but, instead, enclose it with the opening and closing character. Same for using text formatting characters like *, `, and _.
A good Markdown editor won’t overwrite your text when you select it and press a valid Markdown formatting character.
I also rely on keyboard shortcuts to create links and images. Even after more than five years of writing Markdown on a regular basis, I still sometimes forget whether the brackets or the parentheses come first. So, I really like having a handy shortcut to insert them correctly.
Even better, in some editors, if you have a URL in your clipboard and you select text then use a keyboard shortcut to make it into a link, it will insert the URL in the hyperlink field. This has really sped up my workflow.
When I have a URL in the clipboard and use the create link shortcut, it assumes that’s what I want to link to. Handy!
Bonus feature: Copy to HTML
The editor that I use most often has a one-click “Copy HTML” feature (with keyboard shortcut) that takes all of the Markdown I’ve written and copies the HTML to the clipboard. This can be very convenient when using an online editor (e.g., WordPress) that has a code/source option.
Easy peasy!
Consideration #3: Stand-alone editor vs. CMS/IDE plugin
I know that a lot of people who work with static site generators love their IDEs and may even jump back and forth between code and Markdown in a single day. I often do. So I can see why using a familiar IDE would be more attractive than having a separate app for Markdown.
But when I’m sitting down to write a page in Markdown or an article, where I’m focusing on the text itself, I prefer a separate app.
I’m not fanatical about using standalone Markdown editors over an IDE editor or plugin; I use the latter occasionally for complex find-and-replace tasks and other edits. As long as it offers the benefits listed above, I wouldn’t try to talk anyone out of it.
Here are a few reasons why a standalone app might work better for writing:
Cleaner interface. I’m not someone who needs “Zen mode” in my writing app, but I do like to have as few panels open as possible when I’m writing, which typically requires turning a lot of things off in an IDE.
Performance. Most Markdown tools just feel lighter to me. They are certainly less complex and do less stuff, so they should be faster. I don’t ever want to feel like my writing app is exerting any effort. It should launch fast and respond instantly, always.
Availability. I just haven’t found a Markdown editor in an IDE that I really like. Perhaps there is one out there; I just don’t have time to try them all. But I like most standalone Markdown editors that I’ve used, and I can’t say the same for what I’ve tried in IDE-land.
Mental shift. When I open my IDE, I’m thinking about writing code, but when I open my Markdown editor, I’m thinking about writing words. I like that it gets me into the right mindset.
That’s a few too many choices.
My favorite Markdown editors for writing
While these are my top picks, it doesn’t mean that if an app isn’t on this list that it’s bad. There are several good apps that I didn’t mention because they had too many features or were too expensive given the number of decent free or cheap options. And similar to IDE packages, there are a ton of Markdown apps out there and I haven’t tried them all (but I have tried a lot of them!).
A note about features that help you get “into the zone,” such as “typewriter” or “focus” modes, or soothing background music. They’ve never really worked for me and I eventually turn them off, so they aren’t a feature that I go looking for. (Although if you are into those, you can try Typora, which is free (during Beta) and runs on Mac, Windows, and Linux.)
For a more full-featured app, the editor interface is pretty good, and meets most of the criteria mentioned above. Zettlr offers similar features, but just feels more complicated, IMO.
Not my favorite app for writing and editing text, but it has the nice added ability to publish to various platforms (e.g., Medium, WordPress, Tumblr, Blogger, and Evernote).
A good choice if you use Markdown for more than just site content (personal notes, task management, etc.). Scores high in appearance and usability, too.
Summary
With Markdown syntax being supported in more and more places — including Slack, GitHub, WordPress, etc. — it is quickly becoming a lingua franca for richer communication in our increasingly text-based lives. It’s here to stay because it’s not only easy to learn and use, it’s intuitive. And luckily we’re currently spoiled for choice when it comes to quality Markdown writing apps.
Many developers love working with static site generators like Gatsby and Hugo. These powerful yet flexible systems help create beautiful websites using familiar tools like Markdown and React. Nearly every popular modern programming language has at least one actively developed, fully-featured static site generator.
Static site generators boast a number of advantages, including fast page loads. Quickly rendering web pages isn’t just a technical feat, it improves audience attraction, retention, and conversion. But as much as developers love these tools, marketers and other less technical end users may struggle with unfamiliar workflows and unclear processes.
The templates, easy automatic deploys, and convenient asset management provided by static site generators all free up developers to focus on creating more for their audiences to enjoy. However, while developers take the time to build and maintain static sites, it is the marketing teams that use them daily, creating and updating content. Unfortunately, many of the features that make static site generators awesome for developers make them frustrating to marketers.
Let’s explore some of the disadvantages of using a static site generator. Then, see how switching to a dynamic content management system (CMS) — especially one powered by a CRM (customer relationship management) platform — can make everyone happy, from developers to marketers to customers.
Static Site Generator Disadvantages
Developers and marketers typically thrive using different workflows. Marketers don’t usually want to learn Markdown just to write a blog post or update site copy — and they shouldn’t need to.
Frankly, it isn’t reasonable to expect marketers to learn complex systems for everyday tasks like embedding graphs or adjusting image sizes just to complete simple tasks. Marketers should have tools that make it easier to create and circulate content, not more complicated.
Developers tend to dedicate most of their first week on a project to setting up a development environment and getting their local and staging tooling up and running. When a development team decides that a static site generator is the right tool, they also commit to either configuring and maintaining local development environments for each member of the marketing team or providing a build server to preview changes.
Both approaches have major downsides. When marketers change the site, they want to see their updates instantly. They don’t want to commit their changes to a Git repository then wait for a CI/CD pipeline to rebuild and redeploy the site every time. Local tooling enabling instant updates tends to be CLI-based and therefore inaccessible for less technical users.
This does not have to devolve into a prototypical development-versus-marketing power struggle. A dynamic website created with a next-generation tool like HubSpot’s CMS Hub can make everyone happy.
A New Generation of Content Management Systems
One reason developers hold static site generators in such high regard is the deficiency of the systems they replaced. Content management systems of the past were notorious for slow performance, security flaws, and poor user experiences for both developers and content creators. However, some of today’s CMS platforms have learned from these mistakes and deficiencies and incorporated the best static site generator features while developing their own key advantages.
A modern, CMS-based website gives developers the control they need to build the features their users demand while saving implementation time. Meanwhile, marketing teams can create content with familiar, web-based, what-you-see-is-what-you-get tools that integrate directly with existing data and software.
For further advantages, consider a CRM-powered solution, like HubSpot’s CMS Hub. Directly tied to your customer data, a CRM-powered site builder allows you to create unique and highly personalized user experiences, while also giving you greater visibility into the customer journey.
Content Management Systems Can Solve for Developers
Modern content management systems like CMS Hub allow developers to build sites locally with the tools and frameworks they prefer, then easily deploy them to their online accounts. Once deployed, marketers can create and edit content using drag-and-drop and visual design tools within the guardrails set by the developers. This gives both teams the flexibility they need and streamlines workflows.
Solutions like CMS Hub also replace the need for unreliable plugins with powerful serverless functions. Serverless functions, which are written in JavaScript and use the Node.js runtime, allow for more complex user interactions and dynamic experiences. Using these tools, developers can build out light web applications without ever configuring or managing a server. This elevates websites from static flyers to modern, personalized customer experiences without piling on excess developer work.
While every content management system will have its advantages, CMS Hub also includes a built-in relational database, multi-language support, and the ability to build dynamic content and login pages based on CRM data, all features designed to make life easier for developers.
Modern CMS-Based Websites Make Marketers Happy, Too
Marketing teams can immediately take advantage of CMS features, especially when using a CRM-powered solution. They can add pages, edit copy, and even alter styling using a drag-and-drop editor, without needing help from a busy developer. This empowers the marketing team and reduces friction when making updates. It also reduces the volume of support requests that developers have to manage.
Marketers can also benefit from built-in tools for search engine optimization (SEO), A/B testing, and specialized analytics. In addition to standard information like page views, a CRM-powered website offers contact attribution reporting. This end-to-end measurement reveals which initiatives generate actual leads via the website. These leads then flow seamlessly into the CRM for the sales team to close deals.
CRM-powered websites also support highly customized experiences for site users. The CRM behind the website already holds the customer data. This data automatically synchronizes because it lives within one system as a single source of truth for both marketing pages and sales workflows. This default integration saves development teams time that they would otherwise spend building data pipelines.
Next Steps
Every situation is unique, and in some cases, a static site generator is the right decision. But if you are building a site for an organization and solving for the needs of developers and marketers, a modern CMS may be the way to go.
Options like CMS Hub offer all the benefits of a content management system while coming close to matching static site generators’ marquee features: page load speed, simple deployment, and stout reliability. But don’t take my word for it. Create a free CMS Hub developer test account and take it for a test drive.
You’ll often hear developers talking about “static” vs. “dynamic” sites, or you may have heard someone use the term Jamstack. What do these terms mean, and when does a “static” site become either a Jamstack or dynamic site? These questions sound simple, but they’re more nuanced than they appear. Let’s explore these terms to gain a deeper understanding of Jamstack.
Finding the line
What’s the difference between a chair and a stool? Most people will respond that a chair has four legs and back support, whereas a stool has three legs with no back support.
The more stool-like a chair becomes, the fewer people will unequivocally agree that it’s a chair. Eventually, we’ll reach a point where most people agree it’s a stool rather than a chair. It may sound like a silly exercise, but if we want to have a deep appreciation of what it means to be a chair, it’s a valuable one. We find out where the limits of a chair are for most people. We also build an understanding of the gray area beyond. Eventually, we get to the point where even the biggest die-hard chair fans concede and admit there’s a stool in front of them.
As interesting as chairs are, this is an article about website delivery technology. Let’s perform this same exercise for static, dynamic, and Jamstack websites.
At a high level
When you go to a website in your browser, there’s a lot going on behind the scenes:
Your browser performs a DNS lookup to turn the domain name into an IP address.
It requests an HTML file from that IP address.
The webserver sends back the requested file.
As the browser renders the web page, it may come across a reference for an asset, such as a CSS, JavaScript, or image file. The browser then performs a request for this asset.
This cycle continues until the browser has all the files for the web page. It’s not unusual for a single webpage to make 50+ requests.
For every request, the response from the webserver is always a static file, even on a dynamic website. You could save these files to a USB drive or email them to a friend just like any other file on your computer.
When comparing static and dynamic, we’re talking about what the webserver is doing. On a static site, the files the browser requests already exist on the webserver. The webserver sends them back exactly as they are. On a dynamic site, the response gets generated by software. This software might connect to a database to retrieve data, build a layout from template files, and add today’s date to the footer. It does all of this for every request.
That’s the foundational difference between static and dynamic websites.
Where does Jamstack fit in?
Static websites are restrictive. They’re great for informational websites; however, you can’t have any dynamic content or behavior by definition. Jamstack blurs the line between static and dynamic. The idea is to take advantage of all the things that make static websites awesome while enabling dynamic functionality where necessary.
The ‘stack’ in Jamstack is a misnomer. The truth is, Jamstack is not a stack at all. It’s a philosophy that exhibits a striking resemblance to The 5 Pillars of the AWS Well-Architected Framework. The ambiguity in the term has led to extensive community discussion about what it means to be Jamstack.
What is Jamstack?
Jamstack is a superset of static. But to truly understand Jamstack, let’s start with the seeds that led to the coining of the term.
In 2002, the late Aaron Swartz published a blog post titled “Bake, Don’t Fry.” While Aaron didn’t coin “Bake, Don’t Fry,” it’s the first time I can find someone recognizing the benefits of static websites while breaking out perceived constraints of the word.
I care about not having to maintain cranky AOLserver, Postgres and Oracle installs. I care about being able to back things up with scp. I care about not having to do any installation or configuration to move my site to a new server. I care about being platform and server independent.
If we trawl through history, we can find similar frustrations that led to Jamstack seeds:
Ben and Mena Trott created MovableType because of a [d]issatisfaction with existing blog CMSes — performance, stability.
I already knew a lot about what I didn’t want. I was tired of complicated blogging engines like WordPress and Mephisto. I wanted to write great posts, not style a zillion template pages, moderate comments all day long, and constantly lag behind the latest software release.
The past few years this blog has [been] powered by wordpress [sic] and drupal prior to that. Both are fine pieces of software, but over time I became increasingly disappointed with how they are both optimized for writing content even though significantly most common usage is reading content. Due to the need to load the PHP interpreter on each request it could never be considered fast and consumed a lot of memory on my VPS.
The same themes surface as you look at the origins of many early Jamstack tools:
Reduce complexity
Improve performance
Reduce vendor lock-in
Better workflows for developers
In the past 20 years, JavaScript has evolved from a language for adding small interactions to a website to becoming a platform for building rich web applications in the browser. In parallel, we’ve seen a movement of splitting large applications into smaller microservices. These two developments gave rise to a new way of building websites where you could have a static front-end decoupled from a dynamic back-end.
In 2015, Mathias Biilmann wanted to talk about this modern way of building websites but was struggling with the constricting definition of static:
We were in this space of modern static websites. That’s a really bad description of what we’re doing, right? And we kept having that problem that, talking to people about static sites, they would think about something very static. They would think about a brochure or something with no moving parts. A little one-pager or something like that.
To break out of these constraints, he coined the term “Jamstack” to talk about this new approach, and it caught on like wildfire. What was old static technology from the 90s became new again and pushed to new limits. Many developers caught on to the benefits of the Jamstack approach, which helped Jamstack grow into the thriving ecosystem it is today.
Aaron Swartz put it nicely, 13 years before Jamstack was coined: keep a strict separation between input (which needs dynamic code to be processed) and output (which can usually be baked). In other words, decouple the front end from the back end. Prerender content whenever possible. Layer on dynamic functionality where necessary. That’s the crux of Jamstack.
The reasons you might want to build a Jamstack site over a dynamic site come down to the six pillars of Jamstack:
Security
Jamstack sites have fewer moving parts and less surface area for malicious exploitation from outside sources.
Scale
Jamstack sites are static where possible. Static sites can live entirely in a CDN, making them much easier and cheaper to scale.
Performance
Serving a web page from a CDN rather than generating it from a centralized server on-demand improves the page load speed.
Maintainability
Static websites are simple. You need a webserver capable of serving files. With a dynamic site, you might need an entire team to keep a website online and fast.
Portability
Again, a static website is made up of files. As long as you find a webserver capable of serving website files, you can move your site anywhere.
Developer experience
Git workflows are a core part of software development today. With many legacy CMSs, it’s difficult to have Git development workflows. With a Jamstack site, everything is a file making it seamless to use Git.
Let’s use these pillars to evaluate Jamstack use cases.
Where is the edge of static and Jamstack?
Now that we have the basics of static and Jamstack, let’s dive in and see what lies at the edge of each definition. We have four categories each edge case can fall under.
Static – This strictly adheres to the definition of static.
Basically static – While not precisely static, most people would call it a static site.
Jamstack – A static frontend decoupled from a dynamic backend.
Dynamic – Renders web pages on-demand.
Many of these use cases can be placed in multiple categories. In this exercise, we’re putting them in the most restrictive category they fit.
JavaScript interaction: Static
Let’s start with an easy one. I have a static site that uses JavaScript to create a slideshow of images.
The HTML page, JavaScript, and images are all static files. All of the HTML manipulation required for the slideshow to function happens in the browser with no external influence.
Cookies: Static
I have a static site that adds a banner to the top of the page using JavaScript if a cookie exists. A cookie is just a header. The rest of the files are static.
External assets: Basically Static
On a web page, we can load images or JavaScript from an external source. This external source may generate these assets dynamically on request. Would that mean we have a dynamic site?
Most people, including myself, would consider this a static site because it basically is. But if we’re strict to the definition, it doesn’t fit the bill. Having any part of the page generated dynamically defiles the sacred harmony of static.
iFrames: Basically Static
An inline frame allows you to embed an HTML page within another HTML page. iFrames are commonly used for embedding Google Maps, Facebook Like buttons, and YouTube videos on a webpage.
Again, most people would still consider this a static site. However, these embeds are almost always from a dynamically-generated source.
Forms: Basically Static
A static site can undoubtedly have a form on it. The dilemma comes when you submit it. If you want to do something with the data, you almost certainly need a dynamic back-end. There are plenty of form submission services you can use as the action for your form.
I can see two ways to argue this:
You’re submitting a form to an external website, and it happens to redirect back afterward. This separation means the definition of static remains intact.
If this external service is a core workflow on your website, the definition of static no longer works.
In reality, most people would still consider this a static site.
Ajax requests: Jamstack
An Ajax request allows a developer to request data from an external source without reloading the page. We’re in the same boat as the above situations of relying on a third party. It’s possible the endpoint for the Ajax call is a static JSON file, but it’s more likely that it’s dynamically-generated.
The nature of how Ajax data is typically used on a website pushes it past a static website into Jamstack territory. It fits well with Jamstack as you can have a site where you prerender everything you can, then use Ajax to layer on any dynamic functionality or content on the site.
Embedded eCommerce: Jamstack
There are services that allow you to add eCommerce, even to static websites. Behind the scenes, they’re essentially making Ajax requests to manage items in a shopping cart and collect payment details.
Single page application (SPA): Jamstack
The title alone puts it out of static site contention. A SPA uses Ajax calls to request data. The presentation layer lives entirely in the front end, making it Jamtastic.
Ajax call to a serverless function: Jamstack
Whether the endpoint of an Ajax call is serverless with something like AWS Lambda, goes to your Kubernetes clustered Node.js back-end, or a simple PHP back-end, it doesn’t matter. The key for Jamstack is the front end is independent of the back end.
Reverse proxy in front of a webserver: Static
Adding a reverse proxy in front of the webserver for a static site must make it dynamic, right? Well, not so fast. While a proxy is software that adds a dynamic element to the network, as long as the file on the server is precisely the file the browser receives, it’s still static.
A webserver, modem, and every piece of network infrastructure in between are running software. If adding a proxy makes a static site dynamic, then nothing is static.
CDN: Static
A CDN is a globally-distributed reverse proxy, so it falls into the same category as a reverse proxy. CDNs often add their own headers. This still doesn’t impact the prestigious static status as the headers aren’t part of the file sitting on the server’s hard drive.
CDN in front of a dynamic site with a 200-year cache expiration time: Dynamic
OK, 200 years is a long expiry time, I’ll give you that. There are two reasons this is neither a static nor Jamstack site:
The first request isn’t cached, so it generates on demand.
CDNs aren’t designed for persistent storage. If, after one week, you’ve only had five hits on your website, the CDN might purge your web page from the cache. It can always retrieve the web page from the origin server, which would dynamically render the response.
WordPress with a static output: Static
Using a WordPress plugin like WP2Static lets you create and manage your website in WordPress and output a static website whenever something changes.
When you do this, the files the browser requests already exist on the webserver, making it a static website—a subtle but important distinction from having a CDN in front of a dynamic site.
Edge computing: Dynamic
Many companies are now offering the ability to run dynamic code at the edge of a CDN. It’s a powerful concept because you can have dynamic functionality without adding latency to the user. You can even use edge computation to manipulate HTML before sending it to the client.
It comes down to how you’re using edge functions. You could use an edge function to add a header to particular requests. I would argue this is still a static site. Push much beyond this, where you’re manipulating the HTML, and you’ve crossed the dynamic boundary.
It’s hard to argue it’s a Jamstack site as it doesn’t adhere to some of the fundamental benefits: scale, maintainability, and portability. Now, you have a piece of your core infrastructure that’s changing HTML on every request, and it will only work on that particular hosting infrastructure. That’s getting pretty far away from the blissful simplicity of a static site.
One of the elegant things about Jamstack is the front end and back end are decoupled. The backend is made up of APIs that output data. They don’t know or care how the data is used. The front end is the presentation layer. It knows where to get dynamic data from and how to render it. When you break this separation of concerns, you’ve crossed into a dynamic world.
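To make the header-only case (the one that arguably stays static) concrete, here is a rough sketch of an edge function written against a service-worker-style runtime such as Cloudflare Workers. The header name is made up, and the origin file passes through untouched:

```js
// Service-worker-style edge runtime (Cloudflare Workers, for example).
// The static file from the origin passes through untouched; only a
// response header is added. "x-experiment-group" is a made-up header.
addEventListener("fetch", (event) => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const originResponse = await fetch(request);
  // Copy the response so its headers become mutable.
  const response = new Response(originResponse.body, originResponse);
  response.headers.set("x-experiment-group", "b");
  return response;
}
```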
Dynamic Persistent Rendering (DPR): Dynamic
DPR is a strategy to reduce long build times on large static site generator (SSG) sites. The idea is the SSG builds a subset of the most popular pages. For the rest of the pages, the SSG builds them on-demand the first time they’re requested and saves them to persistent storage. After the initial request, the page behaves precisely like the rest of the built static pages.
Long build times limit large-scale use cases from choosing Jamstack. If all the SSG tooling were GoLang-based, we probably wouldn’t need DPR. However, that’s not the direction most Jamstack tooling has taken, and build performance can be excruciatingly long on big websites.
DPR is a means to an end and a necessity for Jamstack to grow. While it allows you to use Jamstack workflows on massive websites, ironically, I don’t think you can call a site using DPR a Jamstack site. Running software on-demand to generate a web page certainly sounds dynamicy. After the first request, a page served using DPR is a static page which makes DPR “more static” than putting a CDN in front of a dynamic site. However, it’s still a dynamic site as there isn’t a separation between frontend and backend, and it’s not portable, one of the pillars of a Jamstack site.
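In spirit, a DPR endpoint behaves something like the sketch below. This isn't any particular vendor's API; the in-memory Map stands in for real persistent storage, and renderPage stands in for the SSG's page renderer:

```js
// A hypothetical on-demand builder, shown only to illustrate the DPR flow.
// The Map stands in for persistent storage; renderPage for the SSG renderer.
const store = new Map();
const renderPage = async (path) => `<html><body>Rendered ${path}</body></html>`;

async function handleRequest(path) {
  // 1. Serve the page from "persistent storage" if it was built before.
  if (store.has(path)) return store.get(path);

  // 2. Otherwise build it now, just as the SSG would have at build time...
  const html = await renderPage(path);

  // 3. ...and persist it so every later request is served like a static page.
  store.set(path, html);
  return html;
}
```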
Incremental Static Regeneration (ISR): Dynamic
ISR is a similar but subtly different strategy to DPR to reduce long build times on large SSG sites. The difference is you can revalidate individual pages periodically to mimic a dynamic site without doing an entire site build.
Requests to a page without a cached version fall back to a stale version of that page or a generic loading page.
Again, it’s an exciting technology that expands what you can do with Jamstack workflows, but dynamically generating a page on-demand sounds like something a dynamic site would do.
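For reference, this is roughly what ISR looks like in Next.js: a page built with getStaticProps can be revalidated on a timer, and getStaticPaths decides what happens to pages that weren't prerendered. The data-fetching details here are made up:

```js
// pages/posts/[slug].js (a rough ISR sketch, not a complete page)
export async function getStaticPaths() {
  return {
    paths: [], // prerender nothing up front
    fallback: true, // uncached pages show a loading state while they build
  };
}

export async function getStaticProps({ params }) {
  const post = await fetchPost(params.slug); // fetchPost is a made-up helper
  return {
    props: { post },
    revalidate: 60, // rebuild this page at most once every 60 seconds
  };
}
```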
Flat file CMS: Dynamic
A flat file CMS uses text files for content rather than a database. While flat file CMSs remove a dynamic element from the stack, it’s still dynamically rendering the response.
The lines have been drawn
Exploring and debating these edge cases gives us a better understanding of the limits of all of these terms. The point of this exercise isn’t to be dogmatic about creating static or Jamstack websites. It’s to give us a common language to talk about the tradeoffs you make as you cross the boundary from one concept to another.
There’s absolutely nothing wrong with tradeoffs either. Not everything can be a purely static website. In many cases, the trade-offs make sense. For example, let’s say the front end needs to know the country of the visitor. There are two ways to do this:
On page load, perform an Ajax call to query the country from an API. (Jamstack)
Use an edge function to dynamically insert a country code into the HTML on response. (Dynamic)
If having the country code is a nice-to-have and the web page doesn’t need it immediately, then the first approach is a good option. The page can be static and the API call can fail gracefully if it doesn’t work. However, if the country code is required for the page, dynamically adding it using an edge function might make more sense. It’ll be faster as you don’t need to perform a second request/response cycle.
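A sketch of the first approach might look like this. The geolocation endpoint and the element ID are invented; the point is that the page stays fully usable even if the call never resolves:

```js
// Enhance the static page after load; fail silently if the API is down.
// "https://api.example.com/geo" and "#country-note" are hypothetical.
fetch("https://api.example.com/geo")
  .then((response) => response.json())
  .then(({ countryCode }) => {
    const note = document.querySelector("#country-note");
    if (note && countryCode) {
      note.textContent = `Prices shown for ${countryCode}`;
    }
  })
  .catch(() => {
    // Nice-to-have only; the static page works fine without it.
  });
```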
The key is understanding the problem you’re solving and thinking through the trade-offs you’re making with different approaches. You might end up with the majority of your site Jamstack and a portion dynamic. That’s totally fine and might be necessary for your use case. Typically, the closer you can get to static, the faster, more secure, and more scalable your site will be.
This is only the beginning of the discussion, and I’d love to hear your take. Where would you draw the lines? What do static and Jamstack mean to you? Are you sitting on a chair or stool right now?
There are so many static site generators (SSGs). It’s overwhelming trying to decide where to start. While an abundance of helpful articles may help wade through the (popular) options, they don’t magically make the decision easy.
I’ve been on a quest to help make that decision easier. A colleague of mine built a static site generator evaluation cheatsheet. It provides a really nice snapshot across numerous popular SSG choices. What’s missing is how they actually perform in action.
One feature every static site generator has in common is that it takes input data, runs it through a templating engine, and outputs HTML files. We typically refer to this process as The Build.
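Stripped of all framework specifics, The Build boils down to something like the sketch below. The directory names are placeholders and the Markdown conversion is deliberately naive; a real SSG uses a proper parser and templating engine:

```js
// build.js: the essence of an SSG build, not a real generator.
const fs = require("fs");
const path = require("path");

// Extremely naive Markdown-to-HTML stand-in, just for the sketch.
const markdownToHtml = (md) =>
  md
    .split(/\n\n+/)
    .map((block) => `<p>${block.trim()}</p>`)
    .join("\n");

const inputDir = "content"; // hypothetical source directory
const outputDir = "public"; // hypothetical output directory

fs.mkdirSync(outputDir, { recursive: true });

for (const file of fs.readdirSync(inputDir)) {
  if (!file.endsWith(".md")) continue;
  const markdown = fs.readFileSync(path.join(inputDir, file), "utf8");
  const html = `<!DOCTYPE html><html><body>${markdownToHtml(markdown)}</body></html>`;
  fs.writeFileSync(path.join(outputDir, file.replace(/\.md$/, ".html")), html);
}
```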
There’s too much nuance, context, and variability needed to compare how various SSGs perform during the build process to display on a spreadsheet — and thus begins our test to benchmark build times against popular static site generators.
This isn’t just to determine which SSG is fastest. Hugo already has that reputation. I mean, they say it on their website — The world’s fastest framework for building websites — so it must be true!
This is an in-depth comparison of build times across multiple popular SSGs and, more importantly, an analysis of why those build times look the way they do. Blindly choosing the fastest or discrediting the slowest would be a mistake. Let’s find out why.
The tests
The testing process is designed to start simple — with just a few popular SSGs and a simple data format. A foundation on which to expand to more SSGs and more nuanced data. For today, the test includes six popular SSG choices: Eleventy, Gatsby, Hugo, Jekyll, Next, and Nuxt.
Each test used the following approach and conditions:
The data source for each build is a set of Markdown files with a randomly-generated title (as frontmatter) and a body containing three paragraphs of content. (A sketch of how such files might be generated follows this list.)
The content contains no images.
Tests are run in series on a single machine, making the actual values less relevant than the relative comparison among the lot.
The output is plain text on an HTML page, run through the default starter, following each SSG’s respective guide on getting started.
Each test is a cold run. Caches are cleared and Markdown files are regenerated for every test.
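As a rough idea of what generating those Markdown fixtures might look like, here is a sketch. The file names, word list, and target directory are all made up:

```js
// generate-fixtures.js: a sketch for creating random Markdown test files.
const fs = require("fs");
const path = require("path");

const outDir = "content"; // hypothetical target directory
const count = Number(process.argv[2] || 1000); // how many files to create

const words = ["static", "site", "generator", "build", "markdown", "test"];
const randomWords = (n) =>
  Array.from({ length: n }, () => words[Math.floor(Math.random() * words.length)]).join(" ");

fs.mkdirSync(outDir, { recursive: true });

for (let i = 0; i < count; i++) {
  const title = randomWords(5);
  const paragraphs = Array.from({ length: 3 }, () => randomWords(60)).join("\n\n");
  fs.writeFileSync(
    path.join(outDir, `post-${i}.md`),
    `---\ntitle: "${title}"\n---\n\n${paragraphs}\n`
  );
}
```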
These tests are benchmark tests: they use basic Markdown files and output unstyled HTML.
In other words, the output is technically a website that could be deployed to production, though it’s not really a real-world scenario. Instead, this provides a baseline comparison among these frameworks. The choices you make as a developer using one of these frameworks will adjust the build times in various ways (usually by slowing it down).
For example, one way in which this doesn’t represent the real world is that we’re testing cold builds. In the real world, if you have 10,000 Markdown files as your data source and are using Gatsby, you’re going to make use of Gatsby’s cache, which will greatly reduce build times (by as much as half).
The same can be said for incremental builds, which are related to warm versus cold runs in that they only build the file that changed. We’re not testing the incremental approach in these tests (at this time).
The two tiers of static site generators
Before we do that, let’s first consider that there are really two tiers of static site generators. Let’s call them basic and advanced.
Basic generators (which are not basic under the hood) are essentially a command-line interface (CLI) that takes in data and outputs HTML, and can often be extended to process assets (which we’re not doing here).
Advanced generators offer something in addition to outputting a static site, such as server-side rendering, serverless functions, and framework integration. They tend to be configured to be more dynamic right out of the box.
I intentionally chose three of each type of generator in this test. Falling into the basic bucket would be Eleventy, Hugo, and Jekyll. The other three are based on a front-end framework and ship with various amounts of tooling. Gatsby and Next are built on React, while Nuxt is built atop Vue.
Basic generators: Eleventy, Hugo, Jekyll
Advanced generators: Gatsby, Next, Nuxt
My hypothesis
Let’s apply the scientific method to this approach because science is fun (and useful)!
My hypothesis is that if an SSG is advanced, then it will perform slower than a basic SSG. I believe the results will reflect that because advanced SSGs have more overhead than basic SSGs. Thus, we’re likely to see the two groups of generators, basic and advanced, clustered separately in the results, with the basic generators moving significantly quicker.
Let me expand on that hypothesis a bit.
Linear(ish) and fast
Hugo and Eleventy will fly with smaller datasets. They are (relatively) simple processes in Go and Node.js, respectively, and their build output will reflect that. While both SSGs will slow down as the number of files grows, I expect them to remain at the top of the class, though Eleventy may be a little less linear at scale, simply because Go tends to be more performant than Node.
Slow, then fast, but still slow
The advanced, or framework-bound, SSGs will start out and appear slow. I suspect even a single-file test will show a significant difference — milliseconds for the basic ones, compared to several seconds for Gatsby, Next, and Nuxt.
The framework-based SSGs are each built using webpack, which brings a significant amount of overhead along, regardless of the amount of content being processed. That’s the baggage we sign up for in using those tools (more on this later).
But, as we add thousands of files, I suspect we’ll see the gap between the buckets close, though the advanced SSG group will stay farther behind by some significant amount.
In the advanced SSG group, I expect Gatsby to be the fastest, only because it doesn’t have a server-side component to worry about — but that’s just a gut feeling. Next and Nuxt may have optimized this to the point where, if we’re not using that feature, it won’t affect build times. And I suspect Nuxt will beat out Next, only because there is a little less overhead with Vue, compared to React.
Jekyll: The odd child
Ruby is infamously slow. It’s gotten more performant over time, but I don’t expect it to scale with Node, and certainly not with Go. And yet, at the same time, it doesn’t have the baggage of a framework.
At first, I think we’ll see Jekyll as pretty speedy, perhaps even indistinguishable from Eleventy. But as we get to the thousands of files, the performance will take a hit. My gut feeling is that there may exist a point at which Jekyll becomes the slowest of all six. We’ll push up to the 100,000 mark to see for sure.
After many iterations of building out a foundation on which these tests could be run, I ended up with a series of 10 runs in three different datasets:
Base: A single file, to compare the base build times
Small sites: From 1 to 1,024 files, doubling the count each time (to make it easier to determine whether the SSGs scaled linearly)
Large sites: From 1,000 to 64,000 files, doubling on each run. I originally wanted to go up to 128,000 files, but hit some bottlenecks with a few of the frameworks. 64,000 ended up being enough to produce an idea of how the players would scale with even larger sites.
Summarizing the results
A few results were surprising to me, while others were expected. Here are the high-level points:
As expected, Hugo was the fastest, regardless of size. What I didn’t expect is that it wasn’t even close to any other generator, even at base builds (nor was it linear, but more on that below.)
The basic and advanced groups of SSGs are quite obvious when looking at the results for small sites. That was expected, but it was surprising to see that Next was faster than Jekyll at 32,000 files, and faster than both Eleventy and Jekyll at 64,000 files. Also surprising was that Jekyll performed faster than Eleventy at 64,000 files.
None of the SSGs scale linearly. Next was the closest. Hugo has the appearance of being linear, but only because it’s so much faster than the rest.
I figured Gatsby would be the fastest among the advanced frameworks, and suspected it would be the one to get closest to the basic group. But Gatsby turned out to be the slowest, producing the most dramatic curve.
While it wasn’t specifically mentioned in the hypothesis, the scale of differences was larger than I would have imagined. At one file, Hugo was approximately 170 times faster than Gatsby. But at 64,000 files, it was closer — about 25 times faster. That means that, while Hugo remains the fastest, it actually has the most dramatic exponential growth shape among the lot. It just looks linear because of the scale of the chart.
What does it all mean?
When I shared my results with the creators and maintainers of these SSGs, I generally received the same message. To paraphrase:
The generators that take more time to build do so because they are doing more. They are bringing more to the table for developers to work with, whereas the faster sites (i.e. the “basic” tools) focus their efforts largely in converting templates into HTML files.
I agree.
To sum it up: Scaling Jamstack sites is hard.
The challenges that will present themselves to you, Developer, as you scale a site will vary depending on the site you’re trying to build. That data isn’t captured here because it can’t be — every project is unique in some way.
What it really comes down to is your level of tolerance for waiting in exchange for developer experience.
For example, if you’re going to build a large, image-heavy site with Gatsby, you’re going to pay for it with build times, but you’re also given an immense network of plugins and a foundation on which to build a solid, organized, component-based website. Do the same with Jekyll, and it’s going to take a lot more effort to stay organized and efficient throughout the process, though your builds may run faster.
At work, I typically build sites with Gatsby (or Next, depending on the level of dynamic interactivity required). We’ve worked with the Gatsby framework to build a core on which we can rapidly build highly-customized, image-rich websites, packed with an abundance of components. Our builds become slower as the sites scale, but that’s when we get creative by implementing micro front-ends, offloading image processing, implementing content previews, along with many other optimizations.
On the side, I tend to prefer working with Eleventy. It’s usually just me writing code, and my needs are much simpler. (I like to think of myself as a good client for myself.) I feel I have more control over the output files, which makes it easier for me to get 💯s on client-side performance, and that’s important to me.
In the end, this isn’t only about what is fast or slow. It’s about what works best for you and how long you’re willing to wait.
Wrapping up
This is just the beginning! The goal of this effort was to create a foundation on which we can, together, benchmark relative build times across popular static site generators.
What ideas do you have? What holes can you poke in the process? What can we do to tighten up these tests? How can we make them more like real-world scenarios? Should we offload the processing to a dedicated machine?
These are the questions I’d love for you to help me answer. Let’s talk about it.
Just like there is a movement to make more websites (and more of websites) from pre-rendered static files (Jamstack), so too might we consider moving content-based APIs to be static. Sean C Davis:
A static API is simply a collection of flat JSON files that live on a content delivery network (CDN). It doesn’t perform any action other than delivering content (static JSON files) to the requesting user.
But that doesn’t mean a static API is simple. And every file doesn’t have to be manually generated or updated. A static API can still use a database and it can still pull data from external services. In other words, it can be dynamically generated, but it is statically delivered.
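That "dynamically generated, statically delivered" idea might look something like this at build time. The data source and output paths are invented for the sketch; the resulting JSON files would simply be uploaded to a CDN:

```js
// build-api.js: a sketch of "dynamically generated, statically delivered."
const fs = require("fs");
const path = require("path");

// Stand-in for the real dynamic source: a database, a CMS, another API...
async function fetchPostsFromSomewhere() {
  return [
    { id: 1, title: "Hello", body: "..." },
    { id: 2, title: "World", body: "..." },
  ];
}

async function build() {
  const posts = await fetchPostsFromSomewhere();
  const apiDir = path.join("public", "api", "posts");
  fs.mkdirSync(apiDir, { recursive: true });

  // An index file plus one flat JSON file per post: that's the whole "API."
  fs.writeFileSync(
    path.join("public", "api", "posts.json"),
    JSON.stringify(posts.map(({ id, title }) => ({ id, title })))
  );
  for (const post of posts) {
    fs.writeFileSync(path.join(apiDir, `${post.id}.json`), JSON.stringify(post));
  }
}

build();
```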
The use cases are probably somewhat limited, but I still like it. In a sense, all RSS feeds (or, more directly, JSON feeds) could be done this way, and many are.