Tag: JAMstack

Jamstack Conf

Here’s an important detail: it’s free!

Jamstack Conf Virtual is coming up October 6th and 7th, 2020. The sessions are on October 6th. That’s the free part (register here). Then on October 7th there are a variety of workshops (they all look great to me) that are $100 USD each. That’s the classic conference one-two punch. Sessions are for getting a broad sense of what’s happening and will very likely open your eyes to some new concepts; workshops are for deep learning and walking away with some new skills.

The speaker lineup is top-notch!

I’ve been to several Jamstack Confs myself, and I’m a fan. Some of my favorite conferences ever are ones focused around a technological idea at the rise of its relevance. That’s exactly what’s happening here and I’m excited to check it out. You can even watch videos from the 2019 conference to see just how awesome an experience it is.


State of Jamstack 2020: Data Deep Dive

(This is a sponsored post.)

The Jamstack, a modern approach to building websites and apps, delivers better performance, higher security, lower cost of scaling, and a better developer experience. But how popular is it among developers worldwide, and what do they love and hate about it?

We at Kentico Kontent decided to take a closer look at the current state of Jamstackʼs adoption and use. Surveying more than 500 developers in four countries, we wanted to find out how long they’ve been working with the Jamstack, what they’re using this architecture for, where they typically deploy and host their projects, and more.

Based on the findings, we created The State of Jamstack 2020 Report that provides an overview of the results and comments on the most interesting facts. Now we’ve released something just as exciting:

Our free interactive data visualization page lets you dive into the data from our survey and discover how the web developers’ answers varied according to age, gender, primary programming language, and other factors.

Do you know where the majority of US developers working for large companies store their content? Or what’s their favorite static site generator? The answers may surprise you—click below to find out:


Going Jamstack with React, Serverless, and Airtable

The best way to learn is to build. Let’s learn about this hot new buzzword, Jamstack, by building a site with React, Netlify (Serverless) Functions, and Airtable. One of the ingredients of Jamstack is static hosting, but that doesn’t mean everything on the site has to be static. In fact, we’re going to build an app with full-on CRUD capability, just like a tutorial for any web technology with more traditional server-side access might.

Why these technologies, you ask?

You might already know this, but the “JAM” in Jamstack stands for JavaScript, APIs, and Markup. These technologies individually are not new, so the Jamstack is really just a new and creative way to combine them. You can read more about it over at the Jamstack site.

One of the most important benefits of Jamstack is ease of deployment and hosting, which heavily influence the technologies we are using. By incorporating Netlify Functions (for backend CRUD operations with Airtable), we will be able to deploy our full-stack application to Netlify. The simplicity of this process is the beauty of the Jamstack.

As far as the database goes, I chose Airtable because I wanted something that was easy to get started with. I also didn’t want to get bogged down in technical database details, so Airtable fits perfectly. Here are a few of the benefits of Airtable:

  1. You don’t have to deploy or host a database yourself
  2. It comes with an Excel-like GUI for viewing and editing data
  3. There’s a nice JavaScript SDK

What we’re building

For context going forward, we are going to build an app that you can use to track online courses you want to take. Personally, I take lots of online courses, and sometimes it’s hard to keep up with the ones in my backlog. This app will let us track those courses, similar to a Netflix queue.

 

One of the reasons I take lots of online courses is because I make courses. In fact, I have a new one available where you can learn how to build secure and production-ready Jamstack applications using React and Netlify (Serverless) Functions. We’ll cover authentication, data storage in Airtable, Styled Components, Continuous Integration with Netlify, and more! Check it out  →

Airtable setup

Let me start by clarifying that Airtable calls their databases “bases.” So, to get started with Airtable, we’ll need to do a couple of things.

  1. Sign up for a free account
  2. Create a new “base”
  3. Define a new table for storing courses

Next, let’s create a new database. We’ll log into Airtable, click on “Add a Base” and choose the “Start From Scratch” option. I named my new base “JAMstack Demos” so that I can use it for different projects in the future.

Next, let’s click on the base to open it.

You’ll notice that this looks very similar to an Excel or Google Sheets document. This is really nice for being able to work with data right inside of the dashboard. There are a few columns already created, but we’ll add our own. Here are the columns we need and their types:

  1. name (single line text)
  2. link (single line text)
  3. tags (multiple select)
  4. purchased (checkbox)

We should add a few tags to the tags column while we’re at it. I added “node,” “react,” “jamstack,” and “javascript” as a start. Feel free to add any tags that make sense for the types of classes you might be interested in.

I also added a few rows of data in the name column based on my favorite online courses:

  1. Build 20 React Apps
  2. Advanced React Security Patterns
  3. React and Serverless

The last thing to do is rename the table itself. It’s called “Table 1” by default. I renamed it to “courses” instead.

Locating Airtable credentials

Before we get into writing code, there are a couple of pieces of information we need to get from Airtable. The first is your API Key. The easiest way to get this is to go to your account page and look in the “Overview” section.

Next, we need the ID of the base we just created. I would recommend heading to the Airtable API page because you’ll see a list of your bases. Click on the base you just created, and you should see the base ID listed. The documentation for the Airtable API is really handy and has more detailed instructions for finding the ID of a base.

Lastly, we need the table’s name. Again, I named mine “courses” but use whatever you named yours if it’s different.

Project setup

To help speed things along, I’ve created a starter project for us in the main repository. You’ll need to do a few things to follow along from here:

  1. Fork the repository by clicking the fork button
  2. Clone the new repository locally
  3. Check out the starter branch with git checkout starter

There are lots of files already there. The majority of the files come from a standard create-react-app application with a few exceptions. There is also a functions directory which will host all of our serverless functions. Lastly, there’s a netlify.toml configuration file that tells Netlify where our serverless functions live. Also in this config is a redirect that simplifies the path we use to call our functions. More on this soon.

The last piece of the setup is to incorporate environment variables that we can use in our serverless functions. To do this, install the dotenv package:

npm install dotenv

Then, create a .env file in the root of the repository with the following. Make sure to use your own API key, base ID, and table name that you found earlier.

AIRTABLE_API_KEY=<YOUR_API_KEY>
AIRTABLE_BASE_ID=<YOUR_BASE_ID>
AIRTABLE_TABLE_NAME=<YOUR_TABLE_NAME>

Now let’s write some code!

Setting up serverless functions

To create serverless functions with Netlify, we need to create a JavaScript file inside of our /functions directory. There are already some files included in this starter directory. Let’s look in the courses.js file first.

const formattedReturn = require('./formattedReturn');
const getCourses = require('./getCourses');
const createCourse = require('./createCourse');
const deleteCourse = require('./deleteCourse');
const updateCourse = require('./updateCourse');

exports.handler = async (event) => {
  return formattedReturn(200, 'Hello World');
};

The core part of a serverless function is the exports.handler function. This is where we handle the incoming request and respond to it. In this case, we are accepting an event parameter which we will use in just a moment.

We are returning a call inside the handler to the formattedReturn function, which makes it a bit simpler to return a status and body data. Here’s what that function looks like for reference.

module.exports = (statusCode, body) => {
  return {
    statusCode,
    body: JSON.stringify(body),
  };
};

Notice also that we are importing several helper functions to handle the interaction with Airtable. We can decide which one of these to call based on the HTTP method of the incoming request.

  • HTTP GET → getCourses
  • HTTP POST → createCourse
  • HTTP PUT → updateCourse
  • HTTP DELETE → deleteCourse

Let’s update this function to call the appropriate helper function based on the HTTP method in the event parameter. If the request doesn’t match one of the methods we are expecting, we can return a 405 status code (method not allowed).

exports.handler = async (event) => {
  if (event.httpMethod === 'GET') {
    return await getCourses(event);
  } else if (event.httpMethod === 'POST') {
    return await createCourse(event);
  } else if (event.httpMethod === 'PUT') {
    return await updateCourse(event);
  } else if (event.httpMethod === 'DELETE') {
    return await deleteCourse(event);
  } else {
    return formattedReturn(405, {});
  }
};

Updating the Airtable configuration file

Since we are going to be interacting with Airtable in each of the different helper files, let’s configure it once and reuse it. Open the airtable.js file.

In this file, we want to get a reference to the courses table we created earlier. To do that, we create a reference to our Airtable base using the API key and the base ID. Then, we use the base to get a reference to the table and export it.

require('dotenv').config();
var Airtable = require('airtable');
var base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(
  process.env.AIRTABLE_BASE_ID
);
const table = base(process.env.AIRTABLE_TABLE_NAME);
module.exports = { table };

Getting courses

With the Airtable config in place, we can now open up the getCourses.js file and retrieve courses from our table by calling table.select().firstPage(). The Airtable API uses pagination so, in this case, we are specifying that we want the first page of records (which is 20 records by default).

const courses = await table.select().firstPage();
return formattedReturn(200, courses);

Just like with any async/await call, we need to handle errors. Let’s surround this snippet with a try/catch.

try {
  const courses = await table.select().firstPage();
  return formattedReturn(200, courses);
} catch (err) {
  console.error(err);
  return formattedReturn(500, {});
}

Airtable returns back a lot of extra information in its records. I prefer to simplify these records to only the record ID and the values for each of the table columns we created above. These values are found in the fields property. To do this, I used an Array map to format the data the way I want.

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  try {
    const courses = await table.select().firstPage();
    const formattedCourses = courses.map((course) => ({
      id: course.id,
      ...course.fields,
    }));
    return formattedReturn(200, formattedCourses);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};
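One caveat worth noting: firstPage() only ever returns the first page of records. If the table grows past that, the Airtable SDK can walk the remaining pages for us. Here’s a minimal sketch of that variation (not part of the starter project, just an option to keep in mind):

// all() follows Airtable's pagination and resolves with every record in the table
const allCourses = await table.select().all();
return formattedReturn(200, allCourses);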

How do we test this out? Well, the netlify-cli provides us a netlify dev command to run our serverless functions (and our front-end) locally. First, install the CLI:

npm install -g netlify-cli

Then, run the netlify dev command inside of the directory.

This beautiful command does a few things for us:

  • Runs the serverless functions
  • Runs a web server for your site
  • Creates a proxy for the front end and serverless functions to talk to each other on port 8888

Let’s open up http://localhost:8888/api/courses in the browser to see if this works:

We are able to use /api/* for our API because of the redirect configuration in the netlify.toml file.
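For reference, the redirect in netlify.toml looks something like this. Treat the exact values as an assumption, since the starter branch may differ slightly, but the idea is to map /api/* onto Netlify’s function paths:

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200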

If successful, we should see our data displayed in the browser.

Creating courses

Let’s add the functionality to create a course by opening up the createCourse.js file. We need to grab the properties from the incoming POST body and use them to create a new record by calling table.create().

The incoming event.body comes in as a regular string, which means we need to parse it to get a JavaScript object.

const fields = JSON.parse(event.body);

Then, we use those fields to create a new course. Notice that the create() function accepts an array which allows us to create multiple records at once.

const createdCourse = await table.create([{ fields }]);

Then, we can return the createdCourse:

return formattedReturn(200, createdCourse);

And, of course, we should wrap things with a try/catch:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const fields = JSON.parse(event.body);
  try {
    const createdCourse = await table.create([{ fields }]);
    return formattedReturn(200, createdCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Since we can’t perform a POST, PUT, or DELETE directly in the browser web address (like we did for the GET), we need to use a separate tool for testing our endpoints from now on. I prefer Postman, but I’ve heard good things about Insomnia as well.

Inside of Postman, I need the following configuration.

  • url: localhost:8888/api/courses
  • method: POST
  • body: JSON object with name, link, and tags

After running the request, we should see the new course record is returned.

We can also check the Airtable GUI to see the new record.

Tip: Copy and paste the ID from the new record to use in the next two functions.

Updating courses

Now, let’s turn to updating an existing course. From the incoming request body, we need the id of the record as well as the other field values.

We can specifically grab the id value using object destructuring, like so:

const {id} = JSON.parse(event.body);

Then, we can use the spread operator to grab the rest of the values and assign it to a variable called fields:

const {id, ...fields} = JSON.parse(event.body);

From there, we call the update() function which takes an array of objects (each with an id and fields property) to be updated:

const updatedCourse = await table.update([{id, fields}]);

Here’s the full file with all that together:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id, ...fields } = JSON.parse(event.body);
  try {
    const updatedCourse = await table.update([{ id, fields }]);
    return formattedReturn(200, updatedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

To test this out, we’ll turn back to Postman for the PUT request:

  • url: localhost:8888/api/courses
  • method: PUT
  • body: JSON object with id (the id from the course we just created) and the fields we want to update (name, link, and tags)

I decided to append “Updated!!!” to the name of a course once it’s been updated.

We can also see the change in the Airtable GUI.

Deleting courses

Lastly, we need to add delete functionality. Open the deleteCourse.js file. We will need to get the id from the request body and use it to call the destroy() function.

const { id } = JSON.parse(event.body);
const deletedCourse = await table.destroy(id);

The final file looks like this:

const { table } = require('./airtable');
const formattedReturn = require('./formattedReturn');

module.exports = async (event) => {
  const { id } = JSON.parse(event.body);
  try {
    const deletedCourse = await table.destroy(id);
    return formattedReturn(200, deletedCourse);
  } catch (err) {
    console.error(err);
    return formattedReturn(500, {});
  }
};

Here’s the configuration for the Delete request in Postman.

  • url: localhost:8888/api/courses
  • method: DELETE
  • body: JSON object with an id (the same id from the course we just updated)

And, of course, we can double-check that the record was removed by looking at the Airtable GUI.

Displaying a list of courses in React

Whew, we have built our entire back end! Now, let’s move on to the front end. The majority of the code is already written. We just need to write the parts that interact with our serverless functions. Let’s start by displaying a list of courses.

Open the App.js file and find the loadCourses function. Inside, we need to make a call to our serverless function to retrieve the list of courses. For this app, we are going to make an HTTP request using fetch, which is built right in.

Thanks to the netlify dev command, we can make our request using a relative path to the endpoint. The beautiful thing is that this means we don’t need to make any changes after deploying our application!

const res = await fetch('/api/courses');
const courses = await res.json();

Then, store the list of courses in the courses state variable.

setCourses(courses)

Put it all together and wrap it with a try/catch:

const loadCourses = async () => {
  try {
    const res = await fetch('/api/courses');
    const courses = await res.json();
    setCourses(courses);
  } catch (error) {
    console.error(error);
  }
};

Open up localhost:8888 in the browser and we should see our list of courses.

Adding courses in React

Now that we have the ability to view our courses, we need the functionality to create new courses. Open up the CourseForm.js file and look for the submitCourse function. Here, we’ll need to make a POST request to the API and send the inputs from the form in the body.

The JavaScript Fetch API makes GET requests by default, so to send a POST, we need to pass a configuration object with the request. This options object will have these two properties.

  1. method → POST
  2. body → a stringified version of the input data

await fetch('/api/courses', {
  method: 'POST',
  body: JSON.stringify({
    name,
    link,
    tags,
  }),
});

Then, surround the call with try/catch and the entire function looks like this:

const submitCourse = async (e) => {
  e.preventDefault();
  try {
    await fetch('/api/courses', {
      method: 'POST',
      body: JSON.stringify({
        name,
        link,
        tags,
      }),
    });
    resetForm();
    courseAdded();
  } catch (err) {
    console.error(err);
  }
};

Test this out in the browser. Fill in the form and submit it.

After submitting the form, the form should be reset, and the list of courses should update with the newly added course.

Updating purchased courses in React

The list of courses is split into two different sections: one with courses that have been purchased and one with courses that haven’t been purchased. We can add the functionality to mark a course “purchased” so it appears in the right section. To do this, we’ll send a PUT request to the API.

Open the Course.js file and look for the markCoursePurchased function. In here, we’ll make the PUT request and include both the id of the course as well as the properties of the course with the purchased property set to true. We can do this by passing in all of the properties of the course with the spread operator and then overriding the purchased property to be true.

const markCoursePurchased = async () => {
  try {
    await fetch('/api/courses', {
      method: 'PUT',
      body: JSON.stringify({ ...course, purchased: true }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the button to mark one of the courses as purchased and the list of courses should update to display the course in the purchased section.

Deleting courses in React

And, following with our CRUD model, we will add the ability to delete courses. To do this, locate the deleteCourse function in the Course.js file we just edited. We will need to make a DELETE request to the API and pass along the id of the course we want to delete.

const deleteCourse = async () => {
  try {
    await fetch('/api/courses', {
      method: 'DELETE',
      body: JSON.stringify({ id: course.id }),
    });
    refreshCourses();
  } catch (err) {
    console.error(err);
  }
};

To test this out, click the “Delete” button next to the course and the course should disappear from the list. We can also verify it is gone completely by checking the Airtable dashboard.

Deploying to Netlify

Now that we have all of the CRUD functionality we need on the front and back end, it’s time to deploy this thing to Netlify. Hopefully, you’re as excited as I am about how easy this is. Just make sure everything is pushed up to GitHub before we move into deployment.

If you don’t have a Netlify account, you’ll need to create one (like Airtable, it’s free). Then, in the dashboard, click the “New site from Git” option. Select GitHub, authenticate it, then select the project repo.

Next, we need to tell Netlify which branch to deploy from. We have two options here.

  1. Use the starter branch that we’ve been working in
  2. Choose the master branch with the final version of the code

For now, I would choose the starter branch to ensure that the code works. Then, we need to choose a command that builds the app and the publish directory that serves it.

  1. Build command: npm run build
  2. Publish directory: build

Netlify recently shipped an update that treats React warnings as errors during the build process, which may cause the build to fail. I have updated the build command to CI= npm run build to account for this.

Lastly, click on the “Show Advanced” button, and add the environment variables. These should be exactly as they were in the local .env that we created.

The site should automatically start building.

We can click on the “Deploys” tab in Netlify and track the build progress, although it does go pretty fast. When it is complete, our shiny new app is deployed for the world to see!

Welcome to the Jamstack!

The Jamstack is a fun new place to be. I love it because it makes building and hosting fully-functional, full-stack applications like this pretty trivial. I love that Jamstack makes us mighty, all-powerful front-end developers!

I hope you see the same power and ease with the combination of technology we used here. Again, Jamstack doesn’t require that we use Airtable, React or Netlify, but we can, and they’re all freely available and easy to set up. Check out Chris’ serverless site for a whole slew of other services, resources, and ideas for working in the Jamstack. And feel free to drop questions and feedback in the comments here!



Where Does Logic Go on Jamstack Sites?

Here’s something I had to get my head wrapped around when I started building Jamstack sites. There are these different stages your site goes through where you can put logic.

Let’s look at a special example so you can see what I mean. Say you’re making a website for a music venue. The most important part of the site is a list of events, some in the past and some upcoming. You want to make sure to label them as such or design that to be very clear. That is date-based logic. How do you do that? Where does that logic live?

There are at least four places to consider when it comes to Jamstack.

Option 1: Write it into the HTML ourselves

Literally sit down and write an HTML file that represents all of the events. We’d look at the date of the event, decide whether it’s in the past or the future, and write different content for either case. Commit and deploy that file.

<h1>Upcoming Event: Bill's Banjo Night</h1> <h1>Past Event: 70s Classics with Jill</h1>

This would totally work! But the downside is that we’d have to update that HTML file all the time — once Bill’s Banjo Night is over, we have to open our code editor, change “Upcoming” to “Past” and re-upload the file.

Option 2: Write structured data and do logic at build time

Instead of writing all the HTML by hand, we create a Markdown file to represent each event. Important information like the date and title is in there as structured data. That’s just one option. The point is we have access to this data directly. It could be a headless CMS or something like that as well.

Then we set up a static site generator, like Eleventy, that reads all the Markdown files (or pulls the information down from your CMS) and builds them into HTML files. The neat thing is that we can run any logic we want during the build process. Do fancy math, hit APIs, run a spell-check… the sky is the limit.

For our music venue site, we might represent events as Markdown files like this:

---
title: Bill's Banjo Night
date: 2020-09-02
---

The event description goes here!

Then, we run a little bit of logic during the build process by writing a template like this:

{% if event.date > now %}
  <h1>Upcoming Event: {{event.title}}</h1>
{% else %}
  <h1>Past Event: {{event.title}}</h1>
{% endif %}

Now, each time the build process runs, it looks at the date of the event, decides if it’s in the past or the future and produces different HTML based on that information. No more changing HTML by hand!

The problem with this approach is that the date comparison only happens one time, during the build process. The now variable in the example above is going to refer to the date and time the build happens to run. And once we’ve uploaded the HTML files that build produced, those won’t change until we run the build again. This means that once an event at our music venue is over, we’d have to re-run the build to make sure the website reflects that.

Now, we could automate the rebuild so it happens once a day, or heck, even once an hour. That’s literally what the CSS-Tricks conferences site does via Zapier.

The conferences site is deployed daily using a Zapier automation that triggers a Netlify deploy, ensuring information is current.
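If Zapier isn’t available, a scheduled job that pings a Netlify build hook does the same thing. Here’s a minimal Node sketch; the hook ID is a placeholder for whatever build hook you create in the Netlify UI:

// Trigger a rebuild by POSTing to a Netlify build hook (hook ID is hypothetical).
// Run this from a cron job, a scheduled CI task, or similar.
const https = require('https');

const req = https.request(
  'https://api.netlify.com/build_hooks/YOUR_HOOK_ID',
  { method: 'POST' },
  (res) => console.log(`Build hook responded with ${res.statusCode}`)
);
req.end();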

But this could rack up build minutes if you’re using a service like Netlify, and there might still be edge cases where someone gets an outdated version of the site.

Option 3: Do logic at the edge

Edge workers are a way of running code at the CDN level whenever a request comes in. They’re not widely available at the time of this writing but, once they are, we could write our date comparison like this:

// THIS DOES NOT WORK
import eventsList from "./eventsList.json"

function onRequest(request) {
  const now = new Date();
  eventsList.forEach(event => {
    if (event.date > now) {
      event.upcoming = true;
    }
  })
  const props = {
    events: eventsList,
  }
  request.respondWith(200, render(props), {})
}

The render() function would take our processed list of events and turn it into HTML, perhaps by injecting it into a pre-rendered template. The big promise of edge workers is that they’re extremely fast, so we could run this logic server-side while still enjoying the performance benefits of a CDN.

And because the edge worker runs every time someone requests the website, we can be sure that they’re going to get an up-to-date version of it.

Option 4: Do logic at run time

Finally, we could pass our structured data to the front end directly, for example, in the form of data attributes. Then we write JavaScript that’s going to do whatever logic we need on the user’s device and manipulate the DOM on the fly.

For our music venue site, we might write a template like this:

<h1 data-date="{{event.date}}">{{event.title}}</h1>

Then, we do our date comparison in JavaScript after the page is loaded:

function processEvents(){
  // Grab every element that carries an event date (selector assumed for this demo)
  const events = document.querySelectorAll('[data-date]')
  const now = new Date()
  events.forEach(event => {
    const eventDate = new Date(event.getAttribute('data-date'))
    if (eventDate > now){
      event.classList.add('upcoming')
    } else {
      event.classList.add('past')
    }
  })
}

The now variable reflects the time on the user’s device, so we can be pretty sure the list of events will be up-to-date. Because we’re running this code on the user’s device, we could even get fancy and do things like adjust the way the date is displayed based on the user’s language or timezone.
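Here’s a small sketch of that idea using the built-in Intl API to format each date for the visitor’s own locale (the formatting options are just one possible choice):

// Format an event date using the visitor's locale settings
function formatEventDate(eventDate) {
  return new Intl.DateTimeFormat(navigator.language, {
    dateStyle: 'full',
    timeStyle: 'short',
  }).format(eventDate)
}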

And unlike the previous points in the lifecycle, run time lasts as long as the user has our website open. So, if we wanted to, we could run processEvents() every few seconds and our list would stay perfectly up-to-date without having to refresh the page. This would probably be unnecessary for our music venue’s website, but if we wanted to display the events on a billboard outside the building, it might just come in handy.
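That periodic refresh is about one line, something like this (the 30-second interval is arbitrary):

// Re-run the date check every 30 seconds so the labels stay current
setInterval(processEvents, 30 * 1000)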

Where will you put the logic?

Although one of the core concepts of Jamstack is that we do as much work as we can at build time and serve static HTML, we still get to decide where to put logic.

Where will you put it?

It really depends on what you’re trying to do. Parts of your site that hardly ever change are totally fine to complete at edit time. When you find yourself changing a piece of information again and again, it’s probably a good time to move that into a CMS and pull it in at build time. Features that are time-sensitive (like the event examples we used here), or that rely on information about the user, probably need to happen further down the lifecycle at the edge or even at runtime.



Make Jamstack Slow? Challenge Accepted.

“Jamstack is slowwwww.” That’s not something you hear often, right? Especially, when one of the main selling points of Jamstack is performance. But yeah, it’s true that even a Jamstack site can suffer hits to performance just like any other site. 

Don’t think that by choosing Jamstack you no longer have to think about performance. Jamstack can be fast — really fast — but you have to make the right choices. Let’s see if we can spot some of the poor decisions that can lead to a “slow” Jamstack site.

To do that, we’re going to build a really slow Gatsby site. Seems strange right? Why would we intentionally do that!? It’s the sort of thing where, if we make it, then perhaps we can gain a better understanding of what affects Jamstack performance and how to avoid bottlenecks.

We will use continuous performance testing and Google Lighthouse to audit every change. This will highlight the importance of testing every code change. Our site will start with a top Lighthouse performance score of 100. From there, we will make changes until it scores a mere 17. It is easier to do than you might think!

Let’s get started!

Creating our Jamstack site

We are going to use Gatsby for our test site. Let’s start by installing the Gatsby CLI:

npm install -g gatsby-cli

We can spin up a new Gatsby site using this command:

gatsby new slow-jamstack

Let’s cd into the new slow-jamstack project directory and start the development server:

cd slow-jamstack
gatsby develop

To add Lighthouse to the mix, we need a Gatsby production build. We can use Vercel to host the site, giving Lighthouse a way to run its tests. That requires installing the Vercel command-line tool and logging in:

npm install -g vercel-cli
vercel

This will create the site in Vercel and put it on a live server. Here’s the example I’ve already set up that we’ll use for testing.

We’ve gotta use Chrome so we can access Lighthouse directly from DevTools and run a performance audit. No surprise here, the default Gatsby site is fast:

Screenshot of a Lighthouse audit showing a score of 100 out of 100.

A score of 100 is the fastest you can get. Let’s see what we can do to slow it down.

Slow CSS

CSS frameworks are great. They can do a lot of heavy lifting for you. When deciding on a CSS framework, use one that is modular or employs CSS-in-JS so that the only CSS that gets loaded is the CSS you actually need.
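As a point of comparison, a CSS-in-JS approach only ships the styles you actually write. A quick sketch with styled-components, purely for illustration (it’s not part of this demo):

import styled from 'styled-components'

// Only the CSS written here ends up in the bundle
const Button = styled.button`
  padding: 0.5rem 1rem;
  border-radius: 4px;
  background: rebeccapurple;
  color: white;
`

export default Button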

But let’s make the bad decision to reach for an entire framework just to style a button component. In fact, let’s even grab the heaviest framework while we’re at it. These are the sizes of some popular frameworks:

Framework CSS Size (gzip)
Bootstrap 68kb (12kb)
Bulma 73kb (10kb)
Foundation 30kb (7kb)
Milligram 10kb (3kb)
Pure 17kb (4kb)
SemanticUI 146kb (20kb)
UIKit 33kb (6kb)

Alright, SemanticUI it is! The “right” way to load this framework would be to use a Sass or Less package, which would allow us to choose the parts of the framework we need. The wrong way would be to load all the CSS and JavaScript files in the <head> of the HTML. That’s what we’ll do with the full SemanticUI stylesheet. Plus, we’re going to link up jQuery because it’s a SemanticUI dependency.

We want these files to load in the head so let’s jump into the html.js file. This is not available in the src directory until we run a command to copy over the default from the cache:

cp .cache/default-html.js src/html.js

That gives us html.js in the src directory. Open it up and add the required stylesheet and scripts:

<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.css" />
<script src="https://code.jquery.com/jquery-3.1.1.js"></script>
<script src="https://cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.js"></script>

Now let’s push the changes straight to our production URL:

vercel --prod

OK, let’s view the audit…

Screenshot of a Lighthouse report showing a score of 66 out of 100.
Zoikes! A 33% reduction!

We have reduced the speed of the site down to a score of 66. Remember that we are not even using this framework at the moment. All we have done is load the files in the head, and that reduced the performance score by one-third. Our Time to Interactive (TTI) jumped from a quick 1.9 seconds to a noticeable 4.9 seconds. And look at the possible savings we could get from Lighthouse’s recommendations.

Slow marketing dependencies

Next, we are going to look at marketing tags and how these third-party scripts can affect performance. Let’s pretend we work with a marketing department and they want to start measuring traffic with Google Analytics. They also have a Facebook campaign and want to track it as well. 

They give us the details of the scripts that we need to add to get everything working. First, for Google Analytics:

<script async src="https://www.googletagmanager.com/gtag/js?id=UA-4369823-4"></script>
<script
  dangerouslySetInnerHTML={{ __html: `
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-4369823-4');
  `}}
/>

Then for the Facebook campaign:

<script
  dangerouslySetInnerHTML={{ __html: `
    !function(f,b,e,v,n,t,s)
    {if(f.fbq)return;n=f.fbq=function(){n.callMethod?
    n.callMethod.apply(n,arguments):n.queue.push(arguments)};
    if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
    n.queue=[];t=b.createElement(e);t.async=!0;
    t.src=v;s=b.getElementsByTagName(e)[0];
    s.parentNode.insertBefore(t,s)}(window, document,'script',
    'https://connect.facebook.net/en_US/fbevents.js');
    fbq('init', '3180830148641968');
    fbq('track', 'PageView');
    `}}
/>

<noscript><img height="1" width="1" src="https://www.facebook.com/tr?id=3180830148641968&ev=PageView&noscript=1"/></noscript>

We’ll place these scripts inside html.js, again in the <head> section, before the closing </head> tag.

Just like before, let’s push to Vercel and re-run Lighthouse:

vercel --prod
Screenshot of a Lighthouse audit showing a score of 51 out of 100.

Wow, the site is already down to 51 and all we’ve done is tack on one framework and a couple of measly scripts. Together, they’ve reduced the score by a whopping 49 points, nearly half of where we started.

Slow images

We haven’t added any images to the site yet but we know we absolutely would in a real-life scenario. We are going to add 100 images to the page. Sure, 100 is a lot for a single page but, then again, we know that images are often the biggest culprits of bloated web pages so we might as well let them shine.

We’ll make things a little worse by hot loading the images directly from https://placeimg.com instead of serving them on our own server.

Let’s crack open index.js and drop this code in, which will loop through 100 instances of images:

const IndexPage = () => {
  const items = []
  for(var i = 0; i < 100; i++) {
    const url = `http://placeimg.com/640/360/any?=${i}`
    items.push(<img key={i} alt={i} src={url} />)
  }

  return (
    <Layout>
      // ...
      {items}
      // ...
    </Layout>
  )
}

The 100 images are all different and will all load as the page loads, thereby blocking the rendering. OK, let’s push to Vercel and see what’s up.

vercel --prod
Screenshot of a Lighthouse audit showing a score of 17 out of 100.
That score deserves a sad trombone. 🎺

OK, we now have a very slow Jamstack site. The images are blocking the rendering of the page and the TTI is now a whopping 16.5 seconds. We have taken a very fast Jamstack site and dropped it to a Lighthouse score of 17 — a reduction of 83 points!

Now, you may be thinking that you would never make these poor decisions when building an app. But you are missing the point. Every choice we make has an impact on performance. It’s a balance, and performance does not come free. Even on Jamstack sites.

Making Jamstack fast again

You have seen that we cannot ignore client-side performance when using Jamstack. 

So why do people say that Jamstack is fast? Well, the main advantage of Jamstack — or using static site generators in general — is caching. Static files are cached on the edge, reducing Time to First Byte (TTFB).

This is always going to be faster than going to a single-origin web server before generating the page. This is a great feature of Jamstack and gives you a fighting chance to create a page that can hit 100 in Lighthouse. (But, hey, as a side note, remember that great scores aren’t always indicative of an actual user experience.)


See, I told you we could make Jamstack slow! There are also many other things that can slow it down, but hopefully this drives home the point.

While we’re talking about performance, here are a few of my favorite performance articles at CSS-Tricks:



Settling down in a Jamstack world

One of the things I like about Jamstack is that it’s just a philosophy. It’s not particularly prescriptive about how you go about it. To me, the only real requirement is that it’s based on static (CDN-backed) hosting. You can use whatever tooling you like. Those tools, though, tend to be somewhat new, and new sometimes comes with issues. Some pragmatism from Sean C Davis here:

I have two problems with solving problems using the newest, best tool every time a problem arises.

1. It’s simply not productive. Introducing new processes and tools takes time. Mastery and efficiency are built over time. If we’re trying to run a profitable business, we shouldn’t start from scratch every time.

2. We can’t know everything, all the time. With the rapidity at which we’re seeing new tools, there’s simply no way of knowing the best tool for the job because there’s no way to know all the tools available.

The trick is to settle into some tools that you’ve proved work for you, and then keep using them to increase that level of expertise.


5 Myths About Jamstack

Jamstack isn’t necessarily new. The term was officially coined in 2016, but the technologies and architecture it describes have been around well before that. Jamstack has received a massive dose of attention recently, with articles about it appearing in major sites and publications and new Jamstack-focused events, newsletters, podcasts, and more. As someone who follows it closely, I’ve even seen what seems like a significant uptick in discussion about it on Twitter, often from people who are being newly introduced to the concept.

The buzz has also seemed to bring out the criticism. Some of the criticism is fair, and I’ll get to some of that in a bit, but others appear to be based on common myths about the Jamstack that persist, which is what I want to address first. So let’s look at five common myths about Jamstack I’ve encountered and debunk them. As with many myths, they are often based on a kernel of truth but lead to invalid conclusions.

Myth 1: They are just rebranded static sites

Yes, as I covered previously, the term “Jamstack” was arguably a rebranding of what we previously called “static sites.” This was not a rebranding meant to mislead or sell you something that wasn’t fully formed — quite the opposite. The term “static site” had long since lost its ability to describe what people were building. The sites being built using static site generators (SSG) were frequently filled with all sorts of dynamic content and capabilities.

Static sites were seen as largely about blogs and documentation where the UI was primarily fixed. The extent of interaction was perhaps embedded comments and a contact form. Jamstack sites, on the other hand, have things like user authentication, dynamic content, e-commerce, and user-generated content.

A listing of sites built using Jamstack on Jamstack.org

Want proof? Some well-known companies and sites built using the Jamstack include Smashing Magazine, Sphero, Postman, Prima, Impossible Foods and TriNet, just to name a few.

Myth 2: Jamstack sites are fragile

A Medium article with no byline: The issues with JAMStack: You might need a backend

Reading the dependency list for Smashing Magazine reads like the service equivalent of node_modules, including Algolia, GoCommerce, GoTrue, GoTell and a variety of Netlify services to name a few. There is a huge amount of value in knowing what to outsource (and when), but it is amusing to note the complexity that has been introduced in an apparent attempt to ‘get back to basics’. This is to say nothing of the potential fragility in relying on so many disparate third-party services.

Yes, to achieve the dynamic capabilities that differentiate the Jamstack from static sites, Jamstack projects generally rely on a variety of services, both first- or third-party. Some have argued that this makes Jamstack sites particularly vulnerable for two reasons. The first, they say, is that if any one piece fails, the whole site functionality collapses. The second is that your infrastructure becomes too dependent on tools and services that you do not own.

Let’s tackle that first argument. The majority of a Jamstack site should be pre-rendered. This means that when a user visits the site, the page and most of its content is delivered as static assets from the CDN. This is what gives Jamstack much of its speed and security. Dynamic functionality — like shopping carts, authentication, user generated content and perhaps search — rely upon a combination of serverless functions and APIs to work.

Broadly speaking, the app will call a serverless function that serves as the back end to connect to the APIs. If, for example, our e-commerce functionality relies on Stripe’s APIs to work and Stripe is down, then, yes, our e-commerce functionality will not work. However, it’s important to note that the site won’t go down. It can handle the issue gracefully by informing the user of the issue. A server-rendered page that relies on the Stripe API for e-commerce would face the identical issue. Assuming the server-rendered page still calls the back end code for payment asynchronously, it would be no more or less fragile than the Jamstack version. On the other hand, if the server-rendering is actually dependent upon the API call, the user may be stuck waiting on a response or receive an error (a situation anyone who uses the web is very familiar with).

As for the second argument, it’s really hard to gauge the degree of dependence on third-parties for a Jamstack web app versus a server-rendered app. Many of today’s server-rendered applications still rely on APIs for a significant amount of functionality because it allows for faster development, takes advantage of the specific area of expertise of the provider, can offload responsibility for legal and other compliance issues, and more. In these cases, once again, the server-rendered version would be no more or less dependent than the Jamstack version. Admittedly, if your app relies mostly on internal or homegrown solutions, then this may be different.

Myth 3: Editing content is difficult

Kev Quirk, on Why I Don’t Use A Static Site Generator:

Having to SSH into a Linux box, then editing a post on Vim just seems like a ridiculously high barrier for entry when it comes to writing on the go. The world is mobile first these days, like it or not, so writing on the go should be easy.

This issue feels like a relic of static sites past. To be clear, you do not need to SSH into a Linux box to edit your site content. There are a wide range of headless CMS options, from completely free and open source to commercial offerings that power content for large enterprises. They offer an array of editing capabilities that rival any traditional CMS (something I’ve talked about before). The point is, there is no reason to be manually editing Markdown, YAML or JSON files, even on your blog side project. Aren’t sure how to hook all these pieces up? We’ve got a solution for that too!

One legitimate criticism has been that the headless CMS and build process can cause a disconnect between the content being edited and the change on the site. It can be difficult to preview exactly what the impact of a change is on the live site until it is published or without some complex build previewing process. This is something that is being addressed by the ecosystem. Companies like Stackbit (who I work for) are building tools that make this process seamless.

Animated screenshot. A content editor is shown on a webpage with a realtime preview of changes.
Editing a site using Stackbit

We’re not the only ones working on solving this problem. Other solutions include TinaCMS and Gatsby Preview. I think we are close to it becoming commonplace to have the simplicity of WYSIWYG editing on a tool like Wix running on top of the Jamstack.

Myth 4: SEO is Hard on the Jamstack

Kym Ellis, on What the JAMstack means for marketing:

Ditching the concept of the plugin and opting for a JAMstack site which is “just HTML” doesn’t actually mean you have to give up functionality, or suddenly need to know how to code like a front-end developer to manage a site and its content.

I haven’t seen this one pop up as often in recent years and I think it is mostly legacy criticism of the static site days, where managing SEO-related metadata involved manually editing YAML-based frontmatter. The concern was that doing SEO properly became cumbersome and hard to maintain, particularly if you wanted to inject different metadata for each unique page that was generated or to create structured data like JSON-LD, which can be critical for enhancing your search listing.

The advances in content management for the Jamstack have generally addressed the complexity of maintaining SEO metadata. In addition, because pages are pre-rendered, adding sitemaps and JSON-LD is relatively simple, provided the required metadata exists. While pre-rendering makes it easy to create the resources search engines (read: Google) need to index a site, it also, combined with CDNs, makes it easier to achieve the performance benchmarks necessary to improve a site’s ranking.

Basically, Jamstack excels at “technical SEO” while also providing the tools necessary for content editors to supply the keyword and other metadata they require. For a more comprehensive look at Jamstack and SEO, I highly recommend checking out the Jamstack SEO Guide from Bejamas.

Myth 5: Jamstack requires heavy JavaScript frameworks

If you’re trying to sell plain ol’ websites to management who are obsessed with flavour-of-the-month frameworks, a slick website promoting the benefits of “JAMstack” is a really useful thing.

– jdietrich, Hacker News

Lately, it feels as if Jamstack has become synonymous with front-end JavaScript frameworks. It’s true that a lot of the most well-known solutions do depend on a front-end framework, including Gatsby (React), Next.js (React), Nuxt (Vue), VuePress (Vue), Gridsome (Vue) and Scully (Angular). This seems to be compounded by confusion around the “J” in Jamstack. While it stands for JavaScript, this does not mean that Jamstack solutions are all JavaScript-based, nor do they all require npm or JavaScript frameworks.

In fact, many of the most widely used tools are not built in JavaScript, notably Hugo (Go), Jekyll (Ruby), Pelican (Python) and the recently released Bridgetown (Ruby). Meanwhile, tools like Eleventy are built using JavaScript but do not depend on a JavaScript framework. None of these tools prevent the use of a JavaScript framework, but they do not require it.

The point here isn’t to dump on JavaScript frameworks or the tools that use them. These are great tools, used successfully by many developers. JavaScript frameworks can be very powerful tools capable of simplifying some very complex tasks. The point here is simply that the idea that a JavaScript framework is required to use the Jamstack is false — Jamstack comes in 460 flavors!

Where we can improve

So that’s it, right? Jamstack is an ideal world of web development where everything isn’t just perfect, but perfectly easy. Unfortunately, no. There are plenty of legitimate criticisms of Jamstack.

Simplicity

Sebastian De Deyne, with Thoughts (and doubts) after messing around with the JAMstack:

In my experience, the JAMstack (JavaScript, APIs, and Markup) is great until it isn’t. When the day comes that I need to add something dynamic–and that day always comes–I start scratching my head.

Let’s be honest: Getting started with the Jamstack isn’t easy. Sure, diving into building a blog or a simple site using a static site generator may not be terribly difficult. But try building a real site with anything dynamic and things start to get complicated fast.

You are generally presented with a myriad of options for completing the task, making it tough to weigh the pros and cons. One of the best things about Jamstack is that it is not prescriptive, but that can make it seem unapproachable, leaving people with the impression that perhaps it isn’t suited for complex tasks.

Tying services together

When you actually get to the point of building those dynamic features, your site can wind up being dependent on an array of services and APIs. You may be calling a headless CMS for content, a serverless function that calls an API for payment transactions, a service like Algolia for search, and so on. Bringing all those pieces together can be a very complex task. Add to that the fact that each often comes with its own dashboard and API/SDK updates, things get even more complex.

This is why I think services like Stackbit and tools like RedwoodJS are important, as they bring together disparate pieces of the infrastructure behind a Jamstack site and make those easier to build and manage.

Overusing frameworks

In my opinion, our dependence on JavaScript frameworks for modern front-end development has been getting a much needed skeptical look lately. There are tradeoffs, as this post by Tim Kadlec recently laid out. As I said earlier, you don’t need a JavaScript framework to work in the Jamstack.

However, the impression was created both because so many Jamstack tools rely on JavaScript frameworks and also because much of the way we teach Jamstack has been centered on using frameworks. I understand the reasoning for this — many Jamstack developers are comfortable in JavaScript frameworks and there’s no way to teach every tool, so you pick the one you like. Still, I personally believe the success of Jamstack in the long run depends on its flexibility, which (despite what I said about the simplicity above) means we need to present the diversity of solutions it offers — both with and without JavaScript frameworks.

Where to go from here

Sheesh, you made it! I know I had a lot to say, perhaps more than I even realized when I started writing, so I won’t bore you with a long conclusion other than to say that, obviously, I have presented these myths from the perspective of someone who believes very deeply in the value of the Jamstack, despite its flaws!

If you are looking for a good post about when to and when not to choose Jamstack over server-side rendering, check out Chris Coyier’s recent post Static or Not?.


Jamstack News!

I totally forgot that the Jamstack Conf was this week but thankfully they’ve already published the talks on the Jamstack YouTube channel. I’m really looking forward to sitting down with these over a coffee while I also check out Netlify’s other big release today: Build Plugins.

These are plugins that run whenever your site is building. One example is the A11y plugin that will fail a build if accessibility failures are detected. Another minifies HTML and there’s even one that inlines critical CSS. What’s exciting is that these build plugins are kinda making complex Gulp/Grunt environments the stuff of legend. Instead of going through the hassle of config stuff, build plugins let Netlify figure it all out for you. And that’s pretty neat.
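To give a sense of their shape, here’s a minimal sketch of a build plugin. The onPostBuild hook and utils.build.failBuild come from Netlify’s Build Plugins API, but the specific check is made up for illustration:

// A tiny Netlify Build Plugin: fail the build if an expected file is missing.
const fs = require('fs');

module.exports = {
  onPostBuild: ({ utils }) => {
    // 'build/index.html' is just an illustrative path
    if (!fs.existsSync('build/index.html')) {
      utils.build.failBuild('index.html was not generated');
    }
  },
};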

Also, our very own Sarah Drasner wrote about how to create your first Netlify Build Plugin. So, if you have an idea for something that you could share with the community, then that may be the best place to start.


Angular + Jamstack! (Free Webinar)

(This is a sponsored post.)

It’s easy to think that working with Jamstack means working with some specific set of technologies. That’s how it’s traditionally been packaged for us. Think LAMP stack, where Linux, Apache, MySQL and PHP are explicit tools and languages. Or MEAN or MERN or whatever. With Jamstack, the original JAM meant JavaScript, APIs, and Markup. Those aren’t specific technologies so much as a loose philosophy.

That’s cool, because it means we can bring our own set of favorite technologies, and then figure out how to use that philosophy for the most benefit. That can mean bringing our favorite CMS, favorite build tools, and even favorite front-end frameworks.

That’s the crux of Netlify’s upcoming webinar on using Angular in the Jamstack. They’ll walk through where Angular fits into the Jamstack architecture, how to develop with Angular in the stack, and the benefits of working this way. Plus you get to hang with Tara Z. Manicsic, which is worth it right there.

The webinar is free and scheduled for May 13 at 9:00am Pacific Time.


Drupal to Jamstack

I’ve been harping for a while that Jamstack doesn’t necessarily mean throwing away your old CMS. In fact, I’d argue that Jamstack is at its most powerful when paired with a system that you already know, are comfortable with, and perhaps even like. You’d call that decoupling the front end.

Netlify has a webinar coming up on exactly this, featuring Alex De Winne, who is going to show you a real-world site in a live demo combining Drupal (a classic PHP & MySQL CMS that powers an absolute ton of sites) and Jamstack architecture.
