For the past few weeks, I’ve been hiring for a senior full-stack JavaScript engineer at my rental furniture company, Pabio. Since we’re a remote team, we conduct our interviews on Zoom, and I’ve observed that some developers are not great at live-coding or whiteboard interviews, even if they’re good at the job. So, instead, we have an hour-long technical discussion where I ask them questions about web vitals, accessibility, the browser wars, and other similar topics about the web. One of the questions I always like to ask is: “Explain the first ten or so lines of the Twitter source code to me.”
I think it’s a simple test that tells me a lot about the depth of fundamental front-end knowledge they have, and this article lists the best answers.
For context, I share my screen, open Twitter.com and click View source. Then I ask them to go line-by-line to help me understand the HTML, and they can say as much or as little as they like. I also zoom in to make the text more legible, so you don’t see the full line but you get an idea. Here’s what it looks like:
Note that since our technical discussion is a conversation, I don’t expect a perfect answer from anyone. If I hear the right keywords, I know that the candidate knows the concept, and I try to push them in the right direction.
Line 1: <!DOCTYPE html>
The first line of every document’s source code is perfect for this interview because how much a candidate knows about the DOCTYPE declaration closely tracks how many years of experience they have. I still remember my Dreamweaver days with the long XHTML DOCTYPE line, like Chris listed in his article “The Common DOCTYPES” from 2009.
Perfect answer: This is the document type (doc-type) declaration that we always put as the first line in HTML files. You might think that this information is redundant because the browser already knows that the MIME type of the response is text/html; but in the Netscape/Internet Explorer days, browsers had the difficult task of figuring out which HTML standard to use to render the page from multiple competing versions.
This was especially annoying because each standard generated a different layout so this tag was adopted to make it easy for browsers. Previously, DOCTYPE tags were long and even included the specification link (kinda like SVGs have today), but luckily the simple <!doctype html> was standardized in HTML5 and still lives on.
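For comparison, here’s one of those long XHTML-era declarations next to the one we use today:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<!doctype html>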
Also accepted: This is the DOCTYPE tag to let the browser know that this is an HTML5 page and should be rendered as such.
Line 2: <html dir="ltr" lang="en">
This line in the source code tells me if the candidate knows about accessibility and localization. Surprisingly, only a few people knew about the dir attribute in my interviews, but it’s a great segue into a discussion about screen readers. Almost everyone was able to figure out the lang="en" attribute, even if they hadn’t used it before.
Perfect answer: This is the root element of an HTML document and all other elements are inside this one. Here, it has two attributes, direction and language. The direction attribute has the value left-to-right to tell user agents which direction the content is in; other values are right-to-left for languages like Arabic, or just auto which leaves it to the browser to figure out.
The language attribute tells us that all content inside this tag is in English; you can set this value to any language tag, even to differentiate en-us and en-gb, for example. This is also useful for screen readers to know which language to announce in.
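For example, an Arabic-language page would typically flip both attributes:
<html dir="rtl" lang="ar">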
Line 3: <meta charset="utf-8">
Perfect answer: The meta tag in the source code is for supplying metadata about this document. The character set (char-set) attribute tells the browser which character encoding to use, and Twitter uses the standard UTF-8 encoding. UTF-8 is great because it has many character points so you can use all sorts of symbols and emoji in your source code. It’s important to put this tag near the beginning of your code so the browser hasn’t already started parsing too much text when it comes across this line; I think the rule is to put it in the first kilobyte of the document, but I’d say the best practice is to put it right at the top of <head>.
As a side note, it looks like Twitter omits the <head> tag for performance reasons (less code to load), but I still like to make it explicit as it’s a clear home for all metadata, styles, etc.
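For example, making the head explicit with the charset declaration right at the top could look like this:
<head>
  <meta charset="utf-8">
  <!-- other metadata, styles, scripts, etc. -->
</head>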
Line 4: <meta name="viewport" content="width=device-...
Perfect answer: This meta tag in the source code is for properly sizing the webpage on small screens, like smartphones. If you remember the original iPhone keynote, Steve Jobs showed the entire New York Times website on that tiny 3.5-inch screen; back then that was an amazing feature, even though you had to pinch and zoom to actually be able to read anything.
Now that websites are responsive by design, width=device-width tells the browser to use 100% of the device’s width as the viewport so there’s no horizontal scrolling, though you can also specify an explicit pixel value for the width. The standard best practice is to set the initial scale to 1 and the width to device-width so people can still zoom around if they wish.
The screenshot of the source code doesn’t show these values, but it’s good to know: Twitter also applies user-scalable=0 which, as the name suggests, disables the ability to zoom. This is not good for accessibility but makes the webpage feel more like a native app. It also sets maximum-scale=1 for the same reason (you can use minimum and maximum scale to clamp the zoom-ability between those values). In general, setting the full width and initial scale is enough.
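Putting that together, the first line below is the common best practice, while the second is closer to what Twitter ships with zoom disabled (the exact content string isn’t visible in the screenshot):
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=0">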
Line 5: <meta property="og:site_name" content="Twitt...
About 50% of all candidates knew about Open Graph tags, and a good answer to this question shows that they know about SEO.
Perfect answer: This tag is an Open Graph (OG) meta tag for the site name, Twitter. The Open Graph protocol was made by Facebook to make it easier to unfurl links and show their previews in a nice card layout; developers can add all sorts of authorship details and cover images for fancy sharing. In fact, these days it’s even common to auto-generate the open graph image using something like Puppeteer. (CSS-Tricks uses a WordPress plugin that does it.)
Another interesting side note is that meta tags usually have the name attribute, but OG uses the non-standard property attribute. I guess that’s just Facebook being Facebook? The title, URL, and description Open Graph tags are kinda redundant because we already have regular meta tags for these, but people add them just to be safe. Most sites these days use a combination of Open Graph tags, other meta tags, and the content on a page to generate rich previews.
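A typical set of Open Graph tags might look something like this (values are illustrative):
<meta property="og:site_name" content="Twitter">
<meta property="og:title" content="Page title">
<meta property="og:description" content="A short description for the link preview.">
<meta property="og:image" content="https://example.com/og-image.png">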
Line 6: <meta name="apple-mobile-web-app-title" cont...
Most candidates didn’t know about this one, but experienced developers can talk about how to optimize a website for Apple devices, like apple-touch-icons and Safari pinned tab SVGs.
Perfect answer: You can pin a website on an iPhone’s homescreen to make it feel like a native app. Safari doesn’t support progressive web apps and you can’t really use other browser engines on iOS, so you don’t really have other options if you want that native-like experience, which Twitter, of course, likes. So they add this to tell Safari that the title of this app is Twitter. The next line is similar and controls how the status bar looks once the app has launched.
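Those two Apple-specific tags generally look like this (the status bar value here is illustrative):
<meta name="apple-mobile-web-app-title" content="Twitter">
<meta name="apple-mobile-web-app-status-bar-style" content="default">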
Line 8: <meta name="theme-color" content="#ffffff"...
Perfect answer: This is the proper web standards-esque equivalent of the Apple status bar color meta tag. It tells the browser to theme the surrounding UI. Chrome on Android and Brave on desktop both do a pretty good job with that. You can put any CSS color in the content, and can even use the media attribute to only show this color for a specific media query like, for example, to support a dark theme. You can also define this and additional properties in the web app manifest.
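For example, the media attribute makes it possible to serve a different theme color in dark mode (colors here are illustrative):
<meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)">
<meta name="theme-color" content="#000000" media="(prefers-color-scheme: dark)">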
Line 9: <meta http-equiv="origin-trial" content="...
Nobody I interviewed knew about this one. I would assume that you’d know this only if you have in-depth knowledge about all the new things that are happening on the standards track.
Perfect answer: Origin trials let us use new and experimental features on our site and the feedback is tracked by the user agent and reported to the web standards community without users having to opt-in to a feature flag. For example, Edge has an origin trial for dual-screen and foldable device primitives, which is pretty cool as you can make interesting layouts based on whether a foldable phone is opened or closed.
Also accepted: I don’t know about this one.
Line 10: html{-ms-text-size-adjust:100%;-webkit-text...
Almost nobody knew about this one either; you’d only be able to figure this line out if you know about CSS edge cases and optimizations.
Perfect answer: Imagine that you don’t have a mobile responsive site and you open it on a small screen, so the browser might resize the text to make it bigger so it’s easier to read. The CSS text-size-adjust property can either disable this feature with the value none or specify a percentage up to which the browser is allowed to make the text bigger.
In this case, Twitter says the maximum is 100%, so the text should be no bigger than the actual size; they just do that because their site is already responsive and they don’t want to risk a browser breaking the layout with a larger font size. This is applied to the root HTML tag so it applies to everything inside it. Since this is an experimental CSS property, vendor prefixes are required. Also, there’s a missing <style> before this CSS, but I’m guessing that’s minified in the previous line and we don’t see it.
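Un-minified, that rule is simply:
html {
  -ms-text-size-adjust: 100%;
  -webkit-text-size-adjust: 100%;
}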
Also accepted: I don’t know about this property in specific but the -ms and -webkit- are vendor prefixes needed by Internet Explorer and WebKit-based browsers, respectively, for non-standard properties. We used to require these prefixes when CSS3 came out, but as properties go from experimental to stable or are adopted to a standards track, these prefixes go away in favor of a standardized property.
Bonus — Line 11: body{margin:0;}
This line from Twitter’s source code is particularly fun because you can follow up with a question about the difference between resetting and normalizing a webpage. Almost everyone knew a version of the right answer.
Perfect answer: Because different browsers have different default styles (user agent stylesheet), you want to overwrite them by resetting properties so your site looks the same across devices. In this case, Twitter is telling the browser to remove the body tag’s default margin. This is just to reduce browser inconsistencies, but I prefer normalizing the styles instead of resetting them, i.e., applying the same defaults across browsers rather than removing them altogether. People even used to use * { margin: 0 } which is totally overkill and not great for performance, but now it’s common to import something like normalize.css or reset.css (or even something newer) and start from there.
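To make the contrast concrete, here’s Twitter’s targeted reset next to the heavy-handed universal one people used to reach for:
/* targeted reset */
body { margin: 0; }

/* the old, heavy-handed version */
* { margin: 0; }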
More lines!
I always enjoy playing with the browser Inspector tool to see how sites are made, which is how I came up with this idea. Even though I consider myself sort of an expert on semantic HTML, I learn something new every time I do this exercise.
Since Twitter is mostly a client-side React app, there are only a few dozen lines in the source code. Even with that, there’s so much to learn! There are a few more interesting lines in the Twitter source code that I leave as an exercise for you, the reader. How many of them could you explain in an interview?
It’s a god-damned miracle to me that open source is as robust as it is in tech. Consider the options. You could have a job (or be entrepreneurial) with your coding skills and likely be paid quite well. Or, you could write code for free and have strangers yell at you every day at all hours. I like being a contributing kinda guy, but I don’t have the stomach for the latter because of the work that potentially comes with open source.
Fair enough, in reality, most developers do a bit of coding work on both sides. And clearly, they find some value in doing open-source work; otherwise, they wouldn’t do it. But we’ve all heard the stories. It leads to developer burnout, depression, and countless abandoned projects. It’s like we know how to contribute to an open-source project (and even have some ground rules on etiquette), but lack an understanding of how to maintain it.
Dave, in “Sustaining Maintaining,” thinks it might be a lack of education on how to manage open source:
There’s plenty of write-ups on GitHub about how to start a new open source project, or how to add tooling, but almost no information or best practices on how to maintain a project over years. I think there’s a big education gap and opportunity here. GitHub has an obvious incentive to increase num_developers and num_repos, but I think it’s worthwhile to ease the burden of existing developers and increase the quality and security of existing repos. Open source maintenance needs a manual.
That’s a wonderful idea. I’ve been around tech a hot minute, but I don’t feel particularly knowledgeable about how to operate an open-source project. And frankly, that makes me scared of it, and my fear makes me avoid doing it at all.
I know how to set up the basics, but what if the project blows up in popularity? How do I manage my time commitment to it? How do I handle community disputes? Do I need a request for comments workflow? Who can I trust to help? What are the monetization strategies? What are the security concerns? What do I do when there starts to be dozens, then hundreds, then thousands of open issues? What do I do when I stop caring about this project? How do I stop myself from burning it to the ground?
If there was more education around how to do this well, more examples out there of people doing it well and benefitting from it, and some attempts at guardrails from the places that host them, that would go a long way.
Money is a key factor. Whenever I see success in open source, I see actually usable amounts of money coming in. I see big donations appropriately coming into Vue. I see Automattic building an empire around their core open-source products. I see Greensock having an open-source library but offering membership and a license for certain use cases and having that sustain a team long-term.
While it’s possible to bring in a decent amount of money through individual sponsorships, the real path to open source sustainability is to get larger donations from the companies that depend on your project. Getting $5 to $10 each month from a bunch of individuals is nice, but not as nice as getting $1,000 each month from a bunch of companies.
I think it would be cool to see a lot more developers making a proper healthy living on open source. If nothing else it would make me feel like this whole ecosystem is more stable.
There is a conversation that has been percolating for as long as I’ve been in the web design and development industry. It’s centered around the conflict between design tools and development tools. The final product of web design is often a mockup. The old joke was that web developers make websites and web designers make paintings of websites. That disconnect is a source of immense friction. Which is the source of truth?
What if there really could be a single source of truth? What if the design tool worked on the same exact code as the production website? The latest chapter in this epic conversation is UXPin.
Let’s set up the facts so you can see this all play out.
UXPin is an in-browser design tool.
UXPin is a powerful design tool with all the features you’d expect, particularly focused on digital screen-based design.
The fact that it is in-browser is extra great here. Designing websites… on a website is an obvious and natural fit. It means what you are looking at is how it’s going to look. This is particularly important with typography! The implementer of this card component can see the exact colors (in the right formats) being used, as well as the exact pixel dimensions, etc.
Over a decade ago, Jason Santa Maria thought a lot about what a next-gen design tool would look like. Could we just use the browser directly?
I don’t think the browser is enough. A web designer jumping into the browser before tackling the creative and messaging problems is akin to an architect hammering pieces of wood together and then measuring afterwards. The imaginative process is cut short by the tools at hand; and it’s that imagination—or spark—at the beginning of a design that lays the path for everything that follows.
Perhaps not the browser directly, but a design tool within a browser, that could be the best of both worlds:
An application like this could change the process of web design considerably. Most importantly, it wouldn’t be a proxy application that we use to simulate the way webpages look—it would already speak the language of the web. It would truly be designing in the browser.
It’s so cool to see this play out in a way that aligns with what great designers envisioned before it was possible, and with new aspects that meld with today’s technological possibilities.
You can work on your own React components within UXPin.
This is where the source of truth magic can happen. It’s one thing if a design tool can output a React (or any other framework) component. That’s a neat trick. But it’s likely to be a one-way trip. Components in real-world projects are full of other things that aren’t entirely the domain of design. Perhaps a component uses a hook to return the current user’s permissions and disable a button if they don’t have access. The disabled button has an element of design to it, but most of that code does not.
It’s impractical to have a design tool that can’t respect the other code in that component and essentially just leave it alone. In other words, it’s not really a design tool if it exports components but can’t import them.
Now, fair is fair, this is going to take a little work to set up. It might just be a couple of hours, or it might take a few days for a complete design system. UXPin only works with React and uses a webpack configuration to integrate it.
Once you’ve got it going, the components you use in UXPin are very literally the components you use to build your production website.
It’s pretty impressive really, to see a design tool digest pre-built components and allow them to be used on an entirely new canvas for prototyping.
They’ve got lots of help for you on getting this going on your project, including:
As it should, it’s likely to influence how you build components.
Components tend to have props, and props control things like design and content inside. UXPin gives you a UI for the props, meaning you have total control over the component.
Knowing that, you might give yourself a prop interface for your components that provides you with lots of design control. For example, integrating theme switching.
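As a rough sketch (the component and prop names here are hypothetical, not UXPin’s API), a button written with this in mind might expose its design decisions as props:
// Hypothetical example: every visual decision is a prop a visual editor can expose as a control
import React from "react";

export function Button({ variant = "primary", theme = "light", label }) {
  return (
    <button className={`btn btn--${variant} btn--${theme}`}>
      {label}
    </button>
  );
}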
This is all even faster with Storybook.
Another awfully popular tool in JavaScript-components-land to view your components is Storybook. It’s not a design tool like UXPin—it’s more like a zoo for your components. You might already have it set up, or you might find value in using Storybook as well.
The great news? UXPin Merge works together awesomely with Storybook. It makes integration super quick and easy. Plus, it then supports any framework, like Angular, Svelte, and Vue, in addition to React.
What if designers could use the very same components used by engineers and they’re all stored in a shared design system (with accurate documentation and tests)? Many of the frustrating and expensive misunderstandings between designers and engineers would stop happening.
And a plan:
Connect to any Git repo.
Learn about all the components in that repo.
Make those components available to the UXPin visual editor.
Watch for any changes to the repo and reflect those changes in the visual editor.
Let designers design and deliver accurate specs to developers for using those components.
Have you ever wished you could see the HTML source of a web page while on a mobile browser, which generally doesn’t offer that feature? If you have a desktop machine around, there are ways, but what I mean is getting the source without anything but the device itself.
The little View Source tool by Neatnik does the trick.
You enter the URL in the little bar to see the source of that URL. Or append the URL to the tool’s own URL to link right to it. Here’s CSS-Tricks (without line wrapping and tidied up!):
In my previous article, I discussed how I learned to create a decoupled WordPress powered Gatsby site using the Gatsby Source WPGraphQL plugin. The project was done following the ongoing developmental version of WPGraphQL and an excellent tutorial by Henrik Wirth. Although WPGraphQL was used in some production sites at that time, there were a lot of iterations that introduced breaking changes. Since the release of WPGraphQL v1.0 last November, the plugin is stable and available via the WordPress plugin directory.
The WPGraphQL plugin can be used to create a site that uses WordPress for content management, but with a front-end that’s driven by Gatsby. We call this a “decoupled” or “headless” CMS because the site’s back-end and front-end are separate entities that still talk to one another via APIs where components on the front end consume data from the CMS.
The Denver Post, The Twin Cities Pioneer Press, The San Jose Mercury News, and Credit Karma have a decoupled WordPress site with WPGraphQL.
In the true sense of a “decoupled” or “headless” site, WPGraphQL can be used to port WordPress data to other frameworks, like Next.js, Vue.js, among others. For the Gatsby framework, the Gatsby Source WordPress plugin is recommended, and it utilizes WPGraphQL to source data from WordPress.
Let’s set everything up together and tour the plugin.
Prerequisites
In my previous article, we covered the prerequisites needed for setting up WordPress and Gatsby sites, and porting back-end WordPress data to a Gatsby-powered front-end site with site deployment. I’m skipping a lot of those details here because the fundamental concepts are the same for this article, except that WordPress data is fetched by the Gatsby Source WordPress plugin this time.
If you are new to Gatsby and just now jumping on the Gatsby static site generator bandwagon, I’d suggest reading “An Honest Review of Gatsby” by React expert David Cramer and “Gatsby vs Next.js” by Jared Palmer. What we’re going to cover isn’t for everyone, and these articles may help you evaluate for yourself whether it’s the right technology for you and your project.
This article is not a step-by-step guide on how to use Gatsby Source WordPress Plugin. Again, that’s already available in Gatsby’s documentation. Conversely, if you happen to be an expert in React, JavaScript, Node.js or GraphQL, then what we cover here is probably stuff you already know. This article is an opinion piece based on my personal experience, which I hope is useful for the average WordPress user with basic working knowledge on the subject.
And, before we get started, it’s worth mentioning that the Gatsby Source WordPress plugin was completely rewritten in version 4 and uses WPGraphQL as its data source. The previous release, version 3, was built with REST API as its data source. Since the stable version of the plugin was recently released, the number of starter themes and demos that support it are limited.
First, we need WordPress
For this project, I set up a fresh WordPress site with Local by Flywheel that uses the default Twenty Twenty theme. I imported theme unit test data for pages and posts, as described in the WordPress Codex. While this was the baseline I was working with, this could have just as easily been an existing WordPress site that’s either on a remote server or a local install.
Now that we have an established baseline, we can log into the WordPress admin and install the WPGraphQL and WPGatsby plugins we need and activate them.
As we covered in the previous article, what this does is expose the GraphQL API and the GraphiQL IDE in the WordPress admin, giving us a “playground” for testing GraphQL queries based on WordPress data.
The GraphiQL screen provides three panels: one to navigate between different objects (left), one to query data (center), and one to visualize the returned data (right).
According to Gatsby tutorials, there are three options for creating a WordPress-powered Gatsby site. Let’s look at each one.
Option 1: Using the Gatsby starter
The docs have a step-by-step guide on how to set up a WordPress-Gatsby site, but here’s the gist.
Run the following in the command line to fetch the starter from its GitHub repository and clone it into a my-wpstarter project folder:
#! clone starter repo
gatsby new my-wpstarter https://github.com/gatsbyjs/gatsby-starter-wordpress-blog
Then, install the npm packages
#! npm
npm install

#! or yarn
yarn install
Now that the starter is cloned, let’s open the gatsby-config.js file in our code editor and update its URL option to fetch data from our WordPress endpoint (see above).
// gatsby-config.js
{
  resolve: `gatsby-source-wordpress`,
  options: {
    // WordPress is the GraphQL url.
    url: process.env.WPGRAPHQL_URL || `https://wpgatsbydemo.wpengine.com/graphql`,
  },
},
Now, we’ll replace the starter’s data source endpoint URL with our own WordPress site URL:
Let’s make sure we are in the my-wpstarter project directory. From the project folder, we’ll run the gatsby develop command to build our new Gatsby site from our WordPress data source endpoint. In the terminal we should be able see the gatsby-source-wordpress plugin fetching data, including errors and successful site processes along the way.
If we see a success Building development bundle message at the end, that means the Gatsby site build process is complete and the site can be viewed at http://localhost:8000.
This is a bare-bones starter blog with basic files and a few components. Its file structure is very similar to the gatsby-starter-blog, except this one has a templates folder that includes blog-post.js and blog-post-archive.js template files.
When we view the GraphiQL API explorer at http://localhost:8000/___graphql we can see all of the data from WordPress exposed by WPGraphQL, as well as query and retrieve specific data right from the UI.
This example shows a query for menu items in WordPress (middle panel) and the returned data of that query (right panel).
You got it! Gatsby assumes the rest is up to us to build, using Gatsby components that pull in WordPress data for the presentation.
Option 2: Building a site from scratch
We’ll spin up a new project from the command line:
#! create a new Gatsby site
gatsby new wpgatsby-from-scratch-demo
This gets us a wpgatsby-from-scratch-demo folder that includes the starter theme. From here, we’ll cd into that folder and start developing:
cd wpgatsby-from-scratch-demo
gatsby develop
Now we can open up http://localhost:8000 in the browser and get the welcome page.
Now we’re ready to start grabbing data from our WordPress site. Let’s install the Gatsby Source WordPress plugin:
#! install with npm
npm install gatsby-source-wordpress

#! or with yarn
yarn add gatsby-source-wordpress
If we check our browser now, you’ll notice that nothing happens — we still get the same Gatsby welcome. To fetch our WordPress site data, we need to add the plugin to the gatsby-config.js file. Open the file and insert the following:
// gatsby-config.js
module.exports = {
  siteMetadata: {
    // ...
  },
  plugins: [
    // Add the gatsby-source-wordpress plugin
    {
      resolve: `gatsby-source-wordpress`,
      options: {
        /*
         * The full URL of the WordPress site's GraphQL API.
         * Example: 'https://www.example-site.com/graphql'
         */
        url: `http://gatsbywpv4.local/graphql`,
      },
    },
    // The following plugins are not required for gatsby-source-wordpress
    // ...
  ],
}
Just like last time, we need to change the WordPress data endpoint source to the URL of our WordPress site. Let’s run gatsby develop in our terminal to start things up.
Now we see that the createPages function is running successfully to build a development bundle (left), and that WordPress data for posts, pages, taxonomies, users, menus, and everything else are fetched (right).
However, when we open http://localhost:8000 in our browser, nothing seems to happen. We still see the same welcome screen. But if we examine GraphiQL in our browser (at http://localhost:8000/___graphql) then we see all WordPress data exposed to our Gatsby site that we can query and display as we want.
GraphiQL shows that WordPress data is indeed exposed and we are able to create and execute queries.
Let’s test the following query I pulled straight from Gatsby’s tutorial, in the GraphiQL explorer:
query {
  allWpPost {
    nodes {
      id
      title
      excerpt
      slug
      date(formatString: "MMMM DD, YYYY")
    }
  }
}
When we run the above query, we will see the allWpPost.nodes property value, with sub properties for id, title, excerpt, and others.
Now, let’s open our src/components/pages/index.js component file and replace the code with this:
// src/components/pages/index.js
import React from "react"
import { graphql } from "gatsby"
import Layout from "../components/layout"
import SEO from "../components/seo"

export default function Home({ data }) {
  return (
    <Layout>
      <SEO title="home" />
      <h1>My WordPress Blog</h1>
      <h4>Posts</h4>
      {data.allWpPost.nodes.map(node => (
        <div>
          <p>{node.title}</p>
          <div dangerouslySetInnerHTML={{ __html: node.excerpt }} />
        </div>
      ))}
    </Layout>
  )
}

export const pageQuery = graphql`
  query {
    allWpPost(sort: { fields: [date] }) {
      nodes {
        title
        excerpt
        slug
      }
    }
  }
`
Save it, restart the server with gatsby develop, and refresh the page. If the build was successful, then our site’s homepage should display a list of sorted blog posts from WordPress!
As described in the tutorial, Gatsby uses the createPages API, or what it calls its “workhorse” API, to programmatically create pages from data (from Markdown or WordPress). Unlike Markdown data, we don’t need to create a slug here because each WordPress post has its own unique slug, which can be fetched from the WordPress data endpoint.
Creating pages for each post
Gatsby uses the gatsby-node.js file, located at the root of our project, to programmatically create blog posts. Let’s open the gatsby-node.js file in our text editor and add the following code from the tutorial.
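Roughly, that first chunk of gatsby-node.js queries every post’s slug and logs it so we can confirm the data is coming through; here’s a minimal sketch (the tutorial’s exact code may differ):
// gatsby-node.js (minimal sketch of the first step)
exports.createPages = async ({ graphql }) => {
  // Query the slug of every WordPress post
  const result = await graphql(`
    query {
      allWpPost(sort: { fields: [date] }) {
        nodes {
          slug
        }
      }
    }
  `)

  // For now, just log each slug to the terminal
  result.data.allWpPost.nodes.forEach(node => {
    console.log(node.slug)
  })
}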
As noted in the Gatsby Part 7 tutorial, the above code is the first part of creating our post pages from WordPress data source. Following the guide, let’s restart our server and develop our site with gatsby develop.
We should see console.log output in our terminal as the pages are built. However, our homepage still looks the same. To create single posts, Gatsby requires templates to build pages. That’s what we’ll do next.
Creating blog post templates
Let’s create a templates folder in the src/ directory and a blog-post.js file inside it by pasting the following code snippet from the tutorial:
// src/templates/blog-post.js
import React from "react"
import Layout from "../components/layout"
import { graphql } from "gatsby"

export default function BlogPost({ data }) {
  const post = data.allWpPost.nodes[0]
  console.log(post)
  return (
    <Layout>
      <div>
        <h1>{post.title}</h1>
        <div dangerouslySetInnerHTML={{ __html: post.content }} />
      </div>
    </Layout>
  )
}

export const query = graphql`
  query($slug: String!) {
    allWpPost(filter: { slug: { eq: $slug } }) {
      nodes {
        title
        content
      }
    }
  }
`
As explained in the tutorial, the above code snippet creates a single post with React JSX and wraps post.title and post.content in the src/components/layout.js component. At the bottom of the file, a GraphQL query is added that calls a specific post based on the post slug variable $slug. This variable is passed to the blog-post.js template when the page is created in gatsby-node.js.
Next we should also update lines 12-13 of our gatsby-node.js file with the following code from the tutorial.
Let’s stop and restart our local server with gatsby develop and view the site. The homepage won’t link to the blog posts yet. However, if we visit a gibberish URL like http://localhost:8000/abcdf, we should see Gatsby’s development 404 page, which lists links to all the individual pages and posts.
If we check http://localhost:8000/hello-gatsby-world, we should see our “Hello Gatsby WordPress World” post in all its glory.
The next step is to link the post titles from the homepage to the actual posts.
Linking to posts from the homepage
Linking from the homepage to the single post pages is done by wrapping the post titles in the index.js file with Gatsby’s Link component. Let’s open the index.js file we worked on earlier and add the Link component:
// src/components/pages/index.js
import React from "react"
import { Link, graphql } from "gatsby"
import Layout from "../components/layout"
import SEO from "../components/seo"

export default function Home({ data }) {
  return (
    <Layout>
      <SEO title="home" />
      {/* <h1>My WordPress Blog</h1>
      <h4>Posts</h4> */}
      {data.allWpPost.nodes.map(node => (
        <div key={node.slug}>
          <Link to={node.slug}>
            <h2>{node.title}</h2>
          </Link>
          <div dangerouslySetInnerHTML={{ __html: node.excerpt }} />
        </div>
      ))}
    </Layout>
  )
}

export const pageQuery = graphql`
  query {
    allWpPost(sort: { fields: [date], order: DESC }) {
      nodes {
        title
        excerpt
        slug
      }
    }
  }
`
We imported the Link component from Gatsby, wrapped the post title with it, and referenced the post’s slug. Let’s also clean up the code by commenting out the page title, changing the title element to <h2>, and setting the sort order to DESC in our GraphQL query as well as in the gatsby-node.js file.
As we did earlier, let’s stop and restart the development server with gatsby develop, and view our homepage at http://localhost:8000. Each post title should now link to its single post page.
You can apply the same procedure to calling and creating pages, custom post types, custom fields, taxonomies, and all the fun and flexible content WordPress is known for. This can be as simple or as complex as you would like it to be, so explore and have fun with it!
Option 3: Using Gatsby’s WordPress Twenty Twenty Starter
Gatsby’s starter template for the default WordPress Twenty Twenty theme is created and maintained by Henrik Wirth, who also has an extremely detailed and thorough step-by-step guide that you might recall from my previous article. This starter, unlike the others, is actually updated to version 4 of the Gatsby Source WordPress plugin and works out of the box after the initial WordPress setup described in the documentation. It maintains the same Twenty Twenty styling in the Gatsby front-end site, but has a few limitations — comments, monthly archive pages, and tags are unsupported.
First, let’s clone the starter into a twenty-twenty-starter folder.
#! clone gatsby-starter-wordpress-twenty-twenty
gatsby new twenty-twenty-starter https://github.com/henrikwirth/gatsby-starter-wordpress-twenty-twenty
Let’s cd into that folder and then run gatsby develop to spin up the site. It won’t work properly the first time because we haven’t set our WPGRAPHQL_URL value yet in the .env.example file. We need to set it and rename the file from .env.example to simply .env, as suggested in the documentation.
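The renamed .env file only needs to point at your WordPress install’s GraphQL endpoint; the value below is illustrative:
#! .env
WPGRAPHQL_URL=http://your-wordpress-site.local/graphql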
After that, restart the development server with gatsby develop. It should build the site successfully.
The menu may or may not appear depending on how the WordPress menu is named. The starter’s menu slug for querying menu items is primary in Menu.js (line 8). Because I had set my WordPress site up using main-menu instead, I had to update the Menu.js file accordingly.
Because the starter was tested with older versions of our tools, I decided to bump the plugins up to the latest versions — WPGraphQL 1.2.6, WPGatsby 1.0.6, and Gatsby Source WordPress 4.0.1 — and it worked fine without any errors.
This starter uses the same CSS that the default WordPress Twenty Twenty theme does, and the same assets folder, including fonts, images, SVG files, and other files that are included in the default theme.
If you are happy with WordPress Twenty Twenty styling, then that’s it. Enjoy your new decoupled Gatsby site!
But let’s say we want to work with custom styles. The CSS files are imported from the assets folder via the gatsby-browser.js file.
Let’s modify the styles for the site’s header, footer, posts and pages. Gatsby provides different options to style its components and, in this project, I followed the CSS module for styling and modified CSS markup of the Twenty Twenty starter components accordingly.
We can start by creating a styles folder at src/components/styles and, inside it, a base folder. Here’s the general file structure we’re aiming for:
#! partial structure of the /styles folder
src
|--/components
   |--/styles
      |--main.css
      |--/base
         |--reset.css
         |--variables.css
      |--/scss
         |--header.module.css
         |--mainNav.module.css
         |--footer.module.css
         |--elements.module.css
// and so on...
We want to style the site’s header and footer, so let’s open up the Header.js and Footer.js components in the starter and replace the code with the following:
Now, let’s restart our development server. We should see the following, including a new customized header and footer. I used the same styles from Learning Gatsby, an online course by Morten Rand-Hendriksen (I am a fan!).
Screenshot showing modified header and footer styling.
There are many posts that compare the advantages and disadvantages of a decoupled WordPress and Jamstack site like the Gatsby examples we’ve covered. In my research, I realized that none of them are as exhaustive as what Chris already wrote in ”WordPress and Jamstack” where he compares everything, from performance and features, to the developer experience and build processes, and beyond.
I found the following articles draw some helpful conclusions on a variety of topics, including:
What’s the cost?
The general assumption is that Jamstack hosting is cheap, and cheaper than traditional LAMP stack hosting. But there’s actually quite a bit to consider and your actual costs might vary.
“How to Run Your WordPress Site On Local, Gatsby and Netlify for FREE!” (Nate Fitch): Nate’s take is that a headless WordPress setup like this might be a good option if the project is a static blog or a site that doesn’t require any interactions. For example, it wouldn’t take too much work to get images hosted on Cloudinary or another CDN, but it would for large, interactive sites.
“WordPress and Jamstack” (Chris Coyier): There’s a specific section in here where Chris breaks down the pricing for different types of hosting for Jamstack sites and why a blanket statement like “Jamstack is cheaper” doesn’t fly because the actual cost depends on the site and its usage.
“Choosing between Netlify, Vercel and Digital Ocean” (Zell Liew): Zell discusses his experience choosing a hosting plan. His take: if you have a small project, go with Netlify; if you have a larger project, use Digital Ocean.
Why go static at all?
Considering all the things you get for “free” in WordPress — think comments, plugins, integrations, etc. — you might wonder if it’s even worth trading in a server-side setup for a client-side solution. In his “Static or Not?” post, Chris breaks down the reasons why you’d want to choose one over the other.
How do we get commenting functionality?
We get native commenting right out of the box with WordPress. Yet, support for comments on a static site is a bit of a juggernaut. In “JAMstack Comments” here on CSS-Tricks, the author explains how dynamic comments can be implemented in a static site, like Gatsby, using Netlify services. I briefly touched on this in my previous article.
“A primer on JavaScript SEO for WordPress” (Jono Alderson): This exhaustive guide includes a section on how to integrate Yoast SEO into a headless architecture and the implications of relying on JavaScript for SEO. The bottom line is that theme and plugin developers shouldn’t worry much about the changing landscape of JavaScript and SEO as long as they continue following best practices. At the same time, they should be aware of what’s changing and adapt accordingly.
“Jason Bahl of WPGraphQL’s role in the operating system for the web” (YouTube): Here’s Jason again, this time discussing templating. He covers WPGraphQL’s role in the WordPress ecosystem and headless Gatsby sites, emphasizing that WPGraphQL is all about data exposure, just like the WordPress REST API. How the WPGraphQL exposes WordPress data in Gatsby and displays it in front-end React components is up to developers.
There are currently no Gatsby React templates that are geared toward non-developers but some agencies, like Gatsby WP Themes and the Themeforest marketplace, are beginning to fill the gap. For example, Gatsby WP Themes covers plugins for dynamic contents like MailChimp integration, using the Contact Form 7 plugin for forms, Yoast SEO, and more. Themeforest lists 30+ Gatsby templates, including the Gatsby – WordPress + eCommerce theme which gives you an idea of how far we can go with this sort of setup. Just remember: these are commercial sites, and much of what you’ll find has a cost attached to it.
My evolving personal take
If you recall, I ended my last article with a personal reflection on my journey to create a headless WordPress site that uses Gatsby as the front end. My initial take was less than a glowing review:
Based on my very limited experience, I think that currently available Gatsby WordPress themes are not ready for prime time use for users like me. Yeah, it is exciting to try something on the bleeding edge that’s clearly in the minds of many WordPress users and developers. At the same time, the constantly evolving work being done on the WordPress block editor, WPGraphQL and Gatsby Source WordPress plugins makes it difficult to predict where things are going and when it will settle into a state where it is safe to use in other contexts.
So, after this long journey toward a headless WordPress site, what is my take now? As an open-minded learner, my thoughts are still evolving. But I couldn’t agree more with what Chris says in his “Static or Not?” post:
It’s a perfectly acceptable and often smart choice to run a WordPress site. I think about it in terms of robustness and feature-readiness. Need e-commerce? It’s there. Need forms? There are great plugins. Need to augment how the CMS works? You have control over the types of content and what is in them. Need auth? That’s a core feature. Wish you had a great editing experience? Gutenberg is glorious.
I am a WordPress enthusiast and I love WordPress as a CMS. However, as a technological learning challenge, I have not yet given up on having a decoupled WordPress site as a personal project.
I must admit that learning to create a decoupled Gatsby site with WordPress has continued to be frustrating. I acknowledge that a modern technology stack like this is not every WordPress user’s cup of tea. Gatsby has a steep learning curve, as these stacks are targeted at experienced React and JavaScript developers.
Self-learning a new technology can be a frustrating experience. Learning Gatsby is especially frustrating if we (including yours truly) happen to lack experience with Node, React, JavaScript and, most importantly, GraphQL. My learning project sites would break because of some dependency and fixing it might take me several days of research. I sometimes wonder if the trouble is worth the outcome. Don’t get me wrong; my frustration is with my own lack of experience, not the frameworks themselves (because they are amazing).
Even experienced developers like David Cramer and Jared Palmer find using Gatsby and GraphQL frustrating and echo some of the same sentiments that we beginners face when using GraphQL. I couldn’t agree more with David who writes:
It’s a static website generator. It literally does not need GraphQL all over the place. While there are few instances in the real world where that is valuable, it shouldn’t require a GraphQL API to read objects that are already in memory.
GraphQL is an opinionated query language API and its specification changes frequently. Indeed, most of the discussion in the WPGraphQL Slack is related to queries.
While working on this project, I came across the Frontity React framework while reading an article on CSS-Tricks. It fetches all WordPress data with the REST API without writing a single query. That seems to be a better option, at least for my use case, and it appears to be a much simpler alternative. In my brief experience with it, I didn’t have to deal with any dependency or library issues at all. Frontity’s themes concept is very WordPress-y, and the project provides excellent tutorials.
I am currently exploring whether the Frontity framework would be a better option for my decoupled project site and will share my experience in a future article.
Resources
Gatsby feels like it is targeted at experienced React and JavaScript developers, not beginners like me! The gatsby-source-wordpress and gatsby-source-wpgraphql plugins do an excellent job of exposing WordPress data to Gatsby sites, but the rest is up to users to present the data on the front end with their framework of choice: React, Vue, Next, etc.
A lack of sound knowledge of React and JavaScript is the main hurdle for beginners. The Gatsby community fills a lot of these gaps, and there are a lot of resources available to keep learning and exploring.
Gatsby Conference 2021 talks
Two workshop talks from the recent 2021 Gatsby Conference were related to Gatsby WordPress sites. In one, Jason Bahl hosts a workshop that walks through how to create a Gatsby blog powered by WordPress data, including support for the Yoast SEO plugin, and how to deploy the site to Gatsby Cloud.
Both talks are good, especially if you learn better with hands-on experience. I also found this Matt Report podcast episode with Jason Bahl where Jason answers basic questions that are geared toward beginners.
Tutorial courses
Morten Rand-Hendriksen has an excellent course on LinkedIn Learning that uses the Gatsby Source WordPress plugin. If you are interested in more hands-on experience making a customized site that expands on the two Gatsby starters we covered, this course is great because it teaches how to create a complete working site, complete with a dropdown header navigation, footer menus, posts, pages, categories, tags, and page navigation.
Check that out, a homepage, post template, categories, tags, a header that includes navigation… there’s a lot going on here!
At the time I’m writing this, there are ten Gatsby starters for WordPress. As we mentioned earlier, only Gatsby Starter WordPress Twenty Twenty is based on the latest version of the Gatsby Source WordPress plugin; the rest are version 3.
Thanks for reading. I am always interested to know how fellow WordPress users who, like me, lack heavy technical experience are using this plugin. If you have any feedback, please feel free to post it in the comments.
Last time there was a little flurry of activity around the concept of “View Source,” I did get the sense that not everyone was on the same page about what that even means. Jim Nielsen:
First, when we talk about “View Source” what precisely are we talking about? I think this is an important point to clarify, as it sometimes goes unsaid and therefore a lot of assumptions sneak into the conversation and we might realize we’re not all talking about the same thing.
There are three things that people might be talking about:
View source code (the code that generates the HTML delivered over the network)
View page source (the HTML delivered over the network)
View runtime source (the living HTML, a.k.a the DOM)
I’ll assign what I think the value of each is, as slices of a pie chart:
View source code: 10%
View page source: 5%
View runtime source: 85%
Every major browser ships with built-in DevTools where you can easily peek at the “runtime source.” That’s where the vast bulk of value is to me. If browsers ever talked about removing that, I’m sure we’d all be up in arms. Even for non-developers, the existence of this tool might be the spark that grows baby web developers.
DevTools also provides a way to view the HTML delivered over the network, hence my hardline stance from before:
I literally don’t care at all about View Source and wouldn’t miss it if it was removed from browsers. I live in DevTools, and I’ll bet you do too. It entirely supersedes View Source, as you can quite literally view source inside it if you’d like.
Jim’s post explains the difference between all three types of “viewing source” in great detail. For sites that are built entirely from client-side JavaScript, viewing the HTML over the wire is nearly useless. But if you could see the whole codebase (say if it was open-source on GitHub), there is certainly value there.
I was browsing the Svelte docs on my iPhone and came across a glaring UI bug. The notch in the REPL knob was totally out of whack. I’m always looking to contribute to open source, and I thought this would be a quick and easy fix. Turns out, there was a lot more to it than just changing one line of CSS.
Replicating, debugging, setting up the local environment was interesting, difficult, and meaningful.
The issue
I opened my browser DevTools, thinking I’d see the same issue in the phone view. But, the bug wasn’t there. Now this is a seriously tricky CSS problem.
💡 What I learned
If you’re using Chrome on iOS as your browser, you’re still using Safari’s renderer. From Wikipedia:
Chrome uses the iOS WebKit – which is Apple’s own mobile rendering engine and components, developed for their Safari browser – therefore it is restricted from using Google’s own V8 JavaScript engine.
This is backed up by caniuse, which provides this note on iOS Safari:
Now it’s clear why the issue wasn’t showing up on my machine but it was showing up on my phone. Different rendering engines!
Reproduce the issue locally
I pulled down the project and ran it locally. I confirmed it was still an issue by running the local code in a simulator as well as on my actual iPhone. Safari on macOS has an easy way to open up DevTools instances of each one.
This provides access to a console just like you would in the browser but for iOS Safari. I’m not going to lie, Apple’s developer experience is top notch (see what I did there? 😬).
I’m able to reproduce the issue locally now.
💡 What I learned
After pulling down the Svelte repo and looking around the code a bit, I noticed the UI and SVGs were being pulled in via a package called @sveltejs/site-kit. Great, now I need my local version of site kit to get pulled into svelte/site so I can see changes and debug the issue.
I needed to point the node_modules in Svelte’s package.json to my local copy of site-kit. This sounded like a symlink. After looking through the docs without much luck, I Googled around and stumbled upon npm link. That let me see what I was doing!
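In case you haven’t used it, npm link is a two-step process, roughly like this (the package name comes from this project; the directories are illustrative):
#! in the local site-kit checkout
npm link

#! in the svelte site project, point node_modules at that local copy
npm link @sveltejs/site-kit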
I can now make local changes to site-kit and see them reflected in the Svelte project.
Solving the issue
Seriously, all this needed was a one-line change:
border: transparent;
But locating where that one line should go was not as straightforward as you’d think. Source maps on the project are still a little rough around the edges and are showing this CSS coming from the Nav.svelte component when it was really coming from another file. That would be another great way to contribute to the project!
Then you search around and learn that this is being handled and you learn a little more about how it’s done. Everything now looks great on mobile and desktop.
That’s all it needed!
Let’s rewind
What started as a quick, one-line change became a bit of a journey. I had to:
Run the project and component repositories
Learn about system linking
Contribute documentation about linking to site-kit
Learn about different browser renderers
Learn how to emulate an iOS Safari browser
Learn how to get access to its debugger
Find the issue when source maps weren’t working correctly
Fix the issue (finally!)
Working on your own, you normally don’t get to deal with issues like this, or have a large complex system you need to build a mental model of and learn. You don’t get to learn from maintainers. Most importantly, you don’t see all of the hard work that goes into building a popular technical product.
When I submitted this idea to CSS-Tricks, Chris said he had recently dealt with a similar situation. Difficult learning is durable learning. Embrace the struggle.
Never stop learning
I grabbed another issue from the Svelte project and now I’m learning about CSSStyleSheet because there’s another issue (I think), with how Safari handles keyframe animations within stylemanager.ts. And so the learning continues down paths I never would have treaded in my day-to-day work.
When something breaks, enjoy the journey of learning the system. You’ll gain valuable insights into why that thing broke and what can be done to fix it. That’s one of the awesome benefits of contributing to open source projects and why I’d encourage you to do the same.
I feel like the tech industry takes itself far too seriously sometimes. I get frustrated by all the posturing and gatekeeping – “You’re not a real developer unless you use x framework”, “CSS isn’t a real programming language”.
I think this kind of rhetoric often puts new developers off, and the ones that don’t get put off are more inclined to skip over learning things like semantic markup and accessibility in favour of learning the latest framework.
Having a deeper knowledge of HTML and CSS is often devalued.
Posturing and gatekeeping, indeed. I’ve yet to witness a conversation where discussing what is or isn’t “real” programming was fruitful for anybody.
It’s a valid question. A “source map” is a special file that connects a minified/uglified version of an asset (CSS or JavaScript) to the original authored version. Say you’ve got a file called _header.scss that gets imported into global.scss which is compiled to global.css. That final CSS file is what gets loaded in the browser, so for example, when you inspect an element in DevTools, it might tell you that the <nav> is display: flex; because it says so on line 387 in global.css.
On line 528 of page.css, we can find out that .meta has position: relative;
But because that final CSS file is probably minified (all whitespace removed), DevTools is likely to tell us that we’ll find the declaration we’re looking for on line 1! Unfortunate, and not helpful for development.
That’s where source maps come in. Like I said up top, source maps are special files that connect that final output file the browser is actually using with the authored files that you actually work with and write code in on your file system.
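Mechanically, the connection is just a comment at the end of the compiled file pointing at the map file (file names here are illustrative):
/* at the bottom of the compiled global.css */
/*# sourceMappingURL=global.css.map */

// or, at the bottom of a JavaScript bundle
//# sourceMappingURL=bundle.js.map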
Typically, source maps are a configuration option from the preprocessor. Here’s Babel’s options. I believe that with Sass, you don’t even have to pass a flag for it in the command or anything because it produces source maps by default.
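For example, Dart Sass writes a map file by default and lets you opt out with a flag:
# writes global.css and global.css.map
sass global.scss global.css

# opt out of the source map
sass --no-source-map global.scss global.css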
So, these source maps are for developers. They are particularly useful for you and your team because they help tremendously for debugging issues as well as day-to-day work. I’m sure I make use of them just about every day. I’d say in general, they are used for local development. You might even .gitignore them or skip them in a deployment process in order to serve and store fewer assets to production. But there’s been some recent chatter about making sure they go to production as well.
But source maps have long been seen merely as a local development tool. Not something you ship to production, although people have also been doing that, such that live debugging would be easier. That in itself is a great reason to ship source maps. […]
Check out that issue thread for more interesting conversation about shipping source maps to production. The benefits boil down to these two things:
It might help you track down bugs in production more easily
It helps other people learn from your website more easily
Both are cool. Personally, I’d be opposed to shipping performance-optimized code for learning purposes alone. I wrote about that last year:
I don’t want my source to be human-readable, not for protective reasons, but because I care about web performance more. I want my website to arrive at light speed on a tiny spec of magical network packet dust and blossom into a complete website. Or do whatever computer science deems is the absolute fastest way to send website data between computers. I’m much more worried about the state of web performance than I am about web education. But even if I was very worried about web education, I don’t think it’s the network’s job to deliver teachability.
Shipping source maps to production is a nice middle ground. There’s no hit on performance (source maps don’t get loaded unless you have DevTools open, which is, IMO, irrelevant to a real performance discussion) with the benefit of delivering debugging and learning benefits.
The downsides brought up in recent discussion boil down to:
Sourcemaps require compilation time
It allows people to, I dunno, steal your code or something
I don’t care about #2 (sorry), and #1 seems generally negligible for a small or what we think of as the average site, though I’m afraid I can’t speak for mega sites.
One thing I should add though is that source maps can even be generated for CSS-in-JS tooling, so for those that literally inject styles into the DOM for you, those source maps are injected as well. I’ve seen major slowdowns in those situations, so I would say definitely do not ship source maps to production if you can’t split them out of your main bundles. Otherwise, I’d vote strongly that you do.