Tag: GitHub

Custom Styles in GitHub Readme Files

Even though GitHub Readme files (typically ./readme.md) are Markdown, and although Markdown supports HTML, you can’t put <style> or <script> tags in it. (Well, you can, they just get stripped.) So you can’t apply custom styles there. Or can you?

  1. You can use SVG as an <img src="./file.svg" alt="" /> (anywhere).
  2. When used that way, even stuff like animations within them play (wow).
  3. SVG has stuff like <text> for textual content, but also <foreignObject> for regular ol’ HTML content.
  4. SVG supports <style> tags.
  5. Your readme.md file does support <img> with SVG sources.

Sindre Sorhus combined all that into an example.
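Here’s a minimal sketch of the idea (my own illustration, not Sindre’s exact file). The SVG carries its own <style> block (including an animation) and a <foreignObject> of plain HTML; the readme then references it as <img src="./hello.svg" alt="" />, where the file name is hypothetical:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 120" width="400" height="120">
  <style>
    /* these styles live inside the SVG document, so GitHub doesn't strip them */
    .greeting {
      font: bold 24px sans-serif;
      color: #7928ca;
      animation: pulse 2s ease-in-out infinite;
    }
    @keyframes pulse {
      50% { opacity: 0.4; }
    }
  </style>
  <foreignObject width="100%" height="100%">
    <!-- regular ol' HTML content, namespaced so the SVG stays valid XML -->
    <div xmlns="http://www.w3.org/1999/xhtml" class="greeting">
      Hello from a styled, animated SVG!
    </div>
  </foreignObject>
</svg>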




Continuous Performance Analysis with Lighthouse CI and GitHub Actions

Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.

If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.
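For instance, assuming you have npm available, a standalone audit from the terminal looks something like this:

# install the Lighthouse CLI globally, then audit a page and write an HTML report
$ npm install -g lighthouse
$ lighthouse https://example.com --output html --output-path ./report.html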

All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing in the continuous integration process so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!

It is a suite of tools that help you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how they have changed over time.

In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer to not to use GitHub Actions.

Setting up the Lighthouse CI locally

In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:

$ npm install -g @lhci/cli

Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.

$ lhci --help

lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file

At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environment variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.
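Because the options map directly, anything you can put in the file can also be passed as a flag. As a quick hypothetical, you could override a single collect option for a one-off run without touching the file:

# one-off override of a single option via a CLI flag
$ lhci autorun --collect.numberOfRuns=5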

Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environment variables instead.

touch lighthouserc.js

Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.

If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. These URLs should be reachable on the custom server that you specified.

module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine whether the server has started. If it does not detect this string within 10 seconds, it assumes the server has started anyway and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.

You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.

At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.

lhci autorun

The output should be similar to what is shown below:

✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.

The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub Action. You can open the Lighthouse report link in your browser to view the median test results for each URL.

Configuring assertions

Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not match certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:

// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};

The preset option is a quick way to configure Lighthouse assertions. There are three options:

  • lighthouse:all: Asserts that every audit received a perfect score
  • lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
  • lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits

You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that failure in the former will result in a non-zero exit code while the latter will not. The result of any audit in Lighthouse can be asserted, so there’s a lot you can do here. Be sure to consult the documentation to discover all of the available options.
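As a quick illustration (my own sketch, not from the preset list above), assertions can also target individual audits and numeric values. The audit names below are real Lighthouse audit IDs, but the thresholds are made up:

// lighthouserc.js (hypothetical assertions)
module.exports = {
  ci: {
    assert: {
      assertions: {
        // warn when First Contentful Paint is slower than 2 seconds (milliseconds)
        'first-contentful-paint': ['warn', { maxNumericValue: 2000 }],
        // fail the build if any image fails the responsive-images audit
        'uses-responsive-images': ['error', { maxLength: 0 }],
      },
    },
  },
};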

You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
      url: ['/'],
    },
    assert: {
      budgetFile: './budget.json',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
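For reference, a minimal budget.json might look something like the sketch below, following the Lighthouse budgets format (resource sizes are in kilobytes, timings in milliseconds; the numbers are placeholders):

[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 150 },
      { "resourceType": "image", "budget": 300 }
    ]
  }
]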

Running Lighthouse CI with GitHub Actions

A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.

To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.

mkdir -p .github/workflows

Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.

cd .github/workflows
touch lighthouse-ci.yaml

The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I’ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:

# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true

      - name: Build site
        run: hugo

      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x

      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun

The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:

  • Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secrets. Without this token, this step will fail if it encounters a private repo.
  • Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
  • Build the site to a public folder through the hugo command.
  • Install and configure Node.js on the virtual machine through the setup-node action.
  • Install the Lighthouse CI tool and execute the lhci autorun command.

Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.

If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.

Setting up GitHub status checks

At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.

You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.
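If you are running Lighthouse CI through the GitHub Actions workflow from earlier, that secret also needs to be exposed to the lhci step as an environment variable. Assuming the secret is named LHCI_GITHUB_APP_TOKEN as described above, the step would look something like this:

- name: Run the Lighthouse CI
  run: |
    npm install -g @lhci/cli@0.6.x
    lhci autorun
  env:
    LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}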

The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.

Historical reporting and comparisons through the Lighthouse CI Server

Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.

To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.
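Once a server instance is up, pointing the CLI at it is mostly a matter of swapping the upload target. A sketch, assuming a hypothetical server URL and a build token generated by lhci wizard:

// lighthouserc.js
module.exports = {
  ci: {
    upload: {
      target: 'lhci',
      serverBaseUrl: 'https://lhci.example.com', // hypothetical server URL
      token: 'your-build-token', // generated by `lhci wizard`
    },
  },
};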

Conclusion

When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that are representative of the type of content on the site, and any other important pages.

Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!



MDN on GitHub

Looks like all the content of MDN is on GitHub now. That’s pretty rad. That’s been the public plan for a while. Chris Mills:

We will be using GitHub’s contribution tools and features, essentially moving MDN from a Wiki model to a pull request (PR) model. This is so much better for contribution, allowing for intelligent linting, mass edits, and inclusion of MDN docs in whatever workflows you want to add it to (you can edit MDN source files directly in your favorite code editor).

Looks like that transition is happening basically today, and it’s a whole new back-end and front-end architecture.

Say you wanted to update the article for :focus-within. There will be a button on that page that takes you to the file in the repo (rather than the wiki editing page), and you can edit it from the GitHub UI (or however you like to do Git, but that seems like right-on-GitHub will be where the bulk of editing happens). Saving the changed document will automatically become a Pull Request, and presumably, there is a team in place to approve those.

We think that your changes should be live on the site in 48 hours as a worst case scenario.

Big claps from me here, I think this is a smart move. I can’t speak to the tech, but the content model is smart. I’d maybe like to see the content in Markdown with less specialized classes and such, but I suspect that kind of thing can evolve over time and this is already a behemoth of an update to ship all at once.

In August 2020, the entire MDN (writers) team was laid off, so it looks like the play here is to open up the creation and editing of these technical docs to the world of developers. Will it work? It super didn’t work for the “Web Platform Docs” (remember those?). But MDN has way more existing content, mindshare, and momentum. I suspect it will work great for the maintenance of existing docs, decently for new docs on whatever is hot ‘n’ fresh, but less so for anything old and “boring”.

Seems a little risky to fire all the writers before you find out if it works, so that speaks to the product direction. Things are changing and paying a content creation team directly just ain’t a part of whatever the new direction is.



Optimize Images with a GitHub Action

I was playing with GitHub Actions the other day. Such a nice tool! Short story: you can have it run code for you, like run your build processes, tests, and deployments. But it’s just configuration files that can run whatever you need. There is a whole marketplace of Actions wanting to do work for you.

What I wanted to do was run code to do image optimization. That way I never have to think about it. Any image in the repo has been optimized.

There is an action for this already, Calibre’s image-actions, which we’ll leverage here. You’ll also need to ensure Actions is enabled for the repo. I know in my main organization we only flip on Actions on a per-repo basis, which is one of the options.

Then you make a file at .github/workflows/optimize-images.yml. That’s where you can configure this action. All your actions can have separate files, if you want them to. I made this a separate file because (1) it only works with “pushes to pull requests,” so if you have other actions that run on different triggers, they won’t mix nicely, and (2) that’s what is in their docs and looks like the suggested usage.

name: Optimize images
on: pull_request
jobs:
  build:
    name: calibreapp/image-actions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master

      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}

Now if you make a pull request, you’ll see it run:

That successful run then leaves a comment on the pull request saying what it was able to optimize:

It will literally re-commit those files back to the pull request as well, so if you’re going to stay on the pull request and keep working, you’ll need to pull before you can push again to include the optimized images.

I can look at that automatic commit and see the difference:

The commit preview in Git Tower.

Now I can merge the PR knowing all is well:

Pretty cool. Is optimizing your images locally particularly hard? No. Is never having to think about it again better? Yeah. You’re taking on a smidge of technical debt here, but reducing it elsewhere, which is a very fair trade, at least in my book.



The GitHub Profile Trick

Monica Powell shared a really cool trick the other day:

The profile README is created by creating a new repository that’s the same name as your username. For example, my GitHub username is m0nica so I created a new repository with the name m0nica.

Now the README.md from that repo is essentially the homepage of her profile. Above the usual list of popular repos, you can see the rendered version of that README on her profile:

Lemme do a lame version for myself real quick just to try it out…

OK, I start like this:

Screenshot of the default profile page for Chris Coyier.

Then I’ll go to repo.new (hey, CodePen has one of those cool domains too!) and make a repo on my personal account that is exactly the same as my username:

Screenshot showing the create new repo screen on GitHub. The repository name is set to chriscoyier.

I chose to initialize the repo with a README file and nothing else. So immediately I get:

Screenshot of the code section of the chriscoyier repo, which only contains a read me file that says hi there.

I can edit this directly on the web, and if I do, I see more helpful stuff:

Screenshot of editing the read me file directly in GitHub.

Fortunately, my personal website has a Markdown bio ready to use!

Screenshot of Chris Coyier's personal website homepage. It has a dark background and a large picture of Chris wearing a red CodePen hat next to some text welcoming people to the site.

I’ll copy and paste that over.

Screenshot showing the Markdown code from the personal website in the GitHub editor.

After committing that change, my own profile shows it!

Screenshot of the updated GitHub profile page, showing the welcome text from the personal website homepage.

Maybe I’ll get around to doing something more fun with it someday. Monica’s post has a bunch of fun examples in it. My favorite is Kaya Thomas’ profile, which I saw Jina Anne share:

You can’t use CSS in there (because GitHub strips it out), so I love the ingenuity of using old school <img align="right"> to pull off the floating image look.
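To sketch the idea (my own minimal example, not Kaya’s actual file): attribute-based presentation survives GitHub’s sanitizer, so a profile README can float an image like this (the file name and width are placeholders):

<img align="right" src="./avatar.png" alt="" width="160" />

## Hi there 👋

Text in the README wraps around the right-aligned
image, no stylesheet required.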



How to Make a Simple CMS With Cloudflare, GitHub Actions and Metalsmith

Let’s build ourselves a CMS. But rather than build out a UI, we’re going to get that UI for free in the form of GitHub itself! We’ll be leveraging GitHub as the way to manage the content for our static site generator (it could be any static site generator). Here’s the gist of it: GitHub is going to be the place to manage, version control, and store files, and also be the place we’ll do our content editing. When edits occur, a series of automations will test, verify, and ultimately deploy our content to Cloudflare.

The completed code for the project is available on GitHub. I power my own website, jonpauluritis.com, this exact way.

What does the full stack look like?

Here’s the tech stack we’ll be working with in this article:

  • Any Markdown editor (optional, e.g. Typora.io)
  • A Static Site Generator (e.g. Metalsmith)
  • GitHub w/ GitHub Actions (CI/CD and deployment)
  • Cloudflare Workers

Why should you care about this setup? This setup is potentially the leanest, fastest, cheapest (~$5/month), and easiest way to manage a website (or Jamstack site). It’s awesome both from a technical side and from a user experience perspective. This setup is so awesome I literally went out and bought stock in Microsoft and Cloudflare.

But before we start…

I’m not going to walk you through setting up accounts on these services, I’m sure you can do that yourself. Here are the accounts you need to set up: GitHub and Cloudflare.

I would also recommend Typora for an amazing Markdown writing experience, but Markdown editors are a very personal thing, so use which editor feels right for you. 

Project structure

To give you a sense of where we’re headed, here’s the structure of the completed project:

├── build.js
├── .github/workflows
│   ├── deploy.yml
│   └── nodejs.yml
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
└── wrangler.toml

Step 1: Command line stuff

In a terminal, change directory to wherever you keep these sorts of projects and type this:

$ mkdir cms && cd cms && npm init -y

That will create a new directory, move into it, and initialize the use of npm.

The next thing we want to do is stand on the shoulders of giants. We’ll be using a number of npm packages that help us do things, the meat of which is using the static site generator Metalsmith:

$ npm install --save-dev metalsmith metalsmith-markdown metalsmith-layouts metalsmith-collections metalsmith-permalinks handlebars jstransformer-handlebars

Along with Metalsmith, there are a couple of other useful bits and bobs. Why Metalsmith? Let’s talk about that.

Step 2: Metalsmith

I’ve been trying out static site generators for 2-3 years now, and I still haven’t found “the one.” All of the big names — like Eleventy, Gatsby, Hugo, Jekyll, Hexo, and Vuepress — are totally badass but I can’t get past Metalsmith’s simplicity and extensibility.

As an example, this code will actually build you a site:

// EXAMPLE... NOT WHAT WE ARE USING FOR THIS TUTORIAL
Metalsmith(__dirname)
  .source('src')
  .destination('dest')
  .use(markdown())
  .use(layouts())
  .build((err) => { if (err) throw err; });

Pretty cool right?

For the sake of brevity, type this into the terminal and we’ll scaffold out some structure and files to start with.

First, make the directories:

$ mkdir -p src/articles && mkdir -p layouts/partials

Then, create the build file:

$ touch build.js

Next, we’ll create some layout files:

$ touch layouts/index.hbs && touch layouts/about.hbs && touch layouts/article.hbs && touch layouts/partials/navigation.hbs

And, finally, we’ll set up our content resources:

$ touch src/index.md && touch src/about.md && touch src/articles/post1.md && touch src/articles/post2.md

The project folder should look something like this:

├── build.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
└── src
    ├── about.md
    ├── articles
    │   ├── post1.md
    │   └── post2.md
    └── index.md

Step 3: Let’s add some code

To save space (and time), you can use the commands below to create the content for our fictional website. Feel free to hop into “articles” and create your own blog posts. The key is that the posts need some metadata (also called “front matter”) to be able to generate properly. The files you would want to edit are index.md, post1.md and post2.md.

The metadata should look something like this:

---
title: 'Post1'
layout: article.hbs
---

## Post content here....

Or, if you’re lazy like me, use these terminal commands to add mock content from GitHub Gists to your site:

$ curl https://gist.githubusercontent.com/jppope/35dd682f962e311241d2f502e3d8fa25/raw/ec9991fb2d5d2c2095ea9d9161f33290e7d9bb9e/index.md > src/index.md
$ curl https://gist.githubusercontent.com/jppope/2f6b3a602a3654b334c4d8df047db846/raw/88d90cec62be6ad0b3ee113ad0e1179dfbbb132b/about.md > src/about.md
$ curl https://gist.githubusercontent.com/jppope/98a31761a9e086604897e115548829c4/raw/6fc1a538e62c237f5de01a926865568926f545e1/post1.md > src/articles/post1.md
$ curl https://gist.githubusercontent.com/jppope/b686802621853a94a8a7695eb2bc4c84/raw/9dc07085d56953a718aeca40a3f71319d14410e7/post2.md > src/articles/post2.md

Next, we’ll be creating our layouts and partial layouts (“partials”). We’re going to use Handlebars.js for our templating language in this tutorial, but you can use whatever templating language floats your boat. Metalsmith can work with pretty much all of them, and I don’t have any strong opinions about templating languages.

Build the index layout

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{#each articles }}
        <a href="{{path}}"><h3>{{ title }}</h3></a>
        <p>{{ description }}</p>
      {{/each }}
    </div>
  </body>
</html>

A couple of notes: 

  • Our “navigation” hasn’t been defined yet, but will ultimately replace the area where {{>navigation }} resides. 
  • {{#each }} will iterate through the “collection” of articles that Metalsmith will generate during its build process.
  • Metalsmith has lots of plugins you can use for things like stylesheets, tags, etc., but that’s not what this tutorial is about, so we’ll leave that for you to explore. 

Build the About page

Add the following to your about.hbs page:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

Build the Articles layout

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

You may have noticed that this is the exact same layout as the About page. It is. I just wanted to cover how to add additional pages so you’d know how to do that. If you want this one to be different, go for it.

Add navigation

Add the following to the layouts/partials/navigation.hbs file

<div class="navigation">
  <div>
    <a class="button" href="/">Home</a>
    <a class="button" href="/about">About</a>
  </div>
</div>

Sure there’s not much to it… but this really isn’t supposed to be a Metalsmith/SSG tutorial. ¯\_(ツ)_/¯

Step 4: The Build file

The heart and soul of Metalsmith is the build file. For the sake of thoroughness, I’m going to go through it line-by-line.

We start by importing the dependencies

Quick note: Metalsmith was created in 2014, and the predominant module system at the time was CommonJS, so I’m going to stick with require statements as opposed to ES modules. It’s also worth noting that most of the other tutorials are using require statements as well, so skipping a build step with Babel will just make life a little less complex here.

// What we use to glue everything together
const Metalsmith = require('metalsmith');

// compile from markdown (you can use targets as well)
const markdown = require('metalsmith-markdown');

// compiles layouts
const layouts = require('metalsmith-layouts');

// used to build collections of articles
const collections = require('metalsmith-collections');

// permalinks to clean up routes
const permalinks = require('metalsmith-permalinks');

// templating
const handlebars = require('handlebars');

// register the navigation
const fs = require('fs');
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

// NOTE: Uncomment if you want a server for development
// const serve = require('metalsmith-serve');
// const watch = require('metalsmith-watch');

Next, we’ll be including Metalsmith and telling it where to find its compile targets:

// Metalsmith
Metalsmith(__dirname)
  // where your markdown files are
  .source('src')
  // where you want the compiled files to be rendered
  .destination('public')

So far, so good. After we have the source and target set, we’re going to set up the markdown rendering, the layouts rendering, and let Metalsmith know to use “Collections.” These are a way to group files together. An easy example would be something like “blog posts” but it could really be anything, say recipes, whiskey reviews, or whatever. In the above example, we’re calling the collection “articles.”

  // previous code would go here

  // collections create groups of similar content
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  // compile from markdown
  .use(markdown())
  // nicer looking links
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  // build layouts using handlebars templates
  // also tell metalsmith where to find the raw input
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.html',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))

  // NOTE: Uncomment if you want a server for development
  // .use(serve({
  //   port: 8081,
  //   verbose: true
  // }))
  // .use(watch({
  //   paths: {
  //     "${source}/**/*": true,
  //     "layouts/**/*": "**/*",
  //   }
  // }))

Next, we’re adding the markdown plugin, so we can use markdown for content to compile to HTML.

From there, we’re using the layouts plugin to wrap our raw content in the layout we define in the layouts folder. You can read more about the nuts and bolts of this on the official plugin site but the result is that we can use {{{contents}}} in a template and it will just work. 

The last addition to our tiny little build script will be the build method:

// Everything else would be above this
.build(function(err) {
  if (err) {
    console.error(err)
  }
  else {
    console.log('build completed!');
  }
});

Putting everything together, we should get a build script that looks like this:

const Metalsmith = require('metalsmith');
const markdown = require('metalsmith-markdown');
const layouts = require('metalsmith-layouts');
const collections = require('metalsmith-collections');
const permalinks = require('metalsmith-permalinks');
const handlebars = require('handlebars');
const fs = require('fs');

// Navigation
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

Metalsmith(__dirname)
  .source('src')
  .destination('public')
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  .use(markdown())
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.html',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))
  .build(function (err) {
    if (err) {
      console.error(err)
    }
    else {
      console.log('build completed!');
    }
  });

I’m a sucker for simple and clean and, in my humble opinion, it doesn’t get any simpler or cleaner than a Metalsmith build. We just need to make one quick update to the package.json file and we’ll be able to give this a run:

 "name": "buffaloTraceRoute",   "version": "1.0.0",   "description": "",   "main": "index.js",   "scripts": {     "build": "node build.js",     "test": "echo "No Tests Yet!" "   },   "keywords": [],   "author": "Your Name",   "license": "ISC",   "devDependencies": {     // these should be the current versions     // also... comments aren't allowed in JSON   } }

If you want to see your handy work, you can uncomment the parts of the build file that will let you serve your project and do things like run npm run build. Just make sure you remove this code before deploying.

Working with Cloudflare

Next, we’re going to work with Cloudflare to get access to their Cloudflare Workers. This is where the $5/month cost comes into play.

Now, you might be asking: “OK, but why Cloudflare? What about using something free like GitHub Pages or Netlify?” It’s a good question. There are lots of ways to deploy a static site, so why choose one method over another?

Well, Cloudflare has a few things going for it…

Speed and performance

One of the biggest reasons to switch to a static site generator is to improve your website performance. Using Cloudflare Workers Sites can improve your performance even more.

Here’s a graph that shows Cloudflare compared to two competing alternatives:

Courtesy of Cloudflare

The simple reason why Cloudflare is the fastest: a site is deployed to 190+ data centers around the world. This reduces latency since users will be served the assets from a location that’s physically closer to them.

Simplicity

Admittedly, the initial configuration of Cloudflare Workers may be a little tricky if you don’t know how to set up environment variables. But after you set up the basic configuration for your computer, deploying to Cloudflare is as simple as wrangler publish from the site directory. This tutorial is focused on the CI/CD aspect of deploying to Cloudflare, which is a little more involved, but it’s still incredibly simple compared to most other deployment processes.
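For context, a manual deploy (after the one-time credential setup) is just two commands. A sketch:

# one-time: store your Cloudflare API token locally
$ wrangler config

# build and publish the site from the project directory
$ wrangler publish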

(It’s worth mentioning that GitHub Pages and Netlify are also killing it in this area. The developer experience of all three companies is amazing.)

More bang for the buck

While GitHub Pages and Netlify both have free tiers, your usage is (soft) limited to 100GB of bandwidth a month. Don’t get me wrong, that’s a super generous limit. But after that you’re out of luck. GitHub Pages doesn’t offer anything more than that, and Netlify jumps up to $45/month, making Cloudflare’s $5/month price tag very reasonable.

Service                     Free Tier Bandwidth   Paid Tier Price   Paid Tier Requests / Bandwidth
GitHub Pages                100GB                 N/A               N/A
Netlify                     100GB                 $45               ~150K / 400GB
Cloudflare Workers Sites    none                  $5                10MM / unlimited

Calculations assume a 3MB average website. Cloudflare has additional limits on CPU use. GitHub Pages should not be used for sites that have credit card transactions.

Sure, there’s no free tier for Cloudflare, but $5 for 10 million requests is a steal. I would also be remiss if I didn’t mention that GitHub Pages has had a few outages over the last year. That’s totally fine in my book for a demo site, but it would be bad news for a business.

Cloudflare offers a ton of additional features that are worth briefly mentioning: free SSL certificates, free (and easy) DNS routing, a custom Workers Sites domain name for your projects (which is great for staging), unlimited environments (e.g. staging), and registering a domain name at cost (as opposed to the markup pricing imposed by other registrars).

Deploying to Cloudflare

Cloudflare provides some great tutorials for how to use their Cloudflare Workers product. We’ll cover the highlights here.

First, make sure the Cloudflare CLI, Wrangler, is installed:

$ npm i @cloudflare/wrangler -g

Next, we’re going to add Cloudflare Sites to the project, like this:

wrangler init --site cms 

Assuming I didn’t mess up and forget about a step, here’s what we should have in the terminal at this point:

⬇️  Installing cargo-generate...
🔧  Creating project called `workers-site`...
✨  Done! New project created /Users/<User>/Code/cms/workers-site
✨  Successfully scaffolded workers site
✨  Successfully created a `wrangler.toml`

There should also be a generated folder in the project root called /workers-site as well as a config file called wrangler.toml — this is where the magic resides.

name = "cms" type = "webpack" account_id = "" workers_dev = true route = "" zone_id = "" 
 [site] bucket = "" entry-point = "workers-site"

You might have already guessed what comes next… we need to add some info to the config file! The first key/value pair we’re going to update is the bucket property.

bucket = "./public"

Next, we need to get the Account ID and Zone ID (i.e. the route for your domain name). You can find them in your Cloudflare account all the way at the bottom of the dashboard for your domain:

Stop! Before going any further, don’t forget to click the “Get your API token” button to grab the last config piece that we’ll need. Save it on a notepad or somewhere handy because we’ll need it for the next section. 

Phew! Alright, the next task is to add the Account ID and Zone ID we just grabbed to the .toml file:

name = "buffalo-traceroute" type = "webpack" account_id = "d7313702f333457f84f3c648e9d652ff" # Fake... use your account_id workers_dev = true # route = "example.com/*"  # zone_id = "805b078ca1294617aead2a1d2a1830b9" # Fake... use your zone_id 
 [site] bucket = "./public" entry-point = "workers-site" (Again, those IDs are fake.)

Again, those IDs are fake. You may be asked to set up credentials on your computer. If that’s the case, run wrangler config in the terminal.

GitHub Actions

The last piece of the puzzle is to configure GitHub to do automatic deployments for us. Having done previous forays into CI/CD setups, I was ready for the worst on this one but, amazingly, GitHub Actions is very simple for this sort of setup.

So how does this work?

First, let’s make sure that our GitHub account has GitHub Actions activated. It’s technically in beta right now, but I haven’t run into any issues with that so far.

Next, we need to create a repository in GitHub and upload our code to it. Start by going to GitHub and creating a repository.

This tutorial isn’t meant to cover the finer points of Git and/or GitHub, but there’s a great introduction. Or, copy and paste the following commands while in the root directory of the project:

# run commands one after the other
$ git init
$ touch .gitignore && echo 'node_modules' > .gitignore
$ git add .
$ git commit -m 'first commit'
$ git remote add origin https://github.com/{username}/{repo name}
$ git push -u origin master

That should add the project to GitHub. I say that with a little hesitance because this is where everything tends to blow up for me. For example, put too many commands into the terminal and suddenly GitHub has an outage, or the terminal is unable to locate the path for Python. Tread carefully!

Assuming we’re past that part, our next task is to activate GitHub Actions and create a directory called .github/workflows in the root of the project directory. (GitHub can also do this automatically by adding the “node” workflow when activating actions. At the time of writing, adding a GitHub Actions workflow is part of GitHub’s user interface.)

Once we have the directory in the project root, we can add the final two files. Each file is going to handle a different workflow:

  1. A workflow to check that updates can be merged (i.e. the “CI” in CI/CD)
  2. A workflow to deploy changes once they have been merged into master (i.e. the “CD” in CI/CD)
# integration.yml
name: Integration

on:
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x]
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test
      env:
        CI: true

This is a straightforward workflow. So straightforward, in fact, that I copied it straight from the official GitHub Actions docs and barely modified it. Let’s go through what is actually happening in there:

  1. on: Run this workflow only when a pull request is created for the master branch
  2. jobs: Run the steps below in two Node environments (Node 10 and Node 12 — Node 12 is currently the recommended version). This will build, if a build script is defined. It will also run tests if a test script is defined.

The second file is our deployment script and is a little more involved.

# deploy.yml
name: Deploy

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: Deploy
    strategy:
      matrix:
        node-version: [10.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - uses: actions/checkout@master
      - name: Build site
        run: "npm run build"
      - name: Publish
        uses: cloudflare/wrangler-action@1.1.0
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}

Important! Remember that Cloudflare API token I mentioned way earlier? Now is the time to use it. Go to the project settings and add a secret. Name the secret CF_API_TOKEN and add the API token.

Let’s go through what’s going on in this script:

  1. on: Run the steps when code is merged into the master branch
  2. steps: Use Node.js to install all dependencies and build the site, then use Cloudflare Wrangler to publish the site

Here’s a quick recap of what the project should look like before running a build (sans node_modules): 

├── build.js
├── dist
│   └── worker.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
│   ├── index.js
│   ├── package-lock.json
│   ├── package.json
│   └── worker
│       └── script.js
└── wrangler.toml

A GitHub-based CMS

Okay, so I made it this far… I was promised a CMS? Where is the database and my GUI that I log into and stuff?

Don’t worry, you are at the finish line! GitHub is your CMS now and here’s how it works:

  1. Write a markdown file (with front matter).
  2. Open up GitHub and go to the project repository.
  3. Click into the “Articles” directory, and upload the new article. GitHub will ask whether a new branch should be created along with a pull request. The answer is yes. 
  4. After the integration is verified, the pull request can be merged, which triggers deployment. 
  5. Sit back, relax and wait 10 seconds… the content is being deployed to 164 data centers worldwide.

Congrats! You now have a minimal Git-based CMS that basically anyone can use. 

Troubleshooting notes

  • Metalsmith layouts can sometimes be kinda tricky. Try adding this debug line before the build step to have it kick out something useful: DEBUG=metalsmith-layouts npm run build
  • Occasionally, GitHub Actions needed me to add node_modules to the commit so it could deploy… this was strange to me (and not a recommended practice) but fixed the deployment.
  • Please let me know if you run into any trouble and we can add it to this list!


Continuous Deployments for WordPress Using GitHub Actions

Continuous Integration (CI) workflows are considered a best practice these days. As in, you work with your version control system (Git), and as you do, CI is doing work for you like running tests, sending notifications, and deploying code. That last part is called Continuous Deployment (CD). But shipping code to a production server often requires paid services. With GitHub Actions, Continuous Deployment is free for everyone. Let’s explore how to set that up.

DevOps is for everyone

As a front-end developer, continuous deployment workflows used to be exciting, but mysterious to me. I remember numerous times being scared to touch deployment configurations. I defaulted to the easy route instead — usually having someone else set it up and maintain it, or manually copying and pasting things in a worst-case scenario.

As soon as I understood the basics of rsync, CD finally became tangible to me. With the following GitHub Action workflow, you do not need to be a DevOps specialist; but you’ll still have the tools at hand to set up best practice deployment workflows.

The basics of a Continuous Deployment workflow

So what’s the deal, how does this work? It all starts with CI, which means that you commit code to a shared remote repository, like GitHub, and every push to it will run automated tasks on a remote server. Those tasks could include test and build processes, like linting, concatenation, minification and image optimization, among others.

CD also delivers code to a production website server. That may happen by copying the verified and built code and placing it on the server via FTP, SSH, or by shipping containers to an infrastructure. While every shared hosting package has FTP access, it’s rather unreliable and slow to send many files to a server. And while shipping application containers is a safe way to release complex applications, the infrastructure and setup can be rather complex as well. Deploying code via SSH though is fast, safe and flexible. Plus, it’s supported by many hosting packages.

How to deploy with rsync

An easy and efficient way to ship files to a server via SSH is rsync, a utility tool to sync files between a source and destination folder, drive or computer. It will only synchronize those files which have changed or don’t already exist at the destination. As it became a standard tool on popular Linux distributions, chances are high you don’t even need to install it.

The most basic operation is as easy as calling rsync SRC DEST to sync files from one directory to another one. However, there are a couple of options you want to consider:

  • -c compares file changes by checksum, not modification time
  • -h outputs numbers in a more human readable format
  • -a retains file attributes and permissions and recursively copies files and directories
  • -v shows status output
  • --delete deletes files from the destination that aren’t found in the source (anymore)
  • --exclude prevents syncing specified files like the .git directory and node_modules
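Before wiring this into a workflow, a local dry run is a safe way to see what these options do. A sketch with hypothetical directories (the -n flag previews the transfer without writing anything):

# preview what would be copied from ./public to ./backup
$ rsync -chavn ./public/ ./backup/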

And finally, you want to send the files to a remote server, which makes the full command look like this:

rsync -chav --delete --exclude /.git/ --exclude /node_modules/ ./ ssh-user@example.com:/mydir

You could run that command from your local computer to deploy to any live server. But how cool would it be if it was running in a controlled environment from a clean state? Right, that’s what you’re here for. Let’s move on with that.

Create a GitHub Actions workflow

With GitHub Actions you can configure workflows to run on any GitHub event. While there is a marketplace for GitHub Actions, we don’t need any of them but will build our own workflow.

To get started, go to the “Actions” tab of your repository and click “Set up a workflow yourself.” This will open the workflow editor with a .yaml template that will be committed to the .github/workflows directory of your repository.

When saved, the workflow checks out your repo code and runs some echo commands. name helps follow the status and results later. run contains the shell commands you want to run in each step.

Define a deployment trigger

Theoretically, every commit to the master branch should be production-ready. However, reality teaches you that you need to test results on the production server after deployment as well and you need to schedule that. We at bleech consider it a best practice to only deploy on workdays — except Fridays and only before 4:00 pm — to make sure we have time to roll back or fix issues during business hours if anything goes wrong.

An easy way to get manual-level control is to set up a branch just for triggering deployments. That way, you can specifically merge your master branch into it whenever you are ready. Call that branch production, let everyone on your team know pushes to that branch are only allowed from the master branch and tell them to do it like this:

git push origin master:production

Here’s how to change your workflow trigger to only run on pushes to that production branch:

name: Deployment
on:
  push:
    branches: [ production ]

Build and verify the theme

I’ll assume you’re using Flynt, our WordPress starter theme, which comes with dependency management via Composer and npm as well as a preconfigured build process. If you’re using a different theme, the build process is likely to be similar, but might need adjustments. And if you’re checking in the built assets to your repository, you can skip all steps except the checkout command.

For our example, let’s make sure that node is executed in the required version and that dependencies are installed before building:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2

    - uses: actions/setup-node@v1.1.0
      with:
        version: 12.x

    - name: Install dependencies
      run: |
        composer install -o
        npm install

    - name: Build
      run: npm run build

The Flynt build task lints, compiles, and transpiles Sass and JavaScript files, then adds revisioning to assets to prevent browser cache issues. If anything in the build step fails, the workflow will stop executing and thus prevents you from deploying a broken release.

Configure server access and destination

For the rsync command to run successfully, GitHub needs access to SSH into your server. This can be accomplished by:

  1. Generating a new SSH key without a passphrase (see the sketch after this list)
  2. Adding the public key to your ~/.ssh/authorized_keys on the production server
  3. Adding the private key as a secret with the name DEPLOY_KEY to the repository
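
Here’s a minimal sketch of those steps; the key file name is arbitrary, and the server address is the placeholder from above:

# 1. Generate a new key pair without a passphrase (-N "")
ssh-keygen -t rsa -b 4096 -f deploy_key -N ""

# 2. Append the public key to authorized_keys on the production server
ssh-copy-id -i deploy_key.pub ssh-user@example.com

# 3. The contents of the private key file go into the DEPLOY_KEY secret
cat deploy_key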

The sync workflow step needs to save the key to a local file, adjust file permissions and pass the file to the rsync command. The destination has to point to your WordPress theme directory on the production server. It’s convenient to define it as a variable so you know what to change when reusing the workflow for future projects.

- name: Sync
  env:
    dest: 'ssh-user@example.com:/mydir/wp-content/themes/mytheme'
  run: |
    echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
    chmod 600 ./deploy_key
    rsync -chav --delete \
      -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
      --exclude /.git/ \
      --exclude /.github/ \
      --exclude /node_modules/ \
      ./ ${{ env.dest }}

Depending on your project structure, you might want to deploy plugins and other theme related files as well. To accomplish that, change the source and destination to the desired parent directory, make sure to check if the excluded files need an update, and check if any paths in the build process should be adjusted. 

Put the pieces together

We’ve covered all necessary steps of the CD process. Now we need to run them in a sequence which should:

  1. Trigger on each push to the production branch
  2. Install dependencies
  3. Build and verify the code
  4. Send the result to a server via rsync

The complete GitHub workflow will look like this:

name: Deployment

on:
  push:
    branches: [ production ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2

    - uses: actions/setup-node@v1.1.0
      with:
        version: 12.x

    - name: Install dependencies
      run: |
        composer install -o
        npm install

    - name: Build
      run: npm run build

    - name: Sync
      env:
        dest: 'ssh-user@example.com:/mydir/wp-content/themes/mytheme'
      run: |
        echo "${{ secrets.DEPLOY_KEY }}" > deploy_key
        chmod 600 ./deploy_key
        rsync -chav --delete \
          -e 'ssh -i ./deploy_key -o StrictHostKeyChecking=no' \
          --exclude /.git/ \
          --exclude /.github/ \
          --exclude /node_modules/ \
          ./ ${{ env.dest }}

To test the workflow, commit the changes, pull them into your local repository and trigger the deployment by pushing your master branch to the production branch:

git push origin master:production

You can follow the status of the execution by going to the “Actions” tab in GitHub, then selecting the recent execution and clicking on the “deploy” job. The green checkmarks indicate that everything went smoothly. If there are any issues, check the logs of the failed step to fix them.


Congratulations! You’ve successfully deployed your WordPress theme to a server. The workflow file can easily be reused for future projects, making continuous deployment setups a breeze.

To further refine your deployment process, the following topics are worth considering:

  • Caching dependencies to speed up the GitHub workflow (a sketch follows this list)
  • Activating the WordPress maintenance mode while syncing files
  • Clearing the website cache of a plugin (like Cache Enabler) after the deployment
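
As a taste of the first item, here’s a sketch of npm dependency caching with the actions/cache action, assuming a committed package-lock.json; a Composer cache works the same way with a different path and key:

# Insert before the "Install dependencies" step
- uses: actions/cache@v2
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-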

The post Continuous Deployments for WordPress Using GitHub Actions appeared first on CSS-Tricks.


Using GitHub Template Repos to Jump-Start Static Site Projects

If you’re getting started with static site generators, did you know you can use GitHub template repositories to quickly start new projects and reduce your setup time?

Most static site generators make installation easy, but each project still requires configuration after installation. When you build a lot of similar projects, you may duplicate effort during the setup phase. GitHub template repositories may save you a lot of time if you find yourself:

  • creating the same folder structures from previous projects,
  • copying and pasting config files from previous projects, and
  • copying and pasting boilerplate code from previous projects.

Unlike forking a repository, which allows you to use someone else’s code as a starting point, template repositories allow you to use your own code as a starting point, where each new project gets its own, independent Git history. Check it out!

Let’s take a look at how we can set up a convenient workflow. We’ll set up a boilerplate Eleventy project, turn it into a Git repository, host the repository on GitHub, and then configure that repository to be a template. Then, next time you have a static site project, you’ll be able to come back to the repository, click a button, and start working from an exact copy of your boilerplate.

Are you ready to try it out? Let’s set up our own static site using GitHub templates to see just how much templates can help streamline a static site project.

I’m using Eleventy as an example of a static site generator because it’s my personal go-to, but this process will work for Hugo, Jekyll, Nuxt, or any other flavor of static site generator you prefer.

If you want to see the finished product, check out my static site template repository.

First off, let’s create a template folder

We’re going to kick things off by running each of these in the command line:

cd ~
mkdir static-site-template
cd static-site-template

These three commands change directory into your home directory (~ in Unix-based systems), make a new directory called static-site-template, and then change directory into the static-site-template directory.

Next, we’ll initialize the Node project

In order to work with Eleventy, we need to install Node.js, which allows your computer to run JavaScript code outside of a web browser.

Node.js comes with node package manager, or npm, which downloads node packages to your computer. Eleventy is a node package, so we can use npm to fetch it.

Assuming Node.js is installed, let’s head back to the command line and run:

npm init

This creates a file called package.json in the directory. npm will prompt you for a series of questions to fill out the metadata in your package.json. After answering the questions, the Node.js project is initialized.
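
If you’d rather accept all the defaults and skip the questionnaire, npm has a shortcut flag:

npm init -y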

Now we can install Eleventy

Initializing the project gave us a package.json file which lets npm install packages, run scripts, and do other tasks for us inside that project. npm uses package.json as an entry point in the project to figure out precisely how and what it should do when we give it commands.

We can tell npm to install Eleventy as a development dependency by running:

npm install -D @11ty/eleventy

This will add a devDependency entry to the package.json file and install the Eleventy package to a node_modules folder in the project.
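
The result is an entry along these lines in your package.json (the version number will reflect whatever is current when you install; the build output later in this article was produced with v0.9.0):

{
  "devDependencies": {
    "@11ty/eleventy": "^0.9.0"
  }
}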

The cool thing about the package.json file is that any other computer with Node.js and npm can read it and know to install Eleventy in the project node_modules directory without having to install it manually. See, we’re already streamlining things!

Configuring Eleventy

There are tons of ways to configure an Eleventy project. Flexibility is Eleventy’s strength. For the purposes of this tutorial, I’m going to demonstrate a configuration that provides:

  • A folder to cleanly separate website source code from overall project files
  • An HTML document for a single page website
  • CSS to style the document
  • JavaScript to add functionality to the document

Hop back in the command line. Inside the static-site-template folder, run these commands one by one (excluding the comments that appear after each # symbol):

mkdir src               # creates a directory for your website source code
mkdir src/css           # creates a directory for the website styles
mkdir src/js            # creates a directory for the website JavaScript
touch src/index.html    # creates the website HTML document
touch src/css/style.css # creates the website styles
touch src/js/main.js    # creates the website JavaScript

This creates the basic file structure that will inform the Eleventy build. However, if we run Eleventy right now, it won’t generate the website we want. We still have to configure Eleventy to understand that it should only use files in the src folder for building, and that the css and js folders should be processed with passthrough file copy.

You can give this information to Eleventy through a file called .eleventy.js in the root of the static-site-template folder. You can create that file by running this command inside the static-site-template folder:

touch .eleventy.js

Edit the file in your favorite text editor so that it contains this:

module.exports = function(eleventyConfig) {
  eleventyConfig.addPassthroughCopy("src/css");
  eleventyConfig.addPassthroughCopy("src/js");
  return {
    dir: {
      input: "src"
    }
  };
};

Lines 2 and 3 tell Eleventy to use passthrough file copy for CSS and JavaScript. Line 6 tells Eleventy to use only the src directory to build its output.

Eleventy will now give us the output we want. Let’s put that to the test by running this in the command line:

npx @11ty/eleventy

The npx command allows npm to execute code from the project’s node_modules directory without touching the global environment. You’ll see output like this:

Writing _site/index.html from ./src/index.html.
Copied 2 items and Processed 1 file in 0.04 seconds (v0.9.0)

The static-site-template folder should now have a new directory in it called _site. If you dig into that folder, you’ll find the css and js directories, along with the index.html file.
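
In other words, the output should look like this:

_site/
├── css/
│   └── style.css
├── js/
│   └── main.js
└── index.html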

This _site folder is the final output from Eleventy. It is the entirety of the website, and you can host it on any static web host.

Without any content, styles, or scripts, the generated site isn’t very interesting.

Let’s create a boilerplate website

Next up, we’re going to put together the baseline for a super simple website we can use as the starting point for all projects moving forward.

It’s worth mentioning that Eleventy has a ton of boilerplate files for different types of projects. It’s totally fine to go with one of those, though I often find I wind up needing to roll my own. So that’s what we’re doing here. Let’s start with this markup in src/index.html:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Static site template</title>
    <meta name="description" content="A static website">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="css/style.css">
  </head>
  <body>
    <h1>Great job making your website template!</h1>
    <script src="js/main.js"></script>
  </body>
</html>

We may as well style things a tiny bit, so let’s add this to src/css/style.css:

body {
  font-family: sans-serif;
}

And we can confirm JavaScript is hooked up by adding this to src/js/main.js:

(function() {
  console.log('Invoke the static site template JavaScript!');
})();

Want to see what we’ve got? Run npx @11ty/eleventy --serve in the command line. Eleventy will spin up a server with Browsersync and provide the local URL, which is probably something like localhost:8080.

Even the console tells us things are ready to go!

Let’s move this over to a GitHub repo

Git is the most commonly used version control system in software development. Most Unix-based computers come with it installed, and you can turn any directory into a Git repository by running this command:

git init

We should get a message like this:

Initialized empty Git repository in /path/to/static-site-template/.git/

That means a hidden .git folder was added inside the project directory, which allows the Git program to run commands against the project.

Before we start running a bunch of Git commands on the project, we need to tell Git about files we don’t want it to touch.

Inside the static-site-template directory, run:

touch .gitignore

Then open up that file in your favorite text editor. Add this content to the file:

_site/
node_modules/

This tells Git to ignore the node_modules directory and the _site directory. Committing every single Node.js module to the repo could make things really messy and tough to manage. All that information is already in package.json anyway.

Similarly, there’s no need to version control _site. Eleventy can generate it from the files in src, so no need to take up space in GitHub. It’s also possible that if we were to:

  • version control _site,
  • change files in src, or
  • forget to run Eleventy again,

then _site will reflect an older build of the website, and future developers (or a future version of yourself) may accidentally use an outdated version of the site.

Git is version control software, and GitHub is a Git repository host. There are other Git host providers like BitBucket or GitLab, but since we’re talking about a GitHub-specific feature (template repositories), we’ll push our work up to GitHub. If you don’t already have an account, go ahead and join GitHub. Once you have an account, create a GitHub repository and name it static-site-template.

GitHub will ask a few questions when setting up a new repository. One of those is whether we want to create a new repository on the command line or push an existing repository from the command line. Neither of these choices are exactly what we need. They assume we either don’t have anything at all, or we have been using Git locally already. The static-site-template project already exists, has a Git repository initialized, but doesn’t yet have any commits on it.

So let’s ignore the prompts and instead run the following commands in the command line. Make sure you have the URL GitHub provides handy; it goes in the command on line 3:

git add .
git commit -m "first commit"
git remote add origin https://github.com/your-username/static-site-template.git
git push -u origin master

This adds the entire static-site-template folder to the Git staging area. It commits it with the message “first commit,” adds a remote repository (the GitHub repository), and then pushes up the master branch to that repository.

Let’s template-ize this thing

OK, this is the crux of what we have been working toward. GitHub templates allow us to use the repository we’ve just created as the foundation for other projects in the future, without having to redo all the work it took to get here!

Click Settings on the GitHub landing page of the repository to get started. On the settings page, check the Template repository checkbox.

Now when we go back to the repository page, we’ll get a big green button that says Use this template. Click it and GitHub will create a new repository that’s a mirror of our new template. The new repository will start with the same files and folders as static-site-template. From there, download or clone that new repository to start a new project with all the base files and configuration we set up in the template project.
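
If you prefer staying in the terminal, recent versions of the GitHub CLI can create a repository from a template as well; the new repository name below is just an example, and the exact flags vary a little between gh versions:

gh repo create my-new-site --template your-username/static-site-template --public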

We can extend the template for future projects

Now that we have a template repository, we can use it for any new static site project that comes up. However, you may find that a new project has needs beyond what’s been set up in the template. For example, let’s say you need to tap into Eleventy’s templating engine or data processing power.

Go ahead and build on top of the template as you work on the new project. When you finish that project, identify pieces you want to reuse in future projects. Perhaps you figured out a cool hover effect on buttons. Or you built your own JavaScript carousel element. Or maybe you’re really proud of the document design and hierarchy of information.

If you think anything you did on a project might come up again on your next run, remove the project-specific details and add the new stuff to your template project. Push those changes up to GitHub, and the next time you use static-site-template to kick off a project, your reusable code will be available to you.

There are some limitations to this, of course

GitHub template repositories are a useful tool for avoiding repetitive setup on new web development projects. I find this especially useful for static site projects. These template repositories might not be as appropriate for more complex projects that require external services like databases with configuration that cannot be version-controlled in a single directory.

Template repositories allow you to ship reusable code you have written so you can solve a problem once and use that solution over and over again. But while your new solutions will carry over to future projects, they won’t be ported backwards to old projects.

This is a useful process for sites with very similar structure, styles, and functionality. Projects with wildly varied requirements may not benefit from this code-sharing, and you could end up bloating your project with unnecessary code.

Wrapping up

There you have it! You now have everything you need not only to start a static site project with Eleventy, but also to repurpose it for future projects. GitHub templates are so handy for kicking off projects quickly where we otherwise would have to rebuild the same wheel over and over. Use them to your advantage and enjoy a jump start on your projects moving forward!

The post Using GitHub Template Repos to Jump-Start Static Site Projects appeared first on CSS-Tricks.
