Tag: Environment

Hack the “Deploy to Netlify” Button Using Environment Variables to Make a Customizable Site Generator

If you’re anything like me, you like taking lazy shortcuts. The “Deploy to Netlify” button allows me to take this lovely feature of my personality and be productive with it.

Deploy to Netlify

Clicking the button above lets me (or you!) instantly clone my Next.js starter project and automatically deploy it to Netlify. Wow! So easy! I’m so happy!

Now, as I was perusing the docs for the button the other night, as one does, I noticed that you can pre-fill environment variables to the sites you deploy with the button. Which got me thinking… what kind of sites could I customize with that?
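In other words, the deploy URL can carry variables along as hash parameters, something like this (a made-up example using the repo and variable names you’ll see later in this post):

https://app.netlify.com/start/deploy?repository=https://github.com/cassidoo/link-in-bio-template#VITE_NAME=Cassidy_Williams&VITE_PROFILE_PIC=https://github.com/cassidoo.png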

Ah, the famed “link in bio” you see all over social media when folks want you to see all of their relevant links in life. You can sign up for the various services that’ll make one of these sites for you, but what if you could make one yourself without having to sign up for yet another service?

But we’re also lazy and like shortcuts. Sounds like we can solve all of these problems with the “Deploy to Netlify” (DTN) button and environment variables.

How would we build something like this?

In order to make our DTN button work, we need to make two projects that work together:

  • A template project (This is the repo that will be cloned and customized based on the environment variables passed in.)
  • A generator project (This is the project that will create the environment variables that should be passed to the button.)

I decided to be a little spicy with my examples, and so I made both projects with Vite, but the template project uses React and the generator project uses Vue.

I’ll do a high-level overview of how I built these two projects, and if you’d like to just see all the code, you can skip to the end of this post to see the final repositories!

The Template project

To start my template project, I’ll pull in Vite and React.

npm init @vitejs/app

After running this command, you can follow the prompts with whatever frameworks you’d like!

Now, after doing the whole npm install thing, you’ll want to add a .env.local file and add in the environment variables you want to include. I want to have a name for the person who owns the site, their profile picture, and then all of their relevant links.

VITE_NAME=Cassidy Williams
VITE_PROFILE_PIC=https://github.com/cassidoo.png
VITE_GITHUB_LINK=https://github.com/cassidoo
VITE_TWITTER_LINK=https://twitter.com/cassidoo

You can set this up however you’d like, because this is just test data we’ll build off of! As you build out your own application, you can pull in your environment variables at any time with import.meta.env. Vite only exposes variables prefixed with VITE_ to client code, so as you play around with variables, make sure you prepend that prefix to their names.

Ultimately, I made a rather large parsing function that I passed to my components to render into the template:

function getPageContent() {
  // Pull in all variables that start with VITE_ and turn it into an array
  let envVars = Object.entries(import.meta.env).filter((key) => key[0].startsWith('VITE_'))

  // Get the name and profile picture, since those are structured differently from the links
  const name = envVars.find((val) => val[0] === 'VITE_NAME')[1].replace(/_/g, ' ')
  const profilePic = envVars.find((val) => val[0] === 'VITE_PROFILE_PIC')[1]

  // ...

  // Pull all of the links, and properly format the names to be all lowercase and normalized
  let links = envVars.map((k) => {
    return [deEnvify(k[0]), k[1]]
  })

  // This object is what is ultimately sent to React to be rendered
  return { name, profilePic, links }
}

function deEnvify(str) {
  return str.replace('VITE_', '').replace('_LINK', '').toLowerCase().split('_').join(' ')
}

I can now pull these variables into a React function that renders the components I need:

// ...
  return (
    <div>
      <img alt={vars.name} src={vars.profilePic} />
      <p>{vars.name}</p>
      {vars.links.map((l, index) => {
        return <Link key={`link${index}`} name={l[0]} href={l[1]} />
      })}
    </div>
  )
// ...

And voilà! With a little CSS, we have a “link in bio” site!

Now let’s turn this into something that doesn’t rely on hard-coded variables. Generator time!

The Generator project

I’m going to start a new Vite site, just like I did before, but I’ll be using Vue for this one, for funzies.

Now in this project, I need to generate the environment variables we talked about above. So we’ll need an input for the name, an input for the profile picture, and then a set of inputs for each link that a person might want to make.

In my App.vue template, I’ll have these separated out like so:

<template>
  <div>
    <p>
      <span>Your name:</span>
      <input type="text" v-model="name" />
    </p>
    <p>
      <span>Your profile picture:</span>
      <input type="text" v-model="propic" />
    </p>
  </div>

  <List v-model:list="list" />

  <GenerateButton :name="name" :propic="propic" :list="list" />
</template>

In that List component, we’ll have dual inputs that gather all of the links our users might want to add:

<template>
  <div class="list">
    Add a link: <br />
    <input type="text" v-model="newItem.name" />
    <input type="text" v-model="newItem.url" @keyup.enter="addItem" />
    <button @click="addItem">+</button>

    <ListItem
      v-for="(item, index) in list"
      :key="index"
      :item="item"
      @delete="removeItem(index)"
    />
  </div>
</template>

So in this component, there are two inputs that add to an object called newItem, and then the ListItem component lists out all of the links that have been created already, and each one can delete itself.
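The script side of that List component isn’t shown here, but a minimal sketch of it could look something like the following (the exact code in the final repo may differ; the prop, emit, and method names are assumed from the template above):

<script>
// A sketch only: keeps the parent's v-model:list in sync via update:list events.
import ListItem from './ListItem.vue' // assumed path

export default {
  name: 'List',
  components: { ListItem },
  props: { list: { type: Array, default: () => [] } },
  emits: ['update:list'],
  data() {
    return {
      newItem: { name: '', url: '' }, // the object both inputs bind to
    }
  },
  methods: {
    addItem() {
      if (!this.newItem.name || !this.newItem.url) return
      // Emit a new array so the parent's list updates
      this.$emit('update:list', [...this.list, { ...this.newItem }])
      this.newItem = { name: '', url: '' }
    },
    removeItem(index) {
      this.$emit('update:list', this.list.filter((_, i) => i !== index))
    },
  },
}
</script>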

Now, we can take all of these values we’ve gotten from our users, and populate the GenerateButton component with them to make our DTN button work!

The template in GenerateButton is just an <a> tag with the link. The power in this one comes from the methods in the <script>.

// ...
methods: {
  convertLink(str) {
    // Convert each string passed in to use the VITE_WHATEVER_LINK syntax that our template expects
    return `VITE_${str.replace(/ /g, '_').toUpperCase()}_LINK`
  },
  convertListOfLinks() {
    let linkString = ''

    // Pass each link given by the user to our helper function
    this.list.forEach((l) => {
      linkString += `${this.convertLink(l.name)}=${l.url}&`
    })

    return linkString
  },
  // This function pushes all of our strings together into one giant link that will be put into our button that will deploy everything!
  siteLink() {
    return (
      // This is the base URL we need of our template repo, and the Netlify deploy trigger
      'https://app.netlify.com/start/deploy?repository=https://github.com/cassidoo/link-in-bio-template#' +
      'VITE_NAME=' +
      // Replacing spaces with underscores in the name so that the URL doesn't turn that into %20
      this.name.replace(/ /g, '_') +
      '&' +
      'VITE_PROFILE_PIC=' +
      this.propic +
      '&' +
      // Pulls all the links from our helper function above
      this.convertListOfLinks()
    )
  },
},

Believe it or not, that’s it. You can add whatever styles you like or change up what variables are passed (like themes, toggles, etc.) to make this truly customizable!

Put it all together

Once these projects are deployed, they can work together in beautiful harmony!

This is the kind of project that can really illustrate the power of customization when you have access to user-generated environment variables. It may be a small one, but when you think about generating, say, resume websites, e-commerce themes, “/uses” websites, marketing sites… the possibilities are endless for turning this into a really cool boilerplate method.



Give your Eleventy Site Superpowers with Environment Variables

Eleventy is increasing in popularity because it allows us to create nice, simple websites, but also because it’s so developer-friendly. We can build large-scale, complex projects with it, too. In this tutorial we’re going to demonstrate that expansive capability by putting together a powerful and human-friendly environment variable solution.

What are environment variables?

Environment variables are handy variables/configuration values that are defined within the environment that your code finds itself in.

For example, say you have a WordPress site: you’re probably going to want to connect to one database on your live site and a different one for your staging and local sites. We can hard-code these values in wp-config.php but a good way of keeping the connection details a secret and making it easier to keep your code in source control, such as Git, is defining these away from your code.

Here’s a standard-edition WordPress wp-config.php snippet with hardcoded values:

<?php

define( 'DB_NAME', 'my_cool_db' );
define( 'DB_USER', 'root' );
define( 'DB_PASSWORD', 'root' );
define( 'DB_HOST', 'localhost' );

Using the same example of a wp-config.php file, we can introduce a tool like phpdotenv and change it to something like this instead, and define the values away from the code:

<?php

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

define( 'DB_NAME', $_ENV['DB_NAME'] );
define( 'DB_USER', $_ENV['DB_USER'] );
define( 'DB_PASSWORD', $_ENV['DB_PASSWORD'] );
define( 'DB_HOST', $_ENV['DB_HOST'] );

A way to define these environment variable values is by using a .env file, which is a text file that is commonly ignored by source control.

Example of a dot env file showing variables for a node environment, port, API key and API URL.
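A .env file along those lines might look like this (placeholder values, not real credentials):

NODE_ENV=development
PORT=8080
API_KEY=abc123def456
API_URL=https://api.example.com/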

We then scoop up those values, which might be unavailable to your code by default, using a tool such as dotenv or phpdotenv. Tools like dotenv are super useful because you could define these variables in an .env file, a Docker script or a deploy script and it’ll just work — which is my favorite type of tool!

The reason we tend to ignore these in source control (via .gitignore) is because they often contain secret keys or database connection information. Ideally, you want to keep that away from any remote repository, such as GitHub, to keep details as safe as possible.

Getting started

For this tutorial, I’ve made some starter files to save us all a bit of time. It’s a base, bare-bones Eleventy site with all of the boring bits done for us.

Step one of this tutorial is to download the starter files and unzip them wherever you want to work with them. Once the files are unzipped, open up the folder in your terminal and run npm install. Once you’ve done that, run npm start. When you open your browser at http://localhost:8080, it should look like this:

A default experience of standard HTML content running in localhost with basic styling

Also, while we’re setting up: create a new, empty file called .env and add it to the root of your base files folder.

Creating a friendly interface

Environment variables are often really shouty, because we use all caps, which can get irritating. What I prefer to do is create a JavaScript interface that consumes these values and exports them as something human-friendly and namespaced, so you know just by looking at the code that you’re using environment variables.

Let’s take a value like HELLO=hi there, which might be defined in our .env file. To access this, we use process.env.HELLO, which after a few calls, gets a bit tiresome. What if that value is not defined, either? It’s handy to provide a fallback for these scenarios. Using a JavaScript setup, we can do this sort of thing:

require('dotenv').config();

module.exports = {
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};

What we are doing here is looking for that environment variable and setting a default value, if needed, using the OR operator (||) to return a value if it’s not defined. Then, in our templates, we can do {{ env.hello }}.

Now that we know how this technique works, let’s make it happen. In our starter files folder, there is a directory called src/_data with an empty env.js file in it. Open it up and add the following code to it:

require('dotenv').config();

module.exports = {
  otherSiteUrl:
    process.env.OTHER_SITE_URL || 'https://eleventy-env-vars-private.netlify.app',
  hello: process.env.HELLO || 'Hello not set, but hi, anyway 👋'
};

Because our data file is called env.js, we can access it in our templates with the env prefix. If we wanted our environment variables to be prefixed with environment, we would change the name of our data file to environment.js. You can read more on the Eleventy documentation.

We’ve got our hello value here and also an otherSiteUrl value which we use to allow people to see the different versions of our site, based on their environment variable configs. This setup uses Eleventy JavaScript Data Files which allow us to run JavaScript and return the output as static data. They even support asynchronous code! These JavaScript Data Files are probably my favorite Eleventy feature.
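As a quick illustration (this file isn’t part of the starter, and the data it returns is made up), a data file can export an async function and Eleventy will wait for it to resolve:

// src/_data/asyncExample.js (a hypothetical data file, not part of the starter)
module.exports = async function () {
  // Pretend this is a slow lookup: an API call, a database query, etc.
  const stars = await Promise.resolve(1234);
  return { stars };
};

Templates could then use {{ asyncExample.stars }} just like the env values we set up above.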

Now that we have this JavaScript interface set up, let’s head over to our content and implement some variables. Open up src/index.md and at the bottom of the file, add the following:

Here’s an example: The environment variable, HELLO is currently: “{{ env.hello }}”. This is called with {% raw %}{{ env.hello }}{% endraw %}. 

Pretty cool, right? We can use these variables right in our content with Eleventy! Now, when you define or change the value of HELLO in your .env file and restart the npm start task, you’ll see the content update.
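For example, your .env file might contain a line like this (any value works):

HELLO=Howdy from the environment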

Your site should look like this now:

The same page as before with the addition of content which is using environment variables

You might be wondering what the heck {% raw %} is. It’s a Nunjucks tag that allows you to define areas that it should ignore. Without it, Nunjucks would try to evaluate the example {{ env.hello }} part.

Modifying image base paths

That first example we did was cool, but let’s really start exploring how this approach can be useful. Often, you will want your production images fronted by some sort of CDN to help with performance and varied image format support, but you’ll probably also want your images served locally while you are developing your site. These CDNs will often serve images directly from your site, such as from your /images folder (this is exactly what I do on Piccalilli with ImgIX), but they don’t have access to the local version of the site. So, being able to switch between CDN and local images is handy.

The solution to this problem is almost trivial with environment variables — especially with Eleventy and dotenv, because if the environment variables are not defined at the point of usage, no errors are thrown.

Open up src/_data/env.js and add the following properties to the object:

imageBase: process.env.IMAGE_BASE || '/images/',
imageProps: process.env.IMAGE_PROPS,

We’re using a default value for imageBase of /images/ so that if IMAGE_BASE is not defined, our local images can be found. We don’t do the same for imageProps because they can be empty unless we need them.

Open up _includes/base.njk and, after the <h1>{{ title }}</h1> bit, add the following:

<img src="https://assets.codepen.io/174183/mountains.jpg?width=1275&height=805&format=auto&quality=70" alt="Some lush mountains at sunset" /> 

By default, this will load /images/mountains.jpg. Cool! Now, open up the .env file and add the following to it:

IMAGE_BASE=https://assets.codepen.io/174183/
IMAGE_PROPS=?width=1275&height=805&format=auto&quality=70

If you stop Eleventy (Ctrl+C in terminal) and then run npm start again, then view source in your browser, the rendered image should look like this:

<img src="https://assets.codepen.io/174183/mountains.jpg?width=1275&height=805&format=auto&quality=70" alt="Some lush mountains at sunset" />

This means we can leverage the CodePen asset optimizations only when we need them.

Powering private and premium content with Eleventy

We can also use environment variables to conditionally render content, based on a mode, such as private mode. This is an important capability for me, personally, because I have an Eleventy course and a CSS book, both powered by Eleventy, that only show premium content to those who have paid for it. There’s all sorts of tech magic happening behind the scenes with Service Workers and APIs, but core to it all is that content can be conditionally rendered based on env.mode in our JavaScript interface.

Let’s add that to our example now. Open up src/_data/env.js and add the following to the object:

mode: process.env.MODE || 'public'

This setup means that by default, the mode is public. Now, open up src/index.md and add the following to the bottom of the file:

{% if env.mode === 'private' %}

## This is secret content that only shows if we’re in private mode.

This is called with {% raw %}`{{ env.mode }}`{% endraw %}. This is great for doing special private builds of the site for people that pay for content, for example.

{% endif %}

If you refresh your local version, you won’t be able to see that content that we just added. This is working perfectly for us — especially because we want to protect it. So now, let’s show it, using environment variables. Open up .env and add the following to it:

MODE=private

Now, restart Eleventy and reload the site. You should now see something like this:

The same page, with the mountain image and now some added private content

You can run this conditional rendering within the template too. For example, you could make all of the page content private and render a paywall instead. An example of that is if you go to my course without a license, you will be presented with a call to action to buy it:

A paywall that encourages the person to buy the content while blocking it

Fun mode

This has hopefully been really useful content for you so far, so let’s expand on what we’ve learned and have some fun with it!

I want to finish by making a “fun mode” which completely alters the design to something more… fun. Open up src/_includes/base.njk, and just before the closing </head> tag, add the following:

{% if env.funMode %}
  <link rel="stylesheet" href="https://fonts.googleapis.com/css2?family=Lobster&display=swap" />
  <style>
    body {
      font-family: 'Comic Sans MS', cursive;
      background: #fc427b;
      color: #391129;
    }
    h1,
    .fun {
      font-family: 'Lobster';
    }
    .fun {
      font-size: 2rem;
      max-width: 40rem;
      margin: 0 auto 3rem auto;
      background: #feb7cd;
      border: 2px dotted #fea47f;
      padding: 2rem;
      text-align: center;
    }
  </style>
{% endif %}

This snippet is looking to see if our funMode environment variable is true and if it is, it’s adding some “fun” CSS.

Still in base.njk, just before the opening <article> tag, add the following code:

{% if env.funMode %}
  <div class="fun">
    <p>🎉 <strong>Fun mode enabled!</strong> 🎉</p>
  </div>
{% endif %}

This code is using the same logic and rendering a fun banner if funMode is true. Let’s create our environment variable interface for that now. Open up src/_data/env.js and add the following to the exported object:

funMode: process.env.FUN_MODE

If funMode is not defined, it will act as false, because undefined is a falsy value.

Next, open up your .env file and add the following to it:

FUN_MODE=true

Now, restart the Eleventy task and reload your browser. It should look like this:

The main site we’re working on but now, it’s bright pink with Lobster and Comic Sans fonts

Pretty loud, huh?! Even though this design looks pretty awful (read: rad), I hope it demonstrates how much you can change with this environment setup.

Wrapping up

We’ve created three versions of the same site, running the same code to see all the differences:

  1. Standard site
  2. Private content visible
  3. Fun mode

All of these sites are powered by identical code with the only difference between each site being some environment variables which, for this example, I have defined in my Netlify dashboard.

I hope that this technique will open up all sorts of possibilities for you, using the best static site generator, Eleventy!



A Gentle Introduction to Using a Docker Container as a Dev Environment

Sarcasm disclaimer: This article is mostly sarcasm. I do not think that I actually speak for Dylan Thomas and I would never encourage you to foist a light theme on people who don’t want it. No matter how wrong they may be.

When Dylan Thomas penned the words, “Do not go gentle into that good night,” he was talking about death. But if he were alive today, he might be talking about Linux containers. There is no way to know for sure because he passed away in 1953, but this is the internet, so I feel extremely confident speaking authoritatively on his behalf.

My confidence comes from a complete overestimation of my skills and intelligence coupled with the fact that I recently tried to configure a Docker container as my development environment. And I found myself raging against the dying of the light as Docker rejected every single attempt I made like I was me and it was King James screaming, “NOT IN MY HOUSE!”

Pain is an excellent teacher. And because I care about you and have no other ulterior motives, I want to use that experience to give you a “gentle” introduction to using a Docker container as a development environment. But first, let’s talk about whyyyyyyyyyyy you would ever want to do that.

kbutwhytho?

Close your eyes and picture this: a grown man dressed up like a fox.

Wait. No. Wrong scenario.

Instead, picture a project that contains not just your source code, but your entire development environment and all the dependencies and runtimes your app needs. You could then give that project to anyone anywhere (like the fox guy) and they could run your project without having to make a lick of configuration changes to their own environment.

This is exactly what Docker containers do. A Dockerfile defines an entire runtime environment with a single file. All you would need is a way to develop inside of that container.
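As a rough illustration (a generic sketch, not the Dockerfile VS Code generates later in this post), a Dockerfile for a Node.js app could be as small as this:

# A generic example Dockerfile for a Node.js app; everything below runs inside the image.
FROM node:14

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm install

# Copy in the rest of the source
COPY . .

EXPOSE 3000
CMD ["npm", "start"]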

Wait for it…

VS Code and Remote – Containers

VS Code has an extension called Remote – Containers that lets you load a project inside a Docker container and connect to it with VS Code. That’s some Inception-level stuff right there. (Did he make it out?! THE TALISMAN NEVER ACTUALLY STOPS SPINNING.) It’s easier to understand if we (and by “we” I mean you) look at it in action.

Adding a container to a project

Let’s say for a moment that you are on a high-end gaming PC that you built for your kids and then decided to keep it for yourself. I mean, why exactly do they deserve a new computer again? Oh, that’s right. They don’t. They can’t even take out the trash on Sundays even though you TELL THEM EVERY WEEK.

This is a fresh Windows machine with WSL2 and Docker installed, but that’s all. Were you to try and run a Node.js project on this machine, Powershell would tell you that it has absolutely no idea what you are referring to and maybe you misspelled something. Which, in all fairness, you do suck at spelling. Remember that time in 4ᵗʰ grade when you got knocked out of the first round of the spelling bee because you couldn’t spell “fried.” FRYED? There’s no “Y” in there!

Now this is not a huge problem — you could always skip off and install Node.js. But let’s say for a second that you can’t be bothered to do that and you’re pretty sure that skipping is not something adults do.

Instead, we can configure this project to run in a container that already has Node.js installed. Now, as I’ve already discussed, I have no idea how to use Docker. I can barely use the microwave. Fortunately, VS Code will configure your project for you — to an extent.

From the Command Palette, there is an “Add Development Container Configuration Files…” command. This command looks at your project and tries to add the proper container definition.

In this case, VS Code knows I’ve got a Node project here, so I’ll just pick Node.js 14. Yes, I am aware that 12 is LTS right now, but it’s gonna be 14 in [checks watch] one month and I’m an early adopter, as is evidenced by my interest in container technology just now in 2020.

This will add a .devcontainer folder with some assets inside. One is a Dockerfile that contains the Node.js image that we’re going to use, and the other is a devcontainer.json that has some project level configuration going on.

Now, before we touch anything and break it all (we’ll get to that, trust me), we can select “Rebuild and Reopen in Container” from the Command Palette. This will restart VS Code and set about building the container. Once it completes (which can take a while the first time if you’re not on a high-end gaming PC that your kids will never know the joys of), the project will open inside of the container. VS Code is connected to the container, and you know that because it says so in the lower left-hand corner.

Now if we open the terminal in VS Code, Powershell is conspicuously absent because we are not on Windows anymore, Dorothy. We are now in a Linux container. And we can both npm install and npm start in this magical land.

This is an Express App, so it should be running on port 3000. But if you try and visit that port, it won’t load. This is because we need to map a port in the container to 3000 on our localhost. As one does.

Fortunately, there is a UI for this.

The Remote Containers extension puts a “Remote Explorer” icon in the Action Bar. Which is on the left-hand side for you, but the right-hand side for me. Because I moved it and you should too.

There are three sections here, but look at the bottom one that says “Port Forwarding.” I’m not the sandwich with the most lettuce, but I’m pretty sure that’s what we want here. You can click on “Forward a Port” and type “3000.” Now if we try and hit the app from the browser…

Mostly, things “just worked.” But the configuration is also quite simple. Let’s look at how we can start to customize this setup by automating some of the aspects of the project itself. Project-specific configuration is done in the devcontainer.json file.

Automating project configuration

First off, we can automate the port forwarding by adding a forwardPorts variable and specifying 3000 as the value. We can also automate the npm install command by specifying the postCreateCommand property. And let’s face it, we could all stand to run AT LEAST one less npm install.

{
  // ...
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  "forwardPorts": [3000],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "npm install",
  // ...
}

Additionally, we can include VS Code extensions. The VS Code that runs in the Docker container does not automatically get every extension you have installed. You have to install them in the container, or just include them like we’re doing here.

Extensions like Prettier and ESLint are perfect for this kind of scenario. We can also take this opportunity to foist a light theme on everyone because it turns out that dark themes are worse for reading and comprehension. I feel like a prophet.

// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14
{
  // ...
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "dbaeumer.vscode-eslint",
    "esbenp.prettier-vscode",
    "GitHub.github-vscode-theme"
  ]
  // ...
}

If you’re wondering where to find those extension IDs, they come up in IntelliSense (Ctrl/Cmd + Space) if you have them installed. If not, search the extension marketplace, right-click the extension and say “Copy extension ID.” Or even better, just select “Add to devcontainer.json.”

By default, the Node.js container that VS Code gives you has things like git and cURL already installed. What it doesn’t have is “cowsay.” And we can’t have a Linux environment without cowsay. That’s in the Linux bylaws (it’s not). I don’t make the rules. We need to customize this container to add that.

Automating environment configuration

This is where things went off the rails for me. In order to add software to a development container, you have to edit the Dockerfile. And Linux has no tolerance for your shenanigans or mistakes.

The base Docker container that you get with the container configurations in VS Code is Debian Linux. Debian Linux uses the apt-get dependency manager.

apt-get install cowsay

We can add this to the end of the Dockerfile. Whenever you install something from apt-get, run an apt-get update first. This command updates the list of packages and package repos so that you have the most current list cached. If you don’t do this, the container build will fail and tell you that it can’t find “cowsay.”

# To fully customize the contents of this image, use the following Dockerfile instead:
# https://github.com/microsoft/vscode-dev-containers/tree/v0.128.0/containers/javascript-node-14/.devcontainer/Dockerfile
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14

# ** Install additional packages **
RUN apt-get update \
   && apt-get -y install cowsay

A few things to note here…

  1. That RUN command is a Docker thing and it creates a new “layer.” Layers are how the container knows what has changed and what in the container needs to be updated when you rebuild it. They’re kind of like cake layers except that you don’t want a lot of them because enormous cakes are awesome. Enormous containers are not. You should try and keep related logic together in the same RUN command so that you don’t create unnecessary layers.
  2. That \ at the end of a line denotes a line break. You need it for multi-line commands. Leave it off and you will know the pain of many failed Docker builds. (There’s a combined example after this list.)
  3. The && is how you add an additional command to the RUN line. For the love of god, don’t forget that on the previous line.
  4. The -y flag is important because by default, apt-get is going to prompt you to ensure you really want to install what you just tried to install. This will cause the container build to fail because there is nobody there to say Y or N. The -y flag is shorthand for “don’t bother me with your silly confirmation prompts”. Apparently everyone is supposed to know this already. I didn’t know it until about four hours ago. 
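Here’s a hypothetical variation of that RUN instruction (figlet is just an example package) showing two installs kept in a single layer, with \ continuing the line and && chaining the commands:

# Hypothetical: two packages, one layer
RUN apt-get update \
   && apt-get -y install cowsay figlet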

Use the Command Palette to select “Rebuild Container”…

And, just like that…

It doesn’t work.

This is the first lesson in what I like to call “Linux Vertigo.” There are so many distributions of Linux and they don’t all handle things the same way. It can be difficult to figure out why things work in one place (Mac, WSL2) and don’t work in others. The reason why “cowsay” isn’t available is that Debian puts “cowsay” in /usr/games, which is not included in the PATH environment variable.

One solution would be to add it to the PATH in the Dockerfile. Like this…

FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-14

RUN apt-get update \
   && apt-get -y install cowsay

ENV PATH="/usr/games:${PATH}"

EXCELLENT. We’re solving real problems here, folks. People like cow one-liners. I bullieve I herd that somewhere.

To summarize, project configuration (forwarding ports, installing project dependencies, etc.) is done in the “devcontainer.json” and environment configuration (installing software) is done in the “Dockerfile.” Now let’s get brave and try something a little more edgy.

Advanced configuration

Let’s say for a moment that you have a gorgeous, glammed out terminal setup that you really want to put in the container as well. I mean, just because you are developing in a container doesn’t mean that your terminal has to be boring. But you also wouldn’t want to reconfigure your pretentious zsh setup for every project that you open. Can we automate that too? Let’s find out.

Fortunately, zsh is already installed in the image that you get. The only trouble is that it’s not the default shell when the container opens. There are a lot of ways that you can make zsh the default shell in a normal Docker scenario, but none of them will work here. This is because you have no control over the way the container is built.

Instead, look again to the trusty devcontainer.json file. In it, there is a "settings" block. In fact, there is a line already there showing you that the default terminal is set to "/bin/bash". Change that to "/bin/zsh".

// Set *default* container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh"
}

By the way, you can set ANY VS Code setting there. Like, you know, moving the sidebar to the right-hand side. There – I fixed it for you.

// Set default container specific settings.json values on container create.
"settings": {
  "terminal.integrated.shell.linux": "/bin/zsh",
  "workbench.sideBar.location": "right"
},

And how about those pretentious plugins that make you better than everyone else? For those you are going to need your .zshrc file. The container already has oh-my-zsh in it, and it’s in the “root” folder. You just need to make sure you set the path to ZSH at the top of the .zshrc so that it points to root. Like this…

# Path to your oh-my-zsh installation.
export ZSH="/root/.oh-my-zsh"

# Set name of the theme to load --- if set to "random", it will
# load a random theme each time oh-my-zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="cloud"

# Which plugins would you like to load?
plugins=(zsh-autosuggestions nvm git)

source $ZSH/oh-my-zsh.sh

Then you can copy in that sexy .zshrc file to the root folder in the Dockerfile. I put that .zshrc file in the .devcontainer folder in my project.

COPY .zshrc /root/.zshrc

And if you need to download a plugin before you install it, do that in the Dockerfile with a RUN command. Just remember to group all of these into one command since each RUN is a new layer. You are nearly a container expert now. Next step is to write a blog post about it and instruct people on the ways of Docker like you invented the thing.

RUN git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions

Look at the beautiful terminal! Behold the colors! The git plugin which tells you the branch and adds a lightning emoji! Nothing says, “I know what I’m doing” like a customized terminal. I like to take mine to Starbucks and just let people see it in action and wonder if I’m a celebrity.

Go gently

Hopefully you made it to this point and thought, “Geez, this guy is seriously overreacting. This is not that hard.” If so, I have successfully saved you. You are welcome. No need to thank me. Yes, I do have an Amazon wish list.

For more information on Remote Containers, including how to do things like add a database or use Docker Compose, check out the official Remote Container docs, which provide much more clarity with 100% less neurotic commentary.



I Spun up a Scalable WordPress Server Environment with Trellis, and You Can, Too

A few years back, my fledgling website design agency was starting to take shape; however, we had one problem: managing clients’ web servers and code deployments. We were unable to build a streamlined process of provisioning servers and maintaining operating system security patches. We had the development cycle down pat, but server management became the bane of our work. We also needed tight control over each server depending on a site’s specific needs. Also, no, shared hosting was not the long term solution.

I began looking for a prebuilt solution that could solve this problem, but without much success. At first, I manually provisioned servers. This process quickly proved to be both monotonous and prone to errors. I eventually learned Ansible and created a homegrown conglomeration of custom Ansible roles, bash scripts and Ansible Galaxy roles that further simplified the process — but, again, there were still many manual steps needed before the server was 100%.

I’m not a server guy (nor do I pretend to be one), and at this point, it became apparent that going down this path was not going to end well in the long run. I was taking on new clients and needed a solution, or else I would risk our ability to be sustainable, let alone grow. I was spending gobs of time typing arbitrary sudo apt-get update commands into a shell when I should have been managing clients or writing code. That’s not to mention I was also handling ongoing security updates for the underlying operating system and its applications.

Tell me if any of this sounds familiar.

Serendipitously, at this time, the team at Roots had released Trellis for server provisioning; after testing it out, things seemed to fall into place. A bonus is that Trellis also handles complex code deployments, which turned out to be something else I needed as most of the client sites and web applications that we built have a relatively sophisticated build process for WordPress using Composer, npm, webpack, and more. Better yet, it takes just minutes to jumpstart a new project. After spending hundreds of hours perfecting my provisioning process with Trellis, I hope to pass what I’ve learned onto you and save you all the hours of research, trials, and manual work that I wound up spending.

A note about Bedrock

We’re going to assume that your WordPress project is using Bedrock as its foundation. Bedrock is maintained by the same folks who maintain Trellis and is a “WordPress boilerplate with modern development tools, easier configuration, and an improved folder structure.” This post does not explicitly explain how to manage Bedrock, but it is pretty simple to set up, which you can read about in its documentation. Trellis is natively designed to deploy Bedrock projects.

A note about what should go into the repo of a WordPress site

One thing that this entire project has taught me is that WordPress applications are typically just the theme (or the child theme in the parent/child theme relationship). Everything else, including plugins, libraries, parent themes and even WordPress itself are just dependencies. That means that our version control systems should typically include the theme alone and that we can use Composer to manage all of the dependencies. In short, any code that is managed elsewhere should never be versioned. We should have a way for Composer to pull it in during the deployment process. Trellis gives us a simple and straightforward way to accomplish this.
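As a rough sketch of what that looks like in practice (the package names and versions here are just examples; Bedrock ships with its own composer.json, which already registers the WordPress Packagist repository, so you would extend that file rather than start from scratch):

{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "wpackagist-plugin/wordpress-seo": "^14.0",
    "wpackagist-theme/twentytwenty": "^1.5"
  }
}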

Getting started

Here are some things I’m assuming going forward:

  • The code for the new site is in the directory ~/Sites/newsite
  • The staging URL is going to be https://newsite.statenweb.com
  • The production URL is going to be https://newsite.com
  • Bedrock serves as the foundation for your WordPress application
  • Git is used for version control and GitHub is used for storing code. The repository for the site is: git@github.com:statenweb/newsite.git

I am a little old school in my local development environment, so I’m foregoing Vagrant for local development in favor of MAMP. We won’t go over setting up the local environment in this article.

I set up a quick start bash script for MacOS to automate this even further.

The two main projects we are going to need are Trellis and Bedrock. If you haven’t done so already, create a directory for the site (mkdir ~/Sites/newsite) and clone both projects from there. I clone Trellis into a /trellis directory and Bedrock into the /site directory:

cd ~/Sites/newsite
git clone git@github.com:roots/trellis.git
git clone git@github.com:roots/bedrock.git site
cd trellis
rm -rf .git
cd ../site
rm -rf .git

The last four lines enable us to version everything correctly. When you version your project, the repo should contain everything in ~/Sites/newsite.

Now, go into trellis and make the following changes:

First, open ~/Sites/newsite/trellis/ansible.cfg and add these lines to the bottom of the [defaults] key:

vault_password_file = .vault_pass
host_key_checking = False

The first line allows us to use a .vault_pass file to encrypt all of our vault.yml files which are going to store our passwords, sensitive data, and salts.

The second line, host_key_checking = False, can be omitted for better security, as disabling host key checking could be considered somewhat dangerous. That said, it’s still helpful in that we do not have to manage host key checking (i.e., typing yes when prompted).

Ansible vault password


Next, let’s create the file ~/Sites/newsite/trellis/.vault_pass and enter a random hash of 64 characters in it. We can use a hash generator to create that (see here for example). This file is explicitly ignored in the default .gitignore, so it will (or should!) not make it up to the source control. I save this password somewhere extremely secure. Be sure to run chmod 600 .vault_pass to restrict access to this file.
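If you prefer the terminal over a web-based generator, this is one way to create the file and lock down its permissions in one go (an alternative approach, not the one described above):

openssl rand -hex 32 > .vault_pass   # 64 hexadecimal characters
chmod 600 .vault_pass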

The reason we do this is so we can store encrypted passwords in the version control system and not have to worry about exposing any of the server’s secrets. The main thing to call out is that the .vault_pass file is not (and should not be) committed to the repo and that the vault.yml file is properly encrypted; more on this in the “Encrypting the Secret Variables” section below.

Setting up target hosts


Next, we need to set up our target hosts. The target host is the web address where Trellis will deploy our code. For this tutorial, we are going to be configuring newsite.com as our production target host and newsite.statenweb.com as our staging target host. To do this, let’s first update the production server’s address in the production host file, stored in ~/Sites/newsite/trellis/hosts/production, to:

[production]
newsite.com

[web]
newsite.com

Next, we can update the staging server address in the staging host file, which is stored in ~/Sites/newsite/trellis/hosts/staging to:

[staging]
newsite.statenweb.com

[web]
newsite.statenweb.com

Setting up GitHub SSH Keys

For deployments to be successful, SSH keys need to be working. Trellis takes advantage of how GitHub exposes all public (SSH) keys, so that you do not need to add keys manually. To set this up, go into group_vars/all/users.yml and update both the web_user and the admin_user object’s keys value to include your GitHub username. For example:

users:
  - name: '{{ web_user }}'
    groups:
      - '{{ web_group }}'
    keys:
      - https://github.com/matgargano.keys
  - name: '{{ admin_user }}'
    groups:
      - sudo
    keys:
      - https://github.com/matgargano.keys

Of course, all of this assumes that you have a GitHub account with all of your necessary public keys associated with it.

Site Meta

We store essential site information in:

  • ~/Sites/newsite/trellis/group_vars/production/wordpress_sites.yml for production
  • ~/Sites/newsite/trellis/group_vars/staging/wordpress_sites.yml for staging.

Let’s update the following information for our staging wordpress_sites.yml:

wordpress_sites:
  newsite.statenweb.com:
    site_hosts:
      - canonical: newsite.statenweb.com
    local_path: ../site
    repo: git@github.com:statenweb/newsite.git
    repo_subtree_path: site
    branch: staging
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: letsencrypt
    cache:
      enabled: false

This file is saying that we:

  • removed the site hosts redirects as they are not needed for staging
  • set the canonical site URL (newsite.statenweb.com) for the site key (newsite.statenweb.com)
  • defined the URL for the repository
  • the git repo branch that gets deployed to this target is staging, i.e., we are using a separate branch named staging for our staging site
  • enabled SSL (set to true), which will also install an SSL certificate when the box provisions

Let’s update the following information for our production wordpress_sites.yml:

wordpress_sites:
  newsite.com:
    site_hosts:
      - canonical: newsite.com
        redirects:
          - www.newsite.com
    local_path: ../site # path targeting local Bedrock site directory (relative to Ansible root)
    repo: git@github.com:statenweb/newsite.git
    repo_subtree_path: site
    branch: master
    multisite:
      enabled: false
    ssl:
      enabled: true
      provider: letsencrypt
    cache:
      enabled: false

Again, what this translates to is that we:

  • set the canonical site URL (newsite.com) for the site key (newsite.com)
  • set a redirect for www.newsite.com
  • defined the URL for the repository
  • the git repo branch that gets deployed to this target is master, i.e., we are using a separate branch named master for our production site
  • enabled SSL (set to true), which will install an SSL certificate when you provision the box

In the wordpress_sites.yml you can further configure your server with caching, which is beyond the scope of this guide. See Trellis’ documentation on FastCGI Caching for more information.

Secret Variables


There are going to be several secret pieces of information for both our staging and production site including the root user password, MySQL root password, site salts, and more. As referenced previously, Ansible Vault and using .vault_pass file makes this a breeze.

We store this secret site information in:

  • ~/Sites/newsite/trellis/group_vars/production/vault.yml for production
  • ~/Sites/newsite/trellis/group_vars/staging/vault.yml for staging

Let’s update the following information for our staging vault.yml:

vault_mysql_root_password: pK3ygadfPHcLCAVHWMX

vault_users:
  - name: "{{ admin_user }}"
    password: QvtZ7tdasdfzUmJxWr8DCs
    salt: "heFijJasdfQbN8bA3A"

vault_wordpress_sites:
  newsite.statenweb.com:
    env:
      auth_key: "Ab$YTlX%:Qt8ij/99LUadfl1:U]m0ds@N<3@x0LHawBsO$(gdrJQm]@alkr@/sUo.O"
      secure_auth_key: "+>Pbsd:|aiadf50;1Gz;.Z{nt%Qvx.5m0]4n:L:h9AaexLR{1B6.HeMH[w4$>H_"
      logged_in_key: "c3]7HixBkSC%}-fadsfK0yq{HF)D#1S@Rsa`i5aW^jW+W`8`e=&PABU(s&JH5oPE"
      nonce_key: "5$vig.yGqWl3G-.^yXD5.ddf/BsHx|i]>h=mSy;99ex*Saj<@lh;3)85D;#|RC="
      auth_salt: "Wv)[t.xcPsA}&/]rhxldafM;h(FSmvR]+D9gN9c6{*hFiZ{]{,#b%4Um.QzAW+aLz"
      secure_auth_salt: "e4dz}_x)DDg(si/8Ye&U.p@pB}NzHdfQccJSAh;?W)>JZ=8:,i?;j$bwSG)L!JIG"
      logged_in_salt: "DET>c?m1uMAt%hj3`8%_emsz}EDM7R@44c0HpAK(pSnRuzJ*WTQzWnCFTcp;,:44"
      nonce_salt: "oHB]MD%RBla*#x>[UhoE{hm{7j#0MaRA#fdQcdfKe]Y#M0kQ0F/0xe{cb|g,h.-m"

Now, let’s update the following information for our production vault.yml:

vault_mysql_root_password: nzUMN4zBoMZXJDJis3WC

vault_users:
  - name: "{{ admin_user }}"
    password: tFxea6ULFM8CBejagwiU
    salt: "9LgzE8phVmNdrdtMDdvR"

vault_wordpress_sites:
  newsite.com:
    env:
      db_password: eFKYefM4hafxCFy3cash
      # Generate your keys here: https://roots.io/salts.html
      auth_key: "|4xA-:Pa=-rT]&!-(%*uKAcd</+m>ix_Uv,`/(7dk1+;b|ql]42gh&HPFdDZ@&of"
      secure_auth_key: "171KFFX1ztl+1I/P$bJrxi*s;}.>S:{^-=@*2LN9UfalAFX2Nx1/Q&i&LIrI(BQ["
      logged_in_key: "5)F+gFFe}}0;2G:k/S>CI2M*rjCD-mFX?Pw!1o.@>;?85JGu}#(0#)^l}&/W;K&D"
      nonce_key: "5/[Zf[yXFFgsc#`4r[kGgduxVfbn::<+F<$jw!WX,lAi41#D-Dsaho@PVUe=8@iH"
      auth_salt: "388p$c=GFFq&hw6zj+T(rJro|V@S2To&dD|Q9J`wqdWM&j8.KN]y?WZZj$T-PTBa"
      secure_auth_salt: "%Rp09[iM0.n[ozB(t;0vk55QDFuMp1-=+F=f%/Xv&7`_oPur1ma%TytFFy[RTI,j"
      logged_in_salt: "dOcGR-m:%4NpEeSj>?A8%x50(d0=[cvV!2x`.vB|^#G!_D-4Q>.+1K!6FFw8Da7G"
      nonce_salt: "rRIHVyNKD{LQb$uOhZLhz5QX}P)QUUo!Yw]+@!u7WB:INFFYI|Ta5@G,j(-]F.@4"

The essential lines for both are that:

  • The site key must match the key in wordpress_sites.yml; we are using newsite.statenweb.com: for staging and newsite.com: for production
  • I randomly generated vault_mysql_root_password, password, salt, and db_password. I used Roots’ helper to generate the salts.

I typically use Gmail’s SMTP servers via the Post SMTP plugin, so there’s no need for me to edit ~/Sites/newsite/trellis/group_vars/all/vault.yml.

Encrypting the Secret Variables


As previously mentioned we use Ansible Vault to encrypt our vault.yml files. Here’s how to encrypt the files and make them ready to be stored in our version control system:

cd ~/Sites/newsite/trellis
ansible-vault encrypt group_vars/staging/vault.yml group_vars/production/vault.yml

Now, if we open either ~/Sites/newsite/trellis/group_vars/staging/vault.yml or ~/Sites/newsite/trellis/group_vars/production/vault.yml, all we’ll get is garbled text. This is safe to be stored in a repository as the only way to decrypt it is to use the .vault_pass. It goes without saying to make extra sure that the .vault_pass itself does not get committed to the repository.
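When a secret needs to change later, there’s no need to decrypt the file by hand; ansible-vault can open it in place, picking up the password from the vault_password_file setting we added to ansible.cfg earlier:

cd ~/Sites/newsite/trellis
ansible-vault edit group_vars/production/vault.yml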

A note about compiling, transpiling, etc.

Another thing that’s out of scope is setting up Trellis deployments to handle a build process using build tools such as npm and webpack. This is example code to handle a custom build that could be included in ~/Sites/newsite/trellis/deploy-hooks/build-before.yml:

---

- args:
    chdir: "{{ project.local_path }}/web/app/themes/newsite"
  command: "npm install"
  connection: local
  name: "Run npm install"

- args:
    chdir: "{{ project.local_path }}/web/app/themes/newsite"
  command: "npm run build"
  connection: local
  name: "Compile assets for production"

- name: "Copy Assets"
  synchronize:
    dest: "{{ deploy_helper.new_release_path }}/web/app/themes/newsite/dist/"
    group: no
    owner: no
    rsync_opts: "--chmod=Du=rwx,--chmod=Dg=rx,--chmod=Do=rx,--chmod=Fu=rw,--chmod=Fg=r,--chmod=Fo=r"
    src: "{{ project.local_path }}/web/app/themes/newsite/dist/"

These are instructions that build assets and move them into a directory that I explicitly decided not to version. I hope to write a follow-up guide that dives specifically into that.

Provision


I am not going to go into great detail about setting up the servers themselves, but I typically would go into DigitalOcean and spin up a new droplet. As of this writing, Trellis targets Ubuntu 18.04 LTS (Bionic Beaver), which acts as the production server. In that droplet, I would add a public key that is also included in my GitHub account. For simplicity, I can use the same server as my staging server. This scenario is likely not what you would be using; maybe you use a single server for all of your staging sites. If that is the case, then you may want to pay attention to the passwords configured in ~/Sites/newsite/trellis/group_vars/staging/vault.yml.

At the DNS level, I would map the naked A record for newsite.com to the IP address of the newly created droplet. Then I’d map the CNAME www to @. Additionally, the A record for newsite.statenweb.com would be mapped to the IP address of the droplet (or, alternately, a CNAME record could be created for newsite.statenweb.com to newsite.com since they are both on the same box in this example).

After the DNS propagates, which can take some time, the staging box can be provisioned by running the following commands.

First off, it’s possible you may need to install the required Ansible Galaxy roles before anything else:

ansible-galaxy install -r requirements.yml

Then, provision the staging box:

cd ~/Sites/newsite/trellis
ansible-playbook server.yml -e env=staging

Next up, provision the production box:

cd ~/Sites/newsite/trellis
ansible-playbook server.yml -e env=production

Deploy

If all is set up correctly to deploy to staging, we can run these commands:

cd ~/Sites/newsite/trellis
ansible-playbook deploy.yml -e "site=newsite.statenweb.com env=staging" -i hosts/staging

And, once this is complete, hit https://newsite.statenweb.com. That should bring up the WordPress installation prompt that provides the next steps to complete the site setup.

If staging is good to go, then we can issue the following commands to deploy to production:

cd ~/Sites/newsite/trellis
ansible-playbook deploy.yml -e "site=newsite.com env=production" -i hosts/production

And, like staging, this should also prompt installation steps to complete when hitting https://newsite.com.

Go forth and deploy!

Hopefully, this gives you an answer to a question I had to wrestle with personally and saves you a ton of time and headache in the process. Having stable, secure and scalable server environments that take relatively little effort to spin up has made a world of difference in the way our team works and how we’re able to accommodate our clients’ needs.

While we’re technically done at this point, there are still further steps to take to wrap up your environment fully:

  • Add dependencies like plugins, libraries and parent themes to ~/Sites/newsite/site/composer.json and run composer update to grab the latest manifest versions.
  • Place the theme in ~/Sites/newsite/site/web/app/themes/. (Note that any WordPress theme can be used.)
  • Include any build processes you’d need (e.g. transpiling ES6, compiling SCSS, etc.) in one of the deployment hooks. (See the documentation for Trellis Hooks).

I have also been able to dive into enterprise-level continuous integration and continuous delivery, as well as how to handle premium plugins with Composer by running a custom Composer server, among other things, while incurring no additional cost. Hopefully, those are areas I can touch on in future posts.

Trellis provides a dead simple way to provision WordPress servers. Thanks to Trellis, long gone are the days of manually creating, patching and maintaining servers!
