
Generate a Pull Request of Static Content With a Simple HTML Form

Jamstack has been in the website world for years. Static Site Generators (SSGs) — which often have content that lives right within a GitHub repo itself — are a big part of that story. That opens up the idea of having contributors who can open pull requests to add, change, or edit content. Very useful!

Here are a few examples of sites doing exactly this:

Why build with a static site approach?

When we need to build content-based sites like this, it’s common to think about what database to use. Keeping content in a database is a time-honored good idea. But it’s not the only approach! SSGs can be a great alternative because…

  • They are cheap and easy to deploy. SSGs are usually free, making them great for an MVP or a proof of concept.
  • They have great security. There is nothing to hack through the browser, as all the site contains is often just static files.
  • You’re ready to scale. The host you’re already on can handle it.

There is another advantage for us when it comes to a content site. The content of the site itself can be written in static files right in the repo. That means that adding and updating content can happen right from pull requests on GitHub, for example. Even for the non-technically inclined, it opens the door to things like Netlify CMS and the concept of open authoring, allowing for community contributions.

But let’s go the super lo-fi route and embrace the idea of pull requests for content, using nothing more than basic HTML.

The challenge

How people contribute by adding or updating a resource isn’t always perfectly straightforward. People need to understand how to fork your repository, how and where to add their content, content formatting standards, required fields, and all sorts of other stuff. They might even need to “spin up” the site themselves locally to ensure the content looks right.

People who seriously want to help our site will sometimes back off because the process of contributing is a technological hurdle with a learning curve — which is sad.

You know what anybody can do? Use a <form>

Just like on a normal website, the easy way for people to submit content is to fill out a form and submit it with the content they want.

What if we made a way for users to contribute content to our sites with nothing more than an HTML <form> designed to take exactly the content we need? But instead of the form posting to a database, it goes the route of a pull request against our static site generator? There is a trick!

The trick: Create a GitHub pull request with query parameters

Here’s a little known trick: we can pre-fill a pull request against our repository by adding query parameters to a special GitHub URL. This comes right from the GitHub docs themselves.

Let’s reverse engineer this.
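For example, GitHub’s new-file editor accepts filename and value query parameters, and committing the result from a fork flows straight into a pull request. The owner, repo, and values below are placeholders for illustration:

https://github.com/<owner>/<repo>/new/master?filename=my-new-file.md&value=Hello%20World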

If we know we can pre-fill a link, then we need to generate the link. We’re trying to make this easy, remember. To generate this dynamic data-filled link, we’ll use a touch of JavaScript.

So now, how do we generate this link after the user submits the form?

Demo time!

Let’s take the Serverless site from CSS-Tricks as an example. Currently, the only way to add a new resource is by forking the repo on GitHub and adding a new Markdown file. But let’s see how we can do it with a form instead of jumping through those hoops.

The Serverless site itself has many categories (e.g. for forms) we can contribute to. For the sake of simplicity, let’s focus on the “Resources” category. People can add articles about things related to Serverless or Jamstack from there.

The Resources page of the CSS-Tricks Serverless site. The site is primarily purple in varying shades with accents of orange. The page shows a couple of serverless resources in the main area and a list of categories in the right sidebar.

All of the resource files are in this folder in the repo.

Showing the main page of the CSS-Tricks Serverless repo in GitHub, displaying all the files.

Just picking a random file from there to explore the structure…

---
title: "How to deploy a custom domain with the Amplify Console"
url: "https://read.acloud.guru/how-to-deploy-a-custom-domain-with-the-amplify-console-a884b6a3c0fc"
author: "Nader Dabit"
tags: ["hosting", "amplify"]
---

In this tutorial, we’ll learn how to add a custom domain to an Amplify Console deployment in just a couple of minutes.

Looking over that content, our form must have these fields:

  • Title
  • URL
  • Author
  • Tags
  • Snippet or description of the link.

So let’s build an HTML form for all those fields:

<div class="columns container my-2">
  <div class="column is-half is-offset-one-quarter">
    <h1 class="title">Contribute to Serverless Resources</h1>

    <div class="field">
      <label class="label" for="title">Title</label>
      <div class="control">
        <input id="title" name="title" class="input" type="text">
      </div>
    </div>

    <div class="field">
      <label class="label" for="url">URL</label>
      <div class="control">
        <input id="url" name="url" class="input" type="url">
      </div>
    </div>

    <div class="field">
      <label class="label" for="author">Author</label>
      <div class="control">
        <input id="author" class="input" type="text" name="author">
      </div>
    </div>

    <div class="field">
      <label class="label" for="tags">Tags (comma separated)</label>
      <div class="control">
        <input id="tags" class="input" type="text" name="tags">
      </div>
    </div>

    <div class="field">
      <label class="label" for="description">Description</label>
      <div class="control">
        <textarea id="description" class="textarea" name="description"></textarea>
      </div>
    </div>

    <!-- Prepare the JavaScript function for later -->
    <div class="control">
      <button onclick="validateSubmission();" class="button is-link is-fullwidth">Submit</button>
    </div>

  </div>
</div>

I’m using Bulma for styling, so the class names in use here are from that.

Now we write a JavaScript function that transforms a user’s input into a URL carrying the query parameters GitHub needs for our pull request. Here is the step by step:

  • Get the user’s input about the content they want to add
  • Generate a string from all that content
  • Encode the string so it’s valid as part of a URL
  • Attach the encoded string to a complete URL pointing to GitHub’s page for new pull requests

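Here is a minimal sketch of what that validateSubmission() function might look like, wired to the form’s Submit button. The repository path, branch, and filename scheme below are assumptions for illustration, not necessarily what the real demo uses:

function validateSubmission() {
  // Gather the user's input from the form fields
  const title = document.querySelector("#title").value;
  const url = document.querySelector("#url").value;
  const author = document.querySelector("#author").value;
  const description = document.querySelector("#description").value;
  const tags = document.querySelector("#tags").value
    .split(",")
    .map((tag) => `"${tag.trim()}"`)
    .join(", ");

  // Generate the Markdown file contents, mirroring the front matter
  // structure we reverse engineered earlier
  const content = `---
title: "${title}"
url: "${url}"
author: "${author}"
tags: [${tags}]
---

${description}
`;

  // Encode everything and attach it to GitHub's new-file URL
  // (the owner, repo, branch, and folder here are placeholders)
  const fileName = `${title.toLowerCase().trim().replace(/\s+/g, "-")}.md`;
  window.location.href =
    "https://github.com/<owner>/<repo>/new/master" +
    `?filename=site/resources/${encodeURIComponent(fileName)}` +
    `&value=${encodeURIComponent(content)}`;
}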

After pressing the Submit button, the user is taken right to GitHub with an open pull request for this new file in the right location.

GitHub pull request screen showing a new file with content.

Quick caveat: Users still need a GitHub account to contribute. But this is still much easier than having to know how to fork a repo and create a pull request from that fork.

Other benefits of this approach

Well, for one, this is a form that lives on our site. We can style it however we want. That sort of control is always nice to have.

Secondly, since we’ve already written the JavaScript, we can use the same basic idea to talk with other services or APIs in order to process the input first. For example, if we need information from a website (like the title, meta description, or favicon) we can fetch this information just by providing the URL.

Taking things further

Let’s have a play with that second point above. We could simply pre-fill our form by fetching information from the URL provided by the user, rather than asking them to enter it all by hand.

With that in mind, let’s now only ask the user for two inputs — just the URL and tags.

How does this work? We can fetch meta information from a website with JavaScript just by having the URL. There are many APIs that fetch information from a website, but you might try the one that I built for this project. Try hitting any URL like this:

https://metadata-api.vercel.app/api?url=https://css-tricks.com
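That endpoint returns the page’s metadata as JSON. Here is a small sketch of how the pre-fill could work; the title and description fields on the response are assumptions about the API’s shape:

// Fetch metadata for the URL the user typed and pre-fill the form fields.
async function prefillFromURL(url) {
  const res = await fetch(
    `https://metadata-api.vercel.app/api?url=${encodeURIComponent(url)}`
  );
  const meta = await res.json(); // assumed shape: { title, description, ... }
  document.querySelector("#title").value = meta.title || "";
  document.querySelector("#description").value = meta.description || "";
}

// For example, run it whenever the URL field changes
document.querySelector("#url").addEventListener("change", (event) => {
  prefillFromURL(event.target.value);
});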

The demo above uses that as an API to pre-fill data based on the URL the user provides. Easier for the user!

Wrapping up

You could think of this as a very minimal CMS for any kind of Static Site Generator. All you need to do is customize the form and update the pre-filled query parameters to match the data formats you need.

How will you use this sort of thing? The four sites we saw at the very beginning are good examples. But there are so many other times where you might need to do something with a user submission, and this might be a low-lift way to do it.



Embedded Analytics Made Simple With Cumul.io Integrations

Browse through SaaS communities on Twitter, LinkedIn, Reddit, Discord, you name it and you’ll see a common theme appear in many of them. That theme can go by many names: BI, analytics, insights and so on. It’s natural, we do business, collect data, we succeed or fail. We want to look into all of that, make some sense of the data we have and take action. This need has produced many projects and tools that make the lives of anyone who wants to look into the data just a bit easier. But, when humans have, humans want more. And in the world of BI and analytics, “more” often comes in the form of embedding, branding, customized styling and access and so on. Which ends up meaning more work for developers and more time to account for. So, naturally there has been a need for BI tools that will let you have it all.

Let’s make a list of challenges you may face as the builder and maintainer of these dashboards:

  1. You want to make the dashboards available to end users or viewers from within your own application or platform
  2. You want to be able to manage different dashboard collections (i.e. “integrations”)
  3. You want to be able to grant specific user rights to a collection of dashboards and datasets
  4. You want to make sure users have access to data only relevant to them

Cumul.io provides a tool we call Integrations which helps solve these challenges. In this article I’ll walk you through what integrations are and how to set one up. The cool thing is that, for most of the points above, minimal code is required, and most of the setup happens within the Cumul.io UI.

Some Background — Integrations

An Integration in Cumul.io is a structure that defines a collection of dashboards intended to be used together (e.g. in the same application). It is also what we use to embed dashboards into an application. In other words, to embed dashboards into an application, we give the application access to the integration that they belong to. You can associate dashboards to an integration and administrate what type of access rights the end users of the integration will have on these dashboards and the datasets they use. A dashboard may be a part of multiple integrations, but it may have different access rights on different integrations. When it comes to embedding, there are a number of SDKs available to make life simple regardless of what your stack looks like. 😊

Once you have a Cumul.io account and if you are an “owner” of an organization in Cumul.io, you will be able to manage and maintain all of your integrations via the Integrations tab. Let’s have a look at an example Cumul.io account. Below you can see the Dashboards that one Cumul.io user might have created:

Although these are all the dashboards this user may have created, it’s likely that not all dashboards are intended for the same end-users, or application for that matter. So, the owner of this Cumul.io account would create and maintain an Integration (or more!) 💪 Let’s have a look at what that might look like for them:

So, it looks like the owner of this Cumul.io account maintains two separate applications.

Now let’s see what the process of creating an integration and embedding its dashboards into an application would look like. The good news is, as mentioned before, a lot of the steps you will have to take can be done within the Cumul.io UI.

Disclaimer: For the purposes of this article, I’ll solely focus on the Integration part. So, I’ll be skipping everything to do with dashboard creation and design and we will be starting with a pre-made set of imaginary dashboards.

What we will be doing:

Creating an Integration

For simplicity, let’s only create one integration for now. Let’s imagine we have an analytics platform that we maintain for our company. There are three dashboards that we want to provide to our end-users: the Marketing Dashboard, the Sales Dashboard and the Leads Dashboard.

Let’s say that out of all the dashboards this account has created or has access to, for this particular project they want to use only the following:

New Integration

To create the integration, we go to the Integrations tab and select New Integration. The dialogue that pops up will already give you some idea of what your next steps will be:

Selecting Dashboards

Next up, you will be able to select which of your dashboards will be included in this integration. You will also be able to give the Integration a name, which here I’ve decided will appropriately be “Very Important Integration”:

Once you confirm your selection, you will have the option of defining a slug for each dashboard (highly recommended). These can later be used while embedding the dashboards into your application. You will later see that slugs make it easy to reference dashboards in your front-end code, and make it easier to replace dashboards if needed too (as you won’t need to worry about dashboard IDs in the front-end code).

Access Rights

You will then get to set the integration’s access rights for the datasets its dashboards use. Here we set this to “Can view.” For more info on access rights and what they entail, check out our documentation on associating datasets to integrations:

Filters and Parameters (and Multi-Tenant Access)

Side Note: To help with multi-tenant access — which would make sense in this imaginary set up — Cumul.io makes it possible to set parameters and filters on datasets that a dashboard uses. This means that each user that logs into your analytics platform would only see the data they personally have access to in the dashboards. You can imagine that in this scenario access would be based on which department the end user works for in the company. For more on how to set up multi-tenancy with Cumul.io, check out our article, “Multi-Tenancy on Cumul.io Dashboards with Auth0”. This can be done within the dashboard design process (which we are skipping), which makes it easier to visualize what the filters are doing. But here, we will be setting these filters in the Integration creation process.

Here, we set the filters the datasets might need to have. In this scenario, as we filter based on the users’ departments, we define a department parameter and filter based on that:

And voilà! Once you’re done with setting those, you have successfully created an integration. The next dialogue will give you instructions for what will be your next steps for embedding your integration:

Now you’ll be able to see this brand new Integration in your Integration tab. This is also where you will have quick access to the Integration ID, which will later be used for embedding the dashboards.

Good news! After your Integration is created, you can always edit it. You can remove or add dashboards, change the slugs of dashboards or access rights too. So you don’t have to worry about creating new integrations as your application changes and evolves. And as editing an integration is all within the UI, you won’t need to worry about having a developer set it all up again. Non-technical users can adapt these integrations on the go.

Embedding Dashboards

Let’s see where we want to get to. We want to provide the dashboards within a custom app. Simple, user logs into an app, the app has dashboards, they see the dashboards with the data they’re allowed to see. It could look like the following for example:

Someone had a very specific vision on how they wanted to provide the dashboards to the end user. They wanted a sidebar where they could flip through each of the dashboards. It could have been something completely different too. What we will focus on is how we can embed these dashboards into our application regardless of what the host application looks like.

Cumul.io comes with a set of publicly available SDKs. Here I’ll show you what you would do if you were to use the Node SDK. Check out our developer docs to see what other SDKs are available and instructions on how to use them.

Step 1: Generate SSO Tokens For Your End Users

Before you can generate SSO tokens for your end users, you will have to make sure that you create an API key and token in Cumul.io. You can do this from your Cumul.io Profile. It should be the organization owner with access to the integration that creates and uses this API key and token to make the SSO authorization request. Once you’ve done this, let’s first create a Cumul.io client which would be done in the server side of the application:

const Cumulio = require("cumulio");

const client = new Cumulio({
  api_key: '<YOUR API KEY>',
  api_token: '<YOUR API TOKEN>',
});

Now we can create the SSO token for the end user. For more information on this API call and the required fields check out our developer documentation on generating SSO tokens.

let promise = client.create('authorization', {
  integration_id: '<THE INTEGRATION ID>',
  type: 'sso',
  expiry: '24 hours',
  inactivity_interval: '10 minutes',
  username: '< A unique identifier for your end user >',
  name: '< end-user name >',
  email: '< end-user email >',
  suborganization: '< end-user suborganization >',
  role: 'viewer',
  metadata: {}
});

Here, notice how we have added the optional metadata field. This is where you can provide the parameters and values with which you want to filter the dashboards’ datasets on. In the example we’ve been going through we’ve been filtering based on department so we would be adding this to the metadata. Ideally you would get this information from the authentication provider you use. See a detailed explanation on how we’ve done this with Auth0.
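For the department example we’ve been following, the metadata might look like this sketch; the parameter name has to match the one defined on the integration, and the array shape is an assumption:

let promise = client.create('authorization', {
  // ...same fields as above...
  metadata: {
    department: ['marketing'] // values for the 'department' parameter (assumed shape)
  }
});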

This request will return a JSON object that contains an authorization id and token which is later used as the key/token combination to embed dashboards in the client-side.

Something else you can optionally add here which is pretty cool is a CSS property. This would allow you to define custom look and feel for each user (or user group). For the same application, this is what the Marketing Dashboard could look like for Angelina vs Brad:

Step 2: Embed

We jumped ahead a bit there. We created SSO tokens for end users but we haven’t yet actually embedded the dashboards into the application. Let’s have a look at that. First up, you should install and import the Web component.
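Assuming the package is published on npm under the same name used in the import below, installing it would look something like:

$ npm install @cumul.io/cumulio-dashboard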

import '@cumul.io/cumulio-dashboard';

After importing the component you can use it as if it were an HTML tag. This is where you will embed your dashboards:

<cumulio-dashboard
  dashboardId="< a dashboard id >"
  dashboardSlug="< a dashboard slug >"
  authKey="< SSO key from step 1 >"
  authToken="< SSO token from step 1 >">
</cumulio-dashboard>

Here you will have a few options. You can either provide the dashboard ID of any dashboard you want to embed, or you can provide the dashboard slug we defined in the integration setup (which is why I highly recommend slugs; they’re much more readable). For more detailed information on how to embed dashboards you can also check out our developer documentation.

A nice way to do this step is of course just defining the skeleton of the dashboard component in your HTML file and filling in the rest of it from the client side of your application. I’ve done the following, although it’s of course not the only way:

I’ve added the dashboard component with the ID dashboard:

<cumulio-dashboard id="dashboard"></cumulio-dashboard>

Then, I’ve retrieved this component in the client code as follows:

const dashboardElement = document.getElementById("dashboard");

Then I request the SSO token from the server side of my application which returns the required key and token to add to the dashboard component. Let’s assume we have a wrapper function getDashboardAuthorizationToken() that does this for us and returns the response from the server-side SSO token request. Next, we simply fill in the dashboard component accordingly:

const authorizationToken = await getDashboardAuthorizationToken();
if (authorizationToken.id && authorizationToken.token) {
  dashboardElement.authToken = authorizationToken.token;
  dashboardElement.authKey = authorizationToken.id;
  dashboardElement.dashboardSlug = "marketing|sales|leads";
}
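For completeness, that wrapper could be as small as this sketch. The /dashboard-token route is hypothetical; it stands in for whatever server endpoint runs the SSO request from Step 1 and relays the JSON:

async function getDashboardAuthorizationToken() {
  // Ask our own server for an SSO authorization (see Step 1)
  const response = await fetch('/dashboard-token'); // hypothetical route
  return response.json();
}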

Notice how in the previous steps I chose to define slugs for my dashboards that are a part of this integration. This means I can avoid looking up dashboard IDs and adding dashboardId as one of my parameters of the dashboardElement. Instead I can just provide one of the slugs marketing, sales or leads and I’m done! Of course you would have to set up some sort of selection process to your application to decide where and when you embed which dashboard.

That’s it folks! We’ve successfully created an integration in Cumul.io and, in a few lines of code, embedded its dashboards into our application 🎉 Now imagine a scenario where you have to maintain multiple applications at once, either for the same company or for separate ones. If you have a number of dashboards, each going to a different place and each needing different access rights depending on where it lives, things can quickly get out of hand. Integrations allow you to manage all of this in a simple and neat way, all in one place, and, as you can see, mostly from within the Cumul.io UI.

There’s a lot more you can do here which we haven’t gone through in detail, such as adding user-specific custom themes and CSS. We also didn’t go through how you would set parameters and filters in dashboards, or how you would use them from within your host application to get a multi-tenant setup. Below you can find some links to useful tutorials and documentation for these steps if you are interested.



Create a Tag Cloud with some Simple CSS and even Simpler JavaScript

I’ve always liked tag clouds. I like the UX of seeing what tags are most popular on a website by seeing the relative font size of the tags, popular tags being bigger. They seem to have fallen out of fashion, though you do often see versions of them used in illustrations in tools like Wordle.

How difficult is it to make a tag cloud? Not very difficult at all. Let’s see!

Let’s start with the markup

For our HTML, we’re going to put each of our tags into a list, <ul class="tags"></ul>. We’ll be injecting into that with JavaScript.

If your tag cloud is already in HTML, and you are just looking to do the relative font-size thing, that’s good! Progressive enhancement! You should be able to adapt the JavaScript later on so it does just that part, but not necessarily building and injecting the tags themselves.

I have mocked out some JSON with a number of articles tagged with each property. Let’s write some JavaScript to go grab that JSON feed and do three things.
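Each entry in that feed looks roughly like this sketch; the field names are inferred from the code we’re about to write, and the values are illustrative:

// One tag entry from the mocked JSON feed (shape inferred, values illustrative)
const exampleTag = {
  title: "animation",
  href: "https://example.com/tags/animation",
  tagged_articles: [/* one entry per article tagged with this property */]
};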

First, we’ll create an <li> from each entry for our list. Imagine the HTML, so far, is like this:

<ul class="tags">
  <li>align-content</li>
  <li>align-items</li>
  <li>align-self</li>
  <li>animation</li>
  <li>...</li>
  <li>z-index</li>
</ul>

Second, we’ll put the number of articles each property has in parentheses beside the tag name inside each list item. So now, the markup is like this:

<ul class="tags">
  <li>align-content (2)</li>
  <li>align-items (2)</li>
  <li>align-self (2)</li>
  <li>animation (9)</li>
  <li>...</li>
  <li>z-index (4)</li>
</ul>

Third, and last, we’ll create a link around each tag that goes to the correct place. This is where we can set the font-size property for each item depending on how many articles that property is tagged with, so animation, which has 9 articles, will be much bigger than background-color, which only has one article.

<li class="tag">
  <a
    class="tag__link"
    href="https://example.com/tags/animation"
    style="font-size: 5em">
    animation (9)
  </a>
</li>

The JavaScript part

Let’s have a look at the JavaScript to do this.

const dataURL =
  "https://gist.githubusercontent.com/markconroy/536228ed416a551de8852b74615e55dd/raw/9b96c9049b10e7e18ee922b4caf9167acb4efdd6/tags.json";
const tags = document.querySelector(".tags");
const fragment = document.createDocumentFragment();
const maxFontSizeForTag = 6;

fetch(dataURL)
  .then(function (res) {
    return res.json();
  })
  .then(function (data) {
    // 1. Create a new array from data
    let orderedData = data.map((x) => x);
    // 2. Order it by number of articles each tag has
    orderedData.sort(function (a, b) {
      return a.tagged_articles.length - b.tagged_articles.length;
    });
    orderedData = orderedData.reverse();
    // 3. Get a value for the tag with the most articles
    const highestValue = orderedData[0].tagged_articles.length;
    // 4. Create a list item for each result from data.
    data.forEach((result) => handleResult(result, highestValue));
    // 5. Append the full list of tags to the tags element
    tags.appendChild(fragment);
  });

The JavaScript above uses the Fetch API to fetch the URL where tags.json is hosted. Once it gets this data, it returns it as JSON. Here we segue into a new array called orderedData (so we don’t mutate the original array) and find the tag with the most articles. We’ll use this value later on in a font scale so all other tags will have a font-size relative to it. Then, forEach result in the response, we call a function I have named handleResult() and pass the result and the highestValue to this function as parameters. It also creates:

  • a variable called tags which is what we will use to inject each list item that we create from the results,
  • a variable for a fragment to hold the result of each iteration of the loop, which we will later append to the tags, and
  • a variable for the max font size, which we’ll use in our font scale later.

Next up, the handleResult(result) function:

function handleResult(result, highestValue) {
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${result.href}" style="font-size: ${result.tagged_articles.length * 1.25}em">${result.title} (${result.tagged_articles.length})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

This is a pretty simple function that creates a list element stored in the variable named tag and then adds a .tag class to it. Once that’s created, it sets the innerHTML of the list item to be a link and populates the values of that link with values from the JSON feed, such as result.href for the link to the tag. When each li is created, it’s then appended to the fragment, which we will later append to the tags variable. The most important item here is the inline style attribute that uses the number of articles—result.tagged_articles.length—to set a relative font size using em units for this list item. Later, we’ll change that value to a formula that uses a basic font scale.

I find this JavaScript just a little bit ugly and hard on the eyes, so let’s create some variables and a simple font scale formula for each of our properties to tidy it up and make it easier to read.

function handleResult(result, highestValue) {
  // Set our variables
  const name = result.title;
  const link = result.href;
  const numberOfArticles = result.tagged_articles.length;
  let fontSize = numberOfArticles / highestValue * maxFontSizeForTag;
  fontSize = +fontSize.toFixed(2);
  const fontSizeProperty = `${fontSize}em`;

  // Create a list element for each tag and inline the font size
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

By setting some variables before we get into creating our HTML, the code is a lot easier to read. And it also makes our code a little bit more DRY, as we can use the numberOfArticles variable in more than one place.

Once each of the tags has been returned in this .forEach loop, they are collected together in the fragment. After that, we use appendChild() to add them to the tags element. This means the DOM is manipulated only once, instead of being manipulated each time the loop runs, which is a nice performance boost if we happen to have a large number of tags.

Font scaling

What we have now will work fine for us, and we could start writing our CSS. However, our formula for the fontSize variable means that the tag with the most articles (which is “flex” with 25) will be 6em (25 / 25 * 6 = 6), but the tags with only one article are going to be 1/25th the size of that (1 / 25 * 6 = 0.24), making the content unreadable. If we had a tag with 100 articles, the smaller tags would fare even worse (1 / 100 * 6 = 0.06).

To get around this, I added a simple if statement: if the fontSize that comes back is less than 1, set it to 1; otherwise, keep it at its current size. Now, all the tags will be within a font scale of 1em to 6em, rounded off to two decimal places. To increase the size of the largest tag, just change the value of maxFontSizeForTag. You can decide what works best for you based on the amount of content you are dealing with.

function handleResult(result, highestValue) {
  // Set our variables
  const numberOfArticles = result.tagged_articles.length;
  const name = result.title;
  const link = result.href;
  let fontSize = numberOfArticles / highestValue * maxFontSizeForTag;
  fontSize = +fontSize.toFixed(2);

  // Make sure our font size will be at least 1em
  if (fontSize <= 1) {
    fontSize = 1;
  }
  const fontSizeProperty = `${fontSize}em`;

  // Then, create a list element for each tag and inline the font size.
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

Now the CSS!

We’re using flexbox for our layout since each of the tags can be of varying width. We then center-align them with justify-content: center, and remove the list bullets.

.tags {
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  max-width: 960px;
  margin: auto;
  padding: 2rem 0 1rem;
  list-style: none;
  border: 2px solid white;
  border-radius: 5px;
}

We’ll also use flexbox for the individual tags. This allows us to vertically align them with align-items: center since they will have varying heights based on their font sizes.

.tag {
  display: flex;
  align-items: center;
  margin: 0.25rem 1rem;
}

Each link in the tag cloud has a small bit of padding, just to allow it to be clickable slightly outside of its strict dimensions.

.tag__link {
  padding: 5px 5px 0;
  transition: 0.3s;
  text-decoration: none;
}

I find this is handy on small screens especially for people who might find it harder to tap on links. The initial text-decoration is removed as I think we can assume each item of text in the tag cloud is a link and so a special decoration is not needed for them.

I’ll just drop in some colors to style things up a bit more:

.tag:nth-of-type(4n+1) .tag__link {
  color: #ffd560;
}
.tag:nth-of-type(4n+2) .tag__link {
  color: #ee4266;
}
.tag:nth-of-type(4n+3) .tag__link {
  color: #9e88f7;
}
.tag:nth-of-type(4n+4) .tag__link {
  color: #54d0ff;
}

The color scheme for this was stolen directly from Chris’ blogroll, where every fourth tag starting at tag one is yellow, every fourth tag starting at tag two is red, every fourth tag starting at tag three is purple, and every fourth tag starting at tag four is blue.

Screenshot of the blogroll on Chris Coyier's personal website, showing lots of brightly colored links with the names of blogs included in the blogroll.

We then set the focus and hover states for each link:

.tag:nth-of-type(4n+1) .tag__link:focus,
.tag:nth-of-type(4n+1) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #ffd560;
}
.tag:nth-of-type(4n+2) .tag__link:focus,
.tag:nth-of-type(4n+2) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #ee4266;
}
.tag:nth-of-type(4n+3) .tag__link:focus,
.tag:nth-of-type(4n+3) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #9e88f7;
}
.tag:nth-of-type(4n+4) .tag__link:focus,
.tag:nth-of-type(4n+4) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #54d0ff;
}

I could probably have created a custom property for the colors at this stage—like --yellow: #ffd560, etc.—but decided to go with the longhand approach for IE 11 support. I love the box-shadow hover effect. It’s a very small amount of code to achieve something much more visually-appealing than a standard underline or bottom-border. Using em units here means we have decent control over how large the shadow will be in relation to the text it needs to cover.

OK, let’s top this off by setting every tag link to be black on hover:

.tag:nth-of-type(4n+1) .tag__link:focus,
.tag:nth-of-type(4n+1) .tag__link:hover,
.tag:nth-of-type(4n+2) .tag__link:focus,
.tag:nth-of-type(4n+2) .tag__link:hover,
.tag:nth-of-type(4n+3) .tag__link:focus,
.tag:nth-of-type(4n+3) .tag__link:hover,
.tag:nth-of-type(4n+4) .tag__link:focus,
.tag:nth-of-type(4n+4) .tag__link:hover {
  color: black;
}

And we’re done! Here’s the final result:



Using Trello as a Super Simple CMS

Sometimes our sites need a little sprinkling of content management. Not always. Not a lot. But a bit. The CMS market is thriving with affordable, approachable products, so we’re not short of options. Thankfully, it is a very different world to the one that used to force companies to splash out a ga-jillionty-one dollars (not an exact cost: I rounded to the nearest bazillion) for an all-singing, all-dancing, all-integrating, all-personalizing, big-enterprise-certified™ CMS platform.

Sometimes, though, it’s nice to use a really simple tool that anyone updating content on the site is already familiar with, rather than getting to grips with a new CMS. 

I like Trello a lot for managing ideas and tasks. And it has an API. Why not use it as a content source for a web site? I mean, hey, if we can do it with Google Sheets, then what’s to stop us from trying other things?

Hello, Trello

Here’s a simple site to explore. It gets its content from this Trello board and that content is displayed in sections. Each section is populated by the title and description fields of a card in our Trello board.

Two webpages side-by-side. The left is a Trello board with a bright pink background. The right is a screenshot of the build website using Trello data.

Trello uses Markdown, which comes in handy here. Anyone editing content in a Trello card is able to apply basic text formatting and have the same Markdown flow into the site and transformed into HTML by a build process.

Building blocks

I’m a big fan of this model of running a build which pulls content from various feeds and sources, and then mashes them together with a template to generate the HTML of a website. It decouples the presentation from the management of the content (which is where the term “decoupled” comes from in popular modern CMS products). And it means that we are free to craft the website just the way we want with all of the wizzy tricks and techniques we’ve learned here on CSS-Tricks.

Diagram showing the flow of data, going from Trello as JSON to Build where the data and the template are coupled, then finally, to the front end.

Since we pull in the content at build time, we don’t need to worry about the usage quotas or the performance of our data sources if our sites get popular and bring in loads of traffic. And why wouldn’t they? Look how pretty we made them!

I wanna play!

Fine. You can grab a copy of this site’s code and tinker around to your heart’s content. This version includes information on how to create your own Trello board and use it as the source for content for the build.

If you want to walk through how this works first rather than diving right into it yourself, read on.

Discovering the API

Trello has a well-documented API and set of developer resources. There is also a handy Node module to simplify the task of authenticating and interacting with the API. But you can also explore the API by tinkering with the URLs when you are exploring your Trello boards. 

For example, the URL for the Trello board above is:

https://trello.com/b/Zzc0USwZ/hellotrello

If we add .json to that URL, Trello shows us the content represented as JSON. Take a look.

We can use this technique to inspect the underlying data throughout Trello. Here is the URL for one card in particular:

https://trello.com/c/YVxlSEzy/4-sections-from-cards

If we use this little trick and add .json to the URL, we’ll see the data which describes that card.

We’ll find interesting things — unique IDs for the board, the list, and the card. We can see the card’s content, and lots of metadata.

I love doing this! Look at all the lovely data! How shall we use it?

Deciding how to use a board

For this example, let’s assume that we have a site with just one page of manageable content. A list or column in our board would be ideal for controlling the sections on that page. An editor could give them titles and content, and drag them around into the order they want.

We’ll need the ID of the list so that we can access it via the API. Luckily, we’ve already seen how to discover that — take a look at the data for any of the cards in the list in question. Each one has an idList property. Bingo!

Generating the site

The plan is to fetch the data from Trello and apply it to some templates to populate our site. Most static site generators (SSG) would do the job. That’s what they are good at. I’ll use Eleventy because I think it has the simplest concepts to understand. Plus, it is very efficient at getting data and generating clean HTML with Nunjucks (a popular templating language).

We’ll want to be able to use an expression in our template that outputs a section element for each item found in a JavaScript object called trello:

<!-- index.njk -->
{% for card in trello %}
<section>
  <h2>{{ card.name }}</h2>
  <div>
    {% markdown %}
      {{- card.desc | safe }}
    {% endmarkdown %}
  </div>
</section>
{% endfor %}

Fetching the data for the build

A popular technique with Jamstack sites like this is to run a build with Gulp, Grunt, or a similar tool which goes and fetches data from various APIs and feeds, stashes the data in a suitable format for the SSG, and then runs the SSG to generate the HTML. This works rather nicely.

Eleventy simplifies things here by supporting the execution of JavaScript in its data files. In other words, rather than only leveraging data stored as JSON or YAML, it can use whatever gets returned by JavaScript, opening the door to making requests directly to APIs when the Eleventy build runs. We won’t need a separate build step to go off to fetch data first. Eleventy will do it for us.

Let’s use that to get the data for our trello object in the templates.

We could use the Trello Node client to query the API, but as it turns out all the data we want is right there in the JSON for the board. Everything! In one request! We can just fetch it in one go!

// trello.js
module.exports = () => {
  const TRELLO_JSON_URL = 'https://trello.com/b/Zzc0USwZ/hellotrello.json';

  // Use node-fetch to get the JSON data about this board
  const fetch = require('node-fetch');
  return fetch(TRELLO_JSON_URL)
    .then(res => res.json())
    .then(json => console.log(json));
};

However, we don’t want to show all the data from that board. It includes cards on other lists, cards which have been closed and deleted, and so on. But we can filter the cards to only include the ones of interest thanks to JavaScript’s filter method.

// trello.js
module.exports = () => {
  const TRELLO_JSON_URL = 'https://trello.com/b/Zzc0USwZ/hellotrello.json';
  const TRELLO_LIST_ID = '5e98325d6d6bd120f2b7395f';

  // Use node-fetch to get the JSON data about this board
  const fetch = require('node-fetch');
  return fetch(TRELLO_JSON_URL)
    .then(res => res.json())
    .then(json => {
      // Just focus on the cards which are in the list we want
      // and do not have a closed status
      let contentCards = json.cards.filter(card => {
        return card.idList == TRELLO_LIST_ID && !card.closed;
      });

      return contentCards;
    });
};

That’ll do it! With this saved in a file called trello.js in Eleventy’s data directory, we’ll have this data ready to use in our templates in an object called trello.

Done-zo! 🎉

But we can do better. Let’s also handle attached images, and also add a way to have content staged for review before it goes live.

Image attachments

It’s possible to attach files to cards in Trello. When you attach an image, it shows up right there in the card with the source URL of the asset described in the data. We can make use of that!

If a card has an image attachment, we’ll want to get its source URL, and add it as an image tag to what our template inserts into the page at build time. That means adding the Markdown for an image to the Markdown in the description property of our JSON (card.desc). 

Then we can let Eleventy turn that into HTML for us along with everything else. This code looks for cards in our JSON and massages the data into the shape that we’ll need.

// trello.js

// If a card has an attachment, add it as an image
// in the description markdown
contentCards.forEach(card => {
  if (card.attachments.length) {
    card.desc = card.desc + `\n![${card.name}](${card.attachments[0].url} '${card.name}')`;
  }
});

Now we can move images around in our content too. Handy!

Staging content

Let’s add one more flourish to how we can use Trello to manage our site’s content.

There are a few ways that we might want to preview content before launching it to the world. Our Trello board could have one list for staging and one list for production content. But that would make it hard to visualize how new content lives alongside that which is already published.

A better idea would be to use Trello’s labels to signify which cards are published live, and which should only be included on a staged version of the site. This will give us a nice workflow. We can add more content by adding a new card in the right place. Label it with “stage” and filter it out from the cards appearing on our production branch. 

Screenshot of the Trello board with a bright pink background. It has cards in a column called Published.
Label hints in Trello showing what content is staged and what is live

A little more filtering of our JavaScript object is called for:

// trello.js

// only include cards labelled with "live" or with
// the name of the branch we are in
contentCards = contentCards.filter(card => {
  return card.labels.filter(label => (
    label.name.toLowerCase() == 'live' ||
    label.name.toLowerCase() == BRANCH
  )).length;
});

We want the content labelled ‘live’ to show up on every version of the build, staging or not. In addition we’ll look to include cards which have a label matching a variable called “BRANCH”. 

How come? What’s that?

This is where we get crafty! I’ve chosen to host this site on Netlify (disclaimer: I work there). This means that I can run the build from Netlify’s CI/CD environment. This redeploys the site whenever I push changes to its git repository, and also gives access to a couple of other things which are really handy for this site. 

One is Branch deploys. If you want a new environment for a site, you can create one by making a new branch in the Git repository. The build will run in that context, and your site will be published on a subdomain which includes the branch name. Like this.

Take a look and you’ll see all the cards from our list, including the one which has the orange “stage” label. We included it in this build because its label matched the branch name for the build context. BRANCH is an environment variable containing whichever branch the build ran in.

label.name.toLowerCase() == BRANCH
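For that comparison to work, the data file needs to read the branch name from the environment. A minimal sketch, assuming Netlify’s standard BRANCH environment variable and an assumed fallback for local builds:

// Read the branch for the current build context ('live' fallback is assumed)
const BRANCH = (process.env.BRANCH || 'live').toLowerCase();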

In theory, we could make as many branches and labels as we like, and have all sorts of staging and testing environments. Ready to promote something from “stage” to “live”? Swap the labels and you’re good to go!

But how does it update though?

The second perk we get from running the site build in a CI/CD such as Netlify’s is that we can trigger a build to run whenever we like. Netlify lets us create build hooks. These are webhooks which initiate a new deployment when you send an HTTP POST to them.

If Trello supports webhooks too, then we could stitch these services together and refresh the site automatically whenever the Trello board changes. And guess what… they do! Hoorah!

To create a Netlify build hook, you’ll need to visit your site’s admin panel. (You can bootstrap this demo site into a new Netlify site in a couple of clicks if you want to try it out.)

Screenshot of the netlify build hooks screen with options to add a build hook and generate a public deploy key.
Creating a Netlify Build hook

Now, armed with a new build hook URL, we’ll need to register a new Trello webhook which calls it when content changes. The method for creating webhooks in Trello is via Trello’s API.

The repo for this site includes a little utility to call the Trello API and create the webhook for you. But you’ll need to have a Trello developer token and key. Thankfully, it is easy to create those for free by visiting the Trello Developer portal and following the instructions under “Authorizing a client.”

Got ‘em? Great! If you save them in a .env file in your project, you can run this command to set up the Trello webhook:

npm run hook --url https://api.netlify.com/build_hooks/XXXXX
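Under the hood, that utility amounts to a single authenticated POST to Trello’s webhook endpoint. A rough sketch, where the environment variable names and board ID placeholder are assumptions:

// Create a Trello webhook that pings the Netlify build hook when the board changes
const fetch = require('node-fetch');

const params = new URLSearchParams({
  key: process.env.TRELLO_KEY,      // your developer key (variable name assumed)
  token: process.env.TRELLO_TOKEN,  // your developer token (variable name assumed)
  callbackURL: 'https://api.netlify.com/build_hooks/XXXXX', // your build hook URL
  idModel: '<your board id>'        // the board to watch
});

fetch(`https://api.trello.com/1/webhooks/?${params}`, { method: 'POST' })
  .then(res => res.text())
  .then(console.log);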

And with that, we’ve created a nice little flow for managing content on a simple site. We can craft our frontend just the way we want it, and have updates to the content happen on a Trello board which automatically updates the site whenever changes are made.

Could I really use this though?

This is a simplistic example. That’s by design. I really wanted to demonstrate the concepts of decoupling, and of using the API of an external service to drive the content for a site.

This won’t replace a full-featured decoupled CMS for more involved projects. But the principles are totally applicable to more complex sites.

This model, however, could be a great match for the types of websites we see for businesses such as independent shops, bars and restaurants. Imagine a Trello board that has one list for managing a restaurant’s home page, and one for managing their menu items. Very approachable for the restaurant staff to manage, and far nicer than uploading a new PDF of the menu whenever it changes.

Ready to explore an example and experiment with your own board and content? Try this:



How to Make a Simple CMS With Cloudflare, GitHub Actions and Metalsmith

Let’s build ourselves a CMS. But rather than build out a UI, we’re going to get that UI for free in the form of GitHub itself! We’ll be leveraging GitHub as the way to manage the content for our static site generator (it could be any static site generator). Here’s the gist of it: GitHub is going to be the place to manage, version control, and store files, and also be the place we’ll do our content editing. When edits occur, a series of automations will test, verify, and ultimately deploy our content to Cloudflare.

The completed code for the project is available on GitHub. I power my own website, jonpauluritis.com, this exact way.

What does the full stack look like?

Here’s the tech stack we’ll be working with in this article:

  • Any Markdown Editor (Optional. e.g Typora.io)
  • A Static Site Generator (e.g. Metalsmith)
  • GitHub w/ GitHub Actions (CI/CD and deployment)
  • Cloudflare Workers

Why should you care about this setup? This setup is potentially the leanest, fastest, cheapest (~$5/month), and easiest way to manage a website (or Jamstack site). It’s awesome both from a technical side and from a user experience perspective. This setup is so awesome I literally went out and bought stock in Microsoft and Cloudflare.

But before we start…

I’m not going to walk you through setting up accounts on these services; I’m sure you can do that yourself. Here are the accounts you need to set up:

I would also recommend Typora for an amazing Markdown writing experience, but Markdown editors are a very personal thing, so use which editor feels right for you. 

Project structure

To give you a sense of where we’re headed, here’s the structure of the completed project:

├── build.js
├── .github/workflows
│   ├── deploy.yml
│   └── nodejs.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
└── wrangler.toml

Step 1: Command line stuff

In a terminal, change directory to wherever you keep these sorts of projects and type this:

$  mkdir cms && cd cms && npm init -y

That will create a new directory, move into it, and initialize the use of npm.

The next thing we want to do is stand on the shoulders of giants. We’ll be using a number of npm packages that help us do things, the meat of which is using the static site generator Metalsmith:

$  npm install --save-dev metalsmith metalsmith-markdown metalsmith-layouts metalsmith-collections metalsmith-permalinks handlebars jstransformer-handlebars

Along with Metalsmith, there are a couple of other useful bits and bobs. Why Metalsmith? Let’s talk about that.

Step 2: Metalsmith

I’ve been trying out static site generators for 2-3 years now, and I still haven’t found “the one.” All of the big names — like Eleventy, Gatsby, Hugo, Jekyll, Hexo, and Vuepress — are totally badass but I can’t get past Metalsmith’s simplicity and extensibility.

As an example, this code will actually build you a site:

// EXAMPLE... NOT WHAT WE ARE USING FOR THIS TUTORIAL
Metalsmith(__dirname)
  .source('src')
  .destination('dest')
  .use(markdown())
  .use(layouts())
  .build((err) => { if (err) throw err; });

Pretty cool right?

For sake of brevity, type this into the terminal and we’ll scaffold out some structure and files to start with.

First, make the directories:

$  mkdir -p src/articles &&  mkdir -p layouts/partials 

Then, create the build file:

$  touch build.js

Next, we’ll create some layout files:

$  touch layouts/index.hbs && touch layouts/about.hbs && touch layouts/article.hbs && touch layouts/partials/navigation.hbs

And, finally, we’ll set up our content resources:

$  touch src/index.md && touch src/about.md && touch src/articles/post1.md && touch src/articles/post2.md

The project folder should look something like this:

├── build.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
└── src
    ├── about.md
    ├── articles
    │   ├── post1.md
    │   └── post2.md
    └── index.md

Step 3: Let’s add some code

To save space (and time), you can use the commands below to create the content for our fictional website. Feel free to hop into “articles” and create your own blog posts. The key is that the posts need some meta data (also called “Front Matter”) to be able to generate properly.  The files you would want to edit are index.md, post1.md and post2.md.

The meta data should look something like this: 

---
title: 'Post1'
layout: article.hbs
---

## Post content here....

Or, if you’re lazy like me, use these terminal commands to add mock content from GitHub Gists to your site:

$  curl https://gist.githubusercontent.com/jppope/35dd682f962e311241d2f502e3d8fa25/raw/ec9991fb2d5d2c2095ea9d9161f33290e7d9bb9e/index.md > src/index.md
$  curl https://gist.githubusercontent.com/jppope/2f6b3a602a3654b334c4d8df047db846/raw/88d90cec62be6ad0b3ee113ad0e1179dfbbb132b/about.md > src/about.md
$  curl https://gist.githubusercontent.com/jppope/98a31761a9e086604897e115548829c4/raw/6fc1a538e62c237f5de01a926865568926f545e1/post1.md > src/articles/post1.md
$  curl https://gist.githubusercontent.com/jppope/b686802621853a94a8a7695eb2bc4c84/raw/9dc07085d56953a718aeca40a3f71319d14410e7/post2.md > src/articles/post2.md

Next, we’ll be creating our layouts and partial layouts (“partials”). We’re going to use Handlebars.js for our templating language in this tutorial, but you can use whatever templating language floats your boat. Metalsmith can work with pretty much all of them, and I don’t have any strong opinions about templating languages.

Build the index layout

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{#each articles }}
        <a href="{{path}}"><h3>{{ title }}</h3></a>
        <p>{{ description }}</p>
      {{/each }}
    </div>
  </body>
</html>

A couple of notes: 

  • Our “navigation” hasn’t been defined yet, but will ultimately replace the area where {{>navigation }} resides. 
  • {{#each }} will iterate through the “collection” of articles that metalsmith will generate during its build process. 
  • Metalsmith has lots of plugins you can use for things like stylesheets, tags, etc., but that’s not what this tutorial is about, so we’ll leave that for you to explore. 

Build the About page

Add the following to your about.hbs page:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

Build the Articles layout

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

You may have noticed that this is the exact same layout as the About page. It is. I just wanted to cover how to add additional pages so you’d know how to do that. If you want this one to be different, go for it.

Add navigation

Add the following to the layouts/partials/navigation.hbs file:

<div class="navigation">
  <div>
    <a class="button" href="/">Home</a>
    <a class="button" href="/about">About</a>
  </div>
</div>

Sure, there’s not much to it… but this really isn’t supposed to be a Metalsmith/SSG tutorial. ¯\_(ツ)_/¯

Step 4: The Build file

The heart and soul of Metalsmith is the build file. For the sake of thoroughness, I’m going to go through it line by line.

We start by importing the dependencies:

Quick note: Metalsmith was created in 2014, and the predominant module system at the time was CommonJS, so I’m going to stick with require statements as opposed to ES modules. It’s also worth noting that most of the other tutorials use require statements as well, so skipping a Babel build step will just make life a little less complex here.

// What we use to glue everything together
const Metalsmith = require('metalsmith');

// compile from markdown (you can use targets as well)
const markdown = require('metalsmith-markdown');

// compiles layouts
const layouts = require('metalsmith-layouts');

// used to build collections of articles
const collections = require('metalsmith-collections');

// permalinks to clean up routes
const permalinks = require('metalsmith-permalinks');

// templating
const handlebars = require('handlebars');

// register the navigation partial
const fs = require('fs');
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

// NOTE: Uncomment if you want a server for development
// const serve = require('metalsmith-serve');
// const watch = require('metalsmith-watch');

Next, we’ll initialize Metalsmith and tell it where to find its source files and where to render the compiled output:

// Metalsmith
Metalsmith(__dirname)
  // where your markdown files are
  .source('src')
  // where you want the compiled files to be rendered
  .destination('public')

So far, so good. After we have the source and target set, we’re going to set up the Markdown rendering and the layouts rendering, and let Metalsmith know to use “collections.” These are a way to group files together. An easy example would be something like “blog posts,” but it could really be anything, say recipes, whiskey reviews, or whatever. In the example below, we’re calling the collection “articles.”

// previous code would go here

  // collections create groups of similar content
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  // compile from markdown
  .use(markdown())
  // nicer looking links
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  // build layouts using handlebars templates
  // also tell metalsmith where to find the raw input
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.hbs',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))

  // NOTE: Uncomment if you want a server for development
  // .use(serve({
  //   port: 8081,
  //   verbose: true
  // }))
  // .use(watch({
  //   paths: {
  //     "src/**/*": true,
  //     "layouts/**/*": "**/*",
  //   }
  // }))

Next, we’re adding the markdown plugin, so we can write content in Markdown and have it compiled to HTML.

From there, we’re using the layouts plugin to wrap our raw content in the layout we define in the layouts folder. You can read more about the nuts and bolts of this on the official plugin site, but the result is that we can use {{{contents}}} in a template and it will just work.

The last addition to our tiny little build script will be the build method:

// Everything else would be above this
.build(function(err) {
  if (err) {
    console.error(err)
  }
  else {
    console.log('build completed!');
  }
});

Putting everything together, we should get a build script that looks like this:

const Metalsmith = require('metalsmith');
const markdown = require('metalsmith-markdown');
const layouts = require('metalsmith-layouts');
const collections = require('metalsmith-collections');
const permalinks = require('metalsmith-permalinks');
const handlebars = require('handlebars');
const fs = require('fs');

// Navigation
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

Metalsmith(__dirname)
  .source('src')
  .destination('public')
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  .use(markdown())
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.hbs',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))
  .build(function (err) {
    if (err) {
      console.error(err)
    }
    else {
      console.log('build completed!');
    }
  });

I’m a sucker for simple and clean and, in my humble opinion, it doesn’t get any simpler or cleaner than a Metalsmith build. We just need to make one quick update to the package.json file and we’ll be able to give this a run:

 "name": "buffaloTraceRoute",   "version": "1.0.0",   "description": "",   "main": "index.js",   "scripts": {     "build": "node build.js",     "test": "echo "No Tests Yet!" "   },   "keywords": [],   "author": "Your Name",   "license": "ISC",   "devDependencies": {     // these should be the current versions     // also... comments aren't allowed in JSON   } }

If you want to see your handiwork, you can uncomment the parts of the build file that serve and watch the project, then run npm run build. Just make sure you remove this code before deploying.
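In the terminal, that flow looks roughly like this, assuming the plugins referenced in build.js haven’t been installed yet:

# install the build dependencies (grab the current versions)
$ npm install --save-dev metalsmith metalsmith-markdown metalsmith-layouts metalsmith-collections metalsmith-permalinks handlebars metalsmith-serve metalsmith-watch

# compile the site into /public
$ npm run build

# with the serve/watch lines uncommented, the site is served at http://localhost:8081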

Working with Cloudflare

Next, we’re going to work with Cloudflare to get access to their Cloudflare Workers. This is where the $5/month cost comes into play.

Now, you might be asking: “OK, but why Cloudflare? What about using something free like GitHub Pages or Netlify?” It’s a good question. There are lots of ways to deploy a static site, so why choose one method over another?

Well, Cloudflare has a few things going for it…

Speed and performance

One of the biggest reasons to switch to a static site generator is to improve your website performance. Using Cloudflare Workers Sites can improve performance even more.

Here’s a graph that shows Cloudflare compared to two competing alternatives:

[Graph: response times for Cloudflare Workers Sites vs. two competing platforms. Courtesy of Cloudflare]

The simple reason why Cloudflare is the fastest: a site is deployed to 190+ data centers around the world. This reduces latency since users will be served the assets from a location that’s physically closer to them.

Simplicity

Admittedly, the initial configuration of Cloudflare Workers may be a little tricky if you don’t know how to set up environment variables. But once the basic configuration is in place on your computer, deploying to Cloudflare is as simple as running wrangler publish from the site directory. This tutorial is focused on the CI/CD aspect of deploying to Cloudflare, which is a little more involved, but it’s still incredibly simple compared to most other deployment processes.

(It’s worth mentioning that GitHub Pages and Netlify are also killing it in this area. The developer experience of all three companies is amazing.)

More bang for the buck

While GitHub Pages and Netlify both have free tiers, usage is (soft) capped at 100GB of bandwidth a month. Don’t get me wrong, that’s a super generous limit. But after that, you’re out of luck. GitHub Pages doesn’t offer anything beyond it, and Netlify jumps up to $45/month, making Cloudflare’s $5/month price tag very reasonable.

Service                    | Free Tier Bandwidth | Paid Tier Price | Paid Tier Requests / Bandwidth
GitHub Pages               | 100GB               | N/A             | N/A
Netlify                    | 100GB               | $45             | ~150K / 400GB
Cloudflare Workers Sites   | none                | $5              | 10MM / unlimited

Calculations assume a 3MB average website (e.g., Netlify’s 400GB of bandwidth works out to roughly 133K page loads at 3MB each, hence the ~150K requests figure). Cloudflare has additional limits on CPU use. GitHub Pages should not be used for sites that handle credit card transactions.

Sure, there’s no free tier for Cloudflare, but $5 for 10 million requests is a steal. I would also be remiss if I didn’t mention that GitHub Pages has had a few outages over the last year. That’s totally fine in my book for a demo site, but it would be bad news for a business.

Cloudflare offers a ton of additional features that are worth briefly mentioning: free SSL certificates, free (and easy) DNS routing, a custom Workers Sites domain name for your projects (which is great for staging), unlimited environments (e.g. staging), and registering a domain name at cost (as opposed to the markup pricing imposed by other registrars).

Deploying to Cloudflare

Cloudflare provides some great tutorials for how to use their Cloudflare Workers product. We’ll cover the highlights here.

First, make sure the Cloudflare CLI, Wrangler, is installed:

$ npm i @cloudflare/wrangler -g

Next, we’re going to add Cloudflare Sites to the project, like this:

$ wrangler init --site cms

Assuming I didn’t mess up and forget about a step, here’s what we should have in the terminal at this point:

⬇️  Installing cargo-generate...
🔧  Creating project called `workers-site`...
✨  Done! New project created /Users/<User>/Code/cms/workers-site
✨  Successfully scaffolded workers site
✨  Successfully created a `wrangler.toml`

There should also be a generated folder in the project root called /workers-site as well as a config file called wrangler.toml — this is where the magic resides.

name = "cms"
type = "webpack"
account_id = ""
workers_dev = true
route = ""
zone_id = ""

[site]
bucket = ""
entry-point = "workers-site"

You might have already guessed what comes next… we need to add some info to the config file! The first key/value pair we’re going to update is the bucket property.

bucket = "./public"

Next, we need to get the Account ID and Zone ID (i.e. the route for your domain name). You can find them in your Cloudflare account, all the way at the bottom of the dashboard for your domain.

Stop! Before going any further, don’t forget to click the “Get your API token” button to grab the last config piece that we’ll need. Save it in a notepad or somewhere handy because we’ll need it for the next section.

Phew! Alright, the next task is to add the Account ID and Zone ID we just grabbed to the .toml file:

name = "buffalo-traceroute"
type = "webpack"
account_id = "d7313702f333457f84f3c648e9d652ff" # Fake... use your account_id
workers_dev = true
# route = "example.com/*"
# zone_id = "805b078ca1294617aead2a1d2a1830b9" # Fake... use your zone_id

[site]
bucket = "./public"
entry-point = "workers-site"

Again, those IDs are fake. You may be asked to set up credentials on your computer. If that’s the case, run wrangler config in the terminal.
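Here’s a rough sketch of both options. The CF_API_TOKEN environment variable is the one Wrangler 1.x checks, and it’s exactly what we’ll hand to GitHub Actions in the next section:

# store credentials interactively (paste the API token when prompted)
$ wrangler config

# or supply the token as an environment variable instead
$ export CF_API_TOKEN=your-api-token-here
$ wrangler publish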

GitHub Actions

The last piece of the puzzle is to configure GitHub to do automatic deployments for us. Having made previous forays into CI/CD setups, I was ready for the worst on this one but, amazingly, GitHub Actions is very simple for this sort of setup.

So how does this work?

First, let’s make sure our GitHub account has GitHub Actions activated. It’s technically in beta right now, but I haven’t run into any issues with it so far.

Next, we need to create a repository in GitHub and upload our code to it. Start by going to GitHub and creating a repository.

This tutorial isn’t meant to cover the finer points of Git and/or GitHub, but there are great introductions out there if you need one. Or, copy and paste the following commands while in the root directory of the project:

# run commands one after the other
$ git init
$ touch .gitignore && echo 'node_modules' > .gitignore
$ git add .
$ git commit -m 'first commit'
$ git remote add origin https://github.com/{username}/{repo name}
$ git push -u origin master

That should add the project to GitHub. I say that with a little hesitance because this is where everything tends to blow up for me. For example, put too many commands into the terminal and suddenly GitHub has an outage, or the terminal is unable to locate the path for Python. Tread carefully!

Assuming we’re past that part, our next task is to activate GitHub Actions and create a directory called .github/workflows in the root of the project directory. (GitHub can also do this automatically by adding the “node” workflow when activating Actions. At the time of writing, adding a GitHub Actions workflow is part of GitHub’s user interface.)
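If you’d rather set it up from the terminal, it’s just a directory and two empty files (we’ll fill them in next):

$ mkdir -p .github/workflows
$ touch .github/workflows/integration.yml .github/workflows/deploy.yml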

Once we have the directory in the project root, we can add the final two files. Each file is going to handle a different workflow:

  1. A workflow to check that updates can be merged (i.e. the “CI” in CI/CD)
  2. A workflow to deploy changes once they have been merged into master (i.e. the “CD” in CI/CD)
# integration.yml
name: Integration

on:
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x]
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test
      env:
        CI: true

This is a straightforward workflow. So straightforward, in fact, that I copied it straight from the official GitHub Actions docs and barely modified it. Let’s go through what is actually happening in there:

  1. on: Run this workflow only when a pull request is created for the master branch
  2. jobs: Run the steps below in two Node environments (Node 10 and Node 12; Node 12 is currently the recommended version). This will build, if a build script is defined. It will also run tests, if a test script is defined.

The second file is our deployment script and is a little more involved.

# deploy.yml
name: Deploy

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: Deploy
    strategy:
      matrix:
        node-version: [10.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - name: Build site
        run: "npm run build"
      - name: Publish
        uses: cloudflare/wrangler-action@1.1.0
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}

Important! Remember that Cloudflare API token I mentioned way earlier? Now is the time to use it. Go to the project settings and add a secret. Name the secret CF_API_TOKEN and add the API token.

Let’s go through what’s going on in this script:

  1. on: Run the steps when code is merged into the master branch
  2. steps: Use Node.js to install all dependencies, use Node.js to build the site, then use Cloudflare Wrangler to publish the site

Here’s a quick recap of what the project should look like before running a build (sans node_modules): 

├── build.js
├── dist
│   └── worker.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
│   ├── index.js
│   ├── package-lock.json
│   ├── package.json
│   └── worker
│       └── script.js
└── wrangler.toml

A GitHub-based CMS

Okay, so I made it this far… I was promised a CMS? Where is the database and my GUI that I log into and stuff?

Don’t worry, you are at the finish line! GitHub is your CMS now and here’s how it works:

  1. Write a markdown file (with front matter).
  2. Open up GitHub and go to the project repository.
  3. Click into the “Articles” directory, and upload the new article. GitHub will ask whether a new branch should be created along with a pull request. The answer is yes. 
  4. After the integration is verified, the pull request can be merged, which triggers deployment. 
  5. Sit back, relax and wait 10 seconds… the content is being deployed to 164 data centers worldwide.

Congrats! You now have a minimal Git-based CMS that basically anyone can use. 

Troubleshooting notes

  • Metalsmith layouts can sometimes be kinda tricky. Try adding this debug line before the build step to have it kick out something useful: DEBUG=metalsmith-layouts npm run build
  • Occasionally, GitHub Actions needed me to add node_modules to the commit so it could deploy… this was strange to me (and it’s not a recommended practice), but it fixed the deployment.
  • Please let me know if you run into any trouble and we can add it to this list!

The post How to Make a Simple CMS With Cloudflare, GitHub Actions and Metalsmith appeared first on CSS-Tricks.


Simple Image Placeholders with SVG

A little open-source utility from Tyler Sticka that returns a data URL of an SVG to use as an image placeholder as needed.

I like the idea of self-running utilities like that, rather than depending on some third-party service, like placekitten or whatever. Not that I’d advocate for feature bloat here, but maybe it could be more fun like these generative placeholders, marching ants, or my favorite, adorable avatars.
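To give a sense of the underlying trick, here’s a hypothetical hand-rolled version (not Tyler’s actual output): a gray 400x300 SVG, URL-encoded straight into an img tag’s src, no network request required:

<!-- a gray 400x300 placeholder as a pure data URL -->
<img
  width="400" height="300" alt=""
  src="data:image/svg+xml;charset=UTF-8,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20width='400'%20height='300'%3E%3Crect%20width='100%25'%20height='100%25'%20fill='%23ddd'/%3E%3C/svg%3E">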


The post Simple Image Placeholders with SVG appeared first on CSS-Tricks.


Web Scraping Made Simple With Zenscrape

Web scraping has traditionally been the domain of actual developers, since a lot of coding, proxy management, and CAPTCHA-solving is involved. However, the scraped data is very often needed by people who are non-coders: marketers, analysts, business developers, etc.

Zenscrape is an easy-to-use web scraping tool that allows people to scrape websites without having to code.

Let’s run through a quick example together:

Select the data you need

The setup wizard guides you through the process of setting up your data extractor. It allows you to select the information you want to scrape visually. Click on the desired piece of content and specify what type of element you have. Depending on the package you have bought (they also offer a free plan), you can select up to 30 data elements per page.

The scraper is also capable of handling element lists.

Schedule your extractor

Perhaps you want to scrape the selected data at a specific time interval. Depending on your plan, you can choose any time span between one minute and one hour. Also, decide what is supposed to happen with the scraped data after it has been gathered.

Use your data

In this example, we have chosen the .csv-export method and have selected a 10-minute scraping interval. Our first set of data should be ready by now. Let’s take a look:

Success! Our data is ready to be downloaded. We can now access each individual data set or download all previously gathered data at once, in one file.

Need more flexibility?

Zenscrape also offers a web scraping API that returns the HTML markup of any website. This is especially useful for complicated scraping projects that require the scraped content to be integrated into a software application for further processing.

Just like the web scraping suite, the API does not forward failed requests, and it takes care of proxy management, CAPTCHA-solving, and all the other maintenance tasks usually involved with DIY web scrapers.

Since the API returns the full HTML markup of the related website, you have full flexibility in terms of data selection and further processing.

Try Zenscrape

The post Web Scraping Made Simple With Zenscrape appeared first on CSS-Tricks.


Simple & Boring

Simplicity is a funny adjective in web design and development. I’m sure it’s a quoted goal for just about every project ever done. Nobody walks into a kickoff meeting like, “Hey team, design something complicated for me. Oh, and make sure the implementation is convoluted as well. Over-engineer that sucker, would ya?”

Of course they want simple. Everybody wants simple. We want simple designs because simple means our customers will understand it and like it. We want simplicity in development. Nobody dreams of going to work to spend all day wrapping their head around a complex system to fix one bug.

Still, there is plenty to talk about when it comes to simplicity. It would be very hard to argue that web development has gotten simpler over the years. As such, the word has lately been on the tongues of many web designers and developers. Let’s take a meandering waltz through what other people have to say about simplicity.

Bridget Stewart recalls a frustrating battle against over-engineering in “A Simpler Web: I Concur.” After being hired as an expert in UI implementation and given the task of getting a video to play on a click…

I looked under the hood and got lost in all the looping functions and the variables and couldn’t figure out what the code was supposed to do. I couldn’t find any HTML <video> being referenced. I couldn’t see where a link or a button might be generated. I was lost.

I asked him to explain what the functions were doing so I could help figure out what could be the cause, because the browser can play video without much prodding. Instead of successfully getting me to understand what he had built, he argued with me about whether or not it was even possible to do. I tried, at first calmly, to explain to him I had done it many times before in my previous job, so I was absolutely certain it could be done. As he continued to refuse my explanation, things got heated. When I was done yelling at him (not the most professional way to conduct myself, I know), I returned to my work area and fired up a branch of the repo to implement it. 20 minutes later, I had it working.

It sounds like the main problem here is that the dude was a territorial dingus, but also his complicated approach literally stood in the way of getting work done.

Simplicity on the web oftentimes means letting the browser do things for us. How many times have you seen a complex re-engineering of a select menu fail to be as usable or accessible as a plain <select>?

Jeremy Wagner writes in Make it Boring:

Eminently usable designs and architectures result when simplicity is the default. It’s why unadorned HTML works. It beautifully solves the problem of presenting documents to the screen that we don’t even consider all the careful thought that went into the user agent stylesheets that provide its utterly boring presentation. We can take a lesson from this, especially during a time when more websites are consumed as web apps, and make them more resilient by adhering to semantics and native web technologies.

My guess is the rise of static site generators — and sites that find a way to get as much server-rendered as possible — is a symptom of the industry yearning for that brand of resilience.

Do less, as they say. Lyza Danger Gardner found a lot of value in this in her own job:

… we need to try to do as little as possible when we build the future web.

This isn’t a rationalization for laziness or shirking responsibility—those characteristics are arguably not ones you’d find in successful web devs. Nor is it a suggestion that we build bland, homogeneous sites and apps that sacrifice all nuance or spark to the Greater Good of total compatibility.

Instead it is an appeal for simplicity and elegance: putting commonality first, approaching differentiation carefully, and advocating for consistency in the creation and application of web standards.

Christopher T. Miller writes in “A Simpler Web”:

Should we find our way to something simpler, something more accessible?

I think we can. By simplifying our sites we achieve greater reach, better performance, and more reliable conveying of the information which is at the core of any website. I think we are seeing this in the uptick of passionate conversations around user experience, but it cannot stop with the UX team. Developers need to take ownership for the complexity they add to the Web.

It’s good to remember that the complexity we layer onto building websites is opt-in. We often do it for good reason, but it’s possible not to. Garrett Dimon:

You can build a robust, reliable, and fully responsive web application today using only semantic HTML on the front-end. No images. No CSS. No JavaScript. It’s entirely possible. It will work in every modern browser. It will be straightforward to maintain. It may not fit the standard definition of beauty as far as web experiences go, but it will work. In many cases, it will be more usable and accessible than those built with modern front-end frameworks.

That’s not to say that this is the best approach, but it’s a good reminder that the web works by default without all of our additional layers. When we add those additional layers, things break. Or, if we neglect good markup and CSS to begin with, we start out with something that’s already broken and then spend time trying to make it work again.

We assume that complex problems always require complex solutions. We try to solve complexity by inventing tools and technologies to address a problem; but in the process, we create another layer of complexity that, in turn, causes its own set of issues.

— Max Böck, “On Simplicity”

Perhaps the worst reason to choose a complex solution is that it’s new, and the newness makes it feel like choosing it means you’re on top of technology and doing your job well. Old and boring may be just what you need to do your job well.

Dan McKinley writes:

“Boring” should not be conflated with “bad.” There is technology out there that is both boring and bad. You should not use any of that. But there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. Python is boring. Memcached is boring. Squid is boring. Cron is boring.

The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.

Rachel Andrew wrote that choosing established technology for the CMS she builds was a no-brainer because it’s what her customers had.

You’re going to hear less about old and boring technology. If you’re consuming a healthy diet of tech news, you probably won’t read many blog posts about it. It’s too bad, really; I, for one, would enjoy that. But I get it: publications need fresh writing, and writers are less excited about topics that have been well-trod over decades.

As David DeSandro says, “New tech gets chatter”. When there is little to say, you just don’t say it.

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

While we hear more about new tech, it’s old tech that is more well known, including what it’s bad at. If newer tech, perhaps more complicated tech, is needed because it solves a known pain point, that’s great, but when it doesn’t…

You are perfectly okay to stick with what works for you. The more you use something, the clearer its pain points become. Try new technologies when you’re ready to address those pain points. Don’t feel obligated to change your workflow because of chatter. New tech gets chatter, but that doesn’t make it any better.

Adam Silver says that a boring developer is full of questions:

“Will debugging code be more difficult?”, “Might performance degrade?” and “Will I be slowed down due to compile times?”

Dan Kim is also proud of being boring:

I have a confession to make — I’m not a rock star programmer. Nor am I a hacker. I don’t know ninjutsu. Nobody has ever called me a wizard.

Still, I take pride in the fact that I’m a good, solid programmer.

Complexity isn’t an enemy. Complexity is valuable. If what we work on had no complexity, it would be worth far less, as there would be nothing slowing down the competition. Our job is complexity. Or rather, our job is managing the level of complexity so it’s valuable while still manageable.

Sandi Metz has a great article digging into various aspects of this, part of which is about considering how much complicated code needs to change:

We abhor complication, but if the code never changes, it’s not costing us money.

Your CMS might be extremely complicated under the hood, but if you never touch that, who cares. But if your CMS limits what you’re able to do, and you spend a lot of time fighting it, that complexity matters a lot.

It’s satisfying to read Sandi’s analysis that it’s possible to predict where code breaks, and those points are defined by complexity. “Outlier classes” (parts of a code base that cause the most problems) can be identified without even seeing the code base:

I’m not familiar with the source code for these apps, but sight unseen I feel confident making a few predictions about the outlying classes. I suspect that they:

  1. are larger than most other classes,
  2. are laden with conditionals, and
  3. represent core concepts in the domain

I feel seen.

Boring is in it for the long haul.

Cap Watkins writes in “The Boring Designer”:

The boring designer is trusted and valued because people know they’re in it for the product and the user. The boring designer asks questions and leans on others’ experience and expertise, creating even more trust over time. They rarely assume they know the answer.

The boring designer is capable of being one of the best leaders a team can have.

So be great. Be boring.

Be boring!

The post Simple & Boring appeared first on CSS-Tricks.
