Tag: Simple

How to Make a Simple CMS With Cloudflare, GitHub Actions and Metalsmith

Let’s build ourselves a CMS. But rather than build out a UI, we’re going to get that UI for free in the form of GitHub itself! We’ll be leveraging GitHub as the way to manage the content for our static site generator (it could be any static site generator). Here’s the gist of it: GitHub is going to be the place to manage, version control, and store files, and also be the place we’ll do our content editing. When edits occur, a series of automations will test, verify, and ultimately deploy our content to Cloudflare.

You can find the completed code for the project on GitHub. I power my own website, jonpauluritis.com, this exact way.

What does the full stack look like?

Here’s the tech stack we’ll be working with in this article:

  • Any Markdown editor (optional; e.g., Typora.io)
  • A Static Site Generator (e.g. Metalsmith)
  • GitHub w/ GitHub Actions (CI/CD and deployment)
  • Cloudflare Workers

Why should you care about this setup? This setup is potentially the leanest, fastest, cheapest (~$5/month), and easiest way to manage a website (or Jamstack site). It’s awesome both from a technical perspective and from a user experience perspective. This setup is so awesome I literally went out and bought stock in Microsoft and Cloudflare.

But before we start…

I’m not going to walk you through setting up accounts on these services; I’m sure you can do that yourself. The two accounts you need to set up are GitHub and Cloudflare.

I would also recommend Typora for an amazing Markdown writing experience, but Markdown editors are a very personal thing, so use whichever editor feels right for you.

Project structure

To give you a sense of where we’re headed, here’s the structure of the completed project:

├── build.js
├── .github/workflows
│   ├── deploy.yml
│   └── nodejs.yml
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
└── wrangler.toml

Step 1: Command line stuff

In a terminal, change directory to wherever you keep these sorts of projects and type this:

$  mkdir cms && cd cms && npm init -y

That will create a new directory, move into it, and initialize the use of npm.

The next thing we want to do is stand on the shoulders of giants. We’ll be using a number of npm packages that help us do things, the meat of which is using the static site generator Metalsmith:

$  npm install --save-dev metalsmith metalsmith-markdown metalsmith-layouts metalsmith-collections metalsmith-permalinks handlebars jstransformer-handlebars

Along with Metalsmith, there are a couple of other useful bits and bobs. Why Metalsmith? Let’s talk about that.

Step 2: Metalsmith

I’ve been trying out static site generators for 2-3 years now, and I still haven’t found “the one.” All of the big names — like Eleventy, Gatsby, Hugo, Jekyll, Hexo, and Vuepress — are totally badass but I can’t get past Metalsmith’s simplicity and extensibility.

As an example, this code will actually build you a site:

// EXAMPLE... NOT WHAT WE ARE USING FOR THIS TUTORIAL
Metalsmith(__dirname)
  .source('src')
  .destination('dest')
  .use(markdown())
  .use(layouts())
  .build((err) => { if (err) throw err; });

Pretty cool right?

For the sake of brevity, type this into the terminal and we’ll scaffold out some structure and files to start with.

First, make the directories:

$  mkdir -p src/articles &&  mkdir -p layouts/partials 

Then, create the build file:

$  touch build.js

Next, we’ll create some layout files:

$  touch layouts/index.hbs && touch layouts/about.hbs && touch layouts/article.hbs && touch layouts/partials/navigation.hbs

And, finally, we’ll set up our content resources:

$  touch src/index.md && touch src/about.md && touch src/articles/post1.md && touch src/articles/post2.md

The project folder should look something like this:

├── build.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
└── src
    ├── about.md
    ├── articles
    │   ├── post1.md
    │   └── post2.md
    └── index.md

Step 3: Let’s add some code

To save space (and time), you can use the commands below to create the content for our fictional website. Feel free to hop into “articles” and create your own blog posts. The key is that the posts need some metadata (also called “front matter”) to generate properly. The files you would want to edit are index.md, post1.md and post2.md.

The metadata should look something like this:

---
title: 'Post1'
layout: article.hbs
---

## Post content here....

Or, if you’re lazy like me, use these terminal commands to add mock content from GitHub Gists to your site:

$  curl https://gist.githubusercontent.com/jppope/35dd682f962e311241d2f502e3d8fa25/raw/ec9991fb2d5d2c2095ea9d9161f33290e7d9bb9e/index.md > src/index.md
$  curl https://gist.githubusercontent.com/jppope/2f6b3a602a3654b334c4d8df047db846/raw/88d90cec62be6ad0b3ee113ad0e1179dfbbb132b/about.md > src/about.md
$  curl https://gist.githubusercontent.com/jppope/98a31761a9e086604897e115548829c4/raw/6fc1a538e62c237f5de01a926865568926f545e1/post1.md > src/articles/post1.md
$  curl https://gist.githubusercontent.com/jppope/b686802621853a94a8a7695eb2bc4c84/raw/9dc07085d56953a718aeca40a3f71319d14410e7/post2.md > src/articles/post2.md

Next, we’ll be creating our layouts and partial layouts (“partials”). We’re going to use Handlebars.js for our templating language in this tutorial, but you can use whatever templating language floats your boat. Metalsmith can work with pretty much all of them, and I don’t have any strong opinions about templating languages.

Build the index layout

Add the following to the layouts/index.hbs file:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{#each articles}}
        <a href="{{path}}"><h3>{{ title }}</h3></a>
        <p>{{ description }}</p>
      {{/each}}
    </div>
  </body>
</html>

A couple of notes: 

  • Our “navigation” hasn’t been defined yet, but will ultimately replace the area where {{>navigation }} resides. 
  • {{#each }} will iterate through the “collection” of articles that Metalsmith will generate during its build process (see the sketch after this list).
  • Metalsmith has lots of plugins you can use for things like stylesheets, tags, etc., but that’s not what this tutorial is about, so we’ll leave that for you to explore. 
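To make that iteration a little more concrete, here’s roughly the shape of the data the articles collection hands to the index template. This is an illustrative sketch rather than exact Metalsmith output: title and layout come from each file’s front matter, description is a hypothetical extra front matter field the index layout reads, path is added by metalsmith-permalinks, and contents holds the compiled HTML.

// Illustrative only: roughly what {{#each articles}} sees at build time
const articles = [
  {
    title: 'Post1',                  // from the file's front matter
    layout: 'article.hbs',           // from the file's front matter
    description: 'A short summary',  // hypothetical front matter field used by the index layout
    path: 'articles/post1',          // added by metalsmith-permalinks
    contents: Buffer.from('<p>Post content here...</p>') // the compiled HTML
  }
  // ...one entry per file matching articles/*.md
];

console.log(articles[0].path); // "articles/post1"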

Build the About page

Add the following to your about.hbs page:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

Build the Articles layout

Add the following to the layouts/article.hbs file:

<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      /* Keeping it simple for the tutorial */
      body {
        font-family: 'Avenir', Helvetica, Arial, sans-serif;
        -webkit-font-smoothing: antialiased;
        -moz-osx-font-smoothing: grayscale;
        text-align: center;
        color: #2c3e50;
        margin-top: 60px;
      }
      .navigation {
        display: flex;
        justify-content: center;
        margin: 2rem 1rem;
      }
      .button {
        margin: 1rem;
        border: solid 1px #ccc;
        border-radius: 4px;
        padding: 0.5rem 1rem;
        text-decoration: none;
      }
    </style>
  </head>
  <body>
    {{>navigation }}
    <div>
      {{{contents}}}
    </div>
  </body>
</html>

You may have noticed that this is the exact same layout as the About page. It is. I just wanted to cover how to add additional pages so you’d know how to do that. If you want this one to be different, go for it.

Add navigation

Add the following to the layouts/partials/navigation.hbs file:

<div class="navigation">
  <div>
    <a class="button" href="/">Home</a>
    <a class="button" href="/about">About</a>
  </div>
</div>

Sure, there’s not much to it… but this really isn’t supposed to be a Metalsmith/SSG tutorial. ¯\_(ツ)_/¯

Step 4: The Build file

The heart and soul of Metalsmith is the build file. For the sake of thoroughness, I’m going to go through it line by line.

We start by importing the dependencies.

Quick note: Metalsmith was created in 2014, and the predominant module system at the time was CommonJS, so I’m going to stick with require statements as opposed to ES modules. It’s also worth noting that most of the other tutorials use require statements as well, so skipping a build step with Babel will just make life a little less complex here.

// What we use to glue everything together
const Metalsmith = require('metalsmith');

// compile from markdown (you can use targets as well)
const markdown = require('metalsmith-markdown');

// compiles layouts
const layouts = require('metalsmith-layouts');

// used to build collections of articles
const collections = require('metalsmith-collections');

// permalinks to clean up routes
const permalinks = require('metalsmith-permalinks');

// templating
const handlebars = require('handlebars');

// register the navigation
const fs = require('fs');
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

// NOTE: Uncomment if you want a server for development
// const serve = require('metalsmith-serve');
// const watch = require('metalsmith-watch');

Next, we’ll initialize Metalsmith and tell it where to find its source files and where to put the compiled output:

// Metalsmith
Metalsmith(__dirname)
  // where your markdown files are
  .source('src')
  // where you want the compiled files to be rendered
  .destination('public')

So far, so good. After we have the source and target set, we’re going to set up the markdown rendering, the layouts rendering, and let Metalsmith know to use “collections.” These are a way to group files together. An easy example would be something like “blog posts,” but it could really be anything, say recipes, whiskey reviews, or whatever. In the example below, we’re calling the collection “articles.”

// previous code would go here

  // collections create groups of similar content
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  // compile from markdown
  .use(markdown())
  // nicer looking links
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  // build layouts using handlebars templates
  // also tell metalsmith where to find the raw input
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.html',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))

  // NOTE: Uncomment if you want a server for development
  // .use(serve({
  //   port: 8081,
  //   verbose: true
  // }))
  // .use(watch({
  //   paths: {
  //     "${source}/**/*": true,
  //     "layouts/**/*": "**/*",
  //   }
  // }))

Next, we’re adding the markdown plugin so our Markdown content can be compiled to HTML.

From there, we’re using the layouts plugin to wrap our raw content in the layout we define in the layouts folder. You can read more about the nuts and bolts of this on the official plugin site but the result is that we can use {{{contents}}} in a template and it will just work. 
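If the triple curly braces look unusual: Handlebars escapes HTML inside {{ }} but outputs it raw inside {{{ }}}, which is what we want here because the Markdown has already been compiled to HTML. Here’s a tiny standalone sketch of that difference using the handlebars package we installed earlier (it’s only an illustration and isn’t part of build.js):

const handlebars = require('handlebars');

// {{contents}} escapes HTML entities; {{{contents}}} passes the HTML through untouched
const escaped = handlebars.compile('<div>{{contents}}</div>');
const raw = handlebars.compile('<div>{{{contents}}}</div>');

const context = { contents: '<p>Hello from Markdown</p>' };

console.log(escaped(context)); // tags come out HTML-escaped (e.g. &lt;p&gt;...), so they would display as text
console.log(raw(context));     // tags come out untouched: <div><p>Hello from Markdown</p></div>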

The last addition to our tiny little build script will be the build method:

// Everything else would be above this
.build(function(err) {
  if (err) {
    console.error(err)
  }
  else {
    console.log('build completed!');
  }
});

Putting everything together, we should get a build script that looks like this:

const Metalsmith = require('metalsmith');
const markdown = require('metalsmith-markdown');
const layouts = require('metalsmith-layouts');
const collections = require('metalsmith-collections');
const permalinks = require('metalsmith-permalinks');
const handlebars = require('handlebars');
const fs = require('fs');

// Navigation
handlebars.registerPartial('navigation', fs.readFileSync(__dirname + '/layouts/partials/navigation.hbs').toString());

Metalsmith(__dirname)
  .source('src')
  .destination('public')
  .use(collections({
    articles: {
      pattern: 'articles/*.md',
    },
  }))
  .use(markdown())
  .use(permalinks({
    pattern: ':collection/:title'
  }))
  .use(layouts({
    engine: 'handlebars',
    directory: './layouts',
    default: 'article.html',
    pattern: ["*/*/*html", "*/*html", "*html"],
    partials: {
      navigation: 'partials/navigation',
    }
  }))
  .build(function (err) {
    if (err) {
      console.error(err)
    }
    else {
      console.log('build completed!');
    }
  });

I’m a sucker for simple and clean and, in my humble opinion, it doesn’t get any simpler or cleaner than a Metalsmith build. We just need to make one quick update to the package.json file and we’ll be able to give this a run:

 "name": "buffaloTraceRoute",   "version": "1.0.0",   "description": "",   "main": "index.js",   "scripts": {     "build": "node build.js",     "test": "echo "No Tests Yet!" "   },   "keywords": [],   "author": "Your Name",   "license": "ISC",   "devDependencies": {     // these should be the current versions     // also... comments aren't allowed in JSON   } }

If you want to see your handiwork, you can uncomment the parts of the build file that serve and watch the project (note that metalsmith-serve and metalsmith-watch aren’t included in the earlier npm install, so add them as dev dependencies first), then run npm run build. Just make sure you remove this code before deploying.

Working with Cloudflare

Next, we’re going to work with Cloudflare to get access to their Cloudflare Workers. This is where the $5/month cost comes into play.

Now, you might be asking: “OK, but why Cloudflare? What about using something free like GitHub Pages or Netlify?” It’s a good question. There are lots of ways to deploy a static site, so why choose one method over another?

Well, Cloudflare has a few things going for it…

Speed and performance

One of the biggest reasons to switch to a static site generator is to improve your website performance. Using Cloudflare Workers Sites can improve your performance even more.

Here’s a graph that shows Cloudflare compared to two competing alternatives:

Courtesy of Cloudflare

The simple reason why Cloudflare is the fastest: a site is deployed to 190+ data centers around the world. This reduces latency since users will be served the assets from a location that’s physically closer to them.

Simplicity

Admittedly, the initial configuration of Cloudflare Workers may be a little tricky if you don’t know how to set up environment variables. But after you set up the basic configuration on your computer, deploying to Cloudflare is as simple as running wrangler publish from the site directory. This tutorial is focused on the CI/CD aspect of deploying to Cloudflare, which is a little more involved, but it’s still incredibly simple compared to most other deployment processes.

(It’s worth mentioning that GitHub Pages and Netlify are also killing it in this area. The developer experience of all three companies is amazing.)

More bang for the buck

While GitHub Pages and Netlify both have free tiers, your usage is (soft) limited to 100GB of bandwidth a month. Don’t get me wrong, that’s a super generous limit. But after that, you’re out of luck. GitHub Pages doesn’t offer anything more than that and Netlify jumps up to $45/month, making Cloudflare’s $5/month price tag very reasonable.

Service                  | Free Tier Bandwidth | Paid Tier Price | Paid Tier Requests / Bandwidth
GitHub Pages             | 100GB               | N/A             | N/A
Netlify                  | 100GB               | $45             | ~150K / 400GB
Cloudflare Workers Sites | none                | $5              | 10MM / unlimited

Calculations assume a 3MB average website. Cloudflare has additional limits on CPU use. GitHub Pages should not be used for sites that have credit card transactions.

Sure, there’s no free tier for Cloudflare, but $5 for 10 million requests is a steal. I would also be remiss if I didn’t mention that GitHub Pages has had a few outages over the last year. That’s totally fine in my book for a demo site, but it would be bad news for a business.

Cloudflare offers a ton of additional features that are worth briefly mentioning: free SSL certificates, free (and easy) DNS routing, a custom Workers Sites domain name for your projects (which is great for staging), unlimited environments (e.g. staging), and registering a domain name at cost (as opposed to the markup pricing imposed by other registrars).

Deploying to Cloudflare

Cloudflare provides some great tutorials for how to use their Cloudflare Workers product. We’ll cover the highlights here.

First, make sure the Cloudflare CLI, Wrangler, is installed:

$  npm i @cloudflare/wrangler -g

Next, we’re going to add Cloudflare Sites to the project, like this:

$  wrangler init --site cms

Assuming I didn’t mess up and forget about a step, here’s what we should have in the terminal at this point:

⬇️ Installing cargo-generate...
🔧  Creating project called `workers-site`...
✨  Done! New project created /Users/<User>/Code/cms/workers-site
✨  Succesfully scaffolded workers site
✨  Succesfully created a `wrangler.toml`

There should also be a generated folder in the project root called /workers-site as well as a config file called wrangler.toml — this is where the magic resides.

name = "cms" type = "webpack" account_id = "" workers_dev = true route = "" zone_id = "" 
 [site] bucket = "" entry-point = "workers-site"

You might have already guessed what comes next… we need to add some info to the config file! The first key/value pair we’re going to update is the bucket property.

bucket = "./public"

Next, we need to get the Account ID and Zone ID (i.e. the route for your domain name). You can find them in your Cloudflare account all the way at the bottom of the dashboard for your domain:

Stop! Before going any further, don’t forget to click the “Get your API token” button to grab the last config piece that we’ll need. Save it on a notepad or somewhere handy because we’ll need it for the next section. 

Phew! Alright, the next task is to add the Account ID and Zone ID we just grabbed to the .toml file:

name = "buffalo-traceroute" type = "webpack" account_id = "d7313702f333457f84f3c648e9d652ff" # Fake... use your account_id workers_dev = true # route = "example.com/*"  # zone_id = "805b078ca1294617aead2a1d2a1830b9" # Fake... use your zone_id 
 [site] bucket = "./public" entry-point = "workers-site" (Again, those IDs are fake.)

Again, those IDs are fake. You may be asked to set up credentials on your computer. If that’s the case, run wrangler config in the terminal.

GitHub Actions

The last piece of the puzzle is to configure GitHub to do automatic deployments for us. Having made previous forays into CI/CD setups, I was ready for the worst on this one but, amazingly, GitHub Actions is very simple for this sort of setup.

So how does this work?

First, let’s make sure that our GitHub account has GitHub Actions activated. It’s technically in beta right now, but I haven’t run into any issues with it so far.

Next, we need to create a repository in GitHub and upload our code to it. Start by going to GitHub and creating a repository.

This tutorial isn’t meant to cover the finer points of Git and/or GitHub, but there’s a great introduction available if you need one. Or, copy and paste the following commands while in the root directory of the project:

# run commands one after the other
$  git init
$  touch .gitignore && echo 'node_modules' > .gitignore
$  git add .
$  git commit -m 'first commit'
$  git remote add origin https://github.com/{username}/{repo name}
$  git push -u origin master

That should add the project to GitHub. I say that with a little hesitance because this is where everything tends to blow up for me. For example, put too many commands into the terminal and suddenly GitHub has an outage, or the terminal is unable to locate the path for Python. Tread carefully!

Assuming we’re past that part, our next task is to activate GitHub Actions and create a directory called .github/workflows in the root of the project directory. (GitHub can also do this automatically by adding the “node” workflow when activating Actions. At the time of writing, adding a GitHub Actions workflow is part of GitHub’s user interface.)

Once we have the directory in the project root, we can add the final two files. Each file is going to handle a different workflow:

  1. A workflow to check that updates can be merged (i.e. the “CI” in CI/CD)
  2. A workflow to deploy changes once they have been merged into master (i.e. the “CD” in CI/CD)
Here’s the first workflow, which handles the integration checks:

# integration.yml
name: Integration

on:
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x]
    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - run: npm ci
    - run: npm run build --if-present
    - run: npm test
      env:
        CI: true

This is a straightforward workflow. So straightforward, in fact, that I copied it straight from the official GitHub Actions docs and barely modified it. Let’s go through what is actually happening in there:

  1. on: Run this workflow only when a pull request is created for the master branch
  2. jobs: Run the steps below in two Node environments (Node 10 and Node 12 — Node 12 is currently the recommended version). This will build the site if a build script is defined, and run tests if a test script is defined.

The second file is our deployment script and is a little more involved.

# deploy.yml
name: Deploy

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    name: Deploy
    strategy:
      matrix:
        node-version: [10.x]

    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm install
      - uses: actions/checkout@master
      - name: Build site
        run: "npm run build"
      - name: Publish
        uses: cloudflare/wrangler-action@1.1.0
        with:
          apiToken: ${{ secrets.CF_API_TOKEN }}

Important! Remember that Cloudflare API token I mentioned way earlier? Now is the time to use it. Go to the project settings and add a secret. Name the secret CF_API_TOKEN and add the API token.

Let’s go through what’s going on in this script:

  1. on: Run the steps when code is merged into the master branch
  2. steps: Use Node.js to install all dependencies, use Node.js to build the site, then use Cloudflare Wrangler to publish the site

Here’s a quick recap of what the project should look like before running a build (sans node_modules): 

├── build.js
├── dist
│   └── worker.js
├── layouts
│   ├── about.hbs
│   ├── article.hbs
│   ├── index.hbs
│   └── partials
│       └── navigation.hbs
├── package-lock.json
├── package.json
├── public
├── src
│   ├── about.md
│   ├── articles
│   │   ├── post1.md
│   │   └── post2.md
│   └── index.md
├── workers-site
│   ├── index.js
│   ├── package-lock.json
│   ├── package.json
│   └── worker
│       └── script.js
└── wrangler.toml

A GitHub-based CMS

Okay, so I made it this far… I was promised a CMS? Where is the database and my GUI that I log into and stuff?

Don’t worry, you are at the finish line! GitHub is your CMS now and here’s how it works:

  1. Write a markdown file (with front matter).
  2. Open up GitHub and go to the project repository.
  3. Click into the “articles” directory and upload the new article. GitHub will ask whether a new branch should be created along with a pull request. The answer is yes.
  4. After the integration is verified, the pull request can be merged, which triggers deployment. 
  5. Sit back, relax and wait 10 seconds… the content is being deployed to 164 data centers worldwide.

Congrats! You now have a minimal Git-based CMS that basically anyone can use. 

Troubleshooting notes

  • Metalsmith layouts can sometimes be kinda tricky. Try running the build with debugging enabled to have it kick out something useful: DEBUG=metalsmith-layouts npm run build
  • Occasionally, GitHub Actions needed me to add node_modules to the commit so it could deploy… this was strange to me (and not a recommended practice) but it fixed the deployment.
  • Please let me know if you run into any trouble and we can add it to this list!


Simple Image Placeholders with SVG

A little open-source utility from Tyler Sticka that returns a data URL of an SVG to use as an image placeholder as needed.

I like the idea of self-running utilities like that, rather than depending on some third-party service, like placekitten or whatever. Not that I’d advocate for feature bloat here, but maybe it could be more fun like these generative placeholders, marching ants, or my favorite, adorable avatars.
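As a rough illustration of the underlying idea (a sketch of the concept only, not Tyler’s actual API or code), generating an SVG placeholder as a data URL can be as small as this:

// Build a gray SVG rectangle of the requested size and return it as a data URL
function svgPlaceholder(width, height, label = `${width}×${height}`) {
  const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">
    <rect width="100%" height="100%" fill="#ccc"/>
    <text x="50%" y="50%" dominant-baseline="middle" text-anchor="middle"
      font-family="sans-serif" fill="#333">${label}</text>
  </svg>`;
  return `data:image/svg+xml;charset=UTF-8,${encodeURIComponent(svg)}`;
}

// Usage: drop the data URL into any <img> that needs a stand-in
// (assumes an <img class="placeholder"> exists on the page)
document.querySelector('img.placeholder').src = svgPlaceholder(300, 150);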



Web Scraping Made Simple With Zenscrape

Web scraping has always been taken care of by actual developers, since a lot of coding, proxy management and CAPTCHA-solving is involved. However, the scraped data is very often needed by people who are non-coders: marketers, analysts, business developers, etc.

Zenscrape is an easy-to-use web scraping tool that allows people to scrape websites without having to code.

Let’s run through a quick example together:

Select the data you need

The setup wizard guides you through the process of setting up your data extractor. It allows you to select the information you want to scrape visually. Click on the desired piece of content and specify what type of element you have. Depending on the package you have bought (they also offer a free plan), you can select up to 30 data elements per page.

The scraper is also capable of handling element lists.

Schedule your extractor

Perhaps you want to scrape the selected data at a specific time interval. Depending on your plan, you can choose any time span between one minute and one hour. Also, decide what is supposed to happen with the scraped data after it has been gathered.

Use your data

In this example, we have chosen the .csv export method and have selected a 10-minute scraping interval. Our first set of data should be ready by now. Let’s take a look:

Success! Our data is ready to be downloaded. We can now access all individual data sets or download all previously gathered data at once, in one file.

Need more flexibility?

Zenscrape also offers a web scraping API that returns the HTML markup of any website. This is especially useful for complicated scraping projects that require the scraped content to be integrated into a software application for further processing.

Just like the web scraping suite, the API does not forward failed requests and takes care of proxy management, CAPTCHA-solving and all other maintenance tasks that are usually involved with DIY web scrapers.

Since the API returns the full HTML markup of the related website, you have full flexibility in terms of data selection and further processing.
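As a rough sketch of what that integration might look like, here’s a small Node example. The endpoint URL, the apikey and url parameter names, and the plain-HTML response are placeholders assumed for illustration, not Zenscrape’s documented API, so check their docs for the real details.

// Hypothetical example: fetch a page's HTML through a scraping API, then process it
// API_URL and the query parameter names below are assumptions, not Zenscrape's real API
const API_URL = 'https://example-scraping-api.test/v1/get';

async function fetchPageHtml(targetUrl, apiKey) {
  const query = new URLSearchParams({ apikey: apiKey, url: targetUrl });
  const response = await fetch(`${API_URL}?${query}`); // global fetch (Node 18+)
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  return response.text(); // full HTML markup of the target page
}

// Usage: pull the HTML, then hand it to whatever parser or pipeline you like
fetchPageHtml('https://example.com', process.env.SCRAPER_API_KEY)
  .then((html) => console.log(`${html.length} characters of HTML`))
  .catch(console.error);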

Try Zenscrape


Simple & Boring

Simplicity is a funny adjective in web design and development. I’m sure it’s a quoted goal for just about every project ever done. Nobody walks into a kickoff meeting like, “Hey team, design something complicated for me. Oh, and make sure the implementation is convoluted as well. Over-engineer that sucker, would ya?”

Of course they want simple. Everybody wants simple. We want simple designs because simple means our customers will understand it and like it. We want simplicity in development. Nobody dreams of going to work to spend all day wrapping their head around a complex system to fix one bug.

Still, there is plenty to talk about when it comes to simplicity. It would be very hard to argue that web development has gotten simpler over the years. As such, the word has lately been on the tongues of many web designers and developers. Let’s take a meandering waltz through what other people have to say about simplicity.

Bridget Stewart recalls a frustrating battle against over-engineering in “A Simpler Web: I Concur.” After being hired as an expert in UI implementation and given the task of getting a video to play on a click…

I looked under the hood and got lost in all the looping functions and the variables and couldn’t figure out what the code was supposed to do. I couldn’t find any HTML <video> being referenced. I couldn’t see where a link or a button might be generated. I was lost.

I asked him to explain what the functions were doing so I could help figure out what could be the cause, because the browser can play video without much prodding. Instead of successfully getting me to understand what he had built, he argued with me about whether or not it was even possible to do. I tried, at first calmly, to explain to him I had done it many times before in my previous job, so I was absolutely certain it could be done. As he continued to refuse my explanation, things got heated. When I was done yelling at him (not the most professional way to conduct myself, I know), I returned to my work area and fired up a branch of the repo to implement it. 20 minutes later, I had it working.

It sounds like the main problem here is that the dude was a territorial dingus, but also his complicated approach literally stood in the way of getting work done.

Simplicity on the web often times means letting the browser do things for us. How many times have you seen a complex re-engineering of a select menu not be as usable or accessible as a <select>?

Jeremy Wagner writes in Make it Boring:

Eminently usable designs and architectures result when simplicity is the default. It’s why unadorned HTML works. It beautifully solves the problem of presenting documents to the screen that we don’t even consider all the careful thought that went into the user agent stylesheets that provide its utterly boring presentation. We can take a lesson from this, especially during a time when more websites are consumed as web apps, and make them more resilient by adhering to semantics and native web technologies.

My guess is the rise of static site generators — and sites that find a way to get as much server-rendered as possible — is a symptom of the industry yearning for that brand of resilience.

Do less, as they say. Lyza Danger Gardner found a lot of value in this in her own job:

… we need to try to do as little as possible when we build the future web.

This isn’t a rationalization for laziness or shirking responsibility—those characteristics are arguably not ones you’d find in successful web devs. Nor is it a suggestion that we build bland, homogeneous sites and apps that sacrifice all nuance or spark to the Greater Good of total compatibility.

Instead it is an appeal for simplicity and elegance: putting commonality first, approaching differentiation carefully, and advocating for consistency in the creation and application of web standards.

Christopher T. Miller writes in “A Simpler Web”:

Should we find our way to something simpler, something more accessible?

I think we can. By simplifying our sites we achieve greater reach, better performance, and more reliable conveying of the information which is at the core of any website. I think we are seeing this in the uptick of passionate conversations around user experience, but it cannot stop with the UX team. Developers need to take ownership for the complexity they add to the Web.

It’s good to remember that the complexity we layer onto building websites is opt-in. We often do it for good reason, but it’s possible not to. Garrett Dimon:

You can build a robust, reliable, and fully responsive web application today using only semantic HTML on the front-end. No images. No CSS. No JavaScript. It’s entirely possible. It will work in every modern browser. It will be straightforward to maintain. It may not fit the standard definition of beauty as far as web experiences go, but it will work. In many cases, it will be more usable and accessible than those built with modern front-end frameworks.

That’s not to say that this is the best approach, but it’s a good reminder that the web works by default without all of our additional layers. When we add those additional layers, things break. Or, if we neglect good markup and CSS to begin with, we start out with something that’s already broken and then spend time trying to make it work again.

We assume that complex problems always require complex solutions. We try to solve complexity by inventing tools and technologies to address a problem; but in the process, we create another layer of complexity that, in turn, causes its own set of issues.

— Max Böck, “On Simplicity”

Perhaps the worst reason to choose a complex solution is that it’s new, and the newness makes it feel like choosing it means you’re on top of technology and doing your job well. Old and boring may be just what you need to do your job well.

Dan McKinley writes:

“Boring” should not be conflated with “bad.” There is technology out there that is both boring and bad. You should not use any of that. But there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. Python is boring. Memcached is boring. Squid is boring. Cron is boring.

The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood.

Rachel Andrew wrote that choosing established technology for the CMS she builds was a no-brainer because it’s what her customers had.

You’re going to hear less about old and boring technology. If you’re consuming a healthy diet of tech news, you probably won’t read many blog posts about old and boring technology. It’s too bad really, I, for one, would enjoy that. But I get it, publications need to have fresh writing and writers are less excited about topics that have been well-trod over decades.

As David DeSandro says, “New tech gets chatter”. When there is little to say, you just don’t say it.

You don’t hear about TextMate because TextMate is old. What would I tweet? Still using TextMate. Still good.

While we hear more about new tech, it’s old tech that is more well known, including what it’s bad at. If newer tech, perhaps more complicated tech, is needed because it solves a known pain point, that’s great, but when it doesn’t…

You are perfectly okay to stick with what works for you. The more you use something, the clearer its pain points become. Try new technologies when you’re ready to address those pain points. Don’t feel obligated to change your workflow because of chatter. New tech gets chatter, but that doesn’t make it any better.

Adam Silver says that a boring developer is full of questions:

“Will debugging code be more difficult?”, “Might performance degrade?” and “Will I be slowed down due to compile times?”

Dan Kim is also proud of being boring:

I have a confession to make — I’m not a rock star programmer. Nor am I a hacker. I don’t know ninjutsu. Nobody has ever called me a wizard.

Still, I take pride in the fact that I’m a good, solid programmer.

Complexity isn’t an enemy. Complexity is valuable. If what we work on had no complexity, it would be worth far less, as there would be nothing slowing down the competition. Our job is complexity. Or rather, our job is managing the level of complexity so it’s valuable while still manageable.

Sandi Metz has a great article digging into various aspects of this, part of which is about considering how much complicated code needs to change:

We abhor complication, but if the code never changes, it’s not costing us money.

Your CMS might be extremely complicated under the hood, but if you never touch that, who cares. But if your CMS limits what you’re able to do, and you spend a lot of time fighting it, that complexity matters a lot.

It’s satisfying to read Sandi’s analysis that it’s possible to predict where code breaks, and those points are defined by complexity. “Outlier classes” (parts of a code base that cause the most problems) can be identified without even seeing the code base:

I’m not familiar with the source code for these apps, but sight unseen I feel confident making a few predictions about the outlying classes. I suspect that they:

  1. are larger than most other classes,
  2. are laden with conditionals, and
  3. represent core concepts in the domain

I feel seen.

Boring is in it for the long haul.

Cap Watkins writes in “The Boring Designer”:

The boring designer is trusted and valued because people know they’re in it for the product and the user. The boring designer asks questions and leans on others’ experience and expertise, creating even more trust over time. They rarely assume they know the answer.

The boring designer is capable of being one of the best leaders a team can have.

So be great. Be boring.

Be boring!
