Tag: GitHub

Beautify GitHub Profile

It wasn’t long ago that Nick Sypteras showed us how to make custom badges for a GitHub repo. Well, Reza Shakeri put Beautify GitHub Profile together and it’s a huuuuuuge repo of different badges that pulls lots of examples together with direct links to the repos you can use to create them.

And it doesn’t stop there! If you’re looking for some sort of embeddable widget, there’s everything from GitHub repo stats and contribution visualizations, all the way to embedded PageSpeed Insights and Spotify playlists. Basically, a big ol’ spot to get some inspiration.

Some things are simply wild!

3D chart of commit history from the Beautify GitHub Profile repo.
I bet Jhey would like to get his hands on those cuboids!

Just scrolling through the repo gives me flashes of the GeoCities days, though. All it needs is a sparkly unicorn and a tiled background image to complete the outfit. 👔



Adding Custom GitHub Badges to Your Repo

If you’ve spent time looking at open-source repos on GitHub, you’ve probably noticed that most of them use badges in their README files. Take the official React repository, for instance. There are GitHub badges all over the README file that communicate important dynamic info, like the latest released version and whether the current build is passing.

Showing the header of React's repo displaying GitHub badges.

Badges like these provide a nice way to highlight key information about a repository. You can even use your own custom assets as badges, like Next.js does in its repo.

Showing the Next.js repo header with GitHub badges.

But the most useful thing about GitHub badges by far is that they update by themselves. Instead of hardcoding values into your README, badges in GitHub can automatically pick up changes from a remote server.

Let’s discuss how to add dynamic GitHub badges to the README file of your own project. We’ll start by using an online generator called badgen.net to create some basic badges. Then we’ll make our badges dynamic by hooking them up to our own serverless function via Napkin. Finally, we’ll take things one step further by using our own custom SVG files.

Showing three examples of custom GitHub badges including Apprentice, Intermediate, and wizard skill levels.

First off: How do badges work?

Before we start building some badges in GitHub, let’s quickly go over how they are implemented. It’s actually very simple: badges are just images. README files are written in Markdown, and Markdown supports images like so:

![alt text](path or URL to image)

The fact that we can include a URL to an image means that a Markdown page will request the image data from a server when the page is rendered. So, if we control the server that has the image, we can change what image is sent back using whatever logic we want!

Thankfully, we have a couple options to deploy our own server logic without the whole “setting up the server” part. For basic use cases, we can create our GitHub badge images with badgen.net using its predefined templates. And again, Napkin will let us quickly code a serverless function in our browser and then deploy it as an endpoint that our GitHub badges can talk to.

Making badges with Badgen

Let’s start off with the simplest badge solution: a static badge via badgen.net. The Badgen API uses URL patterns to create templated badges on the fly. The URL pattern is as follows:

https://badgen.net/badge/:subject/:status/:color?icon=github

There’s a full list of the options you have for colors, icons, and more on badgen.net. For this example, let’s use these values:

  • :subject: Hello
  • :status: World
  • :color: red
  • :icon: twitter

Our final URL winds up looking like this:

https://badgen.net/badge/hello/world/red?icon=twitter

Adding a GitHub badge to the README file

Now we need to embed this badge in the README file of our GitHub repo. We can do that in Markdown using the syntax we looked at earlier:

![my badge](https://badgen.net/badge/hello/world/red?icon=twitter)

Badgen provides a ton of different options, so I encourage you to check out their site and play around! For instance, one of the templates lets you show the number of times a given GitHub repo has been starred. Here’s a star GitHub badge for the Next.js repo as an example:

https://badgen.net/github/stars/vercel/next.js

Pretty cool! But what if you want your badge to show some information that Badgen doesn’t natively support? Luckily, Badgen has a URL template for using your own HTTPS endpoints to get data:

https://badgen.net/https/url/to/your/endpoint

For example, let’s say we want our badge to show the current price of Bitcoin in USD. All we need is a custom endpoint that returns this data as JSON like this:

{   "color": "blue",   "status": "$ 39,333.7",   "subject": "Bitcoin Price USD" }

Assuming our endpoint is available at https://some-endpoint.example.com/bitcoin, we can pass its data to Badgen using the following URL scheme:

https://badgen.net/https/some-endpoint.example.com/bitcoin
GitHub badge. On the left is a gray label with white text. On the right is a blue label with white text showing the price of Bitcoin.
The data for the cost of Bitcoin is served right to the GitHub badge.

Even cooler now! But we still have to actually create the endpoint that provides the data for the GitHub badge. 🤔 Which brings us to…

Badgen + Napkin

There are plenty of ways to get your own HTTPS endpoint. You could spin up a server with DigitalOcean or AWS EC2, or you could use a serverless option like Google Cloud Functions or AWS Lambda; however, those can all still become a bit complex and tedious for our simple use case. That’s why I’m suggesting Napkin’s in-browser function editor to code and deploy an endpoint without any installs or configuration.

Head over to Napkin’s Bitcoin badge example to see a sample endpoint. The code that retrieves the current Bitcoin price and returns it as JSON is right there in the editor. You can run the code yourself from the editor or use the endpoint directly.
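If you just want the gist without clicking through, the endpoint boils down to something like this (a sketch, not Napkin’s exact code; it fetches the same blockchain.info ticker used in the forked version later in this post and returns the three fields Badgen expects):

import fetch from 'node-fetch'

export default async (req, res) => {
  // blockchain.info/ticker returns current Bitcoin prices keyed by currency
  const apiRes = await fetch('https://blockchain.info/ticker')
  const json = await apiRes.json()

  // Badgen reads subject, status, and color to draw the badge
  res.json({
    subject: 'Bitcoin Price USD',
    status: `$ ${json.USD.last}`,
    color: 'blue'
  })
}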

To use the endpoint with Badgen, work with the same URL scheme from above, only this time with the Napkin endpoint:

https://badgen.net/https/napkin-examples.npkn.net/bitcoin-badge

More ways to customize GitHub badges

Next, let’s fork this function so we can add in our own custom code to it. Click the “Fork” button in the top-right to do so. You’ll be prompted to make an account with Napkin if you’re not already signed in.

Once we’ve successfully forked the function, we can add whatever code we want, using any npm modules we want. Let’s add the Moment.js npm package and update the endpoint response to show the time that the price of Bitcoin was last updated directly in our GitHub badge:

import fetch from 'node-fetch'
import moment from 'moment'

const bitcoinPrice = async () => {
  const res = await fetch("https://blockchain.info/ticker")
  const json = await res.json()
  const lastPrice = json.USD.last + ""

  const [ints, decimals] = lastPrice.split(".")

  return ints.slice(0, -3) + "," + ints.slice(-3) + "." + decimals
}

export default async (req, res) => {
  const btc = await bitcoinPrice()

  res.json({
    icon: 'bitcoin',
    subject: `Bitcoin Price USD (${moment().format('h:mma')})`,
    color: 'blue',
    status: `$ ${btc}`
  })
}
Deploy the function, update your URL, and now we get this.

You might notice that the badge takes some time to refresh the next time you load up the README file over at GitHub. That’s because GitHub uses a proxy mechanism to serve badge images.

GitHub serves the badge images this way to prevent abuse, like high request volume or JavaScript code injection. We can’t control GitHub’s proxy, but fortunately, it doesn’t cache too aggressively (or else that would kind of defeat the purpose of badges). In my experience, the TTL is around 5-10 minutes.

OK, final boss time.

Custom SVG badges with Napkin

For our final trick, let’s use Napkin to send back a completely new SVG, so we can use custom images like we saw on the Next.js repo.

A common use case for GitHub badges is showing the current status for a website. Let’s do that. Here are the two states our badge will support:

Badgen doesn’t support custom SVGs, so instead, we’ll have our badge talk directly to our Napkin endpoint. Let’s create a new Napkin function for this called site-status-badge.

The code in this function makes a request to example.com. If the request status is 200, it returns the green badge as an SVG file; otherwise, it returns the red badge. You can check out the function, but I’ll also include the code here for reference:

import fetch from 'node-fetch'

const site_url = "https://example.com"

// full SVGs at https://napkin.io/examples/site-status-badge
const customUpBadge = ''
const customDownBadge = ''

const isSiteUp = async () => {
  const res = await fetch(site_url)
  return res.ok
}

export default async (req, res) => {
  const forceFail = req.path?.endsWith('/400')

  const healthy = await isSiteUp()
  res.set('content-type', 'image/svg+xml')
  if (healthy && !forceFail) {
    res.send(Buffer.from(customUpBadge).toString('base64'))
  } else {
    res.send(Buffer.from(customDownBadge).toString('base64'))
  }
}

Odds are pretty low that the example.com site will ever go down, so I added the forceFail case to simulate that scenario. Now we can add a /400 after the Napkin endpoint URL to try it:

![status up](https://napkin-examples.npkn.net/site-status-badge/) ![status down](https://napkin-examples.npkn.net/site-status-badge/400)

Very nice 😎


And there we have it! Your GitHub badge training is complete. But the journey is far from over. There are a million different places where badges like this are super helpful. Have fun experimenting and go make that README sparkle! ✨



Automatic Daily GitHub Backups, Restored in Seconds

Any company that uses GitHub for critical applications needs a backup that can be restored quickly when needed. Cyberattacks, human errors, or a forced push are just some of the scenarios that can result in the loss of GitHub data. In the event of an emergency, you can’t be wasting time asking which developer has the most recent copy of your code. You need to restore your code, and you need it restored now.

Why do you need GitHub backups?

GitHub is a git repository hosting and version control platform, not a backup solution. GitHub, like most SaaS platforms, follows the Shared Responsibility Model. This model divides the responsibilities of users from the responsibilities of the platform, with account-level data security falling squarely in the realm of users.

GitHub’s terms of service specifically state that GitHub is “not liable to you or any third party for any loss of profits, use, goodwill, or data.” This means that the information stored in your GitHub account — all of your repositories, including code, issues, pull requests, and other essential metadata — is your responsibility to back up.

Although the most common cause of data loss is human error, malicious attacks are becoming increasingly common. Recently, GitHub reported a phishing attack named SawFish which, according to Sophos, even worked against some kinds of two-factor authentication (2FA). Some Rewind customers also reported choosing Rewind after phishing attacks resulted in their GitHub data being stolen. Having a backup of your code protects your business’s intellectual property (IP), ensuring that your critical data is always recoverable no matter what goes wrong.

Regular data backups and other data hygiene principles are also often required for compliance purposes. Required data hygiene may include keeping backups at consistent intervals, having offsite backups, regularly testing restores, having an audit report, and keeping a history of data pull requests, among many others.

In-house backup solution — pros and cons

“Build vs. Buy” is a common refrain when investigating new tech tools. After all, why pay for something that you could build yourself? Developing an in-house backup solution for your GitHub repositories and metadata is an option. For teams with developer resources to spare, this can be an economical choice. However, building your own backup solution isn’t as simple as writing a backup script.

In-house backup scripts need to be written, tested, and maintained, which carries an indirect cost to your business. These scripts are also vulnerable to changes in the GitHub API. Since the GitHub API changes periodically, in-house scripts have to be updated and re-tested to make sure your data is still being backed up. Once you’ve backed up your data, you’ll also need to spend developer resources figuring out how to restore it quickly in case of an emergency. This is one of the hardest capabilities to build, yet the most important. After all, what good is a data backup if you can’t use it to actually put your data back?

Another thorny problem is metadata. Your repo is much more than just code: pull requests, issues, commits, branches, and more are all essential to your workflow. Backing up and restoring metadata isn’t the same process as backing up and restoring code. Many companies, such as Mercado Libre (which backs up 13,000+ repositories with BackHub by Rewind), report difficulty backing up and restoring metadata. Another Rewind customer, a major player in the EdTech space, also reported choosing Rewind because they were not able to back up their metadata, which was essential for their business.

On the other hand, the main advantage of an in-house backup solution is that you have more control over your backups, such as their frequency and timing, among other things. However, this control comes at the cost of using your developer and IT resources to develop, maintain, and test the in-house solution. Thus, before deciding to build an in-house backup solution, identify your needs and assess your capabilities. Consider whether you need to back up your metadata and what your target time to recovery is. Then, ensure your team has the necessary resources and time to fully develop and maintain the in-house solution.

Why use BackHub by Rewind for GitHub backups?

BackHub by Rewind automates daily backups of your GitHub repositories, pull requests, and associated metadata, including:

  • Commits (including comments)
  • Issues
  • Projects
  • Releases
  • Milestones
  • Wikis

BackHub by Rewind is set up in minutes and allows you to restore your repositories and metadata in a few clicks. As the solution works natively within GitHub, your repositories and associated metadata are directly and securely restored to your GitHub account.

When you do need to perform a data restore, simply install BackHub by Rewind’s dedicated restore app, select the date when everything worked perfectly, and click ‘restore’. Your selected repositories, including associated metadata, will then be pushed and restored directly back into your GitHub account.

BackHub by Rewind follows the security principle of least access, meaning once the app is installed, it only has “read” access to your data. This means that BackHub by Rewind cannot alter, modify, or change the code in your repository in any way. 

To restore your data, “write” access is required, and so BackHub by Rewind has a separate app used solely for data restoration that can be deleted once restoration is complete. This provides an additional layer of security and peace of mind that your code is kept safe, secure, and secret. 

BackHub by Rewind was built with enterprise compliance in mind. Enterprise plans offer advanced features such as 365 days of data retention, full account activity logs, choice of data storage location (US or Europe), and SLA with 99.9% availability. With over 2 PB of data backed up worldwide, Rewind is SOC2 Type 1 certified, and expects to receive SOC2 Type 2 by the end of 2021. 

BackHub by Rewind is a true “set it and forget it” tool, and requires no specialized technical or coding knowledge to operate. Running quietly in the background, BackHub by Rewind backs up your GitHub data every day and allows you to restore it in a few clicks so that your development team can focus on your core product.

If you are interested in BackHub by Rewind, start your free 14-day trial to test it out yourself. 


GitHub Explains the Open Graph Images

An explanation of those new GitHub social media images:

[…] our custom Open Graph image service is a little Node.js app that uses the GitHub GraphQL API to collect data, generates some HTML from a template, and pipes it to Puppeteer to “take a screenshot” of that HTML.

Jason Etcovich on The GitHub Blog in “A framework for building Open Graph images”

It’s so satisfying to produce templated images from HTML and CSS. It’s the perfect way to do social media images. If you’re doing it at scale like GitHub, there are a couple of nice tricks in here for speeding it up.
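The core of the approach is easy to sketch with Puppeteer. This isn’t GitHub’s actual service code, just a minimal illustration of the HTML-in, image-out idea:

const puppeteer = require('puppeteer');

async function renderCard(html, path) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Size the viewport to standard Open Graph image dimensions
  await page.setViewport({ width: 1200, height: 630 });

  // Load the templated HTML directly; no web server required
  await page.setContent(html, { waitUntil: 'networkidle0' });

  // "Take a screenshot" of the rendered HTML
  await page.screenshot({ path });
  await browser.close();
}

renderCard('<h1 style="font: bold 80px sans-serif">my-repo</h1>', 'og-image.png')
  .catch(console.error);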


Building a Full-Stack Geo-Distributed Serverless App with Macrometa, GatsbyJS, & GitHub Pages

In this article, we walk through building out a full-stack real-time and completely serverless application that allows you to create polls! All of the app’s static bits (HTML, CSS, JS, & Media) will be hosted and globally distributed via the GitHub Pages CDN (Content Delivery Network). All of the data and dynamic requests for data (i.e., the back end) will be globally distributed and stateful via the Macrometa GDN (Global Data Network).

Macrometa is a geo-distributed stateful serverless platform designed from the ground up to be lightning-fast no matter where the client is located, optimized for both reads and writes, and elastically scalable. We will use it as a database for data collection and maintaining state and stream to subscribe to database updates for real-time action.

We will be using Gatsby to manage our app and deploy it to GitHub Pages. Let’s do this!

Intro

This demo uses the Macrometa c8db-source-plugin to get some of the data as Markdown and then transform it to HTML to display directly in the browser, and the Macrometa JSC8 SDK to keep an open socket for real-time fun and to manage working with Macrometa’s API.

Getting started

  1. Node.js and npm must be installed on your machine.
  2. Once that’s done, install the Gatsby CLI: npm install -g gatsby-cli
  3. If you don’t have one already, go ahead and sign up for a free Macrometa developer account.
  4. Once you’re logged in to Macrometa, create a document collection called markdownContent. Then create a single document with title and content fields in Markdown format. This creates the data model the app will use for its static content.

Here’s an example of what the markdownContent collection should look like:

{   "title": "## Real-Time Polling Application",   "content": "Full-Stack Geo-Distributed Serverless App Built with GatsbyJS and Macrometa!" }

The content and title keys in the document are in Markdown format. Once they go through gatsby-source-c8db, the data in title is converted to <h2></h2>, and content to <p></p>.

Next, create a second document collection called polls. This is where the poll data will be stored.

In the polls collection, each poll will be stored as a separate document. A sample document looks like this:

{   "pollName": "What is your spirit animal?",   "polls": [     {       "editing": false,       "id": "975e41",       "text": "dog",       "votes": 2     },     {       "editing": false,       "id": "b8aa60",       "text": "cat",       "votes": 1     },     {       "editing": false,       "id": "b8aa42",       "text": "unicorn",       "votes": 10     }   ] }

Setting up auth

Your Macrometa login details, along with the collection to be used and the Markdown transformations, have to be provided in the application’s gatsby-config.js like below:

{
  resolve: "gatsby-source-c8db",
  options: {
    config: "https://gdn.paas.macrometa.io",
    auth: {
      email: "<my-email>",
      password: process.env.MM_PW
    },
    geoFabric: "_system",
    collection: 'markdownContent',
    map: {
      markdownContent: {
        title: "text/markdown",
        content: "text/markdown"
      }
    }
  }
}

Under password you will notice that it says process.env.MM_PW. Instead of putting your password there, we are going to create some .env files and make sure those files are listed in our .gitignore file, so we don’t accidentally push our Macrometa password back to GitHub. In your root directory, create .env.development and .env.production files.

You will only have one thing in each of those files: MM_PW='<your-password-here>'
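Note that Gatsby doesn’t read those files on its own. The usual pattern (an assumption on my part; check the repo’s gatsby-config.js for the exact setup) is to load them with the dotenv package at the very top of gatsby-config.js, and to list both filenames in .gitignore:

// at the top of gatsby-config.js: load .env.development or .env.production
// depending on which environment Gatsby is running in
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
});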

Running the app locally

We have the frontend code already done, so you can fork the repo, set up your Macrometa account as described above, add your password as described above, and then deploy. Go ahead and do that and then I’ll walk you through how the app is set up so you can check out the code.

In the terminal of your choice:

  1. Fork this repo and clone your fork onto your local machine
  2. Run npm install
  3. Once that’s done, run npm run develop to start the local server. This will start the local development server on http://localhost:<some_port> and the GraphQL server at http://localhost:<some_port>/___graphql

How to deploy the app (UI) on GitHub Pages

Once you have the app running as expected in your local environment simply run npm run deploy!

Gatsby will automatically generate the static code for the site, create a branch called gh-pages, and deploy it to GitHub.

Now you can access your site at <your-github-username>.github.io/tutorial-jamstack-pollingapp

If your app isn’t showing up there for some reason, go check your repo’s settings and make sure GitHub Pages is enabled and configured to run on the gh-pages branch.

Walking through the code

First, we made a file that loaded the Macrometa JSC8 Driver, made sure we opened up a socket to Macrometa, and then defined the various API calls we will be using in the app. Next, we made the config available to the whole app.

After that we wrote the functions that handle various front-end events. Here’s the code for handling a vote submission:

onVote = async (onSubmitVote, getPollData, establishLiveConnection) => {
  const { title } = this.state;
  const { selection } = this.state;
  this.setState({ loading: true }, () => {
    onSubmitVote(selection)
      .then(async () => {
        const pollData = await getPollData();
        this.setState({ loading: false, hasVoted: true, options: Object.values(pollData) }, () => {
          // open socket connections for live updates
          const onmessage = msg => {
            const { payload } = JSON.parse(msg);
            const decoded = JSON.parse(atob(payload));
            this.setState({ options: decoded[title] });
          };
          establishLiveConnection(onmessage);
        });
      })
      .catch(err => console.log(err));
  });
}

You can check out a live example of the app here

You can create your own poll. To allow multiple people to vote on the same topic just share the vote URL with them.



Custom Styles in GitHub Readme Files

Even though GitHub Readme files (typically ./readme.md) are Markdown, and although Markdown supports HTML, you can’t put <style> or <script> tags in it. (Well, you can, they just get stripped.) So you can’t apply custom styles there. Or can you?

  1. You can use SVG as an <img src="./file.svg" alt="" /> (anywhere).
  2. When used that way, even stuff like animations within them play (wow).
  3. SVG has stuff like <text> for textual content, but also <foreignObject> for regular ol’ HTML content.
  4. SVG supports <style> tags.
  5. Your readme.md file does support <img> with SVG sources.

Sindre Sorhus combined all that into an example.

That same SVG source will work here:
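A minimal sketch of the trick (not Sindre’s exact file) is an SVG that carries its own <style> block and an animated <text> element. Save it as file.svg, reference it with <img>, and the animation plays right in the readme:

<svg xmlns="http://www.w3.org/2000/svg" width="400" height="60">
  <style>
    .greeting {
      font: bold 24px sans-serif;
      fill: rebeccapurple;
      animation: pulse 2s infinite;
    }
    @keyframes pulse {
      50% { opacity: 0.4; }
    }
  </style>
  <text class="greeting" x="10" y="38">Hi there 👋</text>
</svg>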



Continuous Performance Analysis with Lighthouse CI and GitHub Actions

Lighthouse is a free and open-source tool for assessing your website’s performance, accessibility, progressive web app metrics, SEO, and more. The easiest way to use it is through the Chrome DevTools panel. Once you open the DevTools, you will see a “Lighthouse” tab. Clicking the “Generate report” button will run a series of tests on the web page and display the results right there in the Lighthouse tab. This makes it easy to test any web page, whether public or requiring authentication.

If you don’t use Chrome or Chromium-based browsers, like Microsoft Edge or Brave, you can run Lighthouse through its web interface but it only works with publicly available web pages. A Node CLI tool is also provided for those who wish to run Lighthouse audits from the command line.

All the options listed above require some form of manual intervention. Wouldn’t it be great if we could integrate Lighthouse testing into the continuous integration process so that the impact of our code changes can be displayed inline with each pull request, and so that we can fail the builds if certain performance thresholds are not met? Well, that’s exactly why Lighthouse CI exists!

It is a suite of tools that help you identify the impact of specific code changes on your site, not just performance-wise, but in terms of SEO, accessibility, offline support, and other best practices. It offers a great way to enforce performance budgets, and also helps you keep track of each reported metric so you can see how they have changed over time.

In this article, we’ll go over how to set up Lighthouse CI and run it locally, then how to get it working as part of a CI workflow through GitHub Actions. Note that Lighthouse CI also works with other CI providers such as Travis CI, GitLab CI, and Circle CI in case you prefer not to use GitHub Actions.

Setting up the Lighthouse CI locally

In this section, you will configure and run the Lighthouse CI command line tool locally on your machine. Before you proceed, ensure you have Node.js v10 LTS or later and Google Chrome (stable) installed on your machine, then proceed to install the Lighthouse CI tool globally:

$ npm install -g @lhci/cli

Once the CLI has been installed successfully, run lhci --help to view all the available commands that the tool provides. There are eight commands available at the time of writing.

$ lhci --help
lhci <command> <options>

Commands:
  lhci collect      Run Lighthouse and save the results to a local folder
  lhci upload       Save the results to the server
  lhci assert       Assert that the latest results meet expectations
  lhci autorun      Run collect/assert/upload with sensible defaults
  lhci healthcheck  Run diagnostics to ensure a valid configuration
  lhci open         Opens the HTML reports of collected runs
  lhci wizard       Step-by-step wizard for CI tasks like creating a project
  lhci server       Run Lighthouse CI server

Options:
  --help             Show help  [boolean]
  --version          Show version number  [boolean]
  --no-lighthouserc  Disables automatic usage of a .lighthouserc file.  [boolean]
  --config           Path to JSON config file

At this point, you’re ready to configure the CLI for your project. The Lighthouse CI configuration can be managed through (in order of increasing precedence) a configuration file, environment variables, or CLI flags. It uses the Yargs API to read its configuration options, which means there’s a lot of flexibility in how it can be configured. The full documentation covers it all. In this post, we’ll make use of the configuration file option.

Go ahead and create a lighthouserc.js file in the root of your project directory. Make sure the project is being tracked with Git, because the Lighthouse CI automatically infers the build context settings from the Git repository. If your project does not use Git, you can control the build context settings through environment variables instead.

touch lighthouserc.js

Here’s the simplest configuration that will run and collect Lighthouse reports for a static website project, and upload them to temporary public storage.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The ci.collect object offers several options to control how the Lighthouse CI collects test reports. The staticDistDir option is used to indicate the location of your static HTML files — for example, Hugo builds to a public directory, Jekyll places its build files in a _site directory, and so on. All you need to do is update the staticDistDir option to wherever your build is located. When the Lighthouse CI is run, it will start a server that’s able to run the tests accordingly. Once the test finishes, the server will automatically shut down.

If your project requires the use of a custom server, you can enter the command used to start the server through the startServerCommand property. When this option is used, you also need to specify the URLs to test against through the url option. These URLs should be servable by the custom server that you specified.

module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

When the Lighthouse CI runs, it executes the server command and watches for the listen or ready string to determine if the server has started. If it does not detect this string after 10 seconds, it assumes the server has started and continues with the test. It then runs Lighthouse three times against each URL in the url array. Once the test has finished running, it shuts down the server process.

You can configure both the pattern string to watch for and timeout duration through the startServerReadyPattern and startServerReadyTimeout options respectively. If you want to change the number of times to run Lighthouse against each URL, use the numberOfRuns property.

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      startServerCommand: 'npm run server',
      url: ['http://localhost:4000/'],
      startServerReadyPattern: 'Server is running on PORT 4000',
      startServerReadyTimeout: 20000, // milliseconds
      numberOfRuns: 5,
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};

The target property inside the ci.upload object is used to configure where Lighthouse CI uploads the results after a test is completed. The temporary-public-storage option indicates that the report will be uploaded to Google’s Cloud Storage and retained for a few days. It will also be available to anyone who has the link, with no authentication required. If you want more control over how the reports are stored, refer to the documentation.

At this point, you should be ready to run the Lighthouse CI tool. Use the command below to start the CLI. It will run Lighthouse thrice against the provided URLs (unless changed via the numberOfRuns option), and upload the median result to the configured target.

lhci autorun

The output should be similar to what is shown below:

✅  .lighthouseci/ directory writable
✅  Configuration file found
✅  Chrome installation found
⚠️   GitHub token not set
Healthcheck passed!

Started a web server on port 52195...
Running Lighthouse 3 time(s) on http://localhost:52195/web-development-with-go/
Run #1...done.
Run #2...done.
Run #3...done.
Running Lighthouse 3 time(s) on http://localhost:52195/custom-html5-video/
Run #1...done.
Run #2...done.
Run #3...done.
Done running Lighthouse!

Uploading median LHR of http://localhost:52195/web-development-with-go/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403407045-45763.report.html
Uploading median LHR of http://localhost:52195/custom-html5-video/...success!
Open the report at https://storage.googleapis.com/lighthouse-infrastructure.appspot.com/reports/1606403400243-5952.report.html
Saving URL map for GitHub repository ayoisaiah/freshman...success!
No GitHub token set, skipping GitHub status check.

Done running autorun.

The GitHub token message can be ignored for now. We’ll configure one when it’s time to set up Lighthouse CI with a GitHub Action. You can open the Lighthouse report link in your browser to view the median test results for each URL.

Configuring assertions

Using the Lighthouse CI tool to run and collect Lighthouse reports works well enough, but we can go a step further and configure the tool so that a build fails if the test results do not meet certain criteria. The options that control this behavior can be configured through the assert property. Here’s a snippet showing a sample configuration:

// lighthouserc.js
module.exports = {
  ci: {
    assert: {
      preset: 'lighthouse:no-pwa',
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
      },
    },
  },
};

The preset option is a quick way to configure Lighthouse assertions. There are three options:

  • lighthouse:all: Asserts that every audit received a perfect score
  • lighthouse:recommended: Asserts that every audit outside performance received a perfect score, and warns when metric values drop below a score of 90
  • lighthouse:no-pwa: The same as lighthouse:recommended but without any of the PWA audits

You can use the assertions object to override or extend the presets, or build a custom set of assertions from scratch. The above configuration asserts a baseline score of 90 for the performance and accessibility categories. The difference is that a failure in performance will result in a non-zero exit code, while a failure in accessibility will not. The result of any audit in Lighthouse can be asserted, so there’s a lot you can do here. Be sure to consult the documentation to discover all of the available options.
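For instance, individual audits can be asserted alongside the category scores. This snippet is an illustrative sketch based on the assertion options in the Lighthouse CI documentation; it warns when first-contentful-paint is slower than two seconds and fails the build when images are not optimized:

// lighthouserc.js (assert section only)
module.exports = {
  ci: {
    assert: {
      assertions: {
        // numeric metric value, in milliseconds
        'first-contentful-paint': ['warn', { maxNumericValue: 2000 }],
        // binary audit: require a perfect score to pass
        'uses-optimized-images': ['error', { minScore: 1 }],
      },
    },
  },
};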

You can also configure assertions against a budget.json file. This can be created manually or generated through performancebudget.io. Once you have your file, feed it to the assert object as shown below:

// lighthouserc.js
module.exports = {
  ci: {
    collect: {
      staticDistDir: './public',
      url: ['/'],
    },
    assert: {
      budgetFile: './budget.json',
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
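For reference, a budget.json file is an array of budgets keyed by URL path pattern, with resource sizes in kilobytes and timings in milliseconds. Here’s a small hand-written sketch (performancebudget.io will generate something similar for you):

[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 },
      { "resourceType": "total", "budget": 1000 }
    ],
    "timings": [
      { "metric": "interactive", "budget": 5000 }
    ]
  }
]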

Running Lighthouse CI with GitHub Actions

A useful way to integrate Lighthouse CI into your development workflow is to generate new reports for each commit or pull request to the project’s GitHub repository. This is where GitHub Actions come into play.

To set it up, you need to create a .github/workflows directory at the root of your project. This is where all the workflows for your project will be placed. If you’re new to GitHub Actions, you can think of a workflow as a set of one or more actions to be executed once an event is triggered (such as when a new pull request is made to the repo). Sarah Drasner has a nice primer on using GitHub Actions.

mkdir -p .github/workflows

Next, create a YAML file in the .github/workflows directory. You can name it anything you want as long as it ends with the .yml or .yaml extension. This file is where the workflow configuration for the Lighthouse CI will be placed.

cd .github/workflows
touch lighthouse-ci.yaml

The contents of the lighthouse-ci.yaml file will vary depending on the type of project. I‘ll describe how I set it up for my Hugo website so you can adapt it for other types of projects. Here’s my configuration file in full:

# .github/workflows/lighthouse-ci.yaml
name: Lighthouse
on: [push]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }}
          submodules: recursive

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: "0.76.5"
          extended: true

      - name: Build site
        run: hugo

      - name: Use Node.js 15.x
        uses: actions/setup-node@v2
        with:
          node-version: 15.x

      - name: Run the Lighthouse CI
        run: |
          npm install -g @lhci/cli@0.6.x
          lhci autorun

The above configuration creates a workflow called Lighthouse consisting of a single job (ci) which runs on an Ubuntu instance and is triggered whenever code is pushed to any branch in the repository. The job consists of the following steps:

  • Check out the repository that Lighthouse CI will be run against. Hugo uses submodules for its themes, so it’s necessary to ensure all submodules in the repo are checked out as well. If any submodule is in a private repo, you need to create a new Personal Access Token with the repo scope enabled, then add it as a repository secret at https://github.com/<username>/<repo>/settings/secrets. Without this token, this step will fail if it encounters a private repo.
  • Install Hugo on the GitHub Action virtual machine so that it can be used to build the site. This Hugo Setup Action is what I used here. You can find other setup actions in the GitHub Actions marketplace.
  • Build the site to a public folder through the hugo command.
  • Install and configure Node.js on the virtual machine through the setup-node action.
  • Install the Lighthouse CI tool and execute the lhci autorun command.

Once you’ve set up the config file, you can commit and push the changes to your GitHub repository. This will trigger the workflow you just added provided your configuration was set up correctly. Go to the Actions tab in the project repository to see the status of the workflow under your most recent commit.

If you click through and expand the ci job, you will see the logs for each of the steps in the job. In my case, everything ran successfully but my assertions failed — hence the failure status. Just as we saw when we ran the test locally, the results are uploaded to the temporary public storage and you can view them by clicking the appropriate link in the logs.

Setting up GitHub status checks

At the moment, the Lighthouse CI has been configured to run as soon as code is pushed to the repo, whether directly to a branch or through a pull request. The status of the test is displayed on the commit page, but you have to click through and expand the logs to see the full details, including the links to the report.

You can set up a GitHub status check so that build reports are displayed directly in the pull request. To set it up, go to the Lighthouse CI GitHub App page, click the “Configure” option, then install and authorize it on your GitHub account or the organization that owns the GitHub repository you want to use. Next, copy the app token provided on the confirmation page and add it to your repository secrets with the name field set to LHCI_GITHUB_APP_TOKEN.

The status check is now ready to use. You can try it out by opening a new pull request or pushing a commit to an already existing pull request.

Historical reporting and comparisons through the Lighthouse CI Server

Using the temporary public storage option to store Lighthouse reports is a great way to get started, but it is insufficient if you want to keep your data private or for a longer duration. This is where the Lighthouse CI server can help. It provides a dashboard for exploring historical Lighthouse data and offers a great comparison UI to uncover differences between builds.

To utilize the Lighthouse CI server, you need to deploy it to your own infrastructure. Detailed instructions and recipes for deploying to Heroku and Docker can be found on GitHub.

Conclusion

When setting up your configuration, it is a good idea to include a few different URLs to ensure good test coverage. For a typical blog, you definitely want to include the homepage, a post or two that is representative of the type of content on the site, and any other important pages.

Although we didn’t cover the full extent of what the Lighthouse CI tool can do, I hope this article not only helps you get up and running with it, but gives you a good idea of what else it can do. Thanks for reading, and happy coding!



MDN on GitHub

Looks like all the content of MDN is on GitHub now. That’s pretty rad. That’s been the public plan for a while. Chris Mills:

We will be using GitHub’s contribution tools and features, essentially moving MDN from a Wiki model to a pull request (PR) model. This is so much better for contribution, allowing for intelligent linting, mass edits, and inclusion of MDN docs in whatever workflows you want to add it to (you can edit MDN source files directly in your favorite code editor).

Looks like that transition is happening basically today, and it’s a whole new back-end and front-end architecture.

Say you wanted to update the article for :focus-within. There will be a button on that page that takes you to the file in the repo (rather than the wiki editing page), and you can edit it from the GitHub UI (or however you like to do Git, but that seems like right-on-GitHub will be where the bulk of editing happens). Saving the changed document will automatically become a Pull Request, and presumably, there is a team in place to approve those.

We think that your changes should be live on the site in 48 hours as a worst case scenario.

Big claps from me here, I think this is a smart move. I can’t speak to the tech, but the content model is smart. I’d maybe like to see the content in Markdown with less specialized classes and such, but I suspect that kind of thing can evolve over time and this is already a behemoth of an update to ship all at once.

In August 2020, the entire MDN (writers) team was laid off, so it looks like the play here is to open up the creation and editing of these technical docs to the world of developers. Will it work? It super didn’t work for the “Web Platform Docs” (remember those?). But MDN has way more existing content, mindshare, and momentum. I suspect it will work great for the maintenance of existing docs, decently for new docs on whatever is hot ‘n’ fresh, but less so for anything old and “boring”.

Seems a little risky to fire all the writers before you find out if it works, so that speaks to the product direction. Things are changing and paying a content creation team directly just ain’t a part of whatever the new direction is.



Optimize Images with a GitHub Action

I was playing with GitHub Actions the other day. Such a nice tool! Short story: you can have it run code for you, like run your build processes, tests, and deployments. But it’s just configuration files that can run whatever you need. There is a whole marketplace of Actions wanting to do work for you.

What I wanted to do was run code to do image optimization. That way I never have to think about it. Any image in the repo has been optimized.

There is an action for this already, Calibre’s image-actions, which we’ll leverage here. You’ll also need to ensure Actions is enabled for the repo. I know in my main organization we only flip on Actions on a per-repo basis, which is one of the options.

Then you make a file at .github/workflows/optimize-images.yml. That’s where you can configure this action. All your actions can have separate files, if you want them to. I made this a separate file because (1) it only works with “pushes to pull requests,” so if you have other actions that run on different triggers, they won’t mix nicely, and (2) that’s what is in their docs and looks like the suggested usage.

name: Optimize images
on: pull_request
jobs:
  build:
    name: calibreapp/image-actions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@master

      - name: Compress Images
        uses: calibreapp/image-actions@master
        with:
          githubToken: ${{ secrets.GITHUB_TOKEN }}

Now if you make a pull request, you’ll see it run:

That successful run then leaves a comment on the pull request saying what it was able to optimize:

It will literally re-commit those files back to the pull request as well, so if you’re going to stay on the pull request and keep working, you’ll need to pull those changes down before you can push again.

I can look at that automatic commit and see the difference:

The commit preview in Git Tower.

Now I can merge the PR knowing all is well:

Pretty cool. Is optimizing your images locally particularly hard? No. Is never having to think about it again better? Yeah. You’re taking on a smidge of technical debt here, but reducing it elsewhere, which is a very fair trade, at least in my book.



The GitHub Profile Trick

Monica Powell shared a really cool trick the other day:

The profile README is created by creating a new repository that’s the same name as your username. For example, my GitHub username is m0nica so I created a new repository with the name m0nica.

Now the README.md from that repo is essentially the homepage of her profile. Above the usual list of popular repos, you can see the rendered version of that README on her profile:

Lemme do a lame version for myself real quick just to try it out…

OK, I start like this:

Screenshot of the default profile page for Chris Coyier.

Then I’ll go to repo.new (hey, CodePen has one of those cool domains too!) and make a repo on my personal account that is exactly the same as my username:

Screenshot showing the create new repo screen on GitHub. The repository name is set to chriscoyier.

I chose to initialize the repo with a README file and nothing else. So immediately I get:

Screenshot of the code section of the chriscoyier repo, which only contains a read me file that says hi there.

I can edit this directly on the web, and if I do, I see more helpful stuff:

Screenshot of editing the read me file directly in GitHub.

Fortunately, my personal website has a Markdown bio ready to use!

Screenshot of Chris Coyier's personal website homepage. It has a dark background and a large picture of Chris wearing a red CodePen hat next to some text welcoming people to the site.

I’ll copy and paste that over.

Screenshot showing the Markdown code from the personal website in the GitHub editor.

After committing that change, my own profile shows it!

Screenshot of the updated GitHub profile page, showing the welcome text from the personal website homepage.

Maybe I’ll get around to doing something more fun with it someday. Monica’s post has a bunch of fun examples in it. My favorite is Kaya Thomas’ profile, which I saw Jina Anne share:

You can’t use CSS in there (because GitHub strips it out), so I love the ingenuity of using old school <img align="right"> to pull off the floating image look.
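The markup behind that look is plain HTML dropped right into the readme, along these lines (the src here is a placeholder):

<img align="right" width="180" src="images/profile-illustration.png" alt="Illustration" />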

