
From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo

I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.

As the code was reformatted time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”

Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.

Stage 1: Single repo

The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.

It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.

Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.

I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.

A single repository hosting all our code.

Issues with the single repo

While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.

So it was time to start coding PHP the right way.

Stage 2: Multirepo

Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.

Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we cannot host multiple PHP packages, each with its own composer.json, in the same repo.
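
For illustration, a minimal composer.json of the kind Packagist expects at a package’s root might look like this (the package name and namespace here are hypothetical):

{
  "name": "getpop/example-package",
  "description": "An example PoP package",
  "license": "GPL-2.0-or-later",
  "require": {
    "php": ">=7.1"
  },
  "autoload": {
    "psr-4": {
      "PoP\\Example\\": "src"
    }
  }
}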

As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.

In the multirepo, each package is hosted on its own repo.

Issues with the multirepo

Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.

The reason the project was split into so many packages is that I also decoupled the code from WordPress (so that it could also be used with other CMSs), which requires every package to be very granular, dealing with a single goal.

Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.

I ended up not using feature branches at all and simply pointed every package to the dev-master version of its dependencies (i.e. I was not versioning packages). I wouldn’t be surprised to learn that this is common practice.
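
For context, “pointing to dev-master” looks something like this in a package’s composer.json (the package names come from the project, but treat the snippet as a simplified sketch):

{
  "require": {
    "getpop/root": "dev-master",
    "getpop/component-model": "dev-master"
  }
}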

There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos, and running git commit -m "some message" on the project executes that same git commit command on every repo, keeping them in sync with each other.
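
From what I recall, meta is driven by a .meta file at the project root that maps local folders to their repos; roughly something like this (the exact format is an assumption on my part, so check meta’s docs):

{
  "projects": {
    "root": "git@github.com:getpop/root.git",
    "component-model": "git@github.com:getpop/component-model.git"
  }
}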

However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.

So, it was time to bring all packages to the same repo.

Stage 3: Monorepo

The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.

The monorepo hosts multiple packages.

As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (at one repo per package) to publish them to Packagist for distribution and consumption.

The monorepo hosts the source code, multiple repos distribute it.

Switching to the Monorepo

Switching to the monorepo approach involved the following steps:

First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.

Showing the GitHub repo for a project called PoP. The screen is in dark mode, so the background is near black and the text is off-white, except for blue links.
The monorepo layers indicate the broader project.

Then, I copied all source code from all repos (getpop/engine, getpop/component-model, etc.) to the corresponding location for that package in the monorepo (i.e. layers/Engine/packages/engine, layers/Engine/packages/component-model, etc).

I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.

Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.

Showing the GitHub repo for the component-model project. The screen is in dark mode, so the background is near black and the text is off-white, except for blue links. There is a sidebar to the right of the screen that is next to the list of files in the repo. The sidebar has an About heading with a description that reads: “Read only, component model for PoP, over which the component-based architecture is based.” This is highlighted in red.
The downstream repo’s “READ ONLY” is located in the repo description.

I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:

{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}

…which returned a list like this:

{
  "data": {
    "repositoryOwner": {
      "repositories": {
        "nodes": [
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=",
            "name": "hooks",
            "description": "Contracts to implement hooks (filters and actions) for PoP"
          },
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=",
            "name": "root",
            "description": "Declaration of dependencies shared by all PoP components"
          },
          {
            "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=",
            "name": "engine",
            "description": "Engine for PoP"
          }
        ]
      }
    }
  }
}

From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk="
      description: "[READ ONLY] Engine for PoP"
    }
  ) {
    repository {
      description
    }
  }
}

Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.
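
For reference, splitting a single package by hand could look roughly like this (hypothetical package path and target repo):

git subtree split --prefix=layers/Engine/packages/engine --branch engine-only
git push git@github.com:getpop/engine.git engine-only:master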

I chose Monorepo builder, which is written in PHP. I like this tool because I can customize it with my own functionality. Other popular tools are the Git Subtree Splitter (written in Go) and Git Subsplit (bash script).

What I like about the Monorepo

I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.

The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.

To generate a WordPress plugin for distribution, I had created a generate_plugins.yml workflow that triggers when creating a release. With the monorepo, I have adapted it to generate not just one, but multiple plugins, configured via PHP through a custom command in plugin-config-entries-json, and invoked like this in GitHub Actions:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

This way, I can generate my GraphQL API plugin and other plugins hosted in the monorepo all at once. The configuration defined via PHP is this one.

class PluginDataSource {
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API for WordPress
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
        'zip_file' => 'graphql-api.zip',
        'main_file' => 'graphql-api.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'graphql-api-for-wp-dist',
      ],
      // GraphQL API - Extension Demo
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
        'zip_file' => 'graphql-api-extension-demo.zip',
        'main_file' => 'graphql-api-extension-demo.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'extension-demo-dist',
      ],
    ];
  }
}

When creating a release, the plugins are generated via GitHub Actions.

Dark mode screen in GitHub showing the actions for the project.
This figure shows plugins generated when a release is created.

If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.

Issues with the Monorepo

I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.

That said, there are downsides to the monorepo:

  • We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries using MIT and the plugin using GPLv2. So, I decided to simplify it using the lowest common denominator between them, which is GPLv2.
  • There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors that were attracted to a specific project can easily get confused.
  • When tagging the code, all packages are assigned that tag and released with it, whether their particular code was updated or not. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
  • The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
Showing the list of reported issues for the project in GitHub in dark mode. The image shows just how crowded and messy the screen looks when there are a bunch of issues from different projects in the same list without a way to differentiate them.
The issues board can become chaotic without labels that are associated with projects.

All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.

I’m planning to create a “PRO” version of my plugin, which I plan to host in a private repo. However, a repo’s code is either entirely public or entirely private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the ability to configure it via PHP. And I want to keep it DRY, avoiding copy/pastes.

It was time to switch to the multi-monorepo.

Stage 4: Multi-monorepo

The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:

A giant red folder illustration is labeled as the downstream monorepo and it contains a smaller green folder showing the upstream monorepo.
The upstream monorepo is contained within the downstream monorepo.

This approach satisfies my requirements by:

  • having the public repo leoloso/PoP be the upstream monorepo, and
  • creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
The same illustration as before, but now the large folder is bright pink and labeled with the private project’s name, and the smaller folder is purplish-blue and labeled with the name of the public upstream monorepo.
A private monorepo can access the files from a public monorepo.

leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):

This figure shows how the public monorepo is embedded within the private monorepo in the GitHub project.
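
For reference, embedding an upstream monorepo like this is typically a one-time step, and fresh clones need the submodule pulled in too (same paths as above):

git submodule add https://github.com/leoloso/PoP submodules/PoP
git clone --recurse-submodules https://github.com/leoloso/GraphQLAPI-PRO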

Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.

In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:

class PluginDataSource extends UpstreamPluginDataSource {
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API PRO
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
        'zip_file' => 'graphql-api-pro.zip',
        'main_file' => 'graphql-api-pro.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-pro-dist',
      ],
      // GraphQL API Extensions
      // Google Translate
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
        'zip_file' => 'graphql-api-google-translate.zip',
        'main_file' => 'graphql-api-google-translate.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-google-translate-dist',
      ],
      // Events Manager
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
        'zip_file' => 'graphql-api-events-manager.zip',
        'main_file' => 'graphql-api-events-manager.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-events-manager-dist',
      ],
    ];
  }
}

GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.

To copy the workflows over, a simple Composer script can do:

{
  "scripts": {
    "copy-workflows": [
      "php -r \"copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');\"",
      "php -r \"copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');\""
    ]
  }
}

Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:

composer copy-workflows

Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:

This figure shows the PRO plugins generated in GitHub Actions.

I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.

Issues with the multi-monorepo

First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.

When doing git checkout, the downstream monorepo must pass the --recurse-submodules option so as to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

As a result, we have to pass submodules: recursive in the downstream workflow, but not in the upstream one, even though they both use the same source file.

To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP receive the value for submodules via an environment variable, CHECKOUT_SUBMODULES, like this:

env:
  CHECKOUT_SUBMODULES: ""

jobs:
  provide_data:
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: ${{ env.CHECKOUT_SUBMODULES }}

The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:

env:
  CHECKOUT_SUBMODULES: "recursive"

(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)
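
For instance, the replacement in the copy step could look something like this (GNU sed syntax; macOS sed would need -i ''):

sed -i 's/CHECKOUT_SUBMODULES: ""/CHECKOUT_SUBMODULES: "recursive"/' .github/workflows/generate_plugins.yml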

This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. Contributors would also be completely unaware of those leakages (and they couldn’t be blamed for them). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full setup. While I share access to the public repo, I am the only one accessing the private one.

As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.

I guess I need to add a warning!

env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""

Wrapping up

My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:

  • It started as a single repo, hosting a monolithic app.
  • It became a multirepo when splitting the app into packages.
  • It was switched to a monorepo to better manage all the packages.
  • It was upgraded to a multi-monorepo to share files with a private monorepo.

Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.

Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all private plugins together. If I ever need to grant contractors access to a specific private plugin, while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.

I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.



Learnings From a WebPageTest Session on CSS-Tricks

I got together with Tim Kadlec from over at WebPageTest the other day to do a bit of performance testing on CSS-Tricks. Essentially, we used the tool, poked around, and identified performance pain points to work on. You can watch the video right here on the site, or over on their Twitch channel, which is worth a subscribe for more performance investigations like these.

Web performance work is twofold:

Step 1) Measure Things & Explore Problems
Step 2) Fix it

Tim and I, through the amazing tool that is WebPageTest, did a lot of Step 1. I took notes as we poked around. We found a number of problem areas, some fairly big! Of course, after all that, I couldn’t get them out of my head, so I had to spring into action and do the Step 2 stuff as soon as I could, and I’m happy to report I’ve done most of it and seen improvement. Let’s dig in!

Identified Problem #1) Poor LCP

Largest Contentful Paint (LCP) is one of the Core Web Vitals (CWV), which everyone is carefully watching right now with Google telling us it’s an SEO factor. My LCP was clocking in at 3.993s which isn’t great.

WebPageTest clearly tells you if there are problems with your CWV.

I also learned from Tim that it’s ideal if the First Contentful Paint (FCP) contains the LCP. We could see that wasn’t happening through WebPageTest.

Things to fix:

  • Make sure the LCP area, which was ultimately a big image, is properly optimized, has a responsive srcset, and is CDN hosted. All those things were failing on that particular image despite working elsewhere.
  • The LCP image had loading="lazy" on it, which we just learned isn’t a good place for that.

Fixing technique and learnings:

  • All the proper image handling stuff was in place, but for whatever reason, none of it works for .gif files, which is what that image was the day of the testing. We probably just shouldn’t use .gif files for that area anyway.
  • Turn off lazy loading of LCP image. This is a WordPress featured image, so I essentially had to do <?php the_post_thumbnail('', array('loading' => 'eager')); ?>. If it was an inline image, I’d do <img data-no-lazy="1" ... /> which tells WordPress what it needs to know.

Identified Problem #2) First Byte to Start Render gap

Tim saw this right away as a fairly obvious problem.

In the waterfall above (here’s a super detailed article on reading waterfalls from Matt Hobbs), you can see the HTML arrives in about 0.5 seconds, but the start of rendering (what people see, big green line), doesn’t start until about 2.9 seconds. That’s too dang long.

The chart also identifies the problem in a yellow line. I was linking out to a third-party CSS file, which then redirects to my own CSS files that contain custom fonts. That redirect costs time, and as we dug into, not just first-page-load time, but every single page load, even cached page loads.

Things to fix:

  • Eliminate the CSS file redirect.
  • Self-host fonts.

Fixing technique and learnings:

  • I’ve been eying up some new fonts anyway. I noted not long ago that I really love Mass-Driver’s licensing innovation (priced by # of employees), but I equally love MD Primer, so I bought that. For body type, I stuck with a comfortable serif with Blanco, which mercifully came with very nicely optimized RIBBI¹ versions. Next time I swear I’m gonna find a variable font, but hey, you gotta follow your heart sometimes. I purchased these, and am now self-hosting the font-files.
  • Use @font-face right in my own CSS, with no redirects (see the sketch below). Also using font-display: swap, but I gotta work a bit more on that loading technique. Can’t wait for size-adjust.
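
As a rough sketch of what the self-hosted setup looks like (the file names and paths here are made up, not the actual CSS-Tricks code):

@font-face {
  font-family: "MD Primer";
  src: url("/fonts/md-primer-bold.woff2") format("woff2");
  font-weight: 700;
  font-display: swap;
}

@font-face {
  font-family: "Blanco";
  src: url("/fonts/blanco-regular.woff2") format("woff2");
  font-weight: 400;
  font-display: swap;
}

body {
  font-family: "Blanco", Georgia, serif;
}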

After re-testing with the change in place, you can see on a big article page the start render is a full 2 seconds faster on a 4G connection:

That’s a biiiiiig change. Especially as it affects cached page loads too.
See how the waterfall pulls back to the left without the CSS redirect.

Identified Problem #3) CLS on the Grid Guide is Bad

Tim had a neat trick up his sleeve for measuring Cumulative Layout Shift (CLS) on pages. You can instruct WebPageTest to scroll down the page for you. This is important for something like CLS, because layout shifting might happen on account of scrolling.

See this article about CLS and WebPageTest.

The trick is using an advanced setting to inject custom JavaScript into the page during the test:

At this point, we were testing not the homepage, but purposefully a very important page: our Complete Guide to Grid. With this in place, you can see the CWV are in much worse shape:

I don’t know what to think exactly about the LCP. That’s being triggered by what happens to be the largest image pretty far down the page.

I’m not terribly worried about the LCP with the scrolling in place. That’s just some image like any other on the page, lazily loaded.

The CLS is more concerning, to me, because any shifting layout is always obnoxious to users. See all these dotted orange lines? That is CLS happening:

The orange CLS lines correlate with images loading (as the page scrolls down and the lazy loaded images come in).

Things to fix:

  • CLS is bad because of lazy loaded images coming in and shifting the layout.

Fixing technique and learnings:

  • I don’t know! All those images are inline <img loading="lazy" ...> elements. I get that lazy loading could cause CLS, but these images have proper width and height attributes, which is supposed to reserve the exact space necessary for the image (even when fluid, thanks to aspect ratio) even before it loads. So… what gives? Is it because they are SVG?
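
For reference, the pattern being described is the standard one below; the browser derives an aspect ratio from the width and height attributes, so the space should be reserved before the image loads (simplified example):

<img src="diagram.svg" width="800" height="400" loading="lazy" alt="">

img {
  max-width: 100%;
  height: auto; /* keeps the 800×400 aspect ratio while the image loads */
}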

If anyone does know, feel free to hit me up. Such is the nature of performance work, I find. It’s a mixture of easy wins from silly mistakes, little battles you can fight and win, bigger battles that sometimes involves outside influences that are harder to win, and mysterious unknowns that it takes time to heal. Fortunately we have tools like WebPageTest to tell us the real stories happening on our site and give us the insight we need to fight these performance battles.


  1. RIBBI, I just learned, means Regular, Italic, Bold, and Bold Italic. The classic combo that most body copy on the web needs.


Migrating from Parcel to Snowpack

I find build tooling endlessly interesting, especially right now as we’re in a juicy next-gen transitional period with players like Vite, wmr, Snowpack, and esbuild. Hugh Haworth has a good run-down of the new players, and we’ve chatted on ShopTalk about them several times. I especially like it when people blog their personal journeys in moving build tools, like Ben Frain has done.

Switching build tools isn’t like buying a new car, where the new one is faster but they both still have steering wheels, doors, brake pedals, and stuff. These tools do share a similarity in that they are trying to provide DX locally and UX (via performance) on production; otherwise their approach, what they provide, and what they expect are all quite different.

Those differences mean re-training your brain in how you expect things to work. Here’s Ben on Snowpack:

In Snowpack land, your index.html file needs to reference the transformed version of the files – even though they don’t exist on your file system.

Wait, what?

Let me say that again as it’s jolly important. You link to files that don’t exist.

That’s just weird, right?

But Ben was switching from Parcel to Snowpack, and Parcel was weird too. In the Gulp days, we were super explicit about what files we were selecting, running tasks on, and where the transformed code went. In webpack, there are very explicit entry and output destinations, and it’s very focused on JavaScript being the input. But Parcel really wanted an HTML file to be the entry point and it would explore itself from there.

I always thought Parcel would have struck a nerve with the WordPress crowd harder, since you could point it at the template file where you link up your assets in a WordPress template and have it do its thing. I guess WordPress is too funky with all the wp_enqueue_style stuff and it just didn’t work?

Ben gives Snowpack a tentative thumbs up.

If you are starting a greenfield project I’d have zero qualms opting for Snowpack. It doesn’t have the depth of support documentation or stack overflow questions if you find yourself in the weeds, but generally speaking, it is solid enough to pick up and run with.

Me, next time I get a chance to play with build tools, I think my bar is going to be incredible speed. I’ve spent too much time on projects in my life where the developer experience is slow. I want smokin’ fast updates for whatever I’m working on.




Going From Solid to Knockout Text on Scroll

Here’s a fun CSS trick to show your friends: a large title that switches from a solid color to knockout text as the background image behind it scrolls into place. And we can do it using plain ol’ HTML and CSS!

This effect is created by rendering two containers with fixed <h1> elements. The first container has a white background with knockout text. The second container has a background image with white text. Then, using some fancy clipping tricks, we hide the first container’s text when the user scrolls beyond its boundaries and vice-versa. This creates the illusion that the text background is changing.

Before we begin, please note that this won’t work on older versions of Internet Explorer. Also, fixed background images can be cumbersome on mobile WebKit browsers. Be sure to think about fallback behavior for these circumstances.

Setting up the HTML

Let’s start by creating our general HTML structure. Inside an outer wrapper, we create two identical containers, each with an <h1> element that is wrapped in a .title_wrapper.

<header>

  <!-- First container -->
  <div class="container container_solid">
    <div class="title_wrapper">
      <h1>The Great Outdoors</h1>
    </div>
  </div>

  <!-- Second container -->
  <div class="container container_image">
    <div class="title_wrapper">
      <h1>The Great Outdoors</h1>
    </div>
  </div>

</header>

Notice that each container has both a global .container class and its own identifier class — .container_solid and .container_image, respectively. That way, we can create common base styles and also target each container separately with CSS.

Initial styles

Now, let’s add some CSS to our containers. We want each container to be the full height of the screen. The first container needs a solid white background, which we can do on its .container_solid class. We also want to add a fixed background image to the second container, which we can do on its .container_image class.

.container {
  height: 100vh;
}

/* First container */
.container_solid {
  background: white;
}

/* Second container */
.container_image {
  /* Grab a free image from unsplash */
  background-image: url(/path/to/img.jpg);
  background-size: 100vw auto;
  background-position: center;
  background-attachment: fixed;
}

Next, we can style the <h1> elements a bit. The text inside .container_image can simply be white. However, to get knockout text for the <h1> element inside .container_solid, we need to apply a background image, then reach for the text-fill-color and background-clip CSS properties to apply the background to the text itself rather than the boundaries of the <h1> element. Notice that the <h1> background has the same sizing as that of our .container_image element. That’s important to make sure things line up.

.container_solid .title_wrapper h1 {
  /* The text background */
  background: url(https://images.unsplash.com/photo-1575058752200-a9d6c0f41945?ixlib=rb-1.2.1&q=85&fm=jpg&crop=entropy&cs=srgb&ixid=eyJhcHBfaWQiOjE0NTg5fQ);
  background-size: 100vw auto;
  background-position: center;

  /* Clip the text, if possible */
  /* Including the -webkit prefix for better browser support */
  /* https://caniuse.com/text-stroke */
  -webkit-text-fill-color: transparent;
  text-fill-color: transparent;
  -webkit-background-clip: text;
  background-clip: text;

  /* Fallback text color */
  color: black;
}

.container_image .title_wrapper h1 {
  color: white;
}

Now, we want the text fixed to the center of the layout. We’ll add fixed positioning to our global .title_wrapper class and tack it to the vertical center of the window. Then we use text-align to horizontally center our <h1> elements.

.title_wrapper {
  display: block;
  position: fixed;
  margin: auto;
  width: 100%;
  /* Center the text wrapper vertically */
  top: 50%;
  -webkit-transform: translateY(-50%);
      -ms-transform: translateY(-50%);
          transform: translateY(-50%);
}

.title_wrapper h1 {
  text-align: center;
}

At this point, the <h1> in each container should be positioned directly on top of one another and stay fixed to the center of the window as the user scrolls. Here’s the full, organized, code with some shadow added to better see the text positioning.

Clipping the text and containers

This is where things start to get really interesting. We only want a container’s <h1> to be visible when its current scroll position is within the boundaries of its parent container. Normally this can be solved using overflow: hidden; on the parent container. However, with both of our <h1> elements using fixed positioning, they are now positioned relative to the browser window, rather than the parent element. In this case using overflow: hidden; will have no effect.

For the parent containers to hide fixed overflow content, we can use the CSS clip property with absolute positioning. This tells our browser to hide any content outside of an element’s boundaries. Let’s replace the styles for our .container class to make sure they don’t display any overflowing elements, even if those elements use fixed positioning.

.container {
  /* Hide fixed overflow contents */
  clip: rect(0, auto, auto, 0);

  /* Does not work if overflow = visible */
  overflow: hidden;

  /* Only works with absolute positioning */
  position: absolute;

  /* Make sure containers are full-width and height */
  height: 100vh;
  left: 0;
  width: 100%;
}

Now that our containers use absolute positioning, they are removed from the normal flow of content. And, because of that, we need to manually position them relative to their respective parent element.

.container_solid {
  /* ... */

  /* Position this container at the top of its parent element */
  top: 0;
}

.container_image {
  /* ... */

  /* Position the second container below the first container */
  top: 100vh;
}

At this point, the effect should be taking shape. You can see that scrolling creates an illusion where the knockout text appears to change backgrounds. Really, it is just our clipping mask revealing a different <h1> element depending on which parent container overlaps the center of the screen.

Let’s make Safari happy

If you are using Safari, you may have noticed that its render engine is not refreshing the view properly when scrolling. Add the following code to the .container class to force it to refresh correctly.

.container {
  /* ... */

  /* Safari hack */
  -webkit-mask-image: -webkit-linear-gradient(top, #ffffff 0%, #ffffff 100%);
}

Here’s the complete code up to this point.

Time to clean house

Let’s make sure our HTML is following accessibility best practices. Users not using assistive tech can’t tell that there are two identical <h1> elements in our document, but those using a screen reader sure will because both headings are announced. Let’s add aria-hidden to our second container to let screen readers know it is purely decorative.

<!-- Second container -->
<div class="container container_image" aria-hidden="true">
  <div class="title_wrapper">
    <h1>The Great Outdoors</h1>
  </div>
</div>

Now, the world is our oyster when it comes to styling. We are free to modify the fonts and font sizes to make the text just how we want. We could even take this further by adding a parallax effect or replacing the background image with a video. But, hey, at that point, just be sure to put a little additional work into the accessibility so those who prefer less motion get the right experience.
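
For instance, a small media query can tone things down for people who prefer reduced motion. Here is a minimal sketch, building on the classes above (the exact fallback behavior is up to you):

@media (prefers-reduced-motion: reduce) {
  .container_image {
    /* A fixed background behaves like a parallax layer, so unfix it */
    background-attachment: scroll;
  }
}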

That wasn’t so hard, was it?



Dynamically Switching From One HTML Element to Another in Vue

A friend once contacted me asking if I had a way to dynamically change one HTML element into another within Vue’s template block. For instance, shifting a <div> element to a <span> element based on some criteria. The trick was to do this without relying on a series of v-if and v-else code.

I didn’t think much of it because I couldn’t see a strong reason to do such a thing; it just doesn’t come up that often. Later that same day, though, he reached out again and told me he learned how to change element types. He excitedly pointed out that Vue has a built-in component that can be used as a dynamic element in the very way that he needed.

This small feature can keep code in the template nice and tidy. It can reduce v-if and v-else glut down to a smaller amount of code that’s easier to understand and maintain. This allows us to use methods or computed methods to create nicely-coded, and yet more elaborate, conditions in the script block. That’s where such things belong: in the script, not the template block.

I had the idea for this article mainly because we use this feature in several places in the design system where I work. Granted it’s not a huge feature and it is barely mentioned in the documentation, at least as far as I can tell. Yet it has potential to help render specific HTML elements in components.

Vue’s built-in <component> element

There are several features available in Vue that allow for easy dynamic changes to the view. One such feature, the built-in <component> element, allows components to be dynamic and switched on demand. In both the Vue 2 and the Vue 3 documentation, there is a small note about using this element with HTML elements; that is the part we shall now explore.

The idea is to leverage this aspect of the <component> element to swap out common HTML elements that are somewhat similar in nature; yet with different functionality, semantics, or visuals. The following basic examples will show the potential of this element to help with keeping Vue components neat and tidy.

Buttons and links are often used interchangeably, but there are big differences in their functionality, semantics, and even visuals. Generally speaking, a button (<button>) is intended for an internal action in the current view tied to JavaScript code. A link (<a>), on the other hand, is intended to point to another resource, either on the host server or an external resource; most often web pages. Single page applications tend to rely more on the button than the link, but there is a need for both.

Links are often styled as buttons visually, much like Bootstrap’s .btn class that creates a button-like appearance. With that in mind, we can easily create a component that switches between the two elements based on a single prop. The component will be a button by default, but if an href prop is applied, it will render as a link.

Here is the <component> in the template:

<component
  :is="element"
  :href="href"
  class="my-button"
>
  <slot />
</component>

The bound is attribute points to a computed method named element, and the bound href attribute uses the aptly named href prop. This takes advantage of Vue’s normal behavior: a bound attribute does not appear in the rendered HTML element if the prop has no value. The slot provides the inner content regardless of whether the final element is a button or a link.

The computed method is simple in nature:

element () {
  return this.href ? 'a' : 'button';
}

If there’s an href prop, then an <a> element is applied; otherwise we get a <button>.

<my-button>this is a button</my-button>
<my-button href="https://www.css-tricks.com">this is a link</my-button>

The HTML renders as so:

<button class="my-button">this is a button</button>
<a href="https://www.css-tricks.com" class="my-button">this is a link</a>

In this case, there could be an expectation that these two are similar visually, but for semantic and accessibility needs, they are clearly different. That said, there’s no reason the two outputted elements have to be styled the same. You could either qualify the class selector with the element (button.my-button and a.my-button) in the style block, or create a dynamic class that changes based on the element.
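
A dynamic class could be bound off the same computed; here’s a hypothetical variation on the earlier template (not part of the original component):

<component
  :is="element"
  :href="href"
  class="my-button"
  :class="`my-button--${element}`"
>
  <slot />
</component>

That renders either my-button--button or my-button--a alongside the base class, giving each element its own styling hook.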

The overall goal is to simplify things by allowing one component to potentially render as two different HTML elements as needed — without v-if or v-else!

Ordered or unordered list?

Similar to the button example above, we can have a component that outputs different list elements. Since an unordered list and an ordered list make use of the same list item (<li>) elements as children, that’s easy enough; we just swap <ul> and <ol>. Even if we wanted to have an option for a description list, <dl>, this is easily accomplished since the content is just a slot that can accept <li> elements or <dt>/<dd> combinations.

The template code is much the same as the button example:

<component
  :is="element"
  class="my-list"
>
  <slot>No list items!</slot>
</component>

Note the default content inside the slot element, I’ll get to that in a moment.

There is a prop for the type of list to be used that defaults to <ul>:

props: {
  listType: {
    type: String,
    default: 'ul'
  }
}

Again, there is a computed method named element:

element () {
  if (this.$slots.default) {
    return this.listType;
  } else {
    return 'div';
  }
}

In this case, we are testing if the default slot exists, meaning it has content to render. If it does, then the list type passed through the listType prop is used. Otherwise, the element becomes a <div>, which would show the “No list items!” message inside the slot element. This way, if there are no list items, then the HTML won’t render as a list with one item that says there are no items. That last aspect is up to you, though it is nice to consider the semantics of a list with no apparent valid items. Another thing to consider is the potential confusion of accessibility tools suggesting this is a list with one item that just states there are no items.

Just like the button example above, you could also style each list differently. This could be based on selectors that target the element with the class name, ul.my-list. Another option is to dynamically change the class name based on the chosen element.

This example follows a BEM-like class naming structure:

<component
  :is="element"
  class="my-list"
  :class="`my-list__${element}`"
>
  <slot>No list items!</slot>
</component>

Usage is as simple as the previous button example:

<my-list>
  <li>list item 1</li>
</my-list>

<my-list list-type="ol">
  <li>list item 1</li>
</my-list>

<my-list list-type="dl">
  <dt>Item 1</dt>
  <dd>This is item one.</dd>
</my-list>

<my-list></my-list>

Each instance renders the specified list element. The last one, though, results in a <div> stating no list items because, well, there’s no list to show!

One might wonder why we’d create a component that switches between the different list types when it could just be simple HTML. While there are benefits to keeping lists contained in a component for styling reasons and maintainability, there are other reasons to consider. Take, for instance, functionality tied to the different list types: maybe sorting a <ul> switches it to an <ol> to show the sorting order, and then switches back when done.

Now we’re controlling the elements

Even though these two examples essentially change the root element of a component, the same idea can be applied deeper inside a component. For instance, a title might need to change from an <h2> to an <h3> based on some criteria.
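
A sketch of that idea, with a hypothetical level prop and computed (in the same spirit as the earlier examples):

<component :is="headingElement" class="my-title">
  <slot />
</component>

props: {
  level: {
    type: Number,
    default: 2
  }
},
computed: {
  headingElement () {
    // h2 by default, h3 when level is 3, and so on
    return `h${this.level}`;
  }
}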

If you find yourself having to use ternary solutions to control things beyond a few attributes, I would suggest sticking with the v-if. Having to write more code to handle attributes, classes, and properties just complicates the code more than the v-if. In those cases, the v-if makes for simpler code in the long run and simpler code is easier to read and maintain.

When creating a component and there’s a simple v-if to switch between elements, consider this small aspect of a major Vue feature.

Expanding the idea, a flexible card system

Consider all that we’ve covered so far and put it to use in a flexible card component. This example of a card component allows for three different types of cards to be placed in specific parts of the layout of an article:

  • Hero card: This is expected to be used at the top of the page and draw more attention than other cards.
  • Call to action card: This is used as a line of user actions before or within the article.
  • Info card: This is intended for pull quotes.

Consider each of these as following a design system and the component controls the HTML for semantics and styling.

In the example above, you can see the hero card at the top, a line of call-to-action cards next, and then — scrolling down a bit — you’ll see the info card off to the right side.

Here is the template code for the card component:

<component :is="elements('root')" :class="'custom-card custom-card__' + type" @click="rootClickHandler">
  <header class="custom-card__header" :style="bg">
    <component :is="elements('header')" class="custom-card__header-content">
      <slot name="header"></slot>
    </component>
  </header>
  <div class="custom-card__content">
    <slot name="content"></slot>
  </div>
  <footer class="custom-card__footer">
    <component :is="elements('footer')" class="custom-card__footer-content" @click="footerClickHandler">
      <slot name="footer"></slot>
    </component>
  </footer>
</component>

There are three of the “component” elements in the card. Each represents a specific element inside the card, but will be changed based on what kind of card it is. Each component calls the elements() method with a parameter identifying which section of the card is making the call.

The elements() method is:

elements(which) {
  const tags = {
    hero: { root: 'section', header: 'h1', footer: 'date' },
    cta: { root: 'section', header: 'h2', footer: 'div' },
    info: { root: 'aside', header: 'h3', footer: 'small' }
  }
  return tags[this.type][which];
}

There are probably several ways of handling this, but you’ll have to go in the direction that works with your component’s requirements. In this case, there is an object that keeps track of HTML element tags for each section in each card type. Then the method returns the needed HTML element based on the current card type and the parameter passed in.

For the styles, I inserted a class on the root element of the card based on the type of card it is. That makes it easy enough to create the CSS for each type of card based on the requirements. You could also create the CSS based on the HTML elements themselves, but I tend to prefer classes. Future changes to the card component could change the HTML structure, and those are less likely to require changes to the logic creating the class.

The card also supports a background image on the header for the hero card. This is done with a simple computed property, bg, placed on the header element. Here’s that computed:

bg() {
  return this.background ? `background-image: url(${this.background})` : null;
}

If an image URL is provided in the background prop, then the computed returns a string for an inline style that applies the image as a background image. A rather simple solution that could easily be made more robust. For instance, it could support custom colors, gradients, or default colors in case no image is provided. There’s a large number of possibilities that this example doesn’t approach, because each card type could potentially have its own optional props for developers to leverage.

Here’s the hero card from this demo:

<custom-card type="hero" background="https://picsum.photos/id/237/800/200">
  <template v-slot:header>Article Title</template>
  <template v-slot:content>Lorem ipsum...</template>
  <template v-slot:footer>January 1, 2011</template>
</custom-card>

You’ll see that each section of the card has its own slot for content. And, to keep things simple, text is the only thing expected in the slots. The card component handles the needed HTML element based solely on the card type. Having the component expect only text makes using it rather simple. It removes the need for decisions about HTML structure, and in turn the card is simple to implement.

For comparison, here are the other two types being used in the demo:

<custom-card type="cta">
  <template v-slot:header>CTA Title One</template>
  <template v-slot:content>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</template>
  <template v-slot:footer>footer</template>
</custom-card>

<custom-card type="info">
  <template v-slot:header>Here's a Quote</template>
  <template v-slot:content>“Maecenas ... quis.”</template>
  <template v-slot:footer>who said it</template>
</custom-card>

Again, notice that each slot only expects text as each card type generates its own HTML elements as defined by the elements() method. If it’s deemed in the future that a different HTML element should be used, it’s a simple matter of updating the component. Building in features for accessibility is another potential future update. Even interaction features can be expanded, based on card types.

The power is in the component that’s in the component

The oddly named <component> element in Vue components was intended for one thing but, as often happens, it has a small side effect that makes it rather useful in other ways. The <component> element was intended to dynamically switch Vue components inside another component on demand. A basic idea of this could be a tab system to switch between components acting as pages, which is actually demonstrated in the Vue documentation. Yet it supports doing the same thing with HTML elements.

This is an example of a new technique shared by a friend that has become a surprisingly useful tool in the belt of Vue features that I’ve used. I hope that this article carries forward the ideas and information about this small feature for you to explore how you might leverage it in your own Vue projects.



Re-Creating the Porky Pig Animation from Looney Tunes in CSS

You know, Porky Pig coming out of those red rings announcing the end of a Looney Tunes cartoon. We’ll get there, but first we need to cover some CSS concepts.

Everything in CSS is a box, or rectangle. Rectangles stack, and can be displayed on top of, or below, other rectangles. Rectangles can contain other rectangles and you can style them such that the inner rectangle is visible outside the outer rectangle (so they overflow) or that they’re clipped by the outer rectangle (using overflow: hidden). So far, so good.

What if you want a rectangle to be visible outside its surrounding rectangle, but only on one side. That’s not possible, right?

The first rectangle contains an inner element that overflows both the top and bottom edges with the text "Possible" below it. The second rectangle clips the inner element on both sides, with "Also possible" below it. The third rectangle clips the inner element on the bottom, but shows it overflowing at the top, with the text "...Not possible" below it.

Perhaps, when you look at the image above, the wheels start turning: What if I copy the inner rectangle and clip half of it and then position it exactly? But when it comes down to it, you can’t choose to have an element overflow at the top but clip at the bottom.

Or can you?

3D transforms

Using 3D transforms you can rotate, transform, and translate elements in 3D space. Here’s a group of practical examples I gathered showcasing some possibilities.

For 3D transforms to do their thing, you need two CSS properties:

  • perspective, using a value in pixels, to determine how pronounced the 3D effect is
  • transform-style: preserve-3d, to tell the browser to keep elements positioned in 3D space.
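
A minimal pairing of the two properties looks like this (the class name is hypothetical):

.scene {
  perspective: 1000px;
  transform-style: preserve-3d;
}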

Even with the good support that 3D transforms have, you don’t see 3D transforms ‘in the wild’ all that much, sadly. Websites are still a “2D” thing, a flat page that scrolls. But as I started playing around with 3D transforms and scouting examples, I found one that was by far the most interesting as far as 3D transforms go:

Three planes floating above each other in 3D space

The image clearly shows three planes, but this effect is achieved using a single <div>. The two other planes are the ::before and ::after pseudo-elements that are moved up and down respectively, using translate(), to stack on top of each other in 3D space. What is noticeable here is how the ::after element, which normally would be positioned on top of the element, is behind that element. The creator was able to achieve this by adding transform: translateZ(-1px);.
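
Here is a rough reconstruction of that single-div, three-plane idea; it is my own sketch of the technique following the same pattern as the code later in this article, not the original demo’s code:

.plane {
  position: relative;
  width: 200px;
  height: 200px;
  background: steelblue;
  transform: perspective(1000px) rotateX(60deg);
  transform-style: preserve-3d;
}

.plane::before,
.plane::after {
  content: "";
  position: absolute;
  inset: 0;
  background: lightblue;
}

.plane::before {
  /* floats above the div in 3D space */
  transform: translateZ(40px);
}

.plane::after {
  /* sits behind the div in 3D space */
  transform: translateZ(-40px);
}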

Even though this was one of many 3D transforms I had seen at this point, it was the first one that made me realize that I was actually positioning elements in 3D space. And if I can do that, I can also make elements intersect:

Two planes intersecting each other in 3D space

I couldn’t think of how this sort of thing would be useful, but then I saw the Porky Pig cartoon animation. He emerges from behind the bottom frame, but his face overlaps and stacks on top of the top edge of the same frame — the exact same sort of clipping situation we saw earlier. That’s when my wheels started turning. Could I replicate that effect using just CSS? And for extra credit, could I replicate it using a single <div>?

I started playing around and relatively quickly had this to show for it:

An orange rectangle that intersects through a blue frame. At the top of the image it's above the frame and at the bottom of the image it's below the blue frame.

Here we have a single <div> with its ::before and an ::after pseudo-elements. The div itself is transparent, the ::before has a blue border and the ::after has been rotated along the x-axis. Because the div has perspective, everything is positioned in 3D and, because of that, the ::after pseudo-element is above the border at the top edge of the frame and behind the border at the bottom edge of the frame.

Here’s that in code:

div {
  transform: perspective(3000px);
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
}

div::before {
  content: "";
  width: 100%;
  height: 100%;
  border: 10px solid darkblue;
}

div::after {
  content: "";
  position: absolute;
  background: orangered;
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -25%;
  transform: rotateX(-10deg);
}

With perspective, we can determine how far a viewer is from “z=0” which we can consider to be the “horizon” of our CSS 3D space. The larger the perspective, the less pronounced the 3D effect, and vice versa. For most 3D scenes, a perspective value between 500 and 1,000 pixels works best, though you can play around with it to get the exact effect you want. You can compare this with perspective drawing: If you draw two horizon points close together, you get a very strong perspective; but if they’re far apart, then things appear flatter.

From rectangles to cartoons

Rectangles are fun, but what I really wanted to build was something like this:

A film cell of Porky Pig coming out of a circle with the text "That's all folks."

I couldn’t find or create a nicely cut-out version of Porky Pig from that image, but the Wikipedia page contains a nice alternative, so we’ll use that.

First, we need to split the image up into three parts:

  • <div>: the blue background behind Porky
  • ::after: all the red circles that form a sort of tunnel
  • ::before: Porky Pig himself in all his glory, set as a background image

We’ll start with the <div>. That will be the background as well as the base for the rest of the elements. It’ll also contain the perspective and transform-style properties I called out earlier, along with some sizes and the background color:

div {
  transform: perspective(3000px);
  transform-style: preserve-3d;
  position: relative;
  width: 200px;
  height: 200px;
  background: #4992AD;
}

Alright, next up, we’ll move on to the red circles. The element itself has to be transparent because that’s the opening where Porky emerges. So how do we go about it? We could use a border like in the example earlier in this article, but an element only has one border and it can only be a solid color. We need a bunch of circles that can take on a gradient. We can use box-shadow instead, chaining multiple shadows in the property value. This gets us all of the circles we need, and by using a blur radius of 0 with a large spread radius, we can create the appearance of multiple “borders.”

box-shadow: <x-offset> <y-offset> <blur-radius> <spread-radius> <color>;

We’ll use a border-radius that’s as large as the <div> itself, making the ::after a circle. Then we’ll add the shadows. When we add a few red circles with a large spread, plus some blurry white ones, we get an effect that looks very similar to Porky’s tunnel.

box-shadow: 0 0 20px   0px #fff, 0 0 0  30px #CF331F,
            0 0 20px  30px #fff, 0 0 0  60px #CF331F,
            0 0 20px  60px #fff, 0 0 0  90px #CF331F,
            0 0 20px  90px #fff, 0 0 0 120px #CF331F,
            0 0 20px 120px #fff, 0 0 0 150px #CF331F;

Here, we’re adding five circles, where each is 30px wide. Each circle has a solid red background. And, by using white shadows with a blur radius of 20px on top of that, we create the gradient effect.

The background and circles in pure CSS without Porky

With the background and the circles sorted, we’re now going to add Porky. Let’s start by adding him at the spot where we want him to end up, above the circles for now.

div::before {
  position: absolute;
  content: "";
  width: 80%;
  height: 150%;
  display: block;
  left: 10%;
  bottom: -12%;
  background: url("Porky_Pig.svg") no-repeat center/contain;
}

You might have noticed that slash in “center/contain” for the background. That’s the syntax to set both the position (center) and size (contain) in the background shorthand CSS property. The slash syntax is also used in the font shorthand CSS property where it’s used to set the font-size and line-height like so: <font-size>/<line-height>.

The slash syntax will be used more in future versions of CSS. For example, the updated rgb() and hsl() color syntax can take a slash followed by a number to indicate the opacity, like so: rgb(0 0 0 / 0.5). That way, there’s no need to switch between rgb() and rgba(). This already works in all browsers, except Internet Explorer 11.

Porky Pig positioned above the circles

Both the size and positioning here are a little arbitrary, so play around with them as you see fit. We’re a lot closer to what we want, but we still need the bottom portion of Porky to sit behind the red circles while his top half remains visible.

The trick

We need to transpose both the circles as well as Porky in 3D space. If we want to rotate Porky, there are a few requirements we need to meet:

  • He should not clip through the background.
  • We should not rotate him so far that the image distorts.
  • His lower body should be below the red circles and his upper body should be above them.

To make sure Porky doesn’t clip through the background, we first move the circles in the Z direction to make them appear closer to the viewer. Because preserve-3d is applied, they also appear to zoom in a bit, but if we only move them a smidge, the zoom effect isn’t noticeable and we end up with enough space between the background and the circles:

transform: translateZ(20px);

Now Porky. We’re going to rotate him around the X-axis, causing his upper body to move closer to us, and the lower part to move away. We can do this with:

transform: rotateX(-10deg);

This looks pretty bad at first. Porky is partially hidden behind the blue background, and he’s also clipping through the circles in a weird way.

Porky Pig partially clipped by the background and the circles

We can solve this by moving Porky “closer” to us (like we did with the circles) using translateZ(), but a better solution is to change the position of our rotation point. Right now it happens from the center of the image, causing the lower half of the image to rotate away from us.

If we move the starting point of the rotation toward the bottom of the image, or even a little bit below that, then the entirety of the image rotates toward us. And because we already moved the circles closer to us, everything ends up looking as it should:

transform: rotateX(-10deg);
transform-origin: center 120%;

Porky Pig emerges from the circle, with his legs behind the circles but his head above them.

To get an idea of how everything works in 3D, click “show debug” in the following Pen:

Animation

If we kept things as they are — a static image — then we wouldn’t have needed to go through all this trouble. But when we animate things, we can reveal the layering and enhance the effect.

Here’s the animation I’m going for: Porky starts out small at the bottom, behind the circles, then zooms in, emerging from the blue background over the red circles. He stays there for a bit, then moves back out again.

We’ll use transform for the animation to get the best performance. And because we’re doing that, we need to make sure we keep the rotateX in there as well.

@keyframes zoom {
  0% {
    transform: rotateX(-10deg) scale(0.66);
  }
  40% {
    transform: rotateX(-10deg) scale(1);
  }
  60% {
    transform: rotateX(-10deg) scale(1);
  }
  100% {
    transform: rotateX(-10deg) scale(0.66);
  }
}

Soon, we’ll be able to set the individual transforms directly, as browsers have started implementing them as separate CSS properties. That means repeating that rotateX(-10deg) will eventually be unnecessary; but for now, we have a little bit of duplication.
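
As a sketch of where that’s heading (assuming a browser that supports the individual rotate and scale properties), the rotation could live outside the keyframes entirely:

div::before {
  rotate: x -10deg; /* constant, so it no longer needs repeating in every keyframe */
}

@keyframes zoom {
  0%, 100% { scale: 0.66; }
  40%, 60% { scale: 1; }
}

Because the scale here is uniform, applying it before or after the rotation gives the same result, so the visual effect matches the transform-based version.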

We zoom in and out using the scale() function and, because we’ve already set a transform-origin, scaling happens from the center-bottom of the image, which is precisely the effect we want! Porky starts at 66% of his full size, and between the 40% and 60% keyframes we hold him at full size: a little break at his largest point, where he fully pops out of the circle frame.

The animation goes on the ::before pseudo-element. To make the animation look a little more natural, we’re using an ease-in-out timing function, which slows down the animation at the start and end.

div::before {
  animation-name: zoom;
  animation-duration: 4s;
  animation-iteration-count: infinite;
  animation-fill-mode: forwards;
  animation-timing-function: ease-in-out;
}

What about reduced motion?

Glad you asked! For people who are sensitive to animations and prefer reduced or no motion, we can reach for the prefers-reduced-motion media query. Instead of removing the full animation, we’ll target those who prefer reduced motion and use a more subtle fade effect rather than the full-blown animation.

@media (prefers-reduced-motion: reduce) {
  @keyframes zoom {
    0% {
      opacity: 0;
    }
    100% {
      opacity: 1;
    }
  }

  div::before {
    animation-iteration-count: 1;
  }
}

By overwriting the @keyframes inside a media query, the browser will automatically pick it up. This way, we still accentuate the effect of Porky emerging from the circles. And by setting animation-iteration-count to 1, we still let people see the effect, but then stop to prevent continued motion.

Finishing touches

Two more things we can do to make this a bit more fun:

  • We can create more depth in the image by adding a shadow behind Porky that grows as he emerges and appears to zoom in closer to the viewer.
  • We can turn Porky as he moves, to embellish the pop-out effect even further.

That second part we can implement using rotateZ() in the same animation. Easy breezy.

But the first part requires an additional trick. Because we use an image for Porky, we can’t use box-shadow because that creates a shadow around the box of the ::before pseudo-element instead of around the shape of Porky Pig.

That’s where filter: drop-shadow() comes to the rescue. It looks at the opaque parts of the element and adds a shadow to that instead of around the box.

@keyframes zoom {
  0% {
    transform: rotateX(-10deg) scale(0.66);
    filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0));
  }
  40% {
    transform: rotateZ(-10deg) rotateX(-10deg) scale(1);
    filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5));
  }
  60% {
    transform: rotateZ(-10deg) rotateX(-10deg) scale(1);
    filter: drop-shadow(-10px 10px 10px rgba(0,0,0,0.5));
  }
  100% {
    transform: rotateX(-10deg) scale(0.66);
    filter: drop-shadow(-5px 5px 5px rgba(0,0,0,0));
  }
}

And that’s how I re-created the Looney Tunes animation of Porky Pig. All I can say now is, “That’s all Folks!”


WordPress.com Business Plan (Business-Class WordPress Hosting + Support from WordPress Experts)

WordPress.com is where you go to use WordPress that’s completely hosted for you. You don’t have to worry about anything but building your site. There’s a free plan to get started with, and paid plans that offer more features. The Business plan is particularly interesting, and my guess is that most people don’t fully understand everything it unlocks for you, so let’s dig into that.

You get straight up SFTP access to your site.

Here’s me using Transmit to pop right into one of my sites over SFTP.

What this means is that you can do local WordPress development like you normally would, then use real deployment tools to kick your work out to production (which is your WordPress.com site). That’s what I do with Buddy. (Here’s a screencast demonstrating the workflow.)

That means real control.

I can upload and use whatever plugins I want. I can upload and use whatever themes I want. The database too — I get literal direct MySQL access.

I can even manage what PHP version the site uses. That’s not something I’d normally even need to do, but that’s just how much access there is.

A big jump in storage.

200 GB. You’ll probably never get anywhere near that limit, unless you are uploading video, and if you are, now you’ve got the space to do it.

Backups you’ll probably actually use.

You don’t have to worry about anything nasty happening on WordPress.com, like your server being hacked and losing all your data or anything. So in that sense, WordPress.com is handling your backups for you. But with the Business plan, you’ll see a backup log right in your dashboard:

That’s a backup of your theme, data, assets… everything. You can download it anytime you like.

The clutch feature? You can restore things to any point in time with the click of a button.

Powered by a global CDN

Not every site on WordPress.com is upgraded to the global CDN. Yours will be if it’s on the Business plan. That means speed, and speed is important for every reason, including SEO. And speaking of SEO tools, those are unlocked for you on the Business plan as well.

Some of the best themes unlock at the Premium/Business plan level.

You can buy them one-off, but you don’t have to if you’re on the Business plan because it opens the door for more playing around. This Aquene theme is pretty stylish with a high-end design:

It’s only $300/year.

(Or $33/month billed monthly.)

So it’s not ultra-budget hosting, but the price tag is a lot less if you consider all the things we covered here and how much they cost if you were to cobble something together yourself. And we didn’t even talk about support, which is baked right into the plan.

Hosting, backups, monitoring, performance, security, plugins, themes, and support — toss in a free year of domain registration, and that’s a lot of website for $300.

They have less expensive plans as well. But the Business plan is the level where serious control, speed, and security kick in.

Coupon code CSSTRICKS gets you 15% off the $300/year Business Plan. Valid until the end of February 2021.


Converting and Optimizing Images From the Command Line

Images account for up to 50% of the total size of an average web page. If they are not optimized, users end up downloading extra bytes, which means the site takes that much longer to load and users burn through more data. Both problems can be addressed, at least in part, by optimizing the images before they are served.

Researchers around the world are busy developing new image formats that possess high visual quality despite being smaller in size compared to other formats like PNG or JPG. Although these new formats are still in development and generally have limited browser support, one of them, WebP, is gaining a lot of attention. And while they aren’t really in the same class as raster images, SVGs are another format many of us have been using in recent years because of their inherently light weight.

There are tons of ways we can make smaller and optimized images. In this tutorial, we will write bash scripts that create and optimize images in different image formats, targeting the most common formats, including JPG, PNG, WebP, and SVG. The idea is to optimize images before we serve them so that users get the most visually awesome experience without all the byte bloat.

Our targeted directory of images

Our directory of optimized images

This GitHub repo has all the images we’re using and you’re welcome to grab them and follow along.

Set up

Before we start, let’s get all of our dependencies in order. Again, we’re writing Bash scripts, so we’ll be spending time in the command line.

Here are the commands for all of the dependencies we need to start optimizing images:

sudo apt-get update
sudo apt-get install imagemagick webp jpegoptim optipng
npm install -g svgexport svgo

It’s a good idea to know what we’re working with before we start using these tools.
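
One quick way to do that is to ask each tool for its version, which also confirms the commands resolve on your PATH (the exact flags below assume reasonably recent releases of each package):

convert --version            # ImageMagick, used for the JPG/PNG conversions
cwebp -version               # encodes images to WebP
dwebp -version               # decodes WebP back to PNG
jpegoptim --version          # optimizes JPG files
optipng -v                   # optimizes PNG files; -v prints version and build info
npm list -g svgexport svgo   # svgexport renders SVGs to bitmaps, svgo optimizes SVGs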

OK, we have our images in the original-images directory from the GitHub repo. You can follow along at commit 3584f9b.

Note: It is strongly recommended to backup your images before proceeding. We’re about to run programs that alter these images, and while we plan to leave the originals alone, one wrong command might change them in some irreversible way. So back anything up that you plan to use on a real project to prevent cursing yourself later.
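
For example, a simple copy of the whole directory is enough of a safety net (the directory name here matches the one we’re using below; adjust it to your own setup):

# Keep an untouched copy of the source images before running any of the scripts
cp -r original-images original-images-backup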

Organize images

OK, we’re technically set up. But before we jump into optimizing all the things, we should organize our files a bit. Let’s organize them by splitting them up into different sub-directories based on their MIME type. In fact, we can write a new Bash script to do that for us!

The following code creates a script called organize-images.sh:

#!/bin/bash

input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

for img in $( find $input_dir -type f -iname "*" ); do
  # get the type of the image
  img_type=$(basename `file --mime-type -b $img`)

  # create a directory for the image type
  mkdir -p $img_type

  # move the image into its type directory
  rsync -a $img $img_type
done

This might look confusing if you’re new to writing scripts, but what it’s doing is actually pretty simple. We give the script an input directory where it looks for images. The script then goes into that directory, finds the image files, and identifies each one’s MIME type. Finally, it creates a sub-directory for each MIME type and drops a copy of each image into its respective sub-directory.

Let’s run it!

bash organize-images.sh original-images

Sweet. The directory looks like this now. Now that our images are organized, we can move onto creating variants of each image. We’ll tackle one image type at a time.

Convert to PNG

We will convert three types of images into PNG in this tutorial: WebP, JPEG, and SVG. Let’s start by writing a script called webp2png.sh, which pretty much says what it does: convert WebP files to PNG files.

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  dwebp $img -o ${img%.*}.png
done

Here’s what’s happening:

  • input_dir="$1": Stores the command line input to the script.
  • if [[ -z "$input_dir" ]]; then: Runs the subsequent conditional if the input directory is not defined.
  • for img in $( find $input_dir -type f -iname "*.webp" );: Loops through each file in the directory that has a .webp extension.
  • dwebp $img -o ${img%.*}.png: Converts the WebP image into a PNG variant.

And away we go:

bash webp2png.sh webp

We now have our PNG images in the webp directory. Next up, let’s convert JPG/JPEG files to PNG with another script called jpg2png.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  convert $img ${img%.*}.png
done

This uses the convert command provided by the ImageMagick package we installed. Like the last script, we provide an input directory that contains JPEG/JPG images, and the script creates a PNG variant for each matching image. If you look closely, you’ll see that we added -o -iname "*.jpeg" to the find command. The -o is a logical OR, so find matches files that have either a .jpg or a .jpeg extension.
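
One subtlety worth knowing about find: -o binds less tightly than the implied AND, so as written -type f only applies to the *.jpg test. In practice that rarely matters here, but if you want the file test to apply to both extensions, the expression can be grouped explicitly:

# Group the extension tests so that -type f applies to both of them
find $input_dir -type f \( -iname "*.jpg" -o -iname "*.jpeg" \)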

Here’s how we run it:

bash jpg2png.sh jpeg

Now that we have our PNG variants from JPG, we can do the exact same thing for SVG files as well:

#!/bin/bash

# directory containing images
input_dir="$1"

# png image width
width="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.png $width:
done

This script has a new feature. Since SVG is a scalable format, we can specify the width directive to scale our SVGs up or down. We use the svgexport package we installed earlier to convert each SVG file into a PNG:

bash svg2png.sh svg+xml

Commit 76ff80a shows the result in the repo.

We’ve done a lot of great work here by creating a bunch of PNG files based on other image formats. We still need to do the same thing for the rest of the image formats before we get to the real task of optimizing them.

Convert to JPG

Following in the footsteps of PNG image creation, we will convert WebP, JPEG, and SVG into JPG. Let’s start by writing a script called png2jpg.sh that converts PNG to JPG:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  convert $img -quality $quality% ${img%.*}.jpg
done

You might be noticing a pattern in these scripts by now. But this one introduces a new option: a -quality directive we can use when converting PNG images to JPG images. The rest is the same.

And here’s how we run it:

bash png2jpg.sh png 90

Woah. We now have JPG images in our png directory. Let’s do the same with a webp2jpg.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each webp in the input directory
for img in $( find $input_dir -type f -iname "*.webp" ); do
  # convert to png first
  dwebp $img -o ${img%.*}.png

  # then convert png to jpg
  convert ${img%.*}.png -quality $quality% ${img%.*}.jpg
done

Again, this is the same thing we wrote for converting WebP to PNG. However, there is a twist. We cannot convert WebP format directly into a JPG format. Hence, we need to get a little creative here and convert WebP to PNG using dwebp and then convert PNG to JPG using convert. That is why, in the for loop, we have two different steps.

Now, let’s run it:

bash webp2jpg.sh jpeg 90

Voilà! We have created JPG variants for our WebP images. Now let’s tackle SVG to JPG:

#!/bin/bash

# directory containing images
input_dir="$1"

# jpg image width
width="$2"

# jpg image quality
quality="$3"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$width" ]]; then
  echo "Please specify image width."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgexport $img ${img%.*}.jpg $width: $quality%
done

You might be thinking that you have seen this script before. You have! We used nearly the same script to create PNG images from SVG. The only addition is that we can now specify the quality of the resulting JPG images.

bash svg2jpg.sh svg+xml 512 90

Everything we just did is contained in commit 884c6cf in the repo.

Convert to WebP

WebP is an image format designed for modern browsers. At the time of this writing, it enjoys roughly 90% global browser support, including partial support in Safari. WebP’s biggest advantage is that it produces much smaller file sizes than other image formats without sacrificing visual quality. That makes it a good format to serve to users.

But enough talk. Let’s write a png2webp.sh that — you guessed it — creates WebP images out of PNG files:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  cwebp $img -q $quality -o ${img%.*}.webp
done

This is just the reverse of the script we used to create PNG images from WebP files. Instead of using dwebp, we use cwebp.

bash png2webp.sh png 90

We have our WebP images. Now let’s convert JPG images. The tricky thing is that there is no way to directly convert JPG files into WebP. So, we will first convert JPG to PNG and then convert the intermediate PNG to WebP in our jpg2webp.sh script:

#!/bin/bash

# directory containing images
input_dir="$1"

# webp image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  # convert to png first
  convert $img ${img%.*}.png

  # then convert png to webp
  cwebp ${img%.*}.png -q $quality -o ${img%.*}.webp
done

Now we can use it like this to get our WebP variations of JPG files:

bash jpg2webp.sh jpeg 90

Commit 6625f26 shows the result.

Combining everything into a single directory

Now that we are done converting stuff, we’re one step closer to optimizing it. But first, we’re going to bring all of our images back into a single directory so that it’s easier to optimize them with fewer commands.

Here’s code that creates a new bash script called combine-images.sh:

#!/bin/bash

input_dirs="$1"
output_dir="$2"

if [[ -z "$input_dirs" ]]; then
  echo "Please specify input directories."
  exit 1
elif [[ -z "$output_dir" ]]; then
  echo "Please specify an output directory."
  exit 1
fi

# create a directory to store the generated images
mkdir -p $output_dir

# split the comma-separated string of input directories into an array
input_dirs=(${input_dirs//,/ })

# for each directory in the input directories
for dir in "${input_dirs[@]}"
do
  # copy images from this directory to the generated images directory
  rsync -a $dir/* $output_dir/
done

The first argument is a comma-separated list of input directories whose images will be copied into a combined target directory. The second argument defines that combined directory.

bash combine-images.sh jpeg,svg+xml,webp,png generated-images

The final output can be seen in the repo.

Optimize SVG

Let us start by optimizing our SVG images. Add the following code to optimize-svg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each svg in the input directory
for img in $( find $input_dir -type f -iname "*.svg" ); do
  svgo $img -o ${img%.*}-optimized.svg
done

We’re using the SVGO package here. It’s got a lot of options we can use but, to keep things simple, we’re just sticking with the default behavior of optimizing SVG files:

bash optimize-svg.sh generated-images

This gives us a 4KB saving on each image. Let’s say we were serving 100 SVG icons — we just saved 400KB!

The result can be seen in the repo at commit 75045c3.

Optimize PNG

Let’s keep rolling and optimize our PNG files using this code to create an optimize-png.sh command:

#!/bin/bash

# directory containing images
input_dir="$1"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
fi

# for each png in the input directory
for img in $( find $input_dir -type f -iname "*.png" ); do
  optipng $img -out ${img%.*}-optimized.png
done

Here, we are using the OptiPNG package to optimize our PNG images. The script looks for PNG images in the input directory and creates an optimized version of each one, appending -optimized to the file name. There is one interesting argument, -o, which we can use to specify the optimization level. The default value is 2, and values range from 0 to 7. To optimize our PNGs, we run:

bash optimize-png.sh generated-images

PNG optimization depends upon the information stored in the image. Some images can be greatly optimized while others show little to no improvement.

As we can see, OptiPNG does a great job optimizing the images. We can play around with the -o argument to find a suitable value by trading off between image quality and size. Check out the results in commit 4a97f29.
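
For example, a higher level could be tried on a single file first to see whether the extra processing time is worth it (the file names here are just placeholders):

# Try a more aggressive optimization level on one image (slower, sometimes smaller output)
optipng -o5 sample.png -out sample-optimized.png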

Optimize JPG

We have reached the final part! We’re going to wrap things up by optimizing JPG images. Add the following code to optimize-jpg.sh:

#!/bin/bash

# directory containing images
input_dir="$1"

# target image quality
quality="$2"

if [[ -z "$input_dir" ]]; then
  echo "Please specify an input directory."
  exit 1
elif [[ -z "$quality" ]]; then
  echo "Please specify image quality."
  exit 1
fi

# for each jpg or jpeg in the input directory
for img in $( find $input_dir -type f -iname "*.jpg" -o -iname "*.jpeg" ); do
  cp $img ${img%.*}-optimized.jpg
  jpegoptim -m $quality ${img%.*}-optimized.jpg
done

This script uses JPEGoptim. The problem with this package is that it doesn’t have any option to specify the output file. We can only optimize the image file in place. We can overcome this by first creating a copy of the image, naming it whatever we like, then optimizing the copy. The -m argument is used to specify image quality. It is good to experiment with it a bit to find the right balance between quality and file size.

bash optimize-jpg.sh generated-images 95

The results are shown in commit 35630da.

Wrapping up

See that? With a few scripts, we can perform heavy-duty image conversions and optimizations right from the command line, and reuse them on any project since the underlying tools are installed globally. We can set up CI/CD pipelines to create different variants of each image, serve them with the appropriate HTML, expose them through APIs, or even build our own image conversion websites.

I hope you enjoyed reading and learning something from this article as much as I enjoyed writing it for you. Happy coding!


Three Ways to Distinguish a Site From the Norm

In an age where so much web design is already neat, clean, and simple, I can think of three ways to distinguish your site from the norm:

  1. Stunning visuals that cannot be created in UI vector editors, like Figma and Sketch
  2. Beautifully-animated interactions that cannot be dreamt in the language of Stacks of Rectangles
  3. Typography

The third is the most accessible, and an awesome place to differentiate your brand. Accordingly, look for a renaissance of type — a flourishing of serifs, throwbacks, quirky fonts, and genre-bending typefaces. Expect that font pairing will become an even more important skill, and picking great fonts for your brand will carry even more weight in the near future.

After all, it’s basically a design cheat code.


What’s Missing from CSS?

The survey results from the State of CSS aren’t out yet, but they made this landing page that randomly shows you what one person wrote to answer that question. Just clicking the reload button a bunch, I get the sense that the top answers are:

  • Container Queries
  • Parent Selectors
  • Nesting
  • Something extremely odd that doesn’t really make sense and makes me wonder about people


