👋 The demos in this article experiment with a non-standard bug related to CSS gradients and sub-pixel rendering. Their behavior may change at any time in the future. They’re also heavy as heck. We’re serving them async where you click to load, but still want to give you a heads-up in case your laptop fan starts spinning.
Do you remember that static noise on old TVs with no signal? Or when the signal is bad and the picture is distorted? In case the concept of a TV signal predates you, here’s a GIF that shows exactly what I mean.
Yes, we are going to do something like this using only CSS. Here is what we’re making:
Before we start digging into the code, I want to say that there are better ways to create a static noise effect than the method I am going to show you. We can use SVG, <canvas>, the filter property, etc. In fact, Jimmy Chion wrote a good article showing how to do it with SVG.
What I will be doing here is kind of a CSS experiment to explore some tricks leveraging a bug with gradients. You can use it on your side projects for fun but using SVG is cleaner and more suitable for a real project. Plus, the effect behaves differently across browsers, so if you’re checking these out, it’s best to view them in Chrome, Edge, or Firefox.
Let’s make some noise!
To make this noise effect we are going to use… gradients! No, there is no secret ingredient or new property that makes it happen. We are going to use stuff that’s already in our CSS toolbox!
The “trick” relies on the fact that gradients are bad at anti-aliasing. You know those kinds of jagged edges we get when using hard color stops? Yes, I talk about them in most of my articles because they are a bit annoying and we always need to add or remove a few pixels to smooth things out:
As you can see, the second circle renders better than the first one because there is a tiny difference (0.5%) between the two colors in the gradient, rather than the straight-up hard color stop at a whole number used in the first circle.
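For reference, the difference between the two circles boils down to something like this (the values are approximated from the demo and the class names are mine):

.circle-jagged {
  width: 200px;
  aspect-ratio: 1;
  background: radial-gradient(red 50%, transparent 50%); /* hard stop: jagged edge */
}

.circle-smooth {
  width: 200px;
  aspect-ratio: 1;
  background: radial-gradient(red 49.5%, transparent 50%); /* tiny 0.5% gap: smoother edge */
}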
Here’s another look, this time using a conic-gradient where the result is more obvious:
An interesting idea struck me while I was making these demos. Instead of fixing the distortion all the time, why not try to do the opposite? I had no idea what would happen, but it was a fun surprise! I took the conic gradient values and started to decrease them to make the poor anti-aliasing results look even worse.
Do you see how bad the last one is? It’s kind of scrambled in the middle and nothing is smooth. Let’s make it full-screen with smaller values:
I suppose you see where this is going. We get a strange distorted visual when we use very small decimal values for the hard color stops in a gradient. Our noise is born!
We are still far from the grainy noise we want because we can still see the actual conic gradient. But we can decrease the values to very, very small ones — like 0.0001% — and suddenly there’s no more gradient but pure graininess:
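In code, the idea looks something like this. The 0.0001% figure comes straight from the description above; using a repeating conic gradient so the tiny hard stops tile the whole box is my reading of how the demo works, so treat it as an approximation rather than the exact demo code:

.noise {
  width: 300px;
  aspect-ratio: 1;
  /* slices this thin can't be rendered cleanly, so the box degrades into grainy static */
  background: repeating-conic-gradient(#000 0 0.0001%, #fff 0 0.0002%);
}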
Tada! We have a noise effect and all it takes is one CSS gradient. I bet if I were to show this to you before explaining it, you’d never realize you’re looking at a gradient. You have to look very carefully at the center of the gradient to see it.
We can increase the randomness by making the size of the gradient very big while adjusting its position:
The gradient is applied to a fixed 3000px square and placed at the 60% 60% coordinates. We can hardly notice its center in this case. The same can be done with a radial gradient as well:
And to make things even more random (and closer to a real noise effect) we can combine both gradients and use background-blend-mode to smooth things out:
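Roughly speaking, the combined version ends up in this neighborhood. The 3000px size, the 60% 60% position, and the idea of blending a conic and a radial layer come from the description above; the repeating-* variants and the difference blend mode are my assumptions:

.noise {
  background:
    repeating-conic-gradient(#000 0 0.0001%, #fff 0 0.0002%) 60% 60% / 3000px 3000px,
    repeating-radial-gradient(#000 0 0.0001%, #fff 0 0.0002%);
  background-blend-mode: difference; /* assumption — other blend modes give other flavors of grain */
}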
Our noise effect is perfect! Even if we look closely at each example, there’s no trace of either gradient in there, but rather beautiful grainy static noise. We just turned that anti-aliasing bug into a slick feature!
Now that we have this, let’s see a few interesting examples where we might use it.
Animated no TV signal
Getting back to the demo we started with:
If you check the code, you will see that I am using a CSS animation on one of the gradients. It’s really as simple as that! All we’re doing is moving the conic gradient’s position at a lightning-fast duration (.1s) and this is what we get!
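The keyframes aren’t listed here, but based on that description they amount to something in this spirit, where only the conic layer’s position gets nudged (the keyframe name and target positions are placeholders):

.tv-noise {
  background:
    repeating-conic-gradient(#000 0 0.0001%, #fff 0 0.0002%) 60% 60% / 3000px 3000px,
    repeating-radial-gradient(#000 0 0.0001%, #fff 0 0.0002%);
  background-blend-mode: difference;
  animation: flicker .1s infinite;
}

@keyframes flicker {
  100% {
    /* move the first (conic) layer, leave the radial layer alone */
    background-position: 50% 50%, 0 0;
  }
}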
I used this same technique on a one-div CSS art challenge:
Grainy image filter
Another idea is to apply the noise to an image to get an old-time-y look. Hover each image to see them without the noise.
I am using only one gradient on a pseudo-element and blending it with the image, thanks to mix-blend-mode: overlay.
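Sketched out, that’s a single noise gradient on a pseudo-element of a wrapper around the image, blended into the photo. The gradient values are the same approximations as before, and the wrapper class name is made up:

.grainy-figure {
  position: relative; /* wrapper around the <img> */
}

.grainy-figure::after {
  content: "";
  position: absolute;
  inset: 0;
  background: repeating-conic-gradient(#000 0 0.0001%, #fff 0 0.0002%);
  mix-blend-mode: overlay; /* blend the grain into the image underneath */
}

.grainy-figure:hover::after {
  display: none; /* hover to compare against the clean image */
}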
We can get an even funnier effect if we use the CSS filter property:
And if we add a mask to the mix, we can make even more effects!
Grainy text treatment
We can apply this same effect to text, too. Again, all we need is a couple of chained gradients on a background-image and then blend the backgrounds. The only difference is that we’re also reaching for background-clip so the effect is only applied to the bounds of each character.
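A rough version of the text treatment looks like this (again, the gradient values and blend mode are approximations of the demo):

.grainy-text {
  background:
    repeating-conic-gradient(#000 0 0.0001%, #fff 0 0.0002%) 60% 60% / 3000px 3000px,
    repeating-radial-gradient(#000 0 0.0001%, #fff 0 0.0002%);
  background-blend-mode: difference;
  -webkit-background-clip: text;
  background-clip: text;
  color: transparent; /* let the noise show through the glyphs */
}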
Generative art
If you keep playing with the gradient values, you may get more surprising results than a simple noise effect. We can get some random shapes that look a lot like generative art!
Of course, we are far from real generative art, which requires a lot of work. But it’s still satisfying to see what can be achieved with something that is technically considered a bug!
I hope you enjoyed this little CSS experiment. We didn’t exactly learn something “new,” but we took a little quirk with gradients and turned it into something fun. I’ll say it again: this isn’t something I would consider using on a real project because who knows if or when this anti-aliasing behavior will be fixed. Instead, this was a very random, and pleasant, surprise when I stumbled into it. It’s also not that easy to control and it behaves inconsistently across browsers.
That said, I am curious to see what you can do with it! You can play with the values, combine different layers, use a filter or mix-blend-mode, or whatever, and you will for sure get something really cool. Share your creations in the comment section — there are no prizes, but we can get a nice collection going!
An Event Apart 2022 Denver wrapped up yesterday. And while I was unable to make it to all three days this time, I did catch yesterday’s action — and it was awesome. I’m not very social or outgoing, but this was the first conference I’ve been to in at least a couple of years, and seeing folks in person was incredibly refreshing.
I took notes, of course! I thought I’d post ’em here because sharing is caring. At least, that’s what my six-year-old told me the other day when asking for a bite of my dessert last night.
I’ll break this down by speaker. Fair warning: I’m all about handwritten notes and a pretty visual fella, so my notes tend to be less… structured than most. And these notes are just things that stood out to me. They may not even be the presenter’s main idea, but they caught my attention!
Chris has given this talk before (we linked it up just last week), but this time he expanded it substantially, particularly with details on container relative units which, when combined with clamp(), make for more accurate responsiveness because the values are relative to the container rather than the viewport. So, you know how we often use viewport width (vw) units for fluid type?
font-size: clamp(1rem, 1rem + 2vw, 2rem);
Well, we can use a container relative unit like container query inline-size (cqi) instead, where 1cqi is equal to 1% of the container’s inline size (here’s the draft spec on that):
font-size: clamp(1rem, 1rem + 1cqi, 2rem);
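For cqi to resolve, an ancestor has to be registered as a query container; a minimal setup (the class name here is just for illustration) looks like this:

.card {
  /* register this element as a container so cqi resolves against its inline size */
  container-type: inline-size;
}

.card h2 {
  font-size: clamp(1rem, 1rem + 1cqi, 2rem);
}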
Chris also talked quite a bit about the performance benefits of hosting at the edge. Probably no surprise, because he’s written about it here more than a few times. Even as someone who had already read those articles, I honestly didn’t realize the complete concept of computing at the edge.
The idea is deceptively simple: global CDNs can serve assets quickly because they host them geographically close to the user. That’s pretty standard practice for serving raster images. But it has extended to static files, such as the HTML, CSS, and JavaScript files that power a site — build them in advance and serve the already compiled and optimized files from the speedy global CDN. That’s the whole Jamstack concept!
But what if you still require a server response from it? That ain’t very edge-y, is it? Well, now we have edge handlers capable of running on a single URL, fetching data in advance, and injecting it ahead of render — directly from the CDN. Sure, there’s extra work happening in the background. Still, the fact that we can dynamically fetch data, inject it, pre-build it, serve it statically on demand, and have it run geographically closer to the user makes this blazingly fast.
Tolu Adegbite: How to Win at ARIA and Influence Web Accessibility
Good gosh was this an excellent presentation! Tolu Adegbite schooled me so hard on WAI-ARIA that I had a hard time jotting down all the gems she shared — Roles! States! Labeling! Descriptions! Everything was extraordinarily well-covered, and stuff that I know I’ll be coming back to time and again.
But one specific thing that caught my attention is the accessibility of inline SVG. Even though SVG is related to other types of design assets, the fact that it is markup at the end of the day sets it apart because it isn’t always identified as an image.
<!-- Image tag is easily recognized as an image -->
<img src="cat.svg" alt="An illustrated brown and white tabby kitten looking lovingly into the camera.">

<!-- Could be an image, maybe not? -->
<svg viewBox="0 0 100 100">
  <!-- etc. -->
</svg>
Assistive tech is more likely to read inline SVG as an image if we give it a proper accessible role and label:
<svg
  role="img"
  aria-label="An illustrated brown and white tabby kitten looking lovingly into the camera."
  viewBox="0 0 100 100"
>
  <!-- etc. -->
</svg>
Hey, another CSS-Tricks alum! Miriam has been spending a bunch of time and effort on the Cascade Layers specification. She also wrote a big ol’ guide about them here at CSS-Tricks and talked about them at An Event Apart.
What has stuck with me most is how big of a mental shift this is. The concept isn’t complicated, per se. Declare @layer at the top of the CSS document, list the layers in order of priority, then write styles in those layers. But for an old dinosaur like me who has been writing CSS for a while, I’m going to have to get used to the fact that Cascade Layers make it possible for a simple class selector to beat out something that usually wields a higher specificity, like an ID.
🤯
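Here’s a tiny illustration of that shift (the layer and selector names are hypothetical). Because the components layer is declared after base, its lowly class selector beats the ID selector in base — layer order, not selector weight, decides the winner:

/* later layers win over earlier ones, regardless of selector specificity */
@layer base, components;

@layer base {
  #hero { color: red; }
}

@layer components {
  .hero { color: blue; } /* this one wins */
}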
Miriam also reminded the room that Cascade Layers are just one tool we have in our specificity-managing toolbelt, in addition to selectors that affect specificity (e.g., :is(), :where(), and :has()).
Oh, and this is an interesting tidbit. As Miriam walked through the history of specificity in CSS, she recalled that !important was initially designed as a tool for users to override user agent and author styles. But somewhere down the line, we’ve adopted it to force author styles to the top. Cascade Layers help remove the need to reach for !important because they provide us the power to “prioritize layers and protect inheritance.”
That is beautifully said, Miriam!
Dave Rupert: Unblocking Your Accessibility Backlog
Imagine waking up one day to hundreds of GitHub notifications about reported issues on your site. Where do you even begin? Maybe close your laptop and get a root canal instead? That happened to Dave! An automated accessibility audit returned a massive pile of errors and assigned them as tickets for Dave to fix.
But he noticed a pattern after taking an Excel spreadsheet of those issues, moving them to Notion for a better view, hiding unnecessary columns, categorizing everything, and displaying the results in logical groups. Many of the reported issues were the same issue repeated on multiple pages. Just because an automated test returns a handful of errors doesn’t mean they’re all unique. That reduced a nice chunk of the tickets.
He went on to show how — with relatively little effort — the backlog of issues dwindled by nearly 50%.
There’s a lot to glean there, especially regarding how we process and organize our work. The biggest takeaway for me is when Dave said we have to emphasize individuals and interactions over processes and tools. Tools like the one scanning for accessibility errors are helpful, but they might not tell the entire story. Rather than take them at their word, it’s worth asking questions and gaining more context before diving into the mess.
As a bonus, reorganizing the issues in Notion allowed Dave to group issues in a way that clearly shows which impairments his product was actively discriminating against, giving him greater empathy for those misses and how to prioritize them.
One more virtual session by Hui Jing Chen capped the day, but admittedly, I missed about half of it because I was having a hallway conversation. The conversation was worth it, even though I am bummed I missed the presentation. I’ll be watching the video of it when it’s published!
The GOV.UK team recently published “How and why we removed jQuery from GOV.UK”. This was an insightful look at how an organization can assess its tooling and whether something is still the best tool for the job. This is not a nudge to strip libraries out of your current project right now! Many of us may still be supporting legacy projects and browser requirements that prevent this from being a viable option.
Some of the criticism appears to be that the savings from dropping the library are negligible given modern network speeds and caching.
The article also makes the case for improved maintenance. Instead of upgrading disparate, outdated versions of code and addressing security updates in a piecemeal fashion, removing the dependency shrinks that footprint. This is the dream of having the luxury to address technical debt.
What caught my eye in particular was the link to the documentation on how to remove jQuery. Understanding how to decouple code and perform migration steps is the kind of maintenance task that will keep coming up for websites, and it’s reassuring to have a guide from a team that has had to do the same.
Cloudinary is a host for your digital assets like images and video. If you don’t already know them, you should, because you can build it into the asset management you almost certainly need to do if you run any size of website. Cloudinary helps you serve the assets as efficiently as technologically possible — meaning optimization, resizing, and CDN hosting — and goes further by allowing interesting transforms on those assets.
If you already use it, unless you use it entirely through the APIs, you’ll know Cloudinary has a Media Library that gives you a UI dashboard for everything you’ve ever uploaded to Cloudinary. This is where you find your assets and open them up to play with the settings and transformations and such (e.g. blur it — then serve in the best possible format with automatic quality adjustments). You can always pop over to cloudinary.com to use this. But wouldn’t it be nice if this process was made a bit easier?
That clutch moment where you get the URL of the image you need.
There are all sorts of moments while bopping around the web doing our jobs as developers when you might need to get your fingers on an asset URL.
Gimme that URL!
Here’s a personal example: we have a little custom CMS thing for building our weekly email The CodePen Spark. It expects a URL to an image.
This is the exact kind of moment that the brand new Chrome Media Library Extension could help. Essentially, it gives you a context menu you can use right in the browser to snag a URL to an asset. Right-click and insert an asset URL.
It pops up a UI right inline (where you are on the web) of your Media Library, and you pick an image from there. Find the one you want, open it up, and you can either “edit” it to customize it to your liking, or just Insert it straight away.
Then it plops the URL right onto the site (probably an input) where you need it.
You can set up defaults to your liking, but I really like how the defaults are f_auto and q_auto which are Cloudinary classics that you’ll almost surely want. They mean “serve in the best possible format” and “optimize it intelligently”.
Say your team creates social posts on a browser tab on an automated marketing application. To locate a media asset, you must open another tab to search for the asset within the Media Library, copy the related URL, and paste it into the app. In some cases, you even have to download an asset and then upload it into the app.
Talk about a classic example of menial, mundane, and repetitive chores!
Exactly. I like the idea of having tools to optimize workflows that should be easy. I’d also call Cloudinary a bit of a technical/developer tool, and there is an aspect to this that could be set up on anyone’s machine that would allow them to pick assets from your Media Library easily, without any access control worries.
Mondrian is famous for paintings with big thick black lines forming a grid, where each cell is white, red, yellow, or blue. This aesthetic pairs well with the notoriously rectangular web, and that hasn’t gone unnoticed by CSS developers over the years. I saw some Mondrian Art in CSS going around the other day and figured I’d go looking for others I’ve seen over the years and round them up.
Many people have tried to recreate a work of art by Mondriaan with CSS. It seems like a nice and simple exercise: rectangles are easy with CSS, and now with grid, it is easy to recreate most of his works. I tried it as well, and it turned out to be a bit more complicated than I thought. And the results are, well, surprising.
I started with Mondrian not because he is my favorite artist (he is not), or because his work is very recognizable (it is), but because I thought it would be a fun (yes) and easy start (lol nope) to this project.
Say you have an element with a CSS tooltip and you’re going to position that tooltip so it opens up next to the element on hover (or, probably better, when clicked/tapped). Next to it where? Above it? What if the element is already really close to the top of the screen? In that case, it should probably open below it. Or vice versa — and the same goes for the left and right edges of the screen. You definitely want it to be visible rather than overflowing the viewport.
Sometimes when you open new UI elements, they need to be edge-aware to prevent the content inside from triggering weird scrollbars, or worse, cutting off content.
Very important what?!
This is an age-old problem on the web. I remember using jQuery UI tooltips on purpose because it had this special ability to be edge-aware. You can imagine the JavaScript behind it. You figure out where the element is going to be and use positioning math to figure out if it will be within the viewport. If it won’t be, try a different position that does fit.
As ever, everything old is new again. Check out Floating UI, designed just for this problem.
Floating UI is a low-level toolkit to position floating elements while intelligently keeping them in view. Tooltips, popovers, dropdowns, menus, and more.
It looks super well done. I like the focus, the demos are super well done, and it’s a pretty tiny dependency.
But ya know what would be even cooler? If CSS could do this all by itself. That’s the vibe with CSS Anchored Positioning — for now just an “explainer” document:
When building interactive components or applications, authors frequently want to leverage UI elements that can render in a “top-layer”. Examples of such UI elements include content pickers, teaching UI, tooltips, and menus. “Enabling Popups” introduced a new popup element to make many of these top-layer elements easier to author.
Authors frequently wish to “pin” or “anchor” such top-layer UI to a point on another element, referred to here as an “anchor element”. How the top-layer UI is positioned with respect to its anchor element is further influenced or constrained by the edges of the layout viewport.
I love it. The web platform at its best. Seeing what authors are needing to do and reaching for libraries to do, and trying to step in and do it natively (and hopefully better).
I’ve built websites that are used by millions of people all over the world. I’ve made more mistakes than I care to count and I’ve had to deal with the repercussions of those mistakes for years thereafter. Through all of this, my team and I have been trying to strike the best balance between user and developer experience. We have built custom solutions and used libraries/tools all in an effort to solve problems with our user experience and increase our own productivity. The struggle is real.
Then Remix came around and reduced that struggle for me a great deal. I rebuilt my website with Remix and was blown away. Honestly, I felt like I could create an amazing user experience and not be ashamed of the code it took to get me there. I love Remix so much that I eventually joined the team (so: disclaimer). In case you haven’t heard of Remix before, it’s a web framework for building stellar user experiences that “remixes” the web as it has worked since the 90s with the awesome technology we have today. Here are a few of the things I love most about it:
Seamless client-server code: I mean, there’s definitely a separation between what runs on the client and what runs on the server and it’s clear, but it’s so easy to move between the two within the same file that I feel like I can say “Yes” to more product feature ideas.
Progressive enhancement: Remix allows me to #useThePlatform better than anything else I’ve used. Their use of web APIs means that the better I get at Remix, the better I get at the web. And because apps I build with Remix work without JavaScript, I get real progressive enhancement for poor network conditions where the JavaScript takes a long time to load or fails to load entirely.
CSS — Bringing the cascade back: Because Remix allows me to easily control which of my CSS files is on the page at any given time, I don’t have all the problems that triggered the JavaScript community to invent workarounds like CSS-in-JS.
Nested routing: This allows Remix to optimize the data requests it makes as the user navigates around the page (which means it’s faster and cheaper for users who pay for limited internet). It also allows me to handle errors in the context of the part of the app that fails without taking down the entire page in the process.
Simple mutations: Instead of a complicated JavaScript library for managing mutations, Remix just uses the platform <form />. And Remix manages your client-cache so you don’t have to worry about invalidating your cache at all. In fact, with Remix you don’t think about this at all. It’s managed for you and you just get a nice declarative API.
Normalized platforms: We have lots of options for where we deploy our apps. Remix normalizes these (kinda like jQuery for platforms). So whether you want to deploy to serverless, Cloudflare Workers, or a regular Node app, it doesn’t matter. Just write the same code and deploy wherever you like.
There’s a lot to love about Remix, but I’ll wrap it up here.
I realize that not everyone can migrate their site over to Remix, and that’s ok. The tagline of Remix is: Build better websites (sometimes with Remix). The one thing I think I want to encourage you to do to make your website better is to learn about and from Remix and apply some of the ideas to your website. And if you can migrate to Remix, all the better. 😆
Remember, we’re all just trying to make the world a little bit better, and my hope in writing this is that I’ve given you ideas of how you can make your own corner of the world better. Good luck!
Videos appeal to humans in a way no other form of content does. A video combines motion, music, still images, text, speech, and a few other elements, all of which deliver engagement like never before.
According to research, users spend 88% more time on a website with videos, and video content receives 1200% more shares than images or text. This is corroborated by the trend we see on key social media platforms, with businesses on these platforms now preferring video content over still images.
When streaming videos on one’s website or app, hosting a video on a platform like YouTube might not be a viable option. This is especially true if you want to present a native-looking user experience and control it, or you do not want to share revenues with a third party.
In such cases, cloud providers like AWS become the go-to choice for hosting and streaming videos.
Streaming optimized videos from S3 — current methods and drawbacks
A typical setup includes hosting the videos on AWS S3, the most popular cloud object storage around, and then directly streaming them on the users’ device.
For an even better video load time, you can use AWS CloudFront CDN with your S3 bucket.
Typical video streaming setup with AWS S3 and CloudFront CDN without any video optimizations
However, given that the original videos are often massive in size and that neither S3 nor CloudFront has built-in video optimization capabilities, this is far from ideal.
As a result, video streaming is slow, bandwidth is wasted unnecessarily, and average playback time, user retention, and revenue all suffer.
To solve this, we need to optimize videos for different devices, network speeds, and our website or app layout before streaming from the S3 storage.
While you could use a solution like AWS Elemental MediaConvert for all kinds of transcoding and optimizations, such solutions often have a steep learning curve and can become very complex to manage. For example:
You need to understand how to set up these tools with your existing workflow and storage, understand job creation in these tools, and manage job queues and job completion notifications in your workflow.
To create MediaConvert jobs, you would have to understand different streaming protocols, encoding, formats, resolution, codecs, etc., for video streaming and map them to your requirements to build the right jobs.
AWS MediaConvert is not an on-the-fly video conversion tool in itself. You need to pair it with other AWS services to make it work in real time, making the setup even more complex and expensive. If you do not opt for on-the-fly video conversion, you need to configure MediaConvert’s jobs correctly from the get-go. Any new requirement will require the job to be changed and re-run on all videos to generate a new encoding or format. With an ever-changing video streaming landscape coupled with new device sizes, this can be a severe limitation if you have a lot of videos coming in regularly.
Typical setup for on-the-fly video conversion with AWS MediaConvert, involving multiple components. (Source: AWS)
With videos becoming the choice of media online, it should be easier to stream videos from your S3 storage so that you can focus on delivering a better user experience and the core business rather than understanding the intricacies of video streaming.
This is where ImageKit comes in.
ImageKit is a complete media management and delivery platform with a Video API that allows you to stream videos stored in AWS S3 and optimize and transform them in real-time with just a few minutes of setup.
That’s right. You will be able to deliver videos perfected for every device without learning about the complications of video streaming and without spending hours configuring a tool.
ImageKit + AWS S3 video streaming setup with on-the-fly video optimizations
Let’s look at how you can use ImageKit for streaming videos from your S3 bucket.
S3 video streaming with ImageKit
To be able to stream videos from our S3 bucket with ImageKit, we need to do two things:
Allow ImageKit to read the objects in the S3 bucket
Access the videos in the S3 bucket via their ImageKit URLs
Let’s look at these two steps in detail.
1. Connecting your bucket with ImageKit for access
ImageKit can pull assets from all popular storage and servers, including private and public S3 buckets. Access to a private S3 bucket is done using a pair of read-only keys that you create in your AWS account. You can read more about adding the S3 bucket origin in detail here.
If your bucket or the video objects inside it are public, you can add it as a web folder type origin. To do this, you would need to use the S3 bucket’s domain name as the base URL for this origin.
Also, ensure that you map the S3 bucket to a URL endpoint in the ImageKit dashboard.
That’s it! You have now connected your S3 bucket with ImageKit.
Bonus: ImageKit also provides an integrated Media Library, file storage built on AWS S3, with easy-to-use UI and APIs for advanced media management and search. If you do not have an S3 bucket or want a better solution to organize your video content, you can host your videos in ImageKit’s Media Library as well.
2. Video Streaming from S3 using ImageKit URLs
With your S3 bucket now attached to ImageKit, you can now access any video file in your bucket via ImageKit.
For example, say we have a video in our test bucket at the key videos/woman-walking.mp4. We can then access it through ImageKit by appending that key to our ImageKit URL endpoint.
And because ImageKit comes integrated with AWS CloudFront CDN, your videos will get delivered in milliseconds to your users, thereby improving the overall streaming experience.
Using video optimizations and transformations with S3 streaming
We have seen how to stream videos from S3 using ImageKit on our apps. However, as mentioned earlier in the article, we need to optimize videos for different devices, website layout, and other conditions.
ImageKit helps us there as well. It can optimize and scale your videos or encode them to different formats in real-time. In addition, ImageKit makes most of the necessary optimizations automatically, using the best defaults, or exposes the functionalities as easy-to-use URL parameters that work in real-time.
So, unlike the other tools like AWS MediaConvert, there is minimal learning you need to do. And you definitely won’t have to learn the nuances of video streaming and encoding.
Let’s look at the video optimizations and transformations ImageKit is capable of, apart from direct streaming of videos from S3.
1. Automatic conversion to best format while streaming
ImageKit automatically identifies the video formats supported in browsers and combines them with the original video information to encode it to the best possible output format.
The video format optimization is done in real-time when you request your S3 video via ImageKit and results in a smaller video size sent to the user.
To enable this automatic format conversion, you have to enable the corresponding setting in your ImageKit dashboard. No complex setup, no configurations needed.
For example, after enabling the above setting, an MP4 video gets delivered in WebM format in Chrome and MP4 in Safari.
Same video delivered in WebM format in Chrome browser
2. Automatic compression of videos while streaming
Modern cameras capture videos that can easily run into a few hundred megabytes, if not gigabytes. It is not ideal to deliver such large video files to your users. It would take them ages to stream, result in a poor user experience, and hurt your business.
Therefore, we need to compress the video to deliver it quickly to our users. However, we need to do so while maintaining the visual quality of the video.
That is again something ImageKit does for you automatically.
You can turn on the corresponding setting from the ImageKit dashboard and set the default compression level for your videos. By default, this value would be 50, which strikes a good balance between output size and visual quality.
Turning on automatic quality optimization in the dashboard
Because of the compression, though the following video has the same format as its original video, it is lighter by almost 15% at 12.6MB compared to the original video at 14.6MB.
Using real-time video compression for slow network users
As explained with examples later in this article, ImageKit allows you to transform videos in real-time. One of the real-time URL transformations possible in ImageKit is to modify the compression level.
This transformation allows us to override the default compression level selected in the dashboard.
The video below, for example, is compressed to a quality level of 30 and is almost 70% smaller than the video we encoded using the default quality level of 50 set in the dashboard.
This real-time compression-level transformation allows us to adapt our videos to users on slow networks.
For example, if someone is experiencing poor network conditions, you can stream a more compressed, lighter video to them.
3. Scaling S3 videos in real-time while streaming
Imagine you want to load a video on your page and have only a 200px width available for your content. The resolution of the original video will almost always be a lot higher than that. Loading anything significantly larger than the size you require is a waste of bandwidth and offers no benefit to the user.
With ImageKit, you can scale the video to any size in real-time before streaming it on the device. Just like its real-time image transformation API, you can add a width or height parameter to the URL, and you will get a video with the required dimensions in real-time.
For example, we can scale our video down to a width of 200px by adding the width transformation parameter to its URL.
4. Adapting videos to different placeholders and creating vertical videos
We often shoot videos in landscape mode, i.e., the video’s width is greater than its height. However, you would often require a portrait or vertical video where the height is greater than the video’s width.
A very common use case would be converting your landscape video to a vertical one like an Instagram story.
ImageKit makes it super simple with its real-time video transformations. You can add the width and height parameters to the video URL and get the output video in the requested size.
ImageKit streams the video in the most suitable format and at the right compression level for all the transformations above. The video is delivered via its integrated AWS CloudFront CDN for a fast load time.
For the demo video used in this article, we started with a 14.6MB original video, but after all the optimizations and scaling it down, we were able to bring down the size to 1.8MB in the last example of the vertical video.
Signup with ImageKit for S3 video streaming for free
ImageKit offers a Forever Free plan that includes video delivery as well as processing. On this plan, you would be able to optimize and transform over 15 minutes of fresh video content every month and then deliver it to thousands of users without paying a single penny or even providing your credit card. This is perfect for a small business or your side project where you can connect your S3 bucket to ImageKit and start streaming optimized videos immediately.
Sign up now for ImageKit to start streaming optimized videos from S3 for free.
Animating elements with CSS can either be quite easy or quite difficult depending on what you are trying to do. Changing the background color of a button when you hover over it? Easy. Animating the position and size of an element in a performant way that also affects the position of other elements? Tricky! That’s exactly what we’ll get into here in this article.
A common example is removing an item from a stack of items. The items stacked on top need to fall downwards to account for the space of an item removed from the bottom of the stack. That is how things behave in real life, and users may expect this kind of life-like motion on a website. When it doesn’t happen, it’s possible the user is confused or momentarily disorientated. You expect something to behave one way based on life experience and get something completely different, and users may need extra time to process the unrealistic movement.
Here is a demonstration of a UI for adding items (click the button) or removing items (click the item).
You could paper over the poor UI slightly by adding a “fade out” animation or something, but the result won’t be that great, as the list will abruptly collapse and cause those same cognitive issues.
Applying CSS-only animations to a dynamic DOM event (adding brand new elements and fully removing elements) is extremely tricky work. We’re going to face this problem head-on and go over three very different types of animations that handle this, all accomplishing the same goal of helping users understand changes to a list of items. By the time we’re done, you’ll be armed to use these animations, or build your own based on the concepts.
We will also touch upon accessibility and how elaborate HTML layouts can still retain some compatibility with accessibility devices with the help of ARIA attributes.
The Slide-Down Opacity Animation
A very modern approach (and my personal favorite) is when newly-added elements fade-and-float into position vertically depending on where they are going to end up. This also means the list needs to “open up” a spot (also animated) to make room for it. If an element is leaving the list, the spot it took up needs to contract.
Because we have so many different things going on at the same time, we need to change our DOM structure to wrap each .list-item in a container class appropriately titled .list-container. This is absolutely essential in order to get our animation to work.
Now, the styling for this is unorthodox because, in order to get our animation effect to work later on, we need to style our list in a very specific way that gets the job done at the expense of sacrificing some customary CSS practices.
First, we’re using margin-top to create vertical space between the elements in the stack. There’s no margin on the bottom so that the other list items can close the gap created when a list item is removed. If we put the margin on the bottom instead, the deleted item’s container would still leave that margin behind even after we set its height to zero, expanding the vertical gap between the remaining list items further than we want. That’s why we use margin-top — to prevent that from happening.
But we only do this if the item container in question isn’t the first one in the list. That’s why we used :not(:first-child) — it targets all of the containers except the very first one (an enabling selector). We do this because we don’t want the very first list item to be pushed down from the top edge of the list. We only want this to happen to every subsequent item because they sit directly below another list item, whereas the first one doesn’t.
Now, this is unlikely to make complete sense because we are not setting any elements to zero height at the moment. But we will later on, and in order to get the vertical spacing between the list elements correct, we need to set the margin like we do.
A note about positioning
Something else worth pointing out: the .list-item elements nested inside the parent .list-container elements are absolutely positioned, meaning they are taken out of the normal document flow and positioned in relation to their relatively-positioned .list-container elements. We do this so that the .list-item element can float upwards when removed while the other .list-item elements move to fill the gap it leaves behind. When this happens, the .list-container element — which stays in the normal flow — collapses its height, allowing the other .list-container elements to fill its place, while the absolutely-positioned .list-item floats upwards without affecting the structure of the list.
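Put together, the structural styles described so far look roughly like this (the spacing and padding values are placeholders, not the demo’s exact numbers):

.list-container {
  position: relative; /* positioning context for the absolutely-positioned item */
  /* no height of its own yet — the JavaScript below takes care of that */
}

.list-container:not(:first-child) {
  margin-top: 10px; /* space between stacked items, but don't push the first one down */
}

.list-item {
  position: absolute; /* out of normal flow, so it can float away on removal */
  top: 0;
  left: 0;
  right: 0;
  padding: 1rem; /* gives the item its visible height */
}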
Handling height
Unfortunately, we haven’t yet done enough to get a proper list where the individual list items stack one on top of the other. Instead, all we can see at the moment is a single .list-item representing all of the list items piled on top of each other in the exact same place. That’s because, although the .list-item elements have some height via their padding, their parent containers do not — they have a height of zero. Nothing in the normal flow is actually separating these elements from each other. To fix that, our .list-container elements need some height of their own because, unlike their absolutely-positioned children, they are part of the flow.
To get the height of our list containers to perfectly match the height of their child elements, we need to use JavaScript. So, we store all of our list items within a variable. Then, we create a function that is called immediately as soon as the script is loaded.
This becomes the function that handles the height of the list container elements:
const listItems = document.querySelectorAll('.list-item');

function calculateHeightOfListContainer() {
}

calculateHeightOfListContainer();
The first thing we do is extract the very first .list-item element from the list. We can do this because they are all the same size, so it doesn’t matter which one we use. Once we have access to it, we store its height, in pixels, via the element’s clientHeight property. After this, we create a new <style> element and prepend it to the document’s body so that we can directly create a CSS class that incorporates the height value we just extracted. And with this <style> element safely in the DOM, we write a new .list-container rule whose styles automatically take priority over the styles declared in the external stylesheet, since they come from an actual <style> tag that appears later in the document. That gives the .list-container elements the same height as their .list-item children.
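So if the first list item happens to measure, say, 80px tall (a made-up number), the injected <style> tag ends up holding a rule along these lines:

/* generated at runtime — 80px stands in for whatever clientHeight returned */
.list-container {
  height: 80px;
}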
Right now, our list looks a little drab — the same as what we saw in the first example, just without any of the addition or removal logic, and styled in a completely different way from the <ul> and <li> list used in that opening example.
We’re going to do something now that may seem inexplicable at the moment and modify our .list-container and .list-item classes. We’re also creating extra styling for both of these classes that only applies when a new class, .show, is added to each of them.
The reason we’re doing this is to create two states for both the .list-container and the .list-item elements. One state is without the .show class on either element, representing an element as it animates out of the list. The other state has the .show class added to both elements, representing a .list-item that is firmly instantiated and visible in the list.
In just a bit, we will switch between these two states by adding/removing the .show class from both a specific .list-item and its container. We’ll combine that with a CSS transition between these two states.
Notice that combining the .list-item class with the .show class introduces some extra styles. Specifically, it introduces the animation we’re after, where the list item fades downwards into visibility when it is added to the list — and the opposite happens when it is removed. Since the most performant way to animate an element’s position is with the transform property, that is what we use here, along with opacity to handle the visibility part. And because we already applied a transition property to both the .list-item and the .list-container elements, a transition automatically takes place whenever we add or remove the .show class on either of them, thanks to the extra properties the .show class brings.
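As a sketch of those two states for the list item (the offsets and durations are placeholders): the hidden state sits slightly above its slot at zero opacity, and .show settles it into place. The matching height transition on .list-container gets added from JavaScript, as described next.

.list-item {
  opacity: 0;
  transform: translateY(-20px); /* hidden: floated up and invisible */
  transition: opacity 0.3s, transform 0.3s;
}

.list-item.show {
  opacity: 1;
  transform: translateY(0); /* shown: dropped down into its slot */
}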
In response to the .show class, we go back to our JavaScript file and change our only function so that a .list-container element is only given a height if it also has the .show class on it. Plus, we apply the transition property to our standard .list-container elements inside a setTimeout. If we didn’t, our containers would animate on the initial page load when the script runs and the heights are applied for the first time, which isn’t something we want.
Now, if we go back and view the markup in DevTools, then we should be able to see that the list has disappeared and all that is left is the button. The list hasn’t disappeared because these elements have been removed from the DOM; it has disappeared because of the .show class which is now a required class that must be added to both the .list-item and the .list-container elements in order for us to be able to view them.
The way to get the list back is very simple. We add the .show class to all of our .list-container elements as well as the .list-item elements contained inside. And once this is done we should be able to see our pre-created list items back in their usual place.
We won’t be able to interact with anything yet though because to do that — we need to add more to our JavaScript file.
The first thing that we will do after our initial function is declare references to both the button that we click to add a new list item, and the .list element itself, which is the element that wraps around every single .list-item and its container. Then we select every single .list-container element nested inside of the parent .list element and loop through them all with the forEach method. We assign a method in this callback, removeListItem, to the onclick event handler of each .list-container. By the end of the loop, every single .list-container instantiated to the DOM on a new page load calls this same method whenever they are clicked.
Once this is done, we assign a method to the onclick event handler for addBtn so that we can activate code when we click on it. But obviously, we won’t create that code just yet. For now, we are merely logging something to the console for testing.
Starting work on the onclick event handler for addBtn, the first thing that we want to do is create two new elements: container and listItem. Both elements represent the .list-item element and their respective .list-container element, which is why we assign those exact classes to them as soon as we create the them.
Once these two elements are prepared, we use the append method on the container to insert the listItem inside of it as a child, the same as how these elements that are already in the list are formatted. With the listItem successfully appended as a child to the container, we can move the container element along with its child listItem element to the DOM with the insertBefore method. We do this because we want new items to appear at the bottom of the list but before the addBtn, which needs to stay at the very bottom of the list. So, by using the parentNode attribute of addBtn to target its parent, list, we are saying that we want to insert the element as a child of list, and the child that we are inserting (container) will be inserted before the child that is already on the DOM and that we have targeted with the second argument of the insertBefore method, addBtn.
Finally, with the .list-item and its container successfully added to the DOM, we can set the container’s onclick event handler to match the same method as every other .list-item already on the DOM by default.
If we try this out, then we won’t be able to see any changes to our list no matter how many times we click the addBtn. This isn’t an error with the click event handler. Things are working exactly how they should be. The .list-item elements (and their containers) are added to the list in the correct place, it is just that they are getting added without the .show class. As a result, they don’t have any height to them, which is why we can’t see them and is why it looks like nothing is happening to the list.
To get each newly added .list-item to animate into the list whenever we click on the addBtn, we need to apply the .show class to both the .list-item and its container, just as we had to do to view the list items already hard-coded into the DOM.
The problem is that we cannot just add the .show class to these elements instantly. If we did, the new .list-item would statically pop into existence at the bottom of the list without any animation. An element needs to have a few initial styles registered before the additional styles that override them arrive in order for it to know what transition to make. If we add the elements with the .show class already in place, the “before” and “after” styles land at the same time — so no transition.
The solution is to apply the .show classes in a setTimeout callback, delaying the activation of the callback by 15 milliseconds, or 1.5/100th of a second. This imperceptible delay is long enough to create a transition from the previous state to the new state created by adding the .show class, but short enough that we will never notice there was a delay in the first place.
Success! It is now time to handle how we remove list items when they are clicked.
Removing list items shouldn’t be too hard now because we have already gone through the difficult task of adding them. First, we need to make sure that the element we are dealing with is the .list-container element instead of the .list-item element. Due to event propagation, it is likely that the target that triggered this click event was the .list-item element.
Since we want to deal with the associated .list-container element instead of the actual .list-item element that triggered the event, we’re using a while-loop to loop one ancestor upwards until the element held in container is the .list-container element. We know it works when container gets the .list-container class, which is something that we can discover by using the contains method on the classList property of the container element.
Once we have access to the container, we promptly remove the .show class from both the container and its .list-item once we have access to that as well.
function removeListItem(e) {
  let container = e.target;

  while (!container.classList.contains('list-container')) {
    container = container.parentElement;
  }

  container.classList.remove('show');

  const listItem = container.querySelector('.list-item');
  listItem.classList.remove('show');
}
And here is the finished result:
Accessibility & Performance
Now you may be tempted to just leave the project here because both list additions and removals should now be working. But it is important to keep in mind that this functionality is only surface level and there are definitely some touch ups that need to be made in order to make this a complete package.
First of all, just because the removed elements have faded upwards and out of existence and the list has contracted to fill the gap that it has left behind does not mean that the removed element has been removed from the DOM. In fact, it hasn’t. Which is a performance liability because it means that we have elements in the DOM that serve no purpose other than to just accumulate in the background and slow down our application.
To solve this, we use the ontransitionend method on the container element to remove it from the DOM but only when the transition caused by us removing the .show class has finished so that its removal couldn’t possibly interrupt our transition.
function removeListItem(e) {
  let container = e.target;

  while (!container.classList.contains('list-container')) {
    container = container.parentElement;
  }

  container.classList.remove('show');

  const listItem = container.querySelector('.list-item');
  listItem.classList.remove('show');

  container.ontransitionend = function() {
    container.remove();
  };
}
We shouldn’t be able to see any difference at this point because all we did was improve the performance — no styling updates.
The other difference is also unnoticeable, but super important: compatibility. Because we have used the correct <ul> and <li> tags, devices should have no problem with correctly interpreting what we have created as an unordered list.
Other considerations for this technique
A problem that we do have however, is that devices may have a problem with the dynamic nature of our list, like how the list can change its size and the number of items that it holds. A new list item will be completely ignored and removed list items will be read as if they still exist.
So, in order to get devices to re-interpret our list whenever the size of it changes, we need to use ARIA attributes. They help get our nonstandard HTML list to be recognized as such by compatibility devices. That said, they are not a guaranteed solution here because they are never as good for compatibility as a native tag. Take the <ul> tag as an example — no need to worry about that because we were able to use the native unordered list element.
We can add the aria-live attribute to the .list element. Everything nested inside a section of the DOM marked with aria-live becomes responsive to change. In other words, changes made to an element with aria-live are recognized, allowing assistive tech to issue an updated announcement. In our case, we want things highly reactive, and we do that by setting aria-live to assertive. That way, whenever a change is detected, it is announced immediately, interrupting whatever task is currently in progress to comment on the change that was made.
This is a more subtle animation where, instead of list items floating up or down while changing opacity, elements simply collapse or expand as they gradually fade in or out, while the rest of the list repositions itself in response to the transition taking place.
The cool thing about the list (and perhaps some redemption for the verbose DOM structure we created) is that we can change the animation very easily without interfering with the main effect.
So, to achieve this effect, we start off by hiding overflow on our .list-container. We do this so that when the .list-container collapses in on itself, the child .list-item doesn’t flow beyond the container’s boundaries as it shrinks. Apart from that, the only other thing we need to do is remove the transform property from the .list-item with the .show class, since we don’t want the .list-item to float upwards anymore.
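Relative to the previous technique, the CSS changes are roughly these:

.list-container {
  overflow: hidden; /* keep the item from spilling out as the container collapses */
}

.list-item.show {
  opacity: 1; /* no transform here — the item just fades while the container resizes */
}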
This last animation technique is strikingly different from the others in that the container animation and the .list-item animation are actually out of sync. The .list-item slides to the right when it is removed from the list and slides in from the right when it is added. There needs to be enough vertical room in the list to make way for a new .list-item before it even begins animating into the list, and vice versa for the removal.
As for the styling, it’s very much like the Slide-Down Opacity animation; the only difference is that the transition for the .list-item is now on the x-axis instead of the y-axis.
As for the onclick event handler of the addBtn in our JavaScript, we’re using a nested setTimeout method to delay the beginning of the listItem animation by 350 milliseconds after its container element has already started transitioning.
In the removeListItem function, we remove the list item’s .show class first so it can begin transitioning immediately. The parent container element then loses its .show class, but only 350 milliseconds after the initial listItem transition has already started. Then, 600 milliseconds after the container element starts to transition (or 950 milliseconds after the listItem transition), we remove the container element from the DOM because, by this point, both the listItem and the container transitions should have come to an end.
There you have it, three different methods for animating items that are added and removed from a stack. I hope that with these examples you are now confident to work in a situation where the DOM structure settles into a new position in reaction to an element that has either been added or removed from the DOM.
As you can see, there are a lot of moving parts and things to consider. We started with what we expect from this type of movement in the real world and considered what happens to a group of elements when one of them is updated. It took a little balancing to transition between the showing and hiding states and which elements get them at specific times, but we got there. We even went so far as to make sure our list is both performant and accessible, things that we’d definitely need to handle on a real project.
Anyway, I wish you all the best in your future projects. And that’s all from me. Over and out.
I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.
As the code was refactored time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”
Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.
Stage 1: Single repo
The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.
It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.
Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.
I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.
A single repository hosting all our code.
Issues with the single repo
While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.
Stage 2: Multirepo
Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.
Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we can’t have multiple PHP packages, each with its own composer.json, hosted on the same repo.
As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.
In the multirepo, each package is hosted on its own repo.
Issues with the multirepo
Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.
The reason the project was split into so many packages is that I also decoupled the code from WordPress (so that it could be used with other CMSs), which required every package to be very granular, dealing with a single goal.
Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.
I ended up not using feature branches at all, and simply pointed every package to the dev-master version of its dependencies (i.e. I was not versioning packages). I wouldn’t be surprised to learn that this is a fairly common practice.
There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos, and running git commit -m "some message" on the project executes that same git commit command on every repo, keeping them all in sync.
However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.
So, it was time to bring all packages to the same repo.
Stage 3: Monorepo
The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.
The monorepo hosts multiple packages.
As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (one repo per package) to publish the packages to Packagist for distribution and consumption.
The monorepo hosts the source code, multiple repos distribute it.
Switching to the Monorepo
Switching to the monorepo approach involved the following steps:
First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.
I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.
Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.
The downstream repo’s “READ ONLY” is located in the repo description.
I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:
{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}
…which returned a list like this:
{ "data": { "repositoryOwner": { "repositories": { "nodes": [ { "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=", "name": "hooks", "description": "Contracts to implement hooks (filters and actions) for PoP" }, { "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=", "name": "root", "description": "Declaration of dependencies shared by all PoP components" }, { "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=", "name": "engine", "description": "Engine for PoP" } ] } } } }
From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:
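(Here's a reconstruction using the hooks repo from the response above; the exact shape of the queries I generated may have differed slightly.)

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc="
      description: "[READ ONLY] Contracts to implement hooks (filters and actions) for PoP"
    }
  ) {
    repository {
      description
    }
  }
}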
Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.
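For reference, doing it by hand for a single package would look roughly like this; the package path and branch name here are hypothetical, while getpop/hooks is one of the real downstream repos:

# Extract the package's history into its own branch...
git subtree split --prefix=layers/Engine/packages/hooks --branch hooks-split
# ...and push that branch to the package's downstream repo
git push git@github.com:getpop/hooks.git hooks-split:master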
I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.
The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.
This figure shows plugins generated when a release is created.
If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.
Issues with the Monorepo
I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or relying on different tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.
That said, there are downsides to the monorepo:
We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries under the MIT license and the plugin under GPLv2. So, I decided to simplify things by using the lowest common denominator between them, which is GPLv2.
There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors that were attracted to a specific project can easily get confused.
When tagging the code, all packages are versioned together with that tag, whether their particular code was updated or not. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
The issues board can become chaotic without labels that are associated with projects.
All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.
I’m planning to create a “PRO” version of my plugin, which I plan to host in a private repo. However, a repo’s code is either all public or all private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the ability to configure it via PHP. And I want to keep things DRY, avoiding copy/paste.
It was time to switch to the multi-monorepo.
Stage 4: Multi-monorepo
The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:
The upstream monorepo is contained within the downstream monorepo.
This approach satisfies my requirements by:
having the public repo leoloso/PoP be the upstream monorepo, and
creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
A private monorepo can access the files from a public monorepo.
leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):
This figure shows how the public monorepo is embedded within the private monorepo in the GitHub project.
Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.
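Setting this up boils down to registering the submodule from within the downstream repo, pointing it at the subfolder mentioned above:

git submodule add https://github.com/leoloso/PoP submodules/PoP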
In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:
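(What follows is only a hypothetical sketch of that idea: the namespace, class names, and configuration format are made up for illustration, and the real ones live in the upstream codebase.)

<?php
// Hypothetical example: override the upstream plugin-generation config.
namespace GraphQLAPIPRO\Config;

use PoP\Config\PluginDataSource; // assumed upstream class, autoloaded from submodules/PoP

class ProPluginDataSource extends PluginDataSource
{
    public function getPluginsToGenerate(): array
    {
        // Generate the PRO plugin instead of the public ones
        return [
            [
                'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
                'zip_file' => 'graphql-api-pro.zip',
            ],
        ];
    }
}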
GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.
To copy the workflows over, a simple Composer script can do:
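(The glob and destination path below are assumptions based on the folder structure described above; the script name matches the command used next.)

{
  "scripts": {
    "copy-workflows": "cp submodules/PoP/.github/workflows/*.yml .github/workflows/"
  }
}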
Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:
composer copy-workflows
Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:
This figure shows the PRO plugins generated in GitHub Actions.
I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.
Issues with the multi-monorepo
First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.
When doing git checkout, the downstream monorepo must pass the --recurse-submodules option so as to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:
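(A minimal sketch of the checkout step, with the rest of the workflow omitted.)

- uses: actions/checkout@v2
  with:
    submodules: recursive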
As a result, we have to set submodules: recursive in the downstream workflow, but not in the upstream one, even though they both come from the same source file.
To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP receive the value for submodules via an environment variable, CHECKOUT_SUBMODULES, like this:
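(A minimal sketch of that pattern; the job name and the rest of the workflow are placeholders.)

env:
  CHECKOUT_SUBMODULES: ""

jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: ${{ env.CHECKOUT_SUBMODULES }}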
The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:
env:
  CHECKOUT_SUBMODULES: "recursive"
(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)
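For example, instead of literally piping, an in-place replacement appended to the copy-workflows script would do the same job. This is just a sketch; sed's -i flag behaves a bit differently between GNU and BSD versions:

sed -i 's/CHECKOUT_SUBMODULES: ""/CHECKOUT_SUBMODULES: "recursive"/' .github/workflows/*.yml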
This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. The contributors would also be completely unaware of those leakages (and they couldn’t be blamed for them). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full picture. While I share access to the public repo, I am the only one accessing the private one.
As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.
I guess I need to add a warning!
env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""
Wrapping up
My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:
It started as a single repo, hosting a monolithic app.
It became a multirepo when splitting the app into packages.
It was switched to a monorepo to better manage all the packages.
It was upgraded to a multi-monorepo to share files with a private monorepo.
Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.
Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all private plugins together. If I ever need to grant contractors access to a specific private plugin, while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.
I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.