If you use VS Code, you might have enabled the setting for re-opening a previously open file next time the app launches. I do. I like that.
But sometimes you really, really don’t want that to happen.
I recently ran into one of those times! I had to reinstall my local copy of this site and, with it, the 3GB+ database that accompanies it. Being a WordPress site and all, I needed to open up the SQL database file to search-and-replace some stuff.
If you’ve ever tried to open a super duper large file in VS Code, then you know you might need to jiggle a few settings that increase the memory limit and all that. The app is super flexible like that. There’s even a nice extension that’ll both increase the memory and perform a search-and-replace on open.
Anyway, that big ol’ database file crashed VS Code several times and I wound up finding another way to go about things. However, VS Code kept trying to open that file on every launch and inevitably crashing, even though I had nuked the file. And that meant waiting for the macOS beachball of fun to spin around before the app crashed and I could reopen it again for reals.
Well, I finally decided to fix that today and spent a little time searching around. One Stack Overflow thread suggests disabling extensions and increasing the memory limit via the command line. I’m glad that worked for some folks, but I had to keep looking.
Another thread suggests clearing the app’s cache from the command palette.
Nice, but no dice. 🎲
I wound up going with a scorched earth strategy shared by Jie Jenn in a helpful YouTube video. You’ve gotta manually trash the cached files from VS Code. The video walks through it in Windows, but it’s pretty darn similar in macOS, where the VS Code cache lives in your user Library folder at ~/Library/Application Support/Code.
Notice that I have the Backups folder highlighted there. Jie removed the files from the CachedData folder, but all that did was trigger a prompt for me to re-install the app. So, I took a risk and deleted what appeared to be a 3GB+ file in Backups. I showed that file the door and VS Code has been happy ever since.
Ask me again in a week and maybe I’ll find out that I really screwed something up. But so far, so good!
Stoyan is absolutely correct. As much as we all love CSS, it’s still an important player in how websites load and using less of it is a good thing. He has a neat new bookmarklet called CSS Me Not to help diagnose unnecessary CSS files, but we’ll get to that in a moment.
Sometimes our sites use entire stylesheets that are simply unnecessary. I hate to admit it, but WordPress is a notorious offender here, loading stylesheets for plugins and blocks that you might not even really be using. I’m in that position on this site as I write. I just haven’t found the time to stop a couple of little stylesheets I don’t need from loading.
You could find these stylesheets in DevTools as well, but the CSS Me Not bookmarklet makes it extra easy and has a killer bonus feature: turning off those stylesheets. Testing the bookmarklet here on CSS-Tricks, I can see four stylesheets that WordPress loads (because of settings and plugins) that I know I don’t need.
If you wanted to do this in DevTools instead, you could filter your Network requests by CSS, find the stylesheet that you want to turn off, right-click and block it, and re-load.
I’ve been fighting this fight for ages, dequeuing scripts and styles in WordPress that I don’t want.
Removing totally unused stylesheets is an obvious win, but there is the more squirrely issue of removing unused CSS. I mention in that post the one-true-way of really knowing if any particular CSS is unused, which is attaching a background-image to every selector and then checking the server logs after a decent amount of production time to see which of those images were never requested. Stoyan corroborates my story here:
UnCSS is sort of a “lab”. The “real world” may surprise you. So a trick we did at SomeCompany Inc. was to instrument all the CSS declarations at build time, where each selector gets a 1×1 transparent background image. Then rummage through the server logs after a week or so to see what is actually used.
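If you’re curious, here’s roughly what that instrumentation could look like as a tiny build step, sketched in TypeScript for Node. Everything specific here is made up for illustration (the file names, the /css-audit/ path, the naive regex parsing); a real build would use a proper CSS parser and would need to skip rules that already declare a background.

```ts
// Naive sketch: append a unique 1x1 transparent "tracker" image to every rule,
// then read the server access logs later to see which selectors were requested.
import * as fs from "node:fs";

const input = fs.readFileSync("styles.css", "utf8");
let ruleIndex = 0;

// This regex only handles simple `selector { declarations }` rules; nested
// structures like @media blocks are left mostly alone, and @keyframes would
// get instrumented pointlessly. Good enough for a sketch, not for production.
const instrumented = input.replace(
  /([^{}]+)\{([^{}]*)\}/g,
  (_match, selector: string, body: string) => {
    const id = ruleIndex++;
    // Print a mapping so the numeric id in the logs can be traced back
    // to the selector it represents.
    console.log(`${id}\t${selector.trim()}`);
    // Caveat: this overrides any background-image the rule already sets.
    return `${selector}{${body};background-image:url("/css-audit/${id}.gif")}`;
  }
);

fs.writeFileSync("styles.instrumented.css", instrumented);
```

Ship the instrumented stylesheet for a week or so, and any /css-audit/*.gif that never shows up in the access logs points at a selector that never matched anything in real-world use.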
I’ve got a podcast that will be 10 years old this coming January! Most of those episodes have one or more guests (plus me and Dave). Despite fancy modern options for recording podcasts with guests, like Riverside.fm or Zencastr, where guests don’t have to worry about recording their own audio, we haven’t made the leap to one of those yet.
We have the guests record their own audio locally (typically QuickTime Player or Windows Voice Recorder) because that way our editor can make the most of the editing process. At the end of the show, our guest has a file that is ~100MB that they need to send over to us.
How that handoff happens isn’t always completely obvious. Typically we don’t share a Slack with our guests, but when we do, that works for sharing large files like that. Even a Nitro-boosted Discord won’t take a file that big, though. I’d say 70% of the time, our guests chuck the file into their Dropbox and create a sharing link for us to download it. From there, it’s probably Google Drive 20% of the time, and the last 10% is some random thing.
That last 10% is stuff like uploading the file to a web server or file storage service the guest controls and they link us up to the file from there. If we were smarter, we’d probably use “File Request” links on Dropbox or Box.
I usually say something like, “Send us that file however you like to send large files,” because I don’t want to be too prescriptive about what service someone uses. You never know if someone has a particular aversion to some specific tech or company. I would always mention Firefox Send because it was meant for one-off file sending and I find people generally like and trust Mozilla. Alas, Firefox Send shut down.
Unfortunately, some abusive users were beginning to use Firefox Send to ship malware and conduct phishing attacks. When this problem was reported, we stopped the service. Please see the Mozilla Blog for more details on why this service was discontinued.
I guess it’s responsible to try to shut down bad behavior, but of course it was used for bad behavior. Dickwads use any and every service on the entire internet for bad behavior. The real answer, probably, is that it was just a little random side project that didn’t make any money and they didn’t feel like investing the time and money into fixing it. Fair enough, but of course that always costs you trust points. What else is on the chopping block?
I ran across Wormhole the other day which looks like a direct, if not better, replacement. It uses end-to-end encryption and has some nice UX touches, like getting a share link before the upload is complete. It doesn’t say anything about how they intend to pay for it and support it long-term, but I’d guess the cost is somewhat minimal as they only host the files for 24 hours. They also don’t say if they intend to prevent bad behavior or if it’s just a free-for-all. Even with all the encryption and whatnot, I would imagine if a site like Google or Twitter found that tons of wormhole.app URLs had malware on them, they’d be blacklisted. That wouldn’t stop people from using it, but it would certainly stop people from finding it. I did hear from Feross on this, and they have ideas to fight bad behavior if it comes to that.
The thing I’m the most surprised by is that we don’t get more emails where the email service itself just hosts the file. That might sound silly, as email is notorious for not accepting very large file attachments, but that has changed over the years with some of the big players. When you select a file that’s larger than 25MB in Gmail, it’ll offer to upload it to Google Drive and automatically share it with the person you’re sending the email to. iCloud does largely the same thing with Mail Drop.
Me, I use Dropbox quite a bit, but rarely for sharing one-off files. If I want to make sure I have a copy in perpetuity, sometimes I’ll even use a personal Amazon S3 bucket. But mostly I’ll just upload it to Droplr, which I’ve used for ages just for this kind of thing.
Even though GitHub Readme files (typically ./readme.md) are Markdown, and although Markdown supports HTML, you can’t put <style> or <script> tags in it. (Well, you can, they just get stripped.) So you can’t apply custom styles there. Or can you?
You can use SVG as an <img src="./file.svg" alt="" /> (anywhere).
When used that way, even stuff like animations within them play (wow).
SVG has stuff like <text> for textual content, but also <foreignObject> for regular ol’ HTML content.
SVG supports <style> tags.
Your readme.md file does support <img> with SVG sources.
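Putting those pieces together, here’s a little sketch (in TypeScript, though the interesting part is the SVG string itself) of generating a self-styled SVG you could reference from readme.md as <img src="./hello.svg" alt="" />. The file name, styles, and animation are all just made-up examples.

```ts
// Sketch: write out an SVG that carries its own <style> (including a CSS
// animation) plus a <foreignObject> holding regular ol' HTML.
import * as fs from "node:fs";

const svg = `<svg xmlns="http://www.w3.org/2000/svg" width="420" height="110">
  <style>
    .title { font: bold 24px sans-serif; fill: rebeccapurple; }
    .pulse { animation: fade 2s ease-in-out infinite alternate; }
    @keyframes fade { from { opacity: 1; } to { opacity: 0.25; } }
  </style>
  <text class="title pulse" x="20" y="40">Hello from a styled readme!</text>
  <foreignObject x="20" y="55" width="380" height="40">
    <div xmlns="http://www.w3.org/1999/xhtml" style="font: 14px sans-serif;">
      Regular HTML, living inside the SVG.
    </div>
  </foreignObject>
</svg>`;

fs.writeFileSync("hello.svg", svg);
```

Commit the SVG next to the readme, reference it with a plain <img>, and the styles travel along with it.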
The HBO sitcom Silicon Valley hilariously followed Pied Piper, a team of developers with startup dreams of creating a compression algorithm so powerful that high-quality streaming and file storage concerns would become a thing of the past.
In the show, Google is portrayed by the fictional company Hooli, which is after Pied Piper’s intellectual property. The funny thing is that, while being far from a startup, Google does indeed have a powerful compression engine in real life called Brotli.
This article is about my experience using Brotli at production scale. Despite being really expensive and truly unfeasible for on-the-fly compression at its higher levels, Brotli is actually very economical and saves cost on many fronts, especially when compared with gzip or with lower compression levels of Brotli (which we’ll get into).
So what we’re looking at is the next generation of Zopfli, which is an offshoot of zlib, which is essentially gzip.
A story of disappointment
It took a few months for major CDN players to support Brotli, but meanwhile it was seeing widespread adoption in tools, services, browsers and servers. However, the 26% denser compression that Brotli promised was never reflected in production. Some CDNs set a lower compression level internally, while others only supported Brotli at the origin, meaning it only worked if it was enabled manually at the origin.
Server support for Brotli was pretty good, but to achieve high compression levels, it required rolling your own pre-compression code or using a server module to do it for you — which is not always an option, especially in the case of shared hosting services.
This was really disappointing for me. I wanted to compress every last possible byte for my clients’ websites in a drive to make them faster, but using pre-compression and allowing clients to update files on demand simultaneously was not always easy.
Taking matters into my own hands
I started building my own performance optimization service for my clients.
I had several tricks that could significantly speed up websites. The service categorized all the optimizations into three groups, “Content,” “Delivery,” and “Cache,” each consisting of several individual optimizations. I had Brotli in mind for the content optimization part of the service for compressible resources.
Like other compression formats, Brotli comes in different levels of power. Brotli’s max level is exactly like the max volume of the guitar amps in This is Spinal Tap: it goes to 11.
Brotli:11, or Brotli compression level 11, can offer a significant reduction in the size of compressible files, but it has a substantial trade-off: it is painfully slow and not feasible for on-demand compression the way gzip is. It costs significantly more in terms of CPU time.
In my benchmarks, Brotli:11 takes several hundred milliseconds to compress a single minified jQuery file. So, the only way to offer Brotli:11 to my clients was to use it for pre-compression, leaving me to figure out a way to cache files at the server level. Luckily we already had that in place. The only problem was the fear that Brotli could kill all our processing resources.
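To give a rough sense of that trade-off, here’s a minimal Node sketch (TypeScript) of the pre-compression step using the Brotli support built into Node’s zlib module. The file names are placeholders and the timings will vary by machine; this isn’t my production code, just the idea.

```ts
// Pre-compress a file once at build/deploy time instead of on every request.
import * as fs from "node:fs";
import * as zlib from "node:zlib";

const source = fs.readFileSync("jquery.min.js");

// gzip at its maximum level: cheap enough that servers do it on the fly.
console.time("gzip -9");
const gzipped = zlib.gzipSync(source, { level: 9 });
console.timeEnd("gzip -9");

// Brotli at quality 11: much slower, so run it once and cache the result.
console.time("brotli -q 11");
const brotlied = zlib.brotliCompressSync(source, {
  params: {
    [zlib.constants.BROTLI_PARAM_QUALITY]: 11,
    [zlib.constants.BROTLI_PARAM_SIZE_HINT]: source.length,
  },
});
console.timeEnd("brotli -q 11");

// Keep the pre-compressed siblings next to the original so the server can
// pick whichever one the browser supports.
fs.writeFileSync("jquery.min.js.gz", gzipped);
fs.writeFileSync("jquery.min.js.br", brotlied);

console.log(`original: ${source.length} bytes`);
console.log(`gzip:     ${gzipped.length} bytes`);
console.log(`brotli:   ${brotlied.length} bytes`);
```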
I put my fears aside and built Brotli:11 as a configurable server option. This way, clients could decide whether enabling it was worth the computing cost.
It’s slow, but gradually pays off
Among several other optimizations, the service for my clients also offers geographic content delivery; in other words, it has a built-in CDN.
Of the several tricks I tried when taking matters into my own hands, one was to combine a public CDN (or open-source CDN) and a private CDN on a single host so that websites can enjoy the benefits of a shared browser cache for public resources without incurring a separate DNS lookup and connection cost for that public host. I wanted to avoid this extra connection cost because it has a significant impact for mobile users. Also, combining more and more resources on a single host helps get the most out of HTTP/2 features, like multiplexing.
I was happy. Clients were happy. But I didn’t have numbers. I started analyzing the impact of enabling this high density compression on public resources. For this, I recorded file transfer sizes of several popular libraries — including jQuery, Bootstrap, React, and other frameworks — that used common compression methods implemented by other CDNs and found that Brotli:11 compression was saving around 21% compared to other compression formats.
It’s important to note that some of the other public CDNs I compared were already using Brotli, but at lower compression levels. So, the 21% extra compression was really satisfying for me. This number is based on a very small subset of libraries, but it shouldn’t be off by a big margin, as I was seeing this much gain on all of the websites that I tested.
Here is a graphical representation of the savings.
[Chart: average file transfer size with the common compression methods (A) vs. Brotli:11 (B), with the savings computed as (A) / (B) – 1]
The results are great, which is what I expected. But what about the overall impact of using Brotli:11 at scale? Turns out that using Brotli:11 for all public resources reduces cost all around:
The smaller file sizes are expected to result in lower TLS overhead. That said, it is not easily measurable, nor is it significant for my service because modern CPUs are very fast at encryption. Still, I believe there is some tiny and repeated saving on account of encryption for every request as smaller files encrypt faster.
It reduces the bandwidth cost. The 21% savings I got across the board is the case in point. And, remember, savings are not a one-time thing. Each request counts as cost, so the 21% savings is repeated time and again, snowballing into real bandwidth savings.
We only cache hot files in memory at edge servers. Due to the widespread browser support for Brotli, these hot files are mostly encoded by Brotli and their small size lets us fit more of them in available memory.
This is all so good. The cost we save per request is not significant, but considering we have a near zero cache miss rate for public resources, we can easily amortize the initial high cost of compression over the next several hundred requests. After that, we’re looking at a lifetime benefit of reduced overhead.
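For context, the serving side of this is simple content negotiation: if the browser advertises br in its Accept-Encoding header and a pre-compressed .br sibling of the file exists, hand that over; otherwise fall back. Here’s a bare-bones sketch, not our actual edge code; the port and paths are placeholders, and it skips things like Content-Type and path sanitization.

```ts
// Serve pre-compressed .br files to browsers that accept Brotli.
import * as fs from "node:fs";
import * as http from "node:http";
import * as path from "node:path";

const root = "./public";

http
  .createServer((req, res) => {
    // Note: a real server must sanitize req.url and set Content-Type.
    const filePath = path.join(root, req.url ?? "/");
    const acceptsBrotli = /\bbr\b/.test(req.headers["accept-encoding"] ?? "");

    if (acceptsBrotli && fs.existsSync(filePath + ".br")) {
      res.setHeader("Content-Encoding", "br");
      res.setHeader("Vary", "Accept-Encoding");
      fs.createReadStream(filePath + ".br").pipe(res);
      return;
    }

    fs.createReadStream(filePath).pipe(res);
  })
  .listen(8080);
```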
It doesn’t end there
This is all done smoothly and seamlessly using our integration tools. The added benefit of this approach for clients is that the bandwidth on the public CDN is totally free, with unprecedented performance levels.
Try it yourself!
Additionally, you can use our search tool to immediately find a corresponding resource on the public CDN by supplying a URL of a resource on your website. If none of these tools work for you, then you can check the relevant library page and pick the URLs you want.
Looking toward the future
We started by hosting only the most popular libraries in order to prevent malware spread. However, things are changing rapidly and we add new libraries as our users suggest them to us. You are welcome to suggest your favorite ones, too. If you still want to link to a public or private GitHub repo that is not yet available on our public CDN, you can use our private CDN to connect to a repo, import all new releases as they appear on GitHub, and then apply your own aggressive optimizations before delivery.
What do you think?
Everything we covered here is solely based on my personal experience working with Brotli compression at CDN scale. It just happens to be an introduction to my public CDN as well. We are still a small service and our client websites are only in the hundreds. Still, at this scale the aggressive compression seems to pay off.
I achieved high quality results for my clients and now you can use this free service for your websites as well. And, if you like it, please send me feedback by email and recommend it to others.