
Linearly Scale font-size with CSS clamp() Based on the Viewport

Responsive typography has been tried in the past with a slew of methods such as media queries and CSS calc().

Here, we’re going to explore a different way to linearly scale text between a set of minimum and maximum sizes as the viewport’s width increases, with the intent of making its behavior at different screen sizes more predictable, all in a single line of CSS, thanks to clamp().

The CSS function clamp() is a heavy hitter. It’s useful for a variety of things, but it’s especially nice for typography. Here’s how it works. It takes three values: 

clamp(minimum, preferred, maximum);

The value it returns will be the preferred value, unless that preferred value is lower than the minimum value (at which point the minimum will be returned) or higher than the maximum value (at which point the maximum will be returned).

In this example, the preferred value is 50% (think something like width: clamp(300px, 50%, 800px) on an element whose parent spans the full viewport). At a 400px viewport width, 50% is 200px, which is below the 300px minimum, so the minimum gets used instead. At a 1400px viewport width, 50% is 700px, which sits between the 300px minimum and the 800px maximum, so the preferred value of 700px is used.

Wouldn’t it just always return the preferred value then, assuming you aren’t being weird and you set it somewhere between the minimum and maximum? Well, you’re rather expected to use a formula for the preferred value, like:

.banner {
  width: clamp(200px, 50% + 20px, 800px); /* Yes, you can do math inside clamp()! */
}

Say you want to set an element’s minimum font-size to 1rem when the viewport width is 360px or below, and set the maximum to 3.5rem when the viewport width is 840px or above. 

In other words:

1rem   = 360px and below
Scaled = 361px - 839px
3.5rem = 840px and above

Any viewport width between 361 and 839 pixels needs a font size linearly scaled between 1 and 3.5rem. That’s actually super easy with clamp()! For example, at a viewport width of 600 pixels, halfway between 360 and 840 pixels, we would get exactly the middle value between 1 and 3.5rem, which is 2.25rem.
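
Spelled out, that is plain linear interpolation with our example numbers:

fontSize = 1rem + ((600 - 360) / (840 - 360)) * (3.5rem - 1rem)
         = 1rem + 0.5 * 2.5rem
         = 2.25rem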

[Line chart: font size in rem units (0 to 4) on the vertical axis against viewport width (0 to 1,060 pixels) on the horizontal axis, with four blue points connected by a blue line.]

What we are trying to achieve with clamp() is called linear interpolation: getting intermediate information between two data points.

Here are the four steps to do this:

Step 1

Pick your minimum and maximum font sizes, and your minimum and maximum viewport widths. In our example, that’s 1rem and 3.5rem for the font sizes, and 360px and 840px for the widths.

Step 2

Convert the widths to rem. Since 1rem on most browsers is 16px by default (more on that later), that’s what we’re going to use. So, now the minimum and maximum viewport widths will be 22.5rem and 52.5rem, respectively.
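
That conversion is just a division by the assumed 16px root font size:

360px / 16 = 22.5rem
840px / 16 = 52.5rem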

Step 3

Here, we’re gonna lean a bit to the math side. When paired together, the viewport widths and the font sizes make two points on an X and Y coordinate system, and those points make a line.

[Chart: a two-dimensional coordinate system with the two points (22.5, 1) and (52.5, 3.5) joined by a red line.]

We kinda need that line, or rather its slope and its intersection with the Y axis, to be more specific. Here’s how to calculate them:

slope = (maxFontSize - minFontSize) / (maxWidth - minWidth)
yAxisIntersection = -minWidth * slope + minFontSize

That gives us a value of 0.0833 for the slope and -0.875 for the intersection at the Y axis.
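
Plugging our example numbers into those formulas checks out:

slope = (3.5 - 1) / (52.5 - 22.5) = 2.5 / 30 ≈ 0.0833
yAxisIntersection = -22.5 * (2.5 / 30) + 1 = -1.875 + 1 = -0.875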

Step 4

Now we build the clamp() function. The formula for the preferred value is:

preferredValue = yAxisIntersection[rem] + (slope * 100)[vw]

So the function ends up like this:

.header {
  font-size: clamp(1rem, -0.875rem + 8.333vw, 3.5rem);
}
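
As a quick sanity check, we can evaluate the preferred value at the 600px viewport width from earlier: 8.333vw of 600px is about 50px, which is 3.125rem at a 16px root font size, and -0.875rem + 3.125rem = 2.25rem, exactly the midpoint value we predicted.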

You can visualize the result in the following demo:

Go ahead and play with it. As you can see, the font size stops growing when the viewport width is 840px and stops shrinking at 360px. Everything in between changes in linear fashion.

What if the user changes the root’s font size?

You may have noticed a little flaw with this whole approach: it only works as long as the root’s font size is the one you think it is — which is 16px in the previous example — and never changes.

We are converting the widths, 360px and 840px, to rem units by dividing them by 16 because that’s what we assume is the root’s font size. If the user has their preferences set to another root font size, say 18px instead of the default 16px, then that calculation is going to be wrong and the text won’t resize the way we’d expect.

There is only one approach we can use here, and it’s (1) making the necessary calculations in code on page load, (2) listening for changes to the root’s font size, and (3) re-calculating everything if any changes take place.

Here’s a useful JavaScript function to do the calculations:

// Takes the viewport widths in pixels and the font sizes in rem
function clampBuilder( minWidthPx, maxWidthPx, minFontSize, maxFontSize ) {
  const root = document.querySelector( "html" );
  const pixelsPerRem = Number( getComputedStyle( root ).fontSize.slice( 0, -2 ) );

  const minWidth = minWidthPx / pixelsPerRem;
  const maxWidth = maxWidthPx / pixelsPerRem;

  const slope = ( maxFontSize - minFontSize ) / ( maxWidth - minWidth );
  const yAxisIntersection = -minWidth * slope + minFontSize;

  return `clamp( ${ minFontSize }rem, ${ yAxisIntersection }rem + ${ slope * 100 }vw, ${ maxFontSize }rem )`;
}

// clampBuilder( 360, 840, 1, 3.5 ) -> "clamp( 1rem, -0.875rem + 8.333vw, 3.5rem )"

I’m deliberately leaving out how to inject the returned string into the CSS because there are a ton of ways to do that depending on your needs and whether you are using vanilla CSS, a CSS-in-JS library, or something else. Also, there is no native event for font size changes, so we would have to manually check for that. We could use setInterval to check every second, but that could come at a performance cost.
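
That said, here is one minimal vanilla-JS sketch of the polling idea described above. The ".header" selector and the applyClamp name are purely illustrative; any injection strategy could take its place.

// A minimal sketch of steps (1)-(3), assuming clampBuilder() from above.
// ".header" and applyClamp are illustrative names, not part of the demo.
function applyClamp() {
  const header = document.querySelector( ".header" );
  header.style.fontSize = clampBuilder( 360, 840, 1, 3.5 );
}

applyClamp(); // 1. Calculate on page load.

let lastRootFontSize = getComputedStyle( document.documentElement ).fontSize;

setInterval( () => {
  const rootFontSize = getComputedStyle( document.documentElement ).fontSize;
  if ( rootFontSize !== lastRootFontSize ) { // 2. A root font-size change was detected...
    lastRootFontSize = rootFontSize;
    applyClamp();                            // 3. ...so re-calculate the clamp() value.
  }
}, 1000 );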

This is more of an edge case. Very few people change their browser’s font size and even fewer are going to change it precisely while visiting your site. But if you want your site to be as responsive as possible, then this is the way to go.

For those who don’t mind that edge case

You think you can live without it being perfect? Then I’ve got something for you. I made a small tool to make the calculations quick and simple.

All you have to do is plug the widths and font sizes into the tool, and the function is calculated for you. Copy and paste the result in your CSS. It’s not fancy and I’m sure a lot of it can be improved but, for the purpose of this article, it’s more than enough. Feel free to fork and modify to your heart’s content.

How to avoid reflowing text

Having such fine-grained control on the dimensions of typography allows us to do other cool stuff — like stopping text from reflowing at different viewport widths.

This is how text normally behaves.

It has a number of lines at a certain viewport width…

…and wraps its lines to fit another width.

But now, with the control we have, we can make text keep the same number of lines, always breaking on the same word, at whatever viewport width we throw at it.

Viewport width = 400px

Viewport width = 740px

So how do we do this? To start, the ratio between font sizes and viewport widths must stay the same. In this example, we go from 1rem at 320px to 3rem at 960px.

320 / 1 = 320
960 / 3 = 320

If we’re using the clampBuilder() function we made earlier, that becomes:

const text = document.querySelector( "p" );
text.style.fontSize = clampBuilder( 320, 960, 1, 3 );

It keeps the same width-to-font ratio. The reason we do this is because we need to ensure that the text has the right size at every width in order for it to be able to keep the same number of lines. It’ll still reflow at different widths but doing this is necessary for what we are going to do next. 

Now we have to get some help from the CSS character (ch) unit because having the font size just right is not enough. One ch unit is equivalent to the width of the glyph “0” in an element’s font. We want to make the body of text as wide as the viewport, not by setting width: 100% but with width: Xch, where X is the number of ch units (or 0s) necessary to fill the viewport horizontally.

To find X, we must divide the minimum viewport width, 320px, by the element’s ch size at whatever font size it is when the viewport is 320px wide. That’s 1rem in this case.
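
As a rough illustration (the 8px figure is made up; the actual value depends on the font), if the element’s ch size at 1rem happened to be 8px, then X = 320 / 8 = 40, and we would set width: 40ch.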

Don’t sweat it, here’s a snippet to calculate an element’s ch size:

// Returns the width, in pixels, of the "0" glyph of an element at a desired font size
function calculateCh( element, fontSize ) {
  const zero = document.createElement( "span" );
  zero.innerText = "0";
  zero.style.position = "absolute";
  zero.style.fontSize = fontSize;

  element.appendChild( zero );
  const chPixels = zero.getBoundingClientRect().width;
  element.removeChild( zero );

  return chPixels;
}

Now we can proceed to set the text’s width:

function calculateCh( element, fontSize ) { ... }

const text = document.querySelector( "p" );
text.style.fontSize = clampBuilder( 320, 960, 1, 3 );
text.style.width = `${ 320 / calculateCh( text, "1rem" ) }ch`;
Umm, who invited you to the party, scrollbar?

Whoa, wait. Something bad happened. There’s a horizontal scrollbar screwing things up!

When we talk about 320px, we are talking about the width of the viewport, including the vertical scrollbar. So, the text’s width is being set to the width of the visible area, plus the width of the scrollbar which makes it overflow horizontally.

Then why not use a metric that doesn’t include the width of the vertical scrollbar? We can’t, and it’s because of the CSS vw unit. Remember, we are using vw in clamp() to control font sizes. You see, vw includes the width of the vertical scrollbar, which makes the font scale along the whole viewport width, scrollbar included. If we want to avoid any reflow, then the width must be proportional to whatever width the viewport is, including the scrollbar.

So what do we do? When we do this:

text.style.width = `${ 320 / calculateCh( text, "1rem" ) }ch`;

…we can scale the result down by multiplying it by a number smaller than 1. 0.9 does the trick. That means the text’s width is going to be 90% of the viewport width, which will more than account for the small amount of space taken up by the scrollbar. We can make it narrower by using an even smaller number, like 0.6.

function calculateCh( element, fontSize ) { ... }

const text = document.querySelector( "p" );
text.style.fontSize = clampBuilder( 320, 960, 1, 3 );
text.style.width = `${ 320 / calculateCh( text, "1rem" ) * 0.9 }ch`;
So long, scrollbar!

You might be tempted to simply subtract a few pixels from 320 to ignore the scrollbar, like this:

text.style.width = `${ ( 320 - 30 ) / calculateCh( text, "1rem" ) }ch`;

The problem with this is that it brings back the reflow issue! That’s because subtracting from 320 breaks the viewport-to-font ratio.

Viewport width = 650px

Viewport width = 670px

The width of the text must always be a percentage of the viewport width. Another thing to keep in mind is that we need to make sure we’re loading the same font on every device using the site. This sounds obvious, doesn’t it? Well, here’s a little detail that could throw your text off: doing something like font-family: sans-serif won’t guarantee that the same font is used in every browser. sans-serif will resolve to Arial on Chrome for Windows, but Roboto on Chrome for Android. Also, the geometry of some fonts may cause reflow even if you do everything right. Monospaced fonts tend to yield the best results. So always make sure your fonts are on point.

Check out this non-reflowing example in the following demo:

Non-reflowing text inside a container

All we have to do now is apply the font size and width to the container instead of the text elements directly. The text inside it will just need to be set to width: 100%. This isn’t necessary for paragraphs and headings since they’re block-level elements anyway and will fill the width of the container automatically.

An advantage of applying this in a parent container is that its children will react and resize automatically without having to set their font sizes and widths one-by-one. Also, if we need to change the font size of a single element without affecting the others, all we’d have to do is change its font size to any em amount and it will be naturally relative to the container’s font size.
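
As a rough sketch of that container approach, reusing the two helper functions from earlier (the .card selector, the 320 to 960 range, and the 0.9 scrollbar factor are just illustrative):

// Size the container, not the individual text elements.
// Children inherit the font size; a single child can deviate with an em-based font-size.
const container = document.querySelector( ".card" );
container.style.fontSize = clampBuilder( 320, 960, 1, 3 );
container.style.width = `${ 320 / calculateCh( container, "1rem" ) * 0.9 }ch`;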

Non-reflowing text is finicky, but it’s a subtle effect that can bring a nice touch to a design!

Wrapping up

To cap things off, I put together a little demonstration of how all of this could look in a real life scenario.

In this final example, you can also change the root font size and the clamp() function will be recalculated automatically so the text can have the right size in any situation.

Even though the target of this article is to use clamp() with font sizes, this same technique could be used in any CSS property that receives a length unit. Now, I’m not saying you should use this everywhere. Many times, a good old font-size: 1rem is all you need. I’m just trying to show you how much control you can have when you need it.

Personally, I believe clamp() is one of the best things to arrive in CSS and I can’t wait to see what other usages people come up with as it becomes more and more widespread!



How I Used Brotli to Get Even Smaller CSS and JavaScript Files at CDN Scale

The HBO sitcom Silicon Valley hilariously followed Pied Piper, a team of developers with startup dreams to create a compression algorithm so powerful that high-quality streaming and file storage concerns would become a thing of the past.

In the show, Google is portrayed by the fictional company Hooli, which is after Pied Piper’s intellectual property. The funny thing is that, while being far from a startup, Google does indeed have a powerful compression engine in real life called Brotli.

This article is about my experience using Brotli at production scale. Despite being really expensive and truly unfeasible for on-the-fly compression at its highest setting, Brotli is actually very economical and saves cost on many fronts, especially when compared with gzip or with lower Brotli compression levels (which we’ll get into).

Brotli’s beginning…

In 2015, Google published a blog post announcing Brotli and released its source code on GitHub. The pair of developers who created Brotli also created Google’s Zopfli compression two years earlier. But where Zopfli leveraged existing compression techniques, Brotli was written from the ground-up and squarely focused on text compression to benefit static web assets, like HTML, CSS, JavaScript and even web fonts.

At that time, I was working as a freelance website performance consultant. I was really excited for the 20-26% improvement Brotli promised over Zopfli. Zopfli in itself is a dense implementation of the deflate compressor compared with zlib’s standard implementation, so the claim of up to 26% was quite impressive. And what’s zlib? It’s essentially the same as gzip.

So what we’re looking at is the next generation of Zopfli, which is an offshoot of zlib, which is essentially gzip.

A story of disappointment

It took a few months for major CDN players to support Brotli, but meanwhile it was seeing widespread adoption in tools, services, browsers and servers. However, the 26% denser compression that Brotli promised was never reflected in production. Some CDNs set a lower compression level internally, while others only supported Brotli if it was enabled manually at the origin.

Server support for Brotli was pretty good, but to achieve high compression levels, it required rolling your own pre-compression code or using a server module to do it for you — which is not always an option, especially in the case of shared hosting services.

This was really disappointing for me. I wanted to compress every last possible byte for my clients’ websites in a drive to make them faster, but using pre-compression and allowing clients to update files on demand simultaneously was not always easy.

Taking matters into my own hands

I started building my own performance optimization service for my clients.

I had several tricks that could significantly speed up websites. The service categorized all the optimizations into three groups, “Content,” “Delivery,” and “Cache,” each consisting of several optimizations. I had Brotli in mind for the content optimization part of the service for compressible resources.

Like other compression formats, Brotli comes in different levels of power. Brotli’s max level is exactly like the max volume of the guitar amps in This is Spinal Tap: it goes to 11.

Brotli:11, or Brotli compression level 11, can offer a significant reduction in the size of compressible files, but has a substantial trade-off: it is painfully slow and not feasible for on-demand compression the way gzip is. It costs significantly more in terms of CPU time.

In my benchmarks, Brotli:11 takes several hundred milliseconds to compress a single minified jQuery file. So, the only way to offer Brotli:11 to my clients was to use it for pre-compression, leaving me to figure out a way to cache files at the server level. Luckily we already had that in place. The only problem was the fear that Brotli could kill all our processing resources.
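
To give a sense of what that pre-compression step looks like, here’s a minimal Node.js sketch using the built-in zlib Brotli bindings. This is not the service’s actual implementation, and the file name is hypothetical.

const fs = require( "fs" );
const zlib = require( "zlib" );

// Compress a single asset at Brotli's maximum quality level (11) and
// write a .br file next to it, to be served on subsequent requests.
function precompress( path ) {
  const input = fs.readFileSync( path );
  const output = zlib.brotliCompressSync( input, {
    params: {
      [ zlib.constants.BROTLI_PARAM_QUALITY ]: 11,            // slowest, densest setting
      [ zlib.constants.BROTLI_PARAM_SIZE_HINT ]: input.length // hint to help the encoder
    }
  } );
  fs.writeFileSync( `${ path }.br`, output );
}

precompress( "jquery.min.js" ); // hypothetical file name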

Maybe that’s why Pied Piper had to continue rigging its servers for more power.

I put my fears aside and built Brotli:11 as a configurable server option. This way, clients could decide whether enabling it was worth the computing cost.

It’s slow, but gradually pays off

Among several other optimizations, the service for my clients also offers geographic content delivery; in other words, it has a built-in CDN.

Of the several tricks I tried when taking matters into my own hands, one was to combine public CDN (or open-source CDN) and private CDN on a single host so that websites can enjoy the benefits of shared browser cache of public resources without incurring separate DNS lookup and connection cost for that public host. I wanted to avoid this extra connection cost because it has significant impact for mobile users. Also, combining more and more resources on a single host can help get the most of HTTP/2 features, like multiplexing.

I enabled the public CDN and turned on Brotli:11 pre-compression for all compressible resources, including CSS, JavaScript, SVG, and TTF, among other types of files. The overhead of compression did indeed increase on first request of each resource — but after that, everything seemed to run smoothly. Brotli has over 90% browser support and pretty much all the requests hitting my service now use Brotli.

I was happy. Clients were happy. But I didn’t have numbers. I started analyzing the impact of enabling this high density compression on public resources. For this, I recorded file transfer sizes of several popular libraries — including jQuery, Bootstrap, React, and other frameworks — that used common compression methods implemented by other CDNs and found that Brotli:11 compression was saving around 21% compared to other compression formats.

It’s important to note that some of the other public CDNs I compared were already using Brotli, but at lower compression levels. So, the 21% extra compression was really satisfying for me. This number is based on a very small subset of libraries, but it isn’t off by a big margin, as I was seeing this much gain on all of the websites that I tested.

Here is a graphical representation of the savings.

[Bar chart: file sizes of jQuery, Bootstrap, D3.js, Ant Design, Semantic UI, Font Awesome, React, Three.js, Bulma and Vue before and after Brotli:11 compression; the Brotli:11 bar is always smaller.]

You can see the raw data below. Note that the savings for CSS are much more prominent than what JavaScript gets.

Library | Original | Avg. of common compression (A) | Brotli:11 (B) | Savings: (A / B) - 1
Ant Design | 1,938.99 KB | 438.24 KB | 362.82 KB | 20.79%
Bootstrap | 152.11 KB | 24.20 KB | 17.30 KB | 39.88%
Bulma | 186.13 KB | 23.40 KB | 19.30 KB | 21.24%
D3.js | 236.82 KB | 74.51 KB | 65.75 KB | 13.32%
Font Awesome | 1,104.04 KB | 422.56 KB | 331.12 KB | 27.62%
jQuery | 86.08 KB | 30.31 KB | 27.65 KB | 9.62%
React | 105.47 KB | 33.33 KB | 30.28 KB | 10.07%
Semantic UI | 613.78 KB | 91.93 KB | 78.25 KB | 17.48%
three.js | 562.75 KB | 134.01 KB | 114.44 KB | 17.10%
Vue.js | 91.48 KB | 33.17 KB | 30.58 KB | 8.47%

The results are great, which is what I expected. But what about the overall impact of using Brotli:11 at scale? Turns out that using Brotli:11 for all public resources reduces cost all around:

  • The smaller file sizes are expected to result in lower TLS overhead. That said, it is not easily measurable, nor is it significant for my service because modern CPUs are very fast at encryption. Still, I believe there is some tiny and repeated saving on account of encryption for every request as smaller files encrypt faster.
  • It reduces the bandwidth cost. The 21% savings I got across the board is the case in point. And, remember, savings are not a one-time thing. Each request counts as cost, so the 21% savings is repeated time and again, creating a snowball savings for the cost of bandwidth. 
  • We only cache hot files in memory at edge servers. Due to the widespread browser support for Brotli, these hot files are mostly encoded by Brotli and their small size lets us fit more of them in available memory.
  • Visitors, especially those on mobile devices, enjoy reduced data transfer. This results in less battery use and savings on data charges. That’s a huge win that gets passed on to the users of our clients!

This is all so good. The cost we save per request is not significant, but considering we have a near-zero cache miss rate for public resources, we can easily amortize the initial high cost of compression over the next several hundred requests. After that, we’re looking at a lifetime benefit of reduced overhead.

It doesn’t end there

With the mix of public and private CDNs that we introduced as part of our performance optimization service, we wanted to make sure that clients could set lower compression levels for resources that frequently change over time (like custom CSS and JavaScript) on the private CDN and automatically switch to the public CDN for open-source resources that change less often and have pre-configured Brotli:11. This way, our clients still get a very high compression ratio on resources that rarely change, while enjoying good compression ratios, plus instant purges and updates, for the resources that change frequently.

This all is done smoothly and seamlessly using our integration tools. The added benefit of this approach for clients is that the bandwidth on the public CDN is totally free with unprecedented performance levels.

Try it yourself!

In testing on a common website, aggressive compression can easily shave around 50 KB off the page load. If you want to play with the free public CDN and enjoy smaller CSS and JavaScript, you are welcome to use our PageCDN service. Here are some of the most-used libraries:

<!-- jQuery 3.5.0 -->
<script src="https://pagecdn.io/lib/jquery/3.5.0/jquery.min.js" crossorigin="anonymous" integrity="sha256-xNzN2a4ltkB44Mc/Jz3pT4iU1cmeR0FkXs4pru/JxaQ="></script>

<!-- FontAwesome 5.13.0 -->
<link href="https://pagecdn.io/lib/font-awesome/5.13.0/css/all.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-h20CPZ0QyXlBuAw7A+KluUYx/3pK+c7lYEpqLTlxjYQ=">

<!-- Ionicons 4.6.3 -->
<link href="https://pagecdn.io/lib/ionicons/4.6.3/css/ionicons.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-UUDuVsOnvDZHzqNIznkKeDGtWZ/Bw9ZlW+26xqKLV7c=">

<!-- Bootstrap 4.4.1 -->
<link href="https://pagecdn.io/lib/bootstrap/4.4.1/css/bootstrap.min.css" rel="stylesheet" crossorigin="anonymous" integrity="sha256-L/W5Wfqfa0sdBNIKN9cG6QA5F2qx4qICmU2VgLruv9Y=">

<!-- React 16.13.1 -->
<script src="https://pagecdn.io/lib/react/16.13.1/umd/react.production.min.js" crossorigin="anonymous" integrity="sha256-yUhvEmYVhZ/GGshIQKArLvySDSh6cdmdcIx0spR3UP4="></script>

<!-- Vue 2.6.11 -->
<script src="https://pagecdn.io/lib/vue/2.6.11/vue.min.js" crossorigin="anonymous" integrity="sha256-ngFW3UnAN0Tnm76mDuu7uUtYEcG3G5H1+zioJw3t+68="></script>

Our PHP library automatically switches between the private and public CDN if you need it to. The same feature is implemented seamlessly in our WordPress plugin that automatically loads public resources over the Public CDN. Both of these tools allow full access to the free public CDN. Libraries for JavaScript, Python, and Ruby are not yet available. If you contribute any such library to our Public CDN, I will be happy to list it in our docs.

Additionally, you can use our search tool to immediately find a corresponding resource on the public CDN by supplying a URL of a resource on your website. If none of these tools work for you, then you can check the relevant library page and pick the URLs you want.

Looking toward the future

We started by hosting only the most popular libraries in order to prevent malware spread. However, things are changing rapidly and we add new libraries as our users suggest them to us. You are welcome to suggest your favorite ones, too. If you still want to link to a public or private GitHub repo that is not yet available on our public CDN, you can use our private CDN to connect to that repo, import all new releases as they appear on GitHub, and then apply your own aggressive optimizations before delivery.

What do you think?

Everything we covered here is solely based on my personal experience working with Brotli compression at CDN scale. It just happens to be an introduction to my public CDN as well. We are still a small service and our client websites are only in the hundreds. Still, at this scale the aggressive compression seems to pay off.

I achieved high quality results for my clients and now you can use this free service for your websites as well. And, if you like it, please leave feedback at my email and recommend it to others.


How They Fit Together: Transform, Translate, Rotate, Scale, and Offset

Firefox 72 was first out of the gate with “independent transforms.” That is, instead of having to combine transforms together, like:

.el {
  transform: rotate(10deg) scale(0.95) translate(10px, 10px);
}

…we can do:

.el {
  rotate: 10deg;
  scale: 0.95;
  translate: 10px 10px;
}

That’s extremely useful, as having to repeat all the other transforms when you change a single one, lest they be removed, is tedious and prone to error.

But there is some nuance to know about here, and Dan Wilson digs in.

Little things to know:

  • Independent transforms happen first. So, if you also use transform, it can override them when the same transform type is used.
  • They all share the same transform-origin.
  • The offset-* properties also effectively move/rotate elements. Those happen after the independent transforms and before transform (see the sketch below).
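
A small, hypothetical example to make that ordering concrete: the independent properties are applied first and transform after them, all sharing the same transform-origin.

.el {
  transform-origin: center; /* shared by the independent transforms and transform */
  rotate: 10deg;            /* independent transforms are applied first... */
  translate: 10px 10px;
  transform: scale(0.95);   /* ...and transform is applied after them */
}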

