Month: January 2020

Lightning-Fast Web Performance


Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you’re building a really simple website, you can get away with literally writing raw HTML. It doesn’t take long to need a bit more abstraction than that. Even if you’re building a three-page site, that’s three HTML files, and your programmer’s mind will be looking for ways to not repeat yourself. You’ll probably find a way to “include” all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I’m feeling much more jamstacky and I’d probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too) is that they “hydrate” from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency’s sake.

Here’s Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil’s idea is that a service worker (a native browser technology) can save a cache of a site’s header and footer. Then server requests only need to be made for the “guts” while the full HTML document can be created on the fly.
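To make the idea concrete, here’s a minimal sketch of the pattern. This is my own illustration, not Phil’s actual code: the shell strings and the `index-content.html` partial convention are assumptions. A pure helper stitches a content partial into the cached shell, and a fetch handler serves the assembled document for navigations.

```javascript
// Illustrative sketch only — not Phil Walton's implementation. The shell
// strings and the "index-content.html" partial URL are assumptions.
const HEADER = "<html><head><title>Site</title></head><body><header>Nav</header>";
const FOOTER = "<footer>Footer</footer></body></html>";

// Pure helper: wrap a content partial in the cached shell.
function buildDocument(contentPartial) {
  return HEADER + contentPartial + FOOTER;
}

// Service worker wiring (guarded so the sketch is inert outside a worker):
if (typeof self !== "undefined" && typeof self.addEventListener === "function") {
  self.addEventListener("fetch", (event) => {
    if (event.request.mode !== "navigate") return;
    const url = new URL(event.request.url);
    event.respondWith(
      // Ask the server for just the guts of the page...
      fetch(url.pathname + "index-content.html")
        .then((res) => res.text())
        // ...and assemble the full HTML document on the fly.
        .then((partial) => new Response(buildDocument(partial), {
          headers: { "Content-Type": "text/html" },
        }))
    );
  });
}
```

On the first visit the worker would cache the shell; later navigations then cost only the partial, which is where the payload savings come from.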

It’s a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil’s site:

…on this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I’d think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.

Direct Link to ArticlePermalink

The post Smaller HTML Payloads with Service Workers appeared first on CSS-Tricks.



Innovation Can’t Keep the Web Fast

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. This was both necessary and overdue, as HTTP/1 had turned web performance into an arcane sort of discipline, full of strange workarounds for its limitations. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% median increase of bytes transferred over mobile the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. has increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, in that same period of time.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon which threatens the collective user experience. The story that HTTP Archive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various improvements in browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of the impact of these improvements at this point is that it has helped to stem the tide of our excesses, but precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third party vendors which fuel the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counter-intuitively, owners of mature codebases which embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledge the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment in part because technical solutions are comparably inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it. Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis.
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space which organizations have already spent a lot of time in, but the challenging work is in establishing feedback loops to assess how well they understand and have applied that knowledge.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed.

Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.

The post Innovation Can’t Keep the Web Fast appeared first on CSS-Tricks.



“resize: none;” on textareas is bad UX

Catalin Rosu:

Sometimes you need to type a long reply that consists of many paragraphs and wrapping that text within a tiny textarea box makes it hard to understand and to follow as you type. There were many times when I had to write that text within Notepad++ for example and then just paste the whole reply in that small textarea. I admit I also opened the DevTools to override the resize: none declaration but that’s not really a productive way to do things.

Removing the default resizeability of a <textarea> is generally user-hurting vanity. Even if the resized textarea “breaks” the site layout, too bad, the user is trying to do something very important on this site right now and you should not let anything get in the way of that. I know the web is a big place though, so feel free to prove me wrong in the comments.

This must have been cathartic for Catalin, who has been steadily gaining a reputation on Stack Overflow for an answer on how to prevent a textarea from resizing from almost a decade ago.

Direct Link to ArticlePermalink

The post “resize: none;” on textareas is bad UX appeared first on CSS-Tricks.



Sticky Table of Contents with Scrolling Active States

Say you have a two-column layout: a main column with content. Say it has a lot of content, with sections that require scrolling. And let’s toss in a sidebar column that is largely empty, such that you can safely put a position: sticky; table of contents over there for all that content in the main column. A fairly common pattern for documentation.

Bramus Van Damme has a nice tutorial on all this, starting from semantic markup, implementing most of the functionality with HTML and CSS, and then doing the last bit of active nav enhancement with JavaScript.

For example, if you don’t click yourself down to a section (where you might be able to get away with :target styling for active navigation), JavaScript is necessary to tell where you are scrolled to and highlight the active navigation. That active bit is handled nicely with IntersectionObserver, which is, like, the perfect API for this.

Here’s that result:

It reminds me of a very similar demo from Hakim El Hattab that he calls Progress Nav. The design pattern is exactly the same, but Hakim’s version has this ultra fancy SVG path that draws itself along the way, indenting for sub nav. I’ll embed a video here:

That one doesn’t use IntersectionObserver, so if you want to hack on this, combine ’em!

The post Sticky Table of Contents with Scrolling Active States appeared first on CSS-Tricks.



Free Website Builder + Free CRM + Free Live Chat = Bitrix24

(This is a sponsored post.)

You may know Bitrix24 as the world’s most popular free CRM and sales management system, used by over 6 million businesses. But the free website builder available inside Bitrix24 is worthy of your attention, too.

Why do I need another free website/landing page builder?

There are many ways to create free websites — Wix, Squarespace, WordPress, etc. And if you need a blog — Medium, Tumblr and others are at your disposal. Bitrix24 is geared toward businesses that need websites to generate leads, sell online, issue invoices or accept payments. And there’s a world of difference between regular website builders and the ones that are designed with specific business needs in mind.

What does a good business website builder do? First, it creates websites that engage visitors so that they start interacting. This is done with the help of tools like website live chat, a contact form, or a callback request widget. Second, it comes with a landing page designer, because business websites are all about conversion rates, and increasing conversion rates requires endless tweaking and repeated testing. Third, integration between a website and a CRM system is crucial. It’s difficult to attract traffic to websites and advertising is expensive. So, it makes sense that every prospect from the website is logged into the CRM automatically and that you sell your goods and services to clients not only once but on a regular basis. This is why Bitrix24 comes with email and SMS marketing and an advertising ROI calculator.

Another critical requirement for many business websites is the ability to accept payments online and function as an ecommerce store, with order processing and inventory management. Bitrix24 does that too. Importantly, unlike other ecommerce platforms, Bitrix24 doesn’t charge any transaction fees or come with sales volume limits.

What else does Bitrix24 offer free of charge?

The only practical limit of the free plan is 12 users inside the account. You can use your own domain free of charge, the bandwidth is free and unlimited and there’s only a technical limit on the number of free pages allowed (around 100) in order to prevent misusing Bitrix24 for SEO-spam pages. In addition to offering free cloud service, Bitrix24 has on-premise editions with open source code access that can be purchased. This means that you can migrate your cloud Bitrix24 account to your own server at any moment, if necessary.

To register your free Bitrix24 account, simply click here. And if you have a public Facebook or Twitter profile and share this post, you’ll be automatically entered into a contest, in which the winner gets a 24-month subscription to the Bitrix24 Professional plan ($3,336 value).

Direct Link to ArticlePermalink

The post Free Website Builder + Free CRM + Free Live Chat = Bitrix24 appeared first on CSS-Tricks.



Understanding Immutability in JavaScript

If you haven’t worked with immutability in JavaScript before, you might find it easy to confuse it with assigning a variable to a new value, or reassignment. While it’s possible to reassign variables and values declared using let or var, you’ll begin to run into issues when you try that with const.

Say we assign the value Kingsley to a variable called firstName:

let firstName = "Kingsley";

We can reassign a new value to the same variable,

firstName = "John";

This is possible because we used let. If we happen to use const instead like this:

const lastName = "Silas";

…we will get an error when we try to assign it to a new value;

lastName = "Doe" // TypeError: Assignment to constant variable.

That is not immutability.

An important concept you’ll hear working with a framework, like React, is that mutating states is a bad idea. The same applies to props. Yet, it is important to know that immutability is not a React concept. React happens to make use of the idea of immutability when working with things like state and props.

What the heck does that mean? That’s where we’re going to pick things up.

Mutability is about sticking to the facts

Immutable data cannot change its structure or the data in it. It’s setting a value on a variable that cannot change, making that value a fact, or sort of like a source of truth — the same way a princess kisses a frog hoping it will turn into a handsome prince. Immutability says that frog will always be a frog.


Objects and arrays, on the other hand, allow mutation, meaning the data structure can be changed. Kissing either of those frogs may indeed result in the transformation of a prince if we tell it to.
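As a quick aside (not something the rest of this post relies on), JavaScript does ship a built-in way to lock an object down: Object.freeze(). A sketch:

```javascript
// Object.freeze() makes an object's own properties read-only.
const frog = Object.freeze({ species: "frog" });

try {
  frog.species = "prince"; // throws in strict mode, silently ignored otherwise
} catch (e) {
  // TypeError: Cannot assign to read only property 'species'
}

console.log(frog.species); // "frog" — the kiss didn't take
```

Note that freezing is shallow: nested objects inside a frozen object can still be mutated unless they are frozen too.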


Say we have a user object like this:

let user = { name: "James Doe", location: "Lagos" }

Next, let’s attempt to create a newUser object using those properties:

let newUser = user

Now let’s imagine the first user changes location. It will directly mutate the user object and affect the newUser as well:

user.location = "Abia"
console.log(newUser.location) // "Abia"

This might not be what we want. You can see how this sort of reassignment could cause unintended consequences.

Working with immutable objects

We want to make sure that our object isn’t mutated. If we’re going to make use of a method, it has to return a new object. In essence, we need something called a pure function.

A pure function has two properties that make it unique:

  1. The value it returns is dependent on the input passed. The returned value will not change as long as the inputs do not change.
  2. It does not change things outside of its scope.
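A tiny contrast makes those two properties concrete (my own example):

```javascript
let counter = 0;

// Impure: reaches outside its own scope and changes something there.
function incrementImpure() {
  counter += 1;
  return counter;
}

// Pure: the return value depends only on the input, and nothing outside
// the function is touched. Same input, same output, every time.
function incrementPure(n) {
  return n + 1;
}

console.log(incrementPure(41)); // 42
console.log(incrementPure(41)); // 42, always
```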

By using Object.assign(), we can create a function that does not mutate the object passed to it. This will generate a new object instead by copying the second and third parameters into the empty object passed as the first parameter. Then the new object is returned.

const updateLocation = (data, newLocation) => {
  return Object.assign({}, data, {
    location: newLocation
  });
}

updateLocation() is a pure function. If we pass in the first user object, it returns a new user object with a new value for the location.

Another way to go is using the Spread operator:

const updateLocation = (data, newLocation) => {
  return {
    ...data,
    location: newLocation
  };
}

OK, so how does all of this fit into React? Let’s get into that next.

Immutability in React

In a typical React application, the state is an object. (Redux makes use of an immutable object as the basis of an application’s store.) React’s reconciliation process determines if a component should re-render or if it needs a way to keep track of the changes.

In other words, if React can’t figure out that the state of a component has changed, then it will not know to update the Virtual DOM.

Immutability, when enforced, makes it possible to keep track of those changes. This allows React to compare the old state of an object with its new state and re-render the component based on that difference.
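Here’s the difference in miniature (my own illustration of the shallow, reference-based check React relies on):

```javascript
const state = { username: "jamesdoe" };

// Mutating keeps the same object reference, so a shallow check sees no change:
const mutated = state;
mutated.username = "janedoe";
console.log(mutated === state); // true — "nothing changed", no re-render

// Copying into a new object changes the reference, so the change is visible:
const next = { ...state, username: "janedoe" };
console.log(next === state); // false — React knows to re-render
```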

This is why directly updating state in React is often discouraged:

this.state.username = "jamesdoe";

React will not be sure that the state has changed and is unable to re-render the component.


Redux adheres to the principles of immutability. Its reducers are meant to be pure functions and, as such, they should not mutate the current state but return a new object based on the current state and action. We’d typically make use of the spread operator like we did earlier, yet it is possible to achieve the same using a library called Immutable.js.

While plain JavaScript can handle immutability, it’s possible to run into a handful of pitfalls along the way. Using Immutable.js guarantees immutability while providing a rich API that is big on performance. We won’t be going into all of the fine details of Immutable.js in this piece, but we will look at a quick example that demonstrates using it in a to-do application powered by React and Redux.

First, let’s start by importing the modules we need and set up the Todo component while we’re at it.

const { List, Map } = Immutable;
const { Provider, connect } = ReactRedux;
const { createStore } = Redux;

If you are following along on your local machine, you’ll need to have these packages installed:

npm install redux react-redux immutable 

The import statements will look like this.

import { List, Map } from "immutable";
import { Provider, connect } from "react-redux";
import { createStore } from "redux";

We can then go on to set up our Todo component with some markup:

const Todo = ({ todos, handleNewTodo }) => {
  const handleSubmit = event => {
    const text =;
    if (event.which === 13 && text.length > 0) {
      handleNewTodo(text); = "";
    }
  };

  return (
    <section className="section">
      <div className="box field">
        <label className="label">Todo</label>
        <div className="control">
          <input
            type="text"
            className="input"
            placeholder="Add todo"
            onKeyDown={handleSubmit}
          />
        </div>
      </div>
      <ul>
        { => (
          <div key={item.get("id")} className="box">
            {item.get("text")}
          </div>
        ))}
      </ul>
    </section>
  );
};

We’re using the handleSubmit() method to create new to-do items. For the purpose of this example, the user will only be creating new to-do items, and we only need one action for that:

const actions = {
  handleNewTodo(text) {
    return {
      type: "ADD_TODO",
      payload: {
        id: uuid.v4(),
        text
      }
    };
  }
};

The payload we’re creating contains the ID and the text of the to-do item. We can then go on to set up our reducer function and pass the action we created above to the reducer function:

const reducer = function(state = List(), action) {
  switch (action.type) {
    case "ADD_TODO":
      return state.push(Map(action.payload));
    default:
      return state;
  }
};

We’re going to make use of connect to create a container component so that we can plug into the store. Then we’ll need to pass in mapStateToProps() and mapDispatchToProps() functions to connect.

const mapStateToProps = state => {
  return {
    todos: state
  };
};

const mapDispatchToProps = dispatch => {
  return {
    handleNewTodo: text => dispatch(actions.handleNewTodo(text))
  };
};

const store = createStore(reducer);

const App = connect(
  mapStateToProps,
  mapDispatchToProps
)(Todo);

const rootElement = document.getElementById("root");

  <Provider store={store}>
    <App />
  </Provider>,
  rootElement
);

We’re making use of mapStateToProps() to supply the component with the store’s data. Then we’re using mapDispatchToProps() to make the action creators available as props to the component by binding the action to it.

In the reducer function, we make use of List from Immutable.js to create the initial state of the app.

const reducer = function(state = List(), action) {
  switch (action.type) {
    case "ADD_TODO":
      return state.push(Map(action.payload));
    default:
      return state;
  }
};

Think of List as the Immutable.js version of a JavaScript array, which is why we can call the .push() method on state. Likewise, the value used to update state is wrapped in a Map, which can be thought of as the Immutable.js version of an object. This way, there’s no need to use Object.assign() or the spread operator, because the current state is guaranteed not to change. It looks a lot cleaner, too, especially if it turns out that the state is deeply nested — we do not need spread operators sprinkled all over.
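If the Immutable.js API feels opaque, here’s what that push is doing in spirit, as plain JavaScript (a simplified analogue of my own; the real List also uses structural sharing so it doesn’t copy the whole array each time):

```javascript
// Plain-JavaScript analogue of Immutable.js List.push: return a new array
// containing the old items plus the new one; never touch the original.
function push(list, item) {
  return [...list, item];
}

const todos = [];
const next = push(todos, { id: 1, text: "Learn immutability" });

console.log(todos.length);   // 0 — the original state is untouched
console.log(next.length);    // 1
console.log(next === todos); // false — a brand new reference
```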

Immutable states make it possible for code to quickly determine if a change has occurred. We do not need to do a recursive comparison on the data to determine if a change happened. That said, it’s important to mention that you might run into performance issues when working with large data structures — there’s a price that comes with copying large data objects.

But data needs to change because there’s otherwise no need for dynamic sites or applications. The important thing is how the data is changed. Immutability provides the right way to change the data (or state) of an application. This makes it possible to trace the state’s changes and determine what the parts of the application should re-render as a result of that change.

Learning about immutability for the first time can be confusing. But you’ll become better as you bump into the errors that pop up when state is mutated. That’s often the clearest way to understand the need and benefits of immutability.

Further reading

The post Understanding Immutability in JavaScript appeared first on CSS-Tricks.



Uses This

A little interview with me over on Uses This. I’ll skip the intro since you know who I am, but I’ll republish the rest here.

What hardware do you use?

I’m a fairly cliché Mac guy. After my first Commodore 64 (and then 128), the only computers I’ve ever had have been from Apple. I’m a longtime loyalist in that way and I don’t regret a second of it. I use the 2018 MacBook Pro tricked out as much as they would sell it to me. It’s the main tool for my job, so I’ve always made sure I had the best equipment I can. A heaping helping of luck and privilege have baked themselves into moderate success for me such that I can afford that.

At the office, I plug it into two of those LG UltraFine 4k monitors, a Microsoft Ergonomic Keyboard, and a Logitech MX Master mouse. I plug in some Audioengine A2s for speakers. Between all those extras, the desk is more cluttered in wires than I would like and I look forward to an actually wireless future.

I’m only at the office say 60% of the time and aside from that just use the MacBook Pro as it is. I’m probably a more efficient coder at the office, but my work is a lot of email and editing and social media and planning and such that is equally efficient away from the fancy office setup.

And what software?

  • Notion for tons of stuff. Project planning. Meeting notes. Documentation. Public documents.
  • Things for personal TODO lists.
  • BusyCal for calendaring.
  • 1Password for passwords, credit cards, and other secure documents and notes.
  • Slack for team and community chat.
  • WhatsApp for family chat.
  • Zoom for business face-to-face chat and group podcasting.
  • Audio Hijack for locally recording podcasts.
  • FaceTime for family face to face chat.
  • ScreenFlow for big long-form screen recordings.
  • Kap for small short-form screen recordings.
  • CleanMyMac for tidying up.
  • Local for local WordPress development.
  • VS Code for writing code.
  • TablePlus for dealing with databases.
  • Tower for Git.
  • iTerm for command line work.
  • Figma for design.
  • Mailplane to have a tabbed in-dock closable Gmail app.
  • Bear for notes and Markdown writing.

What would be your dream setup?

I’d happily upgrade to a tricked out 16″ MacBook Pro. If I’m just throwing money at things I’d also happily take Apple’s Pro Display XDR, but the price on those is a little frightening. I already have it pretty good, so I don’t do a ton of dreaming about what could be better.

The post Uses This appeared first on CSS-Tricks.



Resizing Values in Steps in CSS

There actually is a steps() function in CSS, but it’s only used for animation. You can’t, for example, tell an element it’s allowed to grow in height but only in steps of 10px. Maybe someday? I dunno. There would have to be some pretty clear use cases that something like background-repeat: space || round; doesn’t already handle.

Another way to handle steps would be sequential media queries.

@media (max-width: 1500px) { body { font-size: 30px; } }
@media (max-width: 1400px) { body { font-size: 28px; } }
@media (max-width: 1300px) { body { font-size: 26px; } }
@media (max-width: 1200px) { body { font-size: 24px; } }
@media (max-width: 1000px) { body { font-size: 22px; } }
@media (max-width: 900px) { body { font-size: 20px; } }
@media (max-width: 800px) { body { font-size: 18px; } }
@media (max-width: 700px) { body { font-size: 16px; } }
@media (max-width: 600px) { body { font-size: 14px; } }
@media (max-width: 500px) { body { font-size: 12px; } }
@media (max-width: 400px) { body { font-size: 10px; } }
@media (max-width: 300px) { body { font-size: 8px; } }

That’s just weird, and you’d probably want to use fluid typography, but the point here is resizing in steps and not just fluidity.
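If you did want a ladder like that, you could at least generate it instead of typing it out (a sketch of my own, using regular 100px/2px steps rather than the slightly irregular list above):

```javascript
// Generate a ladder of max-width media queries that step font-size down.
// Sketch only; the breakpoints and sizes are parameters, not a recommendation.
function steppedFontSizeCSS(maxWidth, minWidth, widthStep, maxSize, sizeStep) {
  const rules = [];
  let size = maxSize;
  for (let w = maxWidth; w >= minWidth; w -= widthStep) {
    rules.push(`@media (max-width: ${w}px) { body { font-size: ${size}px; } }`);
    size -= sizeStep;
  }
  return rules.join("\n");
}

const css = steppedFontSizeCSS(1500, 300, 100, 30, 2);
console.log(css.split("\n")[0]);
// @media (max-width: 1500px) { body { font-size: 30px; } }
```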

I came across another way to handle steps in a StackOverflow answer from John Henkel a while ago. (I was informed Star Simpson also called it out.) It’s a ridiculous hack and you should never use it. But it’s a CSS trick so I’m contractually obliged to share it.

The calc function uses double precision float. Therefore it exhibits a step function near 1e18… This will snap to values 0px, 1024px, 2048px, etc.

calc(6e18px + 100vw - 6e18px);

That’s pretty wacky. It’s a weird “implementation detail” that hasn’t been specced, so you’ll only see it in Chrome and Safari.
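You can verify the stepping in plain JavaScript, since it falls out of IEEE-754 doubles rather than anything CSS-specific: near 6e18, adjacent doubles are 1024 apart, so any addition gets rounded to the nearest multiple of 1024.

```javascript
// Doubles between 2^62 and 2^63 are spaced 1024 apart (the "ulp"), and
// 6e18 happens to be exactly representable, so (6e18 + x) - 6e18 snaps x
// to the nearest multiple of 1024 — the same quantization the CSS hack uses.
const step = (x) => (6e18 + x) - 6e18;

console.log(step(100));  // 0    — rounded down to 6e18
console.log(step(1000)); // 1024 — rounded up to the next representable double
console.log(step(2000)); // 2048
```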

You can fiddle with that calculation and apply the value to whatever you want. Here’s me tuning it down quite a bit and applying it to font-size instead.

Try resizing that to see the stepping behavior (in Chrome or Safari).

The post Resizing Values in Steps in CSS appeared first on CSS-Tricks.



How Do You Do max-font-size in CSS?

CSS doesn’t have max-font-size, so if we need something that does something along those lines, we have to get tricky.

Why would you need it at all? Well, font-size itself can be set in dynamic ways. For example, font-size: 10vw;. That’s using “viewport units” to size the type, which will get larger and smaller with the size of the browser window. If we had max-font-size, we could limit how big it gets (similarly the other direction with min-font-size).

One solution is to use a media query at a certain screen size breakpoint that sets the font size in a non-relative unit.

body {
  font-size: 3vw;
}

@media screen and (min-width: 1600px) {
  body {
    font-size: 30px;
  }
}

There is a concept dubbed CSS locks that gets fancier here, slowly scaling a value between a minimum and maximum. We’ve covered that. It can be like…

body {
  font-size: 16px;
}

@media screen and (min-width: 320px) {
  body {
    font-size: calc(16px + 6 * ((100vw - 320px) / 680));
  }
}

@media screen and (min-width: 1000px) {
  body {
    font-size: 22px;
  }
}
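The middle calc() there is just linear interpolation: 16px at a 320px viewport, 22px at 1000px, with the 6 and 680 being the size range (22 − 16) and the viewport range (1000 − 320). The same math in JavaScript, if that helps it click:

```javascript
// Reproduce the CSS lock above: hold at 16px below a 320px viewport, at 22px
// above 1000px, and interpolate linearly in between.
function lockedFontSize(viewportWidth) {
  if (viewportWidth <= 320) return 16;
  if (viewportWidth >= 1000) return 22;
  return 16 + 6 * ((viewportWidth - 320) / 680);
}

console.log(lockedFontSize(320));  // 16
console.log(lockedFontSize(660));  // 19 — halfway through the range
console.log(lockedFontSize(1000)); // 22
```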

We’ve also covered how it’s gotten (or will get) a lot simpler.

There are min() and max() functions in CSS, so the cap in our example above becomes a one-liner. Note that it’s min() that does the capping, since it takes whichever of the two values is smaller:

font-size: min(3vw, 30px);

Or double it up with a min and max:

font-size: min(max(16px, 4vw), 22px);

Which is identical to:

font-size: clamp(16px, 4vw, 22px);

Browser compatibility for these functions is pretty sparse as I’m writing this, but Chrome currently has it. It will get there, but look at the first option in this article if you need it right now.

Now that we have these functions, it seems unlikely to me we’ll ever get min-font-size and max-font-size in CSS, since the functions are almost more clear as-is.

The post How Do You Do max-font-size in CSS? appeared first on CSS-Tricks.