CSS evolved and we’re beyond the point where everyone can just do it as a side interest. We all can learn it and build amazing stuff with it, but using it wisely and correctly in a large-scale context isn’t an easy job anymore. It deserves people whose work is to focus on that part of the code.
Anselm is partly responding to Sacha Greif’s “Is There Too Much CSS Now?” and to the overall sentiment that CSS has a much higher barrier to entry for those learning it today than it did, say, in the CSS3 days. Back then, there was a super direct path to see the magic of CSS. Rachel Andrew perfectly captures that magic feeling in a prescient post from 2019:
There is something remarkable about the fact that, with everything we have created in the past 20 years or so, I can still take a complete beginner and teach them to build a simple webpage with HTML and CSS, in a day. […] We just need a text editor and a few hours. This is how we make things show up on a webpage.
Rachel is speaking to the abstraction of frameworks on top of vanilla CSS (and HTML) but you might as well tack big, shiny, and fairly new features on there, like CSS grid, flexbox, container queries, cascade layers, custom properties, and relational pseudo-classes, to name a few. Not that those are abstractions, of course. There’s just a lot to learn right now, whether you’ve been writing CSS for 20 days or 20 years.
But back to Anselm’s post. Do we need to think about CSS as more than just, you know, styling things? I often joke that my job is slapping paint on websites to make them pretty. But, honestly, I know it’s a lot more than that. We all know it’s more than that.
Maybe CSS is an industry in itself. Think of all the possible considerations that have to pass through your brain when writing CSS rules. Heck, Ahmad Shadeed recently shared all the things his brain processes just to style a Hero component. CSS touches so much of the overall user experience — responsiveness, accessibility, performance, cross-browser, etc. — that it clearly goes well beyond “slapping paint on websites”. So far beyond that each of those things could be someone’s full-time gig, depending on the project.
So, yes, CSS has reached a point where I could imagine seeing “CSS Engineer” on some job board. As Anselm said, “[CSS] deserves people whose work is to focus on that part of the code.” Seen that way, it’s not so hard to imagine front-end development as a whole evolving into areas of specialization, just like many other industries.
Spam is a huge problem nowadays. If you want to share your contact information without getting overwhelmed by spam email, you need a solution. I ran into this problem a few months ago. While researching how to solve it, I found several interesting solutions. Only one of them was perfect for my needs.
In this article, I am going to show you how to easily protect your email address from spam bots with multiple solutions. It’s up to you to decide what technique fits your needs.
Let’s say that you have a website. You want to share your contact details, and you don’t want to share only your social links. The email address must be there. Easy, right? You type something like this:
<a href="mailto:email@address.com">Send me an Email</a>
And then you style it according to your tastes.
Well, even if this solution works, it has a problem. It makes your email address available to literally everyone, including website crawlers and all sorts of spam bots. This means that your inbox can be flooded with tons of unwanted rubbish like promotional offers or even some phishing campaign.
We are looking for a compromise. We want to make it hard for bots to get our email addresses, but as simple as possible for normal users.
The solution is obfuscation.
Obfuscation is the practice of making something difficult to understand. This strategy is used with source code for multiple reasons. One of them is hiding the purpose of the source code to make tampering or reverse-engineering more difficult. We will first look at different solutions that are all based on the idea of obfuscation.
The HTML approach
We can think of bots as software that browse the web and crawl through web pages. Once a bot obtains an HTML document, it interprets the content in it and extracts information. This extraction process is called web scraping. If a bot is looking for a pattern that matches the email format, we can try to disguise it by using a different format. For example, we could use HTML comments:
<p>If you want to get in touch, please drop me an email at<!-- fhetydagzzzgjds --> email@<!-- sdfjsdhfkjypcs -->addr<!-- asjoxp -->ess.com</p>
It looks messy, but the user will see the email address like this:
If you want to get in touch, please drop me an email at email@address.com
Pros:
Easy to set up.
It works with JavaScript disabled.
It can be read by assistive technology.
Cons:
Spam bots can skip known sequences like comments.
It doesn’t work with a mailto: link.
The HTML & CSS approach
What if we use the styling power of CSS to remove some content placed only to fool spam bots? Let’s say that we have the same content as before, but this time we place a span element inside:
<p>If you want to get in touch, please drop me an email at <span class="blockspam" aria-hidden="true">PLEASE GO AWAY!</span> email@<!-- sdfjsdhfkjypcs -->address.com.</p>
Then, we use the following CSS style rule:
span.blockspam { display: none; }
The final user will only see this:
If you want to get in touch, please drop me an email at email@address.com.
…which is the content we truly care about.
Pros:
It works with JavaScript disabled.
It’s more difficult for bots to get the email address.
It can be read by assistive technology.
Con:
It doesn’t work with a mailto: link.
The JavaScript approach
In this example, we use JavaScript to make our email address unreadable. Then, when the page is loaded, JavaScript makes the email address readable again. This way, our users can get the email address.
The easiest solution uses the Base64 encoding algorithm to obfuscate the email address. First, we encode the email address in Base64, either in the browser console with btoa("email@address.com") or with an online tool like Base64Encode.org. Then, with a few lines of JavaScript, we decode the email address and set the href attribute on the HTML link:
const encEmail = "ZW1haWxAYWRkcmVzcy5jb20=";
const link = document.getElementById("contact");
link.setAttribute("href", "mailto:".concat(atob(encEmail)));
Then we have to make sure the email link includes id="contact" in the markup, like this:
<a id="contact" href="">Send me an Email</a>
We are using the atob method to decode a string of Base64-encoded data. An alternative is to use some basic encryption algorithm like the Caesar cipher, which is fairly straightforward to implement in JavaScript.
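To make the Caesar cipher idea concrete, here is one possible sketch. The shift value and function names are my own illustration, not a standard; shifting character codes like this is only meant to keep the plain address out of scraped HTML, not to be cryptographically secure.

```javascript
// A minimal Caesar-style obfuscator for an email address.
// SHIFT is arbitrary; anything that round-trips works for this purpose.
const SHIFT = 7;

// Shift every character code forward to obfuscate the address.
function encode(str, shift = SHIFT) {
  return [...str]
    .map((ch) => String.fromCharCode(ch.charCodeAt(0) + shift))
    .join("");
}

// Shift back to recover the original address at runtime.
function decode(str, shift = SHIFT) {
  return [...str]
    .map((ch) => String.fromCharCode(ch.charCodeAt(0) - shift))
    .join("");
}

const obfuscated = encode("email@address.com");
console.log(decode(obfuscated)); // → "email@address.com"
```

You would store only the obfuscated string in your HTML or JavaScript, then call decode() when setting the mailto: link, just like in the Base64 example above.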
Pros:
It’s more complicated for bots to get the email address, especially if you use an encryption algorithm.
It works with a mailto: link.
It can be read by assistive technology.
Con:
JavaScript must be enabled in the browser; otherwise, the link will be empty.
The embedded form approach
Contact forms are everywhere. You certainly have used one of them at least once. If you want a way for people to directly contact you, one of the possible solutions is implementing a contact form service on your website.
Formspree is one example of a service that provides all the benefits of a contact form without you having to worry about server-side code. Wufoo is another. In fact, here is a whole bunch you can consider for handling contact form submissions for you.
The first step to using any form service is to sign up and create an account. Pricing varies, of course, as do the features offered between services. But one thing most of them do is provide you with an HTML snippet to embed a form you create into any website or app. Here’s an example I pulled straight from a form I created in my Formspree account:
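The exact snippet depends on the form you build, but a Formspree embed generally looks along these lines (the form ID in the action URL is a placeholder, and the hidden field follows Formspree’s _gotcha honeypot convention):

```html
<!-- The form ID in the action URL is a placeholder -->
<form action="https://formspree.io/f/your-form-id" method="POST">
  <label>
    Your email:
    <input type="email" name="email">
  </label>
  <label>
    Your message:
    <textarea name="message"></textarea>
  </label>
  <!-- Honeypot field: hidden from users, tempting to bots -->
  <input type="text" name="_gotcha" style="display:none">
  <button type="submit">Send</button>
</form>
```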
In the first line, you should customize the action attribute based on your endpoint. The form is quite basic, but you can add as many fields as you wish.
Notice the hidden input tag in the form. That input helps filter out submissions made by bots rather than regular users. In fact, if Formspree’s back-end sees a submission with that input filled in, it discards it. A regular user would never fill in a field they can’t see, so it must be a bot.
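On the server side, that honeypot check amounts to something like this sketch (the field name and function name are assumptions for illustration; Formspree handles this for you):

```javascript
// Sketch of a honeypot check: real users leave the hidden
// field empty, while naive bots fill in every input they find.
function isProbablyBot(submission) {
  return Boolean(submission._gotcha);
}

console.log(isProbablyBot({ email: "user@example.com", _gotcha: "" }));       // → false
console.log(isProbablyBot({ email: "bot@spam.com", _gotcha: "I am a bot" })); // → true
```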
Pros:
Your email address is safe since it is not public.
It works with JavaScript disabled.
Con:
Relies on a third-party service (which may be a pro, depending on your needs)
There is one other disadvantage to this solution but I left it out of the list since it’s quite subjective and it depends on your use case. With this solution, you are not sharing your email address. You are giving people a way to contact you. What if people want to email you? What if people are looking for your email address, and they don’t want a contact form? A contact form may be a heavy-handed solution in that sort of situation.
Conclusion
We reached the end! In this tutorial, we talked about different solutions to the problem of sharing an email address online. We walked through different ideas involving HTML, JavaScript, and even online services like Formspree for building contact forms. By now, you should be aware of the pros and cons of each strategy. It’s up to you to pick the most suitable one for your specific use case.
I wanted to write down what I think the reasons are here in December of 2021 so that we might revisit it from time to time in the future and see if these reasons are still relevant. I’m a web guy myself, so I’m interested in seeing how the web can evolve to mitigate these concerns.
I’m exclusively focusing on reasons a native app might either be a distinct advantage or at least feel like an advantage compared to a website. Nothing subjective here, like “it’s faster to develop on” or “native apps are more intuitive” or the like, which are too subjective to quantify. I’m also not getting into reasons where the web has the advantage. But in case you are unsure, there are many: it’s an open standardized platform, it will outlast closed systems, it strongly values backward compatibility, anybody can build for the web, it runs cross-platform, and heck, URLs alone are reason enough to go web-first. But that’s not to say native apps don’t have some extremely compelling things they offer, hence this post.
Because they get an icon on the home screen of the device.
It’s a mindshare thing. You pick up your phone, the icon is staring you in the face, begging to be opened. A widely cited report from a few years back suggests 90% of phone usage is in apps (as opposed to a mobile web browser), even though there is plenty of acknowledged gray area there. So, theoretically, you get to be part of that 90% if you make an app rather than being booted into the sad 10% zone.
If reach is the top concern, it seems like the best play is having both a web app and a native app. That way, you’re benefitting from the share-ability and search-ability that URLs give you, along with a strong presence on the device.
Looking at my own phone’s home screen, the vast majority of the apps are both native and web apps: Google Calendar, AccuWeather, Google Maps, Spotify, Notion, Front, Pocket Casts, Instagram, Discord, Twitter, GitHub, Slack, and Gmail. That’s a lot of big players doing it both ways.
Potential Solution: Both iOS and Android have “Add to Home Screen” options for websites. It’s just a fairly “buried” feature and it’s unlikely all that many people use it. Chrome on Android goes a step further, offering a Native App Install Prompt for apps for Progressive Web App (PWA) websites that meet a handful of extra criteria, like the site has been interacted with for at least 30 seconds. Native App Install Prompts are a big tool that levels this playing field, and it would be ideal to see Apple support PWAs better and offer this. There isn’t that much we can do as website authors; we wait and hope mobile operating systems make it better. There is also the whole world of software tools where what you build can be delivered both as a web app and native app, like Flutter.
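For reference, the PWA side of that install criteria starts with a web app manifest along these lines (all values here are illustrative):

```json
{
  "name": "Example App",
  "short_name": "Example",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0b5fff",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```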
Because they launch fast.
Native apps ship with a bunch of the resources required to run the app locally, meaning they don’t need to fetch them from the network when opened.
Potential Solution: This is only partially true, to begin with. When you download the app, you’re downloading the resources just like the web does. The web caches resources by default, and has more advanced caching available through Service Workers. PWAs are competitive here. Native apps aren’t automatically faster.
Because it’s harder for users to block ads and easier to collect data.
Ad and tracker blockers work in mobile web browsers, but not in native apps. If you’d like to block ads in native apps… too bad, I guess? If the point of the thing you are building is to show ads or track users, well, I guess you’ll do better with a native app, assuming you can get the same kind of traffic to it as you can to a website (which is questionable).
I’m hesitant to put “because native apps are more secure” as a reason on this list because of the fact that, as a user, you have so little control over how resources load that it doesn’t feel like an increased security risk to me. But the fact that native apps typically go through an approval process before they are available in an app store does offer some degree of protection that the web doesn’t have.
Potential Solution: Allow ad/tracker blocking apps to work with native apps.
Because users stay logged in.
This is a big one! Once you’re logged in to a native app, you tend to stay logged in until you specifically log out, or a special event happens like you change your password on another device. Web apps seem to lose login status far more often than one might like, and that adds to a subconscious feeling about the app. When I open my native Twitter app, it just opens and it’s ready to use. If I thought there was a 30% chance I’d have to log in, I’m sure I’d use it far less. (And hey, that might be a good thing, but a business sure won’t think so.)
There is also a somewhat awkward thing with web apps in that, even if you’re logged in on your mobile device’s primary browser, you won’t necessarily be logged in inside some other app’s in-app browser context — whereas native apps often intercept links and always open in the native app context.
Potential Solution: There isn’t any amazing solution here. It’s largely just trickery. For example, long cookie expiration dates (six months is about what you can get if you’re lucky, I hear). Or you can do a thing where you keep a JSON Web Token (JWT) in storage and do a rolling re-auth on it behind the scenes. There are some other solutions in this thread, many of which are a bit above my head. Making the log in experience easier is also a thing, like using oAuth or magic email links, but it’s just not the same as that “always logged in” feeling. Maybe smart browser people can think of something.
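The rolling re-auth trick boils down to checking how close the stored token is to expiring and quietly requesting a fresh one before it does. The decision part might be sketched like this (the one-hour threshold and function name are arbitrary assumptions):

```javascript
// Decide whether a JWT should be silently refreshed.
// exp is the standard JWT expiry claim, in seconds since the epoch.
function shouldRefresh(exp, nowSeconds, thresholdSeconds = 60 * 60) {
  return exp - nowSeconds < thresholdSeconds;
}

const now = 1_700_000_000;
console.log(shouldRefresh(now + 30 * 60, now));      // → true (30 minutes left)
console.log(shouldRefresh(now + 24 * 60 * 60, now)); // → false (a day left)
```

In practice, the app would run this check on a timer or before API calls and hit a refresh endpoint whenever it returns true, so the user never sees a login screen.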
Because the apps can have that “native feel.”
Typically meaning: fast and smooth, but also that they look the way other apps on that platform look and feel. For example, Apple offers SwiftUI which is specifically for building native apps using pre-built componentry that looks Apple-y. Replicating all that on the web is going to be hard. You have to work your ass off to make it as good as what you get “for free” with something like SwiftUI.
Potential Solution: Mobile platform creators could offer UI kits that bring the design language of that mobile platform to the web. That’s largely what Google has done with Material, although the web version of it isn’t ready to use yet and is just considered a “planned” release.
Because they aren’t sharing a virtual space with competitors a tap away.
There is a sentiment that a web browser is just the wild west as you aren’t in control of where your users can run off to. If they are in your native app, good, that’s where you want them. If they are in a web browser, you’re sharing turf with untold other apps rather than keeping them on your holy ground.
Potential Solution: Get over it. Trying to hide the fact that competitors exist isn’t a good business strategy. The strength of the web is being able to hop around on a shared, standardized, open platform.
Because they get the full feature set of APIs.
All the web can hope for is the same API access as native apps have. For example, access to a camera, photos, GPS, push notifications, contacts, etc. Native apps get the APIs first, then if we’re lucky, years later, we get some kind of access on the web.
Developers might literally be forced into a native app if a crucial API is missing on the web.
One big example is push notifications. Are they generally annoying? Yes, but it’s a heavily used API and often a business requirement. And it makes plenty of sense for lots of apps (“Your turn is coming up in 500 feet,” “Your driver is here,” “Your baggage will be arriving on carousel 9,” etc.). If it’s crucial your app has good push notifications, the web isn’t an option for your app on iOS.
Potential Solution: The web should have feature parity with device APIs and new APIs should launch for both simultaneously.
Because there is an app store.
This is about discoverability. Being the one-and-only app store on a platform means you potentially get eyeballs on your app for free. If you make the best Skee-Ball game for a platform, there is a decent chance that you get good reviews and end up a top search for “Skee-Ball” in that app store, gaining you the majority share of whatever that niche market is. That’s an appealing prospect for a developer. The sea is much larger on the web, not to mention that SEO is a harder game to play and both advertising and marketing are expensive. A developer might pick a native app just because you can be a bigger fish right out of the gate than you can on the web.
And yet, if you build an app for listening to music, you’ll never beat out the major players, especially when the makers of the platform have their own apps to compete with. The web just might offer better opportunities for apps in highly competitive markets because of the wider potential audience.
Potential Solution: Allow web apps into app stores.
Because offline support is more straightforward.
The only offline capability on the web at all is via Service Workers. They are really cool, but I might argue that they aren’t particularly easy to implement, especially if the plan is using them to offer truly offline experiences for a web app that otherwise heavily relies on the network.
Native apps are less reliant on the network for everything. The core assets that make the app work are already available locally. So if a native app doesn’t need the network (say, a game), it works offline just fine. Even if it does need the network to be at its best, having your last-saved data available seems like a typical approach that many apps take.
Potential Solution: Make building offline web apps easier.
I’m a web guy and I feel like building for the web is the smart play for the vast majority of “app” situations. But I gotta admit the list of reasons for a business to go for a native app is thick enough right now that it’s no surprise that many of them do. The most successful seem to do both, despite the extreme cost of maintaining both. Like responsive design was successful in preventing us from having to build multiple versions of a website, I hope this complex situation moves in the direction of having websites be the obvious and only choice for any type of app.
When’s the last time you read your website? Like out loud in the lobby of a Starbucks on a weekday afternoon, over the phone to your parents, or perhaps even as a bedtime story for your kids.
No worries, this isn’t a trick question or anything—just a gut check.
If there’s only one thing you can do to make your website better, then you could do a heckuva lot worse than taking some time to read it. Seriously, do more than look at the words—read them and take in everything that’s being said from the top to the very bottom. And really get in there. I’m talking about opening up everything in the navigation, expanding accordions, opening modals, and taking it all in. Read it the way Wendy’s makes their burgers: no cut corners or nothing.
I’ve heard for years that content is capital “K” King of the capital “W” Web. In my personal experience, though, I routinely see content treated more as a pauper in projects; the lowest rung of the web ladder saved as an afterthought for when everything else is done. FPO all over. It’s not so much that no one cares about what the website says, but that a great deal of attention and effort is placed on design and architecture (among other things, of course), to the extent that there’s little-to-no time to drive an effective strategy for content.
But again, that’s just my experience, and there have certainly been exceptions to that rule.
The problem, I think, is that we know just how effective good content is but often lack the confidence, tooling, and even a clear understanding of content’s role on the web.
Content is problem solving
When crafted with care, content becomes much more than strings of text. That’s because content is more than what we see on the front end. It is in the alt attributes we write. It’s also the structured data in the document <head>. Heck, we sometimes even generate it in the CSS content property.
We start to see the power of content when we open up our understanding of what it is, what it does, and where it’s used. That might make content one of the most extensible problem-solving tools in your metaphorical shed—it makes sites more accessible, extracts Google-juicing superpowers, converts sales, and creates pathways for users to accomplish what they need to do.
Content wants to be seen
Content is like my five-year-old daughter: it hates being in the dark. I’d argue there’s little else that’s more defeating to content than hiding it. Remember our metaphorical tool shed? Content is unable to solve problems when it’s out of sight. Content is that cool dorm room poster you couldn’t wait to hang up the moment you stepped foot on campus — show it off, let it do its thing.
The way I see it, if something is important enough to type, then it’s probably important enough to show it too. That means avoiding anything that might obstruct it from a user’s view, like against a background with poor contrast or content that overflows a container. Content is only as good as it is presented. Your website could sport the greatest word-smithing of all time, but it’s no good if it’s hidden. Content wants to be seen.
Are there times when it’s legitimately fine for content to be unseen? You betcha. Skip to content links, for one. I’m not one of those armchair designers (wait, aren’t all designers in some sort of chair?) who is going to tell you certain UI elements — like modals, accordions, and carousels — are evil. It’s more about knowing the best way to present content, and sometimes containing it in a collapsed <details>/<summary> element is the best call.
But please, for the sake of all virtual humanity, use elements to enhance content, not mask its issues.
Content is efficient
They say a picture is worth a thousand words, right? I mean, that’s cool, I guess. But what exactly are those words and who decides what they are? The end-user, of course! Images are mostly subjective like that, which is what makes them a great complement for content. But images as complete content replacement? I imagine there are way more times where pairing content with imagery is more effective than an image alone.
It depends, of course.
Something we can all agree on is that content is way more efficient than most images. For one, there’s less room for ambiguity, making content more efficient to get a point across. Who needs the representation of a thousand words when we can communicate the same thing just as effectively in a few typed words?
Then there are plenty of accessibility considerations to take into account with images. A background image is unable to convey an idea to a screen reader; that is, unless we inline it, set the alt attribute, and use crafty CSS positioning to create some sort of faux background (faukground?). Even then, we’re already dealing with the alt attribute, so we may as well put that content to real use and display it! That’s way more efficient than coding around it.
We haven’t even gotten into how much larger the kilobytes and megabytes of an image file are compared to content bytes, even in an article that’s chock-full of content like this one. And, sure, this warrants a big ol’ caveat in the form of web fonts and the hit those can have on performance, but such is life.
Content and design work better together
Just because I mentioned content being king earlier doesn’t mean I believe it rules everything else. In fact, I find that content relies on design just as much as design relies on content. There is no cage match pitting one against the other in a fight to the death.
The two work hand-in-hand. I’d even go so far as to say that a lot of design is about enhancing what is communicated on a page. There is no upstaging one or the other. Think of content and design as supporting one another, where the sum of both creates a compelling call-to-action, long-form post, hero banner, and so on. We often think of patterns in a design system as a collection of components that are stitched together to create something new. Pairing content and design works much the same way.
So, why are you reading this?
Go read your site… like now. Belt it out loud. Bring it into the light of day and let it be seen and heard. That’s what it’s there for. It’s your friend for making a better site—whether it’s to be more accessible, profitable, performant, findable, and personable. It gets users from Point A to Point B. It’s how we move on the web and how the web is inextricably linked together.
And, please please please, record yourself reading your site out loud. How cool would it be for the comments in this post to be a living collection of celebrating content and letting it be heard!
Many business websites need a multilingual setup. As with anything development-related, implementing one in an easy, efficient, and maintainable way is desirable. Designing and developing to be ready for multiple languages, whether it happens right at launch or is expected to happen at any point in the future, is smart.
Changing the language and content is the easy part. But when you do that, sometimes the language you are changing to has a different direction. For example, text (and thus layout) in English flows left-to-right while text (and thus layout) in Arabic goes right-to-left.
In this article, I want to build a multilingual landing page and share some CSS techniques that make this process easier. Hopefully the next time you’ll need to do the same thing, you’ll have some implementation techniques to draw from.
We’ll cover six major points. I believe that the first five are straightforward. The sixth includes multiple options that you need to think about first.
1. Start with the HTML markup
The lang and dir attributes will define the page’s language and direction.
Then we can use those attributes in selectors to do the styling. The lang and dir attributes go on the <html> tag, or on a specific element whose language differs from the rest of the page. They also help the website’s SEO by serving users the right language version when they search for it, in cases where each language has its own HTML document.
Also, we need to ensure that the charset meta tag is included with the value UTF-8, since that’s the only valid encoding for HTML documents, and it supports all languages.
<meta charset="utf-8">
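Putting those attributes together, the root element of each version of the page might look like this (a sketch; the landing page in this example uses English, Arabic, and Japanese):

```html
<!-- English, left-to-right (the defaults) -->
<html lang="en" dir="ltr">

<!-- Arabic, right-to-left -->
<html lang="ar" dir="rtl">
```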
I’ve prepared a landing page in three different languages for demonstration purposes. It includes the HTML, CSS, and JavaScript we need.
2. CSS Custom Properties are your friend
Changing the direction may lead to inverting some properties. So, if you used the CSS property left in a left-to-right layout, you probably need right in the right-to-left layout, and so on. And changing the language may lead to changing font families, font sizes, etc.
These multiple changes may cause unclean and difficult to maintain code. Instead, we can assign the value to a custom property, then change the value when needed. This is also great for responsiveness and other things that might need a toggle, like dark mode. We can change the font-size, margin, padding, colors, etc., in the blink of an eye, where the values then cascade to wherever needed.
Here are some of the CSS custom properties that we are using in this example:
While styling our page, we may add/change some of these custom properties, and that is entirely natural. Although this article is about multi-directional websites, here’s a quick example that shows how we can re-assign custom property values by having one set of values on the <body>, then another set when the <body> contains a .dark class:
That’s the general idea. We’re going to use custom properties in the same sort of way, though for changing language directions.
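A sketch of what that set of custom properties might look like — the property names and values here are illustrative, not the exact ones from the demo:

```css
:root {
  --main-color: #1d1e22;
  --main-bg: #fff;
  --secondary-color: #ffb342;
  --font-size-title: 2rem;
  --font-family: 'Roboto', sans-serif;
}

/* Re-assign the same properties when a .dark class is present */
body.dark {
  --main-color: #fff;
  --main-bg: #1d1e22;
}
```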
3. CSS pseudo-classes and selectors
CSS has a few features that help with writing directions. The :lang() pseudo-class, the attr() function, and the [dir] attribute selector are good examples that we can put to use in this example.
The :lang() pseudo-class
We can use the :lang() pseudo-class to target specific languages and apply CSS property values to them, individually or together. For example, we can change the font size when the page language is Arabic or Japanese:
Once we do that, we also need to change the writing-mode property for Japanese, from its default horizontal top-to-bottom flow to a vertical right-to-left one:
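Those two adjustments might be sketched like this (the custom property name and the targeted element are assumptions; note the page uses lang="jp" throughout this example, so the pseudo-class matches it):

```css
/* Bump the base font size for Arabic and Japanese together */
html:lang(ar),
html:lang(jp) {
  --font-size-base: 1.2rem;
}

/* Flow the Japanese title vertically, from right to left */
html:lang(jp) .hero__title {
  writing-mode: vertical-rl;
}
```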
The attr() function helps make the content of pseudo-elements like ::before or ::after “dynamic,” in the sense that we can drop the dir HTML attribute into the CSS content property using attr(). That way, the value of dir determines what we’re displaying.
<div dir="ltr"></div>
<div dir="rtl"></div>

div::after {
  content: attr(dir);
}
The power is the ability to use any custom data attribute. Here, we’re using a custom data-name attribute whose value is used in the CSS:
This makes it relatively easy to change the content after switching the language without changing the styles. But, back to our design. The three-up grid of cards has a yellow “special offer” or “best offer” mark beside an image.
We assign a “special offer” or “best offer” value to a custom data-offer attribute on each card.
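In the markup, that might look something like this (the .offers__item class comes from this example’s stylesheet; the card contents are elided):

```html
<div class="offers__item" data-offer="Special offer">…</div>
<div class="offers__item" data-offer="Best offer">…</div>
```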
Finally, we can use the data-offer attribute in our style:
.offers__item::after {
  content: attr(data-offer);
  /* etc. */
}
Select by the dir attribute
Many languages are left-to-right, but some are not. We can specify what should be different under a [dir='rtl'] selector. The attribute must be on the element itself, or we can use nesting to reach the element we want. Since we’ve already added the dir attribute to our <html> tag, we can use it in nesting. We’ll use it later on our sample page.
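For instance, a sketch of flipping a physical offset when the root element is right-to-left (the class name and values are hypothetical):

```css
/* Default (ltr): nudge the image toward the right */
.hero__image {
  transform: translateX(10%);
}

/* Mirror it when the page direction is rtl */
html[dir="rtl"] .hero__image {
  transform: translateX(-10%);
}
```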
4. Prepare the web fonts
In a multilingual website, we may also want to change the font family between languages because perhaps a particular font is more legible for a particular language.
Fallback fonts
We can benefit from font fallbacks by listing the right-to-left font after the default one.
font-family: 'Roboto', 'Tajawal', sans-serif;
This helps in cases where the default font doesn’t support right-to-left scripts. The snippet above uses the Roboto font, which doesn’t support Arabic letters, so the browser falls back to Tajawal for them. But if the default font does support right-to-left (like the Cairo font) and the design still calls for a different one, this is not a perfect solution:
font-family: 'Cairo', 'Tajawal', sans-serif; /* won't work as expected */
Let’s look at another way.
Using CSS variables and the :lang() pseudo-class
We can mix the previous two technique where we change the font-family property value using custom properties that are re-assigned by the :lang pseudo class.
html { --font-family: 'Roboto', sans-serif; } html:lang(ar){ --font-family: 'Tajawal', sans-serif; } html:lang(jp){ --font-family: 'Noto Sans JP', sans-serif; }
5. CSS Logical Properties
In CSS days past, we used to use left and right to define offsets along the x-axis, and the top and bottom properties to to define offsets along the y-axis. That makes direction switching a headache. Fortunately, CSS supports logical properties that define direction‐relative equivalents of the older physical properties. They support things like positioning, alignment, margin, padding, border, etc.
If the writing mode is horizontal (like English), then the logical inline direction is along the x-axis and the block direction refers to the y-axis. Those directions are flipped in a vertical writing mode, where inline travels the y-axis and and block flows along the x-axis.
Writing Mode
x-axis
y-axis
horizontal
inline
block
vertical
block
inline
In other words, the block dimension is the direction perpendicular to the writing mode and the inline dimension is the direction parallel to the writing mode. Both inline and block levels have start and end values to define a specific direction. For example, we can use margin-inline-start instead of margin-left. This mean the margin direction automatically inverts when the page direction is rtl. It’s like our CSS is direction-aware and adapts when changing contexts.
There is another article on CSS-Tricks, Building Multi-Directional Layouts from Ahmad El-Alfy, that goes into the usefulness of building websites in multiple languages using logical properties.
This is exactly how we can handle margins, padding and borders. We’ll use them in the footer section to change which border gets the rounded edge.
The top-tight edge of the border is rounded in a default ltr writing mode.
As long as we’re using the logical equivalent of border-top-right-radius, CSS will handle that change for us.
.footer { border-start-end-radius: 120px; }
Now, after switching to the rtl direction, it’ll work fine.
The “call to action” section is another great place to apply this:
You might be wondering exactly how the block and inline dimensions reverse when the writing mode changes. Back to the Japanese version, the text is from vertical, going from top-to-bottom. I added this line to the code:
/* The "About" section when langauge is Japanese */ html:lang(jp) .about__text { margin-block-end: auto; width: max-content; }
Although I added margin to the “block” level, it is applied it to the left margin. That’s because the text rotated 90 degrees when the language switched and flows in a vertical direction.
6. Other layout considerations
Even after all this prep, sometimes where elements move to when the direction and language change is way off. There are multiple factors at play, so let’s take a look.
Position
Using an absolute or fixed position to move elements may affect how elements shift when changing directions. Some designs need it. But I’d still recommend asking yourself: do I really need this?
Fro example, the newsletter subscription form in the footer section of our example can be implemented using position. The form itself takes the relative position, while the button takes the absolute position.
<header class="hero relative"> <!-- etc. --> <div class="hero__social absolute"> <div class="d-flex flex-col"> <!-- etc. --> </div> </div> </header>
Note that an .absolute class is in there that applies position: absolute to the hero section’s social widget. Meanwhile, the hero itself is relatively positioned.
How we move the social widget halfway down the y-axis:
In the Arabic, we can fix the ::before pseudo-class position that is used in the background using the same technique we use in the footer form. That said, there are multiple issues we need to fix here:
The clip-path direction
The background linear-gradient
The coffee-cup image direction
The social media box’s position
Let’s use a simple flip trick instead. First, we wrap the hero content, and social content in two distinct wrapper elements instead of one:
Yeah, that’s all. This simple trick is also helpful if the hero’s background is an image.
transform: translate()
This CSS property and value function helps move the element on one or more axes. The difference between ltr and rtl is that the x-axis is the inverse/negative value of the current value. So, we can store the value in a variable and change it according to the language.
html { --about-img-background-move: -20%; } html[dir='rtl']{ --about-img-background-move: 20%; }
We can do the same thing for the background image in the another section:
Margins are used to extend or reduce spaces between elements. It accepts negative values, too. For example, a positive margin-top value (20%) pushes the content down, while a negative value (e.g. -20%) pulls the content up.
If margins values are negative, then the top and left margins move the item up or to the left. However, the right and bottom margins do not. Instead, they pull content that is located in the right of the item towards the left, and the content underneath the item up. For example, if we apply a negative top margin and negative bottom margin together on the same item, the item is moved up and pull the content below it up into the item.
The result of the above code should be something like this:
Let’s add these negative margins to the #d2 element:
#d2 { margin-top: -40px; margin-bottom: -70px; }
Notice how the second box in the diagram moves up, thanks to a negative margin-top value, and the green box also moves up an overlaps the second box, thanks to a negative margin-bottom value.
The next thing you might be asking: But what is the difference between transform: translate and the margins?
When moving an element with a negative margin, the initial space that is taken by the element is no longer there. But in the case of translating the element using a transform, the opposite is true. In other words, a negative margin leads pulls the element up, but the transform merely changes its position, without losing the space reserved for it.
You can see that, although the element is pulled up, its initial space is still there according to the natural document flow.
Flexbox
The display: flex provides a quick way to control the how the elements are aligned in their container. We can use align-items and justify-content to align child elements at the parent level.
In our example, we can use flexbox in almost every section to make the column layout. Plus, we can use it in the “offers” section to center the set of those yellow “special” and “best” marks:
If the flex-direction value is row, then we can benefit from controlling the width for each element. In the “hero” section, we need to set the image on the angled slope of the background where the color transitions from dark gray to yellow.
Both elements take up a a total of 89.83% of the parent container’s width. Since we didn’t specify justify-content on the parent, it defaults to start, leaving the remaining width at the end.
We can combine the flexbox with any of the previous techniques we’ve seen, like transforms and margins. This can help us to reduce how many position instances are in our code. Let’s use it with a negative margin in the “call to action” section to locate the image.
Because we didn’t specify the flex-wrap and flex-basis properties, the image and the text both fit in the parent. However, since we used a negative margin, the image is pulled to the left, along with its width. This saves extra space for the text. We also want to use a logical property, inline-start, instead of left to handle switching to the rtl direction.
Grid
Finally, we can use a grid container to positing the elements. CSS Grid is powerful (and different than flexbox) in that it lays things along both the x-axis and the y-axis as opposed to only one of them.
Suppose that in the “offers” section, the role of the “see all” button is to get extra data that to display on the page. Here’s JavaScript code to repeat the current content:
Our page is far from being the best example of how CSS Grid works. While I was browsing the designs online, I found a design that uses the following structure:
Notice how CSS Grid makes the responsive layout without media queries. And as you might expect, it works well for changing writing modes where we adjust where elements go on the grid based on the current writing mode.
Wrapping up
Here is the final version of the page. I ensured to implement the responsiveness with a mobile-first approach to show you the power of the CSS variables. Be sure to open the demo in full page mode as well.
I hope these techniques help make creating multilingual designs easier for you. We looked at a bunch of CSS properties we can use to apply styles to specific languages. And we looked at different approaches to do that, like selecting the :lang pseudo-class and data attributes using the attr() function. As part of this, we covered what logical properties are in CSS and how they adapt to a document’s writing mode—which is so much nicer than having to write additional CSS rulesets to swap out physical property units that otherwise are unaffected by the writing mode.
We also checked out a number of different positioning and layout techniques, looking specifically at how different techniques are more responsive and maintainable than others. For example, CSS Grid and Flexbox are equipped with features that can re-align elements inside of a container based on changing conditions.
Clearly, there are lots of moving pieces when working with a multilingual site. There are probably other requirements you need to consider when optimizing a site for specific languages, but the stuff we covered here together should give you all of the layout-bending superpowers you need to create robust layouts that accommodate any number of languages and writing modes.
Many business websites need a multilingual setup. As with anything development-related, implementing one in an easy, efficient, and maintainable way is desirable. Designing and developing to be ready for multiple languages, whether it happens right at launch or is expected to happen at any point in the future, is smart.
Changing the language and content is the easy part. But when you do that, sometimes the language you are changing to has a different direction. For example, text (and thus layout) in English flows left-to-right while text (and thus layout) in Arabic goes right-to-left.
In this article, I want to build a multilingual landing page and share some CSS techniques that make this process easier. Hopefully the next time you’ll need to do the same thing, you’ll have some implementation techniques to draw from.
We’ll cover six major points. I believe that the first five are straightforward. The sixth includes multiple options that you need to think about first.
1. Start with the HTML markup
The lang and dir attributes will define the page’s language and direction.
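For instance, the Arabic version of the page would declare both attributes on the root element (a minimal sketch):

```html
<!-- The Arabic version of the page: right-to-left -->
<html lang="ar" dir="rtl">
  <!-- page content -->
</html>
```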
Then we can use these attributes in selectors to do the styling. The lang and dir attributes go on the HTML tag, or on a specific element whose language differs from the rest of the page. These attributes also help the website’s SEO by surfacing the site in the right language for users who search for it, in the case that each language has a separate HTML document.
Also, we need to ensure that the charset meta tag is included with a value of UTF-8, since that’s the only valid encoding for HTML5 documents and it supports all languages.
<meta charset="utf-8">
I’ve prepared a landing page in three different languages for demonstration purposes. It includes the HTML, CSS, and JavaScript we need.
2. CSS Custom Properties are your friend
Changing the direction may lead to inverting some properties. So, if you used the CSS property left in a left-to-right layout, you probably need right in the right-to-left layout, and so on. And changing the language may lead to changing font families, font sizes, etc.
These multiple changes may cause unclean and difficult to maintain code. Instead, we can assign the value to a custom property, then change the value when needed. This is also great for responsiveness and other things that might need a toggle, like dark mode. We can change the font-size, margin, padding, colors, etc., in the blink of an eye, where the values then cascade to wherever needed.
Here are some of the CSS custom properties that we are using in this example:
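The full list wasn’t included here; a representative sketch of the kind of custom properties we might define (the names and values are assumptions) could look like:

```css
html {
  /* Hypothetical examples of page-wide custom properties */
  --main-color: #f7c600;
  --font-family: 'Roboto', sans-serif;
  --heading-font-size: 2.5rem;
  --section-padding: 4rem;
}
```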
While styling our page, we may add/change some of these custom properties, and that is entirely natural. Although this article is about multi-directional websites, here’s a quick example that shows how we can re-assign custom property values by having one set of values on the <body>, then another set when the <body> contains a .dark class:
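The dark mode snippet itself wasn’t included above; a minimal sketch of the idea (property names are assumptions) might be:

```css
/* Default (light) values on the body */
body {
  --bg-color: #fff;
  --text-color: #1a1a1a;
  background-color: var(--bg-color);
  color: var(--text-color);
}

/* Re-assign the same properties when the .dark class is present */
body.dark {
  --bg-color: #1a1a1a;
  --text-color: #fff;
}
```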
That’s the general idea. We’re going to use custom properties in the same sort of way, though for changing language directions.
3. CSS pseudo-classes and selectors
CSS has a few features that help with writing directions. The :lang() pseudo-class, the attr() function, and attribute selectors are good examples that we can put to use in this example.
The :lang() pseudo-class
We can use the :lang() pseudo-class to target specific languages and apply CSS property values to them individually or together. For example, we can change the font size when the page language is either Arabic or Japanese:
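A minimal sketch of that idea (the exact size adjustment is an assumption):

```css
/* Bump up the base font size for Arabic and Japanese */
html:lang(ar),
html:lang(jp) {
  font-size: 110%;
}
```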
Once we do that, we also need to change the writing-mode property from its default horizontal direction to account for the vertical, right-to-left direction:
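A sketch of that change for the Japanese version:

```css
/* Japanese text flows vertically, top to bottom, lines stacking right to left */
html:lang(jp) {
  writing-mode: vertical-rl;
}
```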
The attr() function helps make the content of pseudo-elements like ::before or ::after “dynamic” in a sense, where we can drop the dir HTML attribute into the CSS content property using the attr() function. That way, the value of dir determines what we’re rendering and styling.
<div dir="ltr"></div> <div dir="rtl"></div>
div::after { content: attr(dir); }
The real power is the ability to use any custom data attribute. Here, we’re using a custom data-name attribute whose value is used in the CSS:
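That snippet wasn’t included above; given markup like `<div data-name="Special offer">`, a minimal sketch would be:

```css
/* Render the value of the custom data-name attribute as generated content */
div::after {
  content: attr(data-name);
}
```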
This makes it relatively easy to change the content after switching the language without changing the styles. But, back to our design. The three-up grid of cards has a yellow “special offer” or “best offer” mark beside an image.
We assign a “special offer” or “best offer” value to a data-offer attribute on each card.
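The markup wasn’t shown above; it might look something like this (the element type is an assumption, the class name comes from the styles in this section):

```html
<li class="offers__item" data-offer="Special offer">
  <!-- card content -->
</li>
```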
Finally, we can use the data-offer attribute in our style:
.offers__item::after {
  content: attr(data-offer);
  /* etc. */
}
Select by the dir attribute
Many languages are left-to-right, but some are not. We can specify what should be different under a [dir='rtl'] attribute selector. The attribute must be on the element itself, or we can use nesting to reach the desired element. Since we’ve already added the dir attribute to our HTML tag, we can nest selectors under it. We’ll use it later on our sample page.
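A quick sketch of the idea (the .arrow class is a hypothetical example):

```css
/* Mirror a directional icon when the document is right-to-left */
[dir='rtl'] .arrow {
  transform: scaleX(-1);
}
```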
4. Prepare the web fonts
In a multilingual website, we may also want to change the font family between languages because perhaps a particular font is more legible for a particular language.
Fallback fonts
We can benefit from font fallbacks by listing the right-to-left font after the default one:
font-family: 'Roboto', 'Tajawal', sans-serif;
This helps in cases where the default font doesn’t support right-to-left text. The snippet above uses the Roboto font, which doesn’t support Arabic letters, so the browser falls back to Tajawal for them. But if the default font does support right-to-left (like the Cairo font) and the design calls for a different one anyway, this is not a perfect solution.
font-family: 'Cairo', 'Tajawal', sans-serif; /* won't work as expected */
Let’s look at another way.
Using CSS variables and the :lang() pseudo-class
We can mix the previous two techniques, changing the font-family property value using custom properties that are re-assigned by the :lang() pseudo-class.
html {
  --font-family: 'Roboto', sans-serif;
}

html:lang(ar) {
  --font-family: 'Tajawal', sans-serif;
}

html:lang(jp) {
  --font-family: 'Noto Sans JP', sans-serif;
}
5. CSS Logical Properties
In CSS days past, we used left and right to define offsets along the x-axis, and top and bottom to define offsets along the y-axis. That makes direction switching a headache. Fortunately, CSS supports logical properties that define direction-relative equivalents of the older physical properties. They cover things like positioning, alignment, margin, padding, border, and so on.
If the writing mode is horizontal (like English), then the logical inline direction is along the x-axis and the block direction refers to the y-axis. Those directions are flipped in a vertical writing mode, where inline travels the y-axis and block flows along the x-axis.
Writing Mode | x-axis | y-axis
horizontal   | inline | block
vertical     | block  | inline
In other words, the block dimension is the direction perpendicular to the writing mode and the inline dimension is the direction parallel to the writing mode. Both inline and block levels have start and end values to define a specific direction. For example, we can use margin-inline-start instead of margin-left. This means the margin direction automatically inverts when the page direction is rtl. It’s like our CSS is direction-aware and adapts when changing contexts.
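For example, compare the physical and logical versions of the same margin (the .card selector is a hypothetical example):

```css
/* Physical: always the left side, even in rtl */
.card { margin-left: 1rem; }

/* Logical: the inline-start side (left in ltr, right in rtl) */
.card { margin-inline-start: 1rem; }
```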
There is another article on CSS-Tricks, Building Multi-Directional Layouts from Ahmad El-Alfy, that goes into the usefulness of building websites in multiple languages using logical properties.
This is exactly how we can handle margins, padding and borders. We’ll use them in the footer section to change which border gets the rounded edge.
The top-right edge of the border is rounded in the default ltr writing mode.
As long as we’re using the logical equivalent of border-top-right-radius, CSS will handle that change for us.
.footer { border-start-end-radius: 120px; }
Now, after switching to the rtl direction, it’ll work fine.
The “call to action” section is another great place to apply this:
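The call-to-action snippet wasn’t included above, but the same idea applies (the class name and corner are assumptions):

```css
/* Rounded corner on a logical corner flips automatically in rtl */
.cta {
  border-end-start-radius: 120px;
}
```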
You might be wondering exactly how the block and inline dimensions reverse when the writing mode changes. Back in the Japanese version, the text is vertical, flowing from top to bottom. I added this to the code:
/* The "About" section when the language is Japanese */
html:lang(jp) .about__text {
  margin-block-end: auto;
  width: max-content;
}
Although I added the margin on the block level, it is applied to the left margin. That’s because the text rotated 90 degrees when the language switched and flows in a vertical direction.
6. Other layout considerations
Even after all this prep, sometimes where elements move to when the direction and language change is way off. There are multiple factors at play, so let’s take a look.
Position
Using an absolute or fixed position to move elements may affect how elements shift when changing directions. Some designs need it. But I’d still recommend asking yourself: do I really need this?
For example, the newsletter subscription form in the footer section of our example can be implemented using position. The form itself takes relative positioning, while the button takes absolute positioning.
<header class="hero relative">
  <!-- etc. -->
  <div class="hero__social absolute">
    <div class="d-flex flex-col">
      <!-- etc. -->
    </div>
  </div>
</header>
Note that an .absolute class is in there that applies position: absolute to the hero section’s social widget. Meanwhile, the hero itself is relatively positioned.
Here’s how we move the social widget halfway down the y-axis:
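The snippet wasn’t included above; the classic absolute-centering pattern would be:

```css
/* Pin the widget halfway down its relatively-positioned parent */
.hero__social {
  position: absolute;
  top: 50%;
  transform: translateY(-50%);
}
```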
In the Arabic version, we could fix the position of the ::before pseudo-element that’s used in the background using the same technique we used in the footer form. That said, there are multiple issues we would need to fix here:
The clip-path direction
The background linear-gradient
The coffee-cup image direction
The social media box’s position
Let’s use a simple flip trick instead. First, we wrap the hero content and social content in two distinct wrapper elements instead of one:
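The styles weren’t included above; the usual version of this trick (class names are assumptions) flips the whole section, then flips the content wrappers back so text reads correctly:

```css
/* Mirror the hero background, clip-path, and gradient for rtl… */
html[dir='rtl'] .hero {
  transform: scaleX(-1);
}

/* …then flip the two content wrappers back so text isn't mirrored */
html[dir='rtl'] .hero__content,
html[dir='rtl'] .hero__social {
  transform: scaleX(-1);
}
```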
Yeah, that’s all. This simple trick is also helpful if the hero’s background is an image.
transform: translate()
This CSS property and value function helps move an element along one or more axes. The difference between ltr and rtl is that the x-axis value in one is the negative of the value in the other. So, we can store the value in a variable and change it according to the language.
html {
  --about-img-background-move: -20%;
}

html[dir='rtl'] {
  --about-img-background-move: 20%;
}
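The variable would then be consumed in a transform, something like (the selector is an assumption):

```css
/* Shift the decorative background; the direction flips via the variable */
.about__img::before {
  transform: translateX(var(--about-img-background-move));
}
```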
We can do the same thing for the background image in another section:
Margins extend or reduce the space between elements, and they accept negative values, too. For example, a positive margin-top value (e.g. 20%) pushes the content down, while a negative value (e.g. -20%) pulls the content up.
If margin values are negative, the top and left margins move the item up or to the left. However, the right and bottom margins do not move the item itself. Instead, they pull content located to the right of the item towards the left, and content underneath the item up. For example, if we apply a negative top margin and a negative bottom margin together on the same item, the item moves up and the content below it is pulled up into the item.
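The demo markup itself wasn’t included above; a sketch of the setup might be (only the #d2 id appears in the article, the rest is assumed):

```html
<div id="d1">First box</div>
<div id="d2">Second box</div>
<div id="d3" style="background: green;">Green box</div>
```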
The result of the above code should be something like this:
Let’s add these negative margins to the #d2 element:
#d2 {
  margin-top: -40px;
  margin-bottom: -70px;
}
Notice how the second box in the diagram moves up, thanks to the negative margin-top value, and the green box also moves up and overlaps the second box, thanks to the negative margin-bottom value.
The next thing you might be asking: what’s the difference between transform: translate() and margins?
When moving an element with a negative margin, the initial space taken by the element is no longer reserved. But when translating the element with a transform, the opposite is true. In other words, a negative margin pulls the element up and collapses its space, while a transform merely changes the painted position without losing the space reserved for it in the document flow.
You can see that, although the element is pulled up, its initial space is still there according to the natural document flow.
Flexbox
display: flex provides a quick way to control how elements are aligned in their container. We can use align-items and justify-content to align child elements at the parent level.
In our example, we can use flexbox in almost every section to make the column layout. Plus, we can use it in the “offers” section to center the set of those yellow “special” and “best” marks:
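The snippet wasn’t included above; centering with flexbox typically looks like this (the selector is an assumption):

```css
/* Center the yellow "special" / "best" marks inside their container */
.offers__marks {
  display: flex;
  align-items: center;
  justify-content: center;
}
```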
If the flex-direction value is row, then we can benefit from controlling the width of each element. In the “hero” section, we need to set the image on the angled slope of the background where the color transitions from dark gray to yellow.
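The widths weren’t included above; a hypothetical split (only the 89.83% total comes from the article, the individual values and selectors are assumptions):

```css
.hero {
  display: flex; /* flex-direction defaults to row */
}

/* Two children whose widths sum to 89.83% of the parent */
.hero__content { width: 55.55%; }
.hero__img     { width: 34.28%; }
```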
Both elements take up a total of 89.83% of the parent container’s width. Since we didn’t specify justify-content on the parent, it defaults to flex-start, leaving the remaining width at the end.
We can combine flexbox with any of the previous techniques we’ve seen, like transforms and margins. This can help reduce how many position instances are in our code. Let’s use it with a negative margin in the “call to action” section to place the image.
Because we didn’t specify the flex-wrap and flex-basis properties, the image and the text both fit in the parent. And since we used a negative margin, the image is pulled to the left, along with its width, freeing extra space for the text. We also want to use the logical margin-inline-start property instead of margin-left to handle switching to the rtl direction.
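A sketch of that combination (class names and the margin amount are assumptions):

```css
.cta {
  display: flex;
  align-items: center;
}

/* Pull the image toward the start edge; flips automatically in rtl */
.cta__img {
  margin-inline-start: -10%;
}
```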
Grid
Finally, we can use a grid container to position the elements. CSS Grid is powerful (and different from flexbox) in that it lays things out along both the x-axis and the y-axis, as opposed to only one of them.
Suppose that in the “offers” section, the role of the “see all” button is to fetch extra data to display on the page. Here’s JavaScript code to repeat the current content:
Our page is far from the best showcase of how CSS Grid works. But while browsing designs online, I found one that uses the following structure:
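The grid code itself wasn’t included above; the media-query-free responsiveness described next usually comes from auto-fit with minmax(), as a sketch:

```css
/* Cards re-flow to as many columns as fit, no media queries needed */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
  gap: 1rem;
}
```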
Notice how CSS Grid makes the responsive layout without media queries. And as you might expect, it works well for changing writing modes where we adjust where elements go on the grid based on the current writing mode.
Wrapping up
Here is the final version of the page. I made sure to implement the responsiveness with a mobile-first approach to show you the power of CSS variables. Be sure to open the demo in full-page mode as well.
I hope these techniques help make creating multilingual designs easier for you. We looked at a bunch of CSS properties we can use to apply styles to specific languages. And we looked at different approaches to do that, like the :lang() pseudo-class and selecting data attributes with the attr() function. As part of this, we covered what logical properties are in CSS and how they adapt to a document’s writing mode, which is so much nicer than having to write additional CSS rulesets to swap out physical properties that otherwise are unaffected by the writing mode.
We also checked out a number of different positioning and layout techniques, looking specifically at how different techniques are more responsive and maintainable than others. For example, CSS Grid and Flexbox are equipped with features that can re-align elements inside of a container based on changing conditions.
Clearly, there are lots of moving pieces when working with a multilingual site. There are probably other requirements you need to consider when optimizing a site for specific languages, but the stuff we covered here together should give you all of the layout-bending superpowers you need to create robust layouts that accommodate any number of languages and writing modes.
I believe that a traditional WordPress theme should be able to work as effectively as a static site or a headless web app. The overwhelming majority of WordPress websites are built with a good ol’ fashioned WordPress theme. Most of them even have good caching layers and dependency optimizations that make these sites run reasonably fast. But as developers, we have found ways to create better results for our websites. Going headless has allowed many sites to have faster load speeds, better user interactions, and seamless transitions between pages.
The problem? Maintenance. Let me show you another possibility!
Let’s start by defining what I mean by “Traditional” WordPress, “Headless” WordPress, and then “Nearly Headless” WordPress.
Traditional WordPress websites
Traditionally, a WordPress website is built using PHP to render the HTML markup that is displayed on the page. Each time a link is clicked, the browser sends another request to the server, and PHP renders the HTML markup for the page that was requested.
This is the method that most sites use. It’s the easiest to maintain, has the least complexity in the tech, and with the right server-side caching tools it can perform fairly well. The issue is, since it is a traditional website, it feels like a traditional website. Transitions, effects, and other stylish, modern features tend to be more difficult to build and maintain in this type of site.
Pros:
The site is easy to maintain.
The tech is relatively simple.
There is great compatibility with WordPress plugins.
Cons:
Your site may feel a little dated as society expects app-like experiences in the browser.
JavaScript tends to be a little harder to write and maintain since the site isn’t using a JavaScript framework to control the site’s behavior.
Traditional websites tend to run slower than headless and nearly headless options.
Headless WordPress websites
A headless WordPress website uses modern JavaScript and some kind of server-side RESTful service, such as the WordPress REST API or GraphQL. Instead of building and rendering the HTML in PHP, the server sends minimal HTML and a big ol’ JavaScript file that can handle rendering any page on the site. This method loads pages much faster, and opens up the opportunity to create really cool transitions between pages, and other interesting things.
No matter how you spin it, most headless WordPress websites require a developer on-hand to make any significant change to the website. Want to install a forms plugin? Sorry, you probably need a developer to set that up. Want to install a new SEO plugin? Nope, going to need a developer to change the app. Wanna use that fancy block? Too bad — you’re going to need a developer first.
Pros:
The website itself will feel modern, and fast.
It’s easy to integrate with other RESTful services outside of WordPress.
The entire site is built in JavaScript, which makes it easier to build complex websites.
Cons:
You must re-invent a lot of things that WordPress plugins do out of the box for you.
This set up is difficult to maintain.
Compared to other options, hosting is complex and can get expensive.
See “WordPress and Jamstack” for a deeper comparison of the differences between WordPress and static hosting.
I love the result that headless WordPress can create. I don’t like the maintenance. What I want is a web app that allows me to have fast load speeds, transitions between pages, and an overall app-like feel to my site. But I also want to be able to freely use the plugin ecosystem that made WordPress so popular in the first place. What I want is something headless-ish. Nearly headless.
I couldn’t find anything that fit this description, so I built one. Since then, I have built a handful of sites that use this approach, and have built the JavaScript libraries necessary to make it easier for others to create their own nearly headless WordPress theme.
Introducing Nearly Headless WordPress
Nearly headless is a web development approach to WordPress that gives you many of the app-like benefits that come with a headless approach, as well as the ease of development that comes with using a traditional WordPress theme. It accomplishes this with a small JavaScript app that will handle the routing and render your site much like a headless app, but has a fallback to load the exact same page with a normal WordPress request instead. You can choose which pages load using the fallback method, and can inject logic into either the JavaScript or the PHP to determine if the page should be loaded like this.
You can see this in action on the demo site I built to show off just what this approach can do.
For example, one of the sites implementing this method uses a learning management system called LifterLMS to sell WordPress courses online. This plugin has built-in e-commerce capabilities, and sets up the interface needed to host and place course content behind a paywall. This site uses a lot of LifterLMS’s built-in functionality to work — and a big part of that is the checkout cart. Instead of re-building this entire page to work inside my app, I simply set it to load using the fallback method. Because of this, this page works like any old WordPress theme, and works exactly as intended as a result — all without me re-building anything.
Pros:
It is easy to maintain, once set up.
The hosting is as easy as a typical WordPress theme.
The website feels just as modern and fast as a headless website.
Cons:
You always have to think about two different methods to render your website.
There are limited choices for JavaScript libraries that are effective with this method.
The app is tied very closely to WordPress, so using third-party REST APIs is more difficult than with headless.
How it works
For something to be nearly headless, it needs to be able to do several things, including:
load a page using a WordPress request,
load a page using JavaScript,
allow pages to be identical, regardless of how they’re rendered,
provide a way to know when to load a page using JavaScript or PHP, and
ensure 100% parity on all routed pages, regardless of whether they’re rendered with JavaScript or PHP.
This allows the site to make use of progressive enhancement. Since the page can be viewed with or without JavaScript, you can use whichever version makes the most sense based on the request that was made. Have a trusted bot crawling your site? Send it the non-JavaScript version to ensure compatibility. Have a checkout page that isn’t working as expected? Force it to load without the app for now, and fix it later.
To accomplish each of these items, I released an open-source library called Nicholas, which includes a pre-made boilerplate.
Keeping it DRY
The biggest concern I wanted to overcome when building a nearly headless app was keeping parity between how the page renders in PHP and JavaScript. I did not want to have to build and maintain my markup in two different places — I wanted a single source for as much of the markup as possible. This instantly limited which JavaScript libraries I could realistically use (sorry, React!). With some research and a lot of experimentation, I ended up using AlpineJS. This library kept my code reasonably DRY. There are parts that absolutely have to be re-written for each one (loops, for example), but most of the significant chunks of markup can be re-used.
A single post template might look something like this, and whether the post is rendered with PHP or with JavaScript, both versions use the same PHP template, so all of the code inside the actual loop stays DRY:
<?php
$title   = $template->get_param( 'title', '' );   // Get the title passed into this template; fall back to an empty string.
$content = $template->get_param( 'content', '' ); // Get the content passed into this template; fall back to an empty string.
?>
<article x-data="theme.Post(index)">
  <!-- Alpine renders the title with this directive; in compatibility mode, PHP fills in the title directly -->
  <h1 x-html="title"><?= $title ?></h1>
  <!-- Alpine renders the post content with this directive; in compatibility mode, PHP fills in the content directly -->
  <div class="content" x-html="content"><?= $content ?></div>
</article>
Detect when a page should run in compatibility mode
“Compatibility mode” allows you to force any request to load without the JavaScript that runs the headless version of the site. When a page is set to load using compatibility mode, it is rendered using nothing but PHP, and the app script never gets enqueued. This allows “problem pages” that don’t work as expected with the app to run without re-writing anything.
There are several different ways you can force a page to run in compatibility mode — some require code, and some don’t. Nicholas adds a toggle to any post type that makes it possible to force a post to load in compatibility mode.
Along with this, you can manually add any URL to force it to load in compatibility mode inside the Nicholas settings.
These are a great start, but I’ve found that I can usually detect when a page needs to load in compatibility mode automatically, based on which blocks are stored in a post. For example, let’s say you have Ninja Forms installed on your site, and you want to use the validation JavaScript it provides instead of re-making your own. In this case, you would have to force compatibility mode on any page that has a Ninja Form on it. You could manually go through and add each URL as you need it, or you can use a WP_Query to find all of the content that has a Ninja Forms block on the page and add it automatically.
That automatically adds any page with a Ninja Forms block to the list of URLs that will load using compatibility mode. This is just using WP_Query arguments, so you could pass anything you want here to determine what content should be added to the list.
Extending the app
Under the hood, Nicholas uses a lightweight router that can be extended using a middleware pattern, much like how an Express app handles middleware. When a clicked page is routed, the system runs through each middleware item and eventually routes the page. By default, the router does nothing; however, it comes with several pre-made middleware pieces that allow you to assemble the router however you see fit.
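The middleware chain itself is a simple idea. Here is a minimal, self-contained sketch of the Express-style pattern described above; the runner and the middleware functions are hypothetical illustrations, not Nicholas’s actual API:

```javascript
// A tiny middleware runner: each middleware receives the routing context and a
// next() callback. Calling next() continues the chain; not calling it stops routing.
function runMiddlewares(middlewares, context) {
  let index = 0;
  function next() {
    const middleware = middlewares[index++];
    if (middleware) middleware(context, next);
  }
  next();
}

// Hypothetical middleware pieces, in the order they should run.
const log = [];
runMiddlewares(
  [
    (ctx, next) => { log.push(`validated ${ctx.url}`); next(); },     // validate the URL
    (ctx, next) => { if (!ctx.url.startsWith('/wp-admin')) next(); }, // bail out on admin pages
    (ctx) => { log.push('routed page'); },                            // finally, route
  ],
  { url: '/about' }
);
// log is now ['validated /about', 'routed page']
```

Because each piece only decides whether to call `next()`, you can drop validation steps in or out without touching the rest of the chain.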
A basic example would look something like this:
// Import WordPress-specific middleware
import {
  updateAdminBar,
  validateAdminPage,
  validateCompatibilityMode
} from 'nicholas-wp/middlewares'

// Import generic middleware
import {
  addRouteActions,
  handleClickMiddleware,
  setupRouter,
  validateMiddleware
} from 'nicholas-router'

// Do these actions, in this order, when a page is routed.
addRouteActions(
  // First, validate the URL
  validateMiddleware,
  // Validate this page is not an admin page
  validateAdminPage,
  // Validate this page doesn't require compatibility mode
  validateCompatibilityMode,
  // Then, update the Alpine store
  updateStore,
  // Maybe fetch comments, if enabled
  fetchComments,
  // Update the history
  updateHistory,
  // Maybe update the admin bar
  updateAdminBar
)

// Set up the router. This also uses a middleware pattern.
setupRouter(
  // Set up the event listener for clicks
  handleClickMiddleware
)
From here, you could extend what happens when a page is routed. Maybe you want to scan the page for code to highlight, or perhaps you want to change the content of the <head> tag to match the newly routed page. You could even introduce a caching layer. Regardless of what you need to do, adding the necessary actions is as simple as using addRouteActions or setupRouter.
Next steps
This was a brief overview of some of the key components I used to implement the nearly headless approach. If you’re interested in going deeper, I suggest that you take my course at WP Dev Academy. This course is a step-by-step guide on how to build a nearly headless WordPress website with modern tooling. I also suggest that you check out my nearly headless boilerplate that can help you get started on your own project.
Optimizing the user experience you offer on your website is essential for the success of any online business. Google uses various user experience-related metrics to rank web pages for SEO and has continued to provide multiple tools to measure and improve web performance.
In its recent attempt to simplify the measurement and understanding of what qualifies as a good user experience, Google standardized the page’s user experience metrics.
These standardized metrics are called Core Web Vitals and help evaluate the real-world user experience on your web page.
Largest Contentful Paint or LCP is one of the Core Web Vitals metrics, which measures when the largest content element in the viewport becomes visible. While other metrics like TTFB and First Contentful Paint also help measure the page experience, they do not represent when the page has become “meaningful” for the user.
Usually, until the largest element on the page becomes completely visible, the page may not provide much context for the user. LCP is, therefore, more representative of the user’s expectations. As a Core Web Vitals metric, LCP accounts for 25% of the Lighthouse Performance score, making it one of the most important metrics to optimize.
Checking your LCP time
As per Google, the types of elements considered for Largest Contentful Paint are:
<img> elements
<image> elements inside an <svg> element
<video> elements (the poster image is used)
An element with a background image loaded via the url() function (as opposed to a CSS gradient)
Block-level elements containing text nodes or other inline-level text element children.
Now, there are multiple ways to measure the LCP of your page.
The easiest ways to measure it are PageSpeed Insights, Lighthouse, Search Console (Core Web Vitals Report), and the Chrome User Experience Report.
For example, Google PageSpeed Insights in its report indicates the element considered for calculating the LCP.
What is a good LCP time?
To provide a good user experience, you should strive to have a Largest Contentful Paint of 2.5 seconds or less; Google recommends measuring this at the 75th percentile of page loads, so the large majority of your page loads should happen well under this threshold.
Now that we know what LCP is and what our target should be, let’s look at ways to improve LCP on our website.
How to optimize Largest Contentful Paint (LCP)
The underlying principle of reducing LCP in all of the techniques mentioned below is to reduce the data downloaded on the user’s device and reduce the time it takes to send and execute that content.
1. Optimize your images
On most websites, the above-the-fold content usually contains a large image that is considered for LCP. It could be a hero image, a banner, or a carousel. It is, therefore, crucial to optimize these images for a better LCP.
To optimize your images, you should use a third-party image CDN like ImageKit.io. The advantage of using a third-party image CDN is that you can focus on your actual business and leave image optimization to the image CDN.
An image CDN stays at the cutting edge of image optimization technology, so you always get the best possible features with minimal ongoing investment.
ImageKit is a complete real-time image CDN that integrates with any existing cloud storage like AWS S3, Azure, Google Cloud Storage, etc. It even comes with its integrated image storage and manager called the Media Library.
Here is how ImageKit can help you improve your LCP score.
1. Deliver your images in lighter formats
ImageKit detects if the user’s browser supports modern lighter formats like WebP or AVIF and automatically delivers the image in the lightest possible format in real-time. Formats like WebP are over 30% lighter compared to their JPEG equivalents.
2. Automatically compress your images
Beyond converting the image to the correct format, ImageKit also compresses your image to a smaller size. In doing so, it balances the image’s visual quality against the output size.
You get the option to alter the compression level (or quality) in real-time by just changing a URL parameter, thereby balancing your business requirements of visual quality and load time.
3. Provide real-time transformations for responsive images
Google uses mobile-first indexing for almost all websites. It is therefore essential to optimize LCP for mobile even more than for desktop. Every image needs to be scaled down as per the layout’s requirements.
For example, you would need the image in a smaller size on the product listing page and a larger size on the product detail page. This resizing ensures that you are not sending any additional bytes than what is required for that particular page.
ImageKit allows you to transform responsive images in real-time just by adding the corresponding transformation in the image URL. For example, the following image is resized to width 200px and height 300px by adding the height and width transformation parameters in its URL.
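As a sketch, such a URL could look like the following; the endpoint and file name are placeholders, and the tr query parameter shown reflects ImageKit’s transformation syntax as I understand it:

```
<!-- Resized to 200x300 via the "tr" query parameter (w-200,h-300).
     Endpoint and file name are placeholders. -->
<img src="https://ik.imagekit.io/your_id/product.jpg?tr=w-200,h-300"
     width="200" height="300" alt="Product photo">
```

Changing only the parameter values in the URL gives you a differently sized variant of the same source image.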
4. Cache images and improve delivery time
Image CDNs use a global Content Delivery Network (CDN) to deliver the images. Using a CDN ensures that images load from a location closer to the user instead of your server, which could be halfway across the globe.
ImageKit, for example, uses AWS CloudFront as its CDN, which has over 220 delivery nodes globally. A vast majority of images load in less than 50ms. Additionally, it uses proper caching directives to cache the images on the user’s device, on CDN nodes, and even within its processing network for a faster load time.
This helps to improve LCP on your website.
2. Preload critical resources
There are certain cases where the browser may not prioritize loading a visually important resource that impacts LCP. For example, a banner image above the fold could be specified as a background image inside a CSS file. Since the browser would never know about this image until the CSS file is downloaded and parsed along with the DOM tree, it will not prioritize loading it.
For such resources, you can preload them by adding a <link> tag with the rel="preload" attribute (plus an appropriate as attribute) to the head section of your HTML document.

<!-- Example of preloading -->
<link rel="preload" href="banner_image.jpg" as="image" />
While you can preload multiple resources in a document, you should always restrict it to above-the-fold images or videos, page-wide font files, or critical CSS and JS files.
3. Reduce server response times
If your server takes too long to respond to a request, the time it takes to render the page on the screen also goes up. Slow server response times, therefore, negatively affect every page speed metric, including LCP. To improve your server response times, here is what you should do.
1. Analyze and optimize your servers
A lot of computation, DB queries, and page construction happens on the server. You should analyze the requests going to your servers and identify the possible bottlenecks for responding to the requests. It could be a DB query slowing things down or the building of the page on your server.
You can apply best practices like caching of DB responses, pre-rendering of pages, amongst others, to reduce the time it takes for your server to respond to requests.
Of course, if the above does not improve the response time, you might need to increase your server capacity to handle the number of requests coming in.
2. Use a Content Delivery Network
We have already seen above that using an image CDN like ImageKit improves the loading time for your images. Your users get the content delivered from a CDN node close to their location in milliseconds.
You should extend the same to other content on your website. Using a CDN for your static content like JS, CSS, and font files will significantly speed up their load time. ImageKit does support the delivery of static content through its systems.
You can also try to use a CDN for your HTML and APIs to cache those responses on the CDN nodes. Given the dynamic nature of such content, using a CDN for HTML or APIs can be a lot more complex than using a CDN for static content.
3. Preconnect to third-party origins
If you use third-party domains to deliver critical above-the-fold content like JS, CSS, or images, then you would benefit by indicating to the browser that a connection to that third-party domain needs to be made as soon as possible. This is done using the rel="preconnect" attribute of the <link> tag.
With preconnect in place, the browser can save the domain connection time when it downloads the actual resource later.
Subdomains of your main website domain, such as static.example.com for example.com, also count as third-party domains in this context.
You can also use the dns-prefetch as a fallback in browsers that don’t support preconnect. This directive instructs the browser to complete the DNS resolution to the third-party domain even if it cannot establish a proper connection.
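Together, the two hints might look like this in the document’s <head>; static.example.com stands in for whatever third-party origin serves your critical assets:

```
<!-- Open the connection (DNS + TCP + TLS) to the third-party origin early -->
<link rel="preconnect" href="https://static.example.com" crossorigin>
<!-- Fallback: browsers without preconnect support can at least resolve DNS early -->
<link rel="dns-prefetch" href="https://static.example.com">
```
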
4. Serve content cache-first using a Service Worker
Service workers can intercept requests originating from the user’s browser and serve cached responses for the same. This allows us to cache static assets and HTML responses on the user’s device and serve them without going to the network.
While the service worker cache serves the same purpose as the HTTP or browser cache, it offers fine-grained control and can work even if the user is offline. You can also use service workers to serve precached content from the cache to users on slow network speeds, thereby bringing down LCP time.
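The core of a cache-first strategy is small. Below is a minimal sketch; cacheLookup and networkFetch are hypothetical stand-ins for caches.match() and fetch(), injected so the decision logic can run outside a browser:

```javascript
// Cache-first: answer from the cache when possible, otherwise hit the network.
async function cacheFirst(request, cacheLookup, networkFetch) {
  const cached = await cacheLookup(request);
  return cached !== undefined ? cached : networkFetch(request);
}

// In a real service worker, this would be wired up roughly as:
//   self.addEventListener('fetch', (event) => {
//     event.respondWith(cacheFirst(event.request, (r) => caches.match(r), fetch));
//   });
```

The network call only happens on a cache miss, which is exactly what keeps repeat page loads (and their LCP) fast.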
5. Compress text files
Any text-based data you load on your webpage should be compressed when transferred over the network using a compression algorithm like gzip or Brotli. SVGs, JSONs, API responses, JS and CSS files, and your main page’s HTML are good candidates for compression using these algorithms. This compression significantly reduces the amount of data that will get downloaded on page load, therefore bringing down the LCP.
4. Remove render-blocking resources
When the browser receives the HTML page from your server, it parses it into the DOM tree. If it encounters any external stylesheet or synchronous JS file in the DOM, the browser has to pause and process them before moving ahead with parsing the remaining DOM tree.
These JS and CSS files are called render-blocking resources and delay the LCP time. Here are some ways to reduce the blocking time for JS and CSS files:
1. Do not load unnecessary bundles
Avoid shipping huge bundles of JS and CSS files to the browser if they are not needed. If the CSS can be downloaded a lot later, or a JS functionality is not needed on a particular page, there is no reason to load it up front and block the render in the browser.
Suppose you cannot split a particular file into smaller bundles, but it is not critical to the functioning of the page either. In that case, you can use the defer attribute of the script tag to indicate to the browser that it can go ahead with the DOM parsing and continue to execute the JS file at a later stage. Adding the defer attribute removes any blocker for DOM parsing. The LCP, therefore, goes down.
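Deferring a non-critical script is a one-attribute change; the file name here is a placeholder:

```
<!-- Without defer, this script would block DOM parsing while it downloads
     and executes. With defer, it runs only after parsing completes. -->
<script src="analytics.js" defer></script>
```
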
2. Inline critical CSS
Critical CSS comprises the style definitions needed for the DOM that appears in the first fold of your page. If the style definitions for this part of the page are inlined, i.e., placed in a <style> tag in the document itself, the browser has no dependency on the external CSS to style these elements. Therefore, it can render the page quickly, and the LCP goes down.
3. Minify and compress the content
You should always minify the CSS and JS files before loading them in the browser. CSS and JS files contain whitespace to make them legible, but they are unnecessary for code execution. So, you can remove them, which reduces the file size on production. Smaller file size means that the files can load quickly, thereby reducing your LCP time.
Compression techniques, as discussed earlier, use data compression algorithms to bring down the file size delivered over the network. Gzip and Brotli are two compression algorithms. Brotli compression offers a superior compression ratio compared to Gzip and is now supported on all major browsers, servers, and CDNs.
5. Optimize LCP for client-side rendering
Any client-side rendered website requires a considerable amount of JavaScript to load in the browser. If you do not optimize the JavaScript sent to the browser, the user may not see, or be able to interact with, any content on the page until the JavaScript has been downloaded and executed.
We discussed a few JS-related optimizations above, like optimizing the bundles sent to the browser and compressing the content. There are a couple more things you can do to optimize rendering on client devices.
1. Using server-side rendering
Instead of shipping the entire JS to the client-side and doing all the rendering there, you can generate the page dynamically on the server and then send it to the client’s device. This would increase the time it takes to generate the page, but it will decrease the time it takes to make a page active in the browser.
However, maintaining both client-side and server-side frameworks for the same page can be time-consuming.
2. Using pre-rendering
Pre-rendering is a different technique where a headless browser mimics a regular user’s request and gets the server to render the page. This rendered page is stored during the build cycle once, and then every subsequent request uses that pre-rendered page without any computation on the server, resulting in a fast load time.
This improves the TTFB compared to server-side rendering because the page is prepared beforehand. But the time to interactive might still take a hit as it has to wait for the JS to download for the page to become interactive. Also, since this technique requires pre-rendering of pages, it may not be scalable if you have a large number of pages.
Conclusion
Core Web Vitals, which include LCP, have become a significant search ranking factor and strongly correlate with the user experience. Therefore, if you run an online business, you should optimize these vitals to ensure its success.
The above techniques have a significant impact on optimizing LCP. Using ImageKit as your image CDN will give you a quick headstart.
Sign up for a forever-free account, upload your images to the ImageKit storage or connect your origin, and start delivering optimized images in minutes.
A CMS and a CRM are both essentially database-backed systems for managing data. HubSpot is both, and much more. Where a CMS might be very focused on content and the metadata that makes content useful, a CRM is focused on leads and making communication with current and potential customers easier.
They can be brothers-in-arms. We’ll get to that.
Say a CRM is set up for people. You run a Lexus dealership. There is a quote form on the website. People fill it out and enter the CRM. That lead can go to your sales team for taking care of that customer.
But a CRM could be based on other things. Say instead of people it’s based on real estate listings. Each main entry is a property, with essentially metadata like photos, address, square footage, # of bedrooms/baths, etc. Leads can be associated with properties.
That would be a nice CRM setup for a real estate agency, but the data that is in that CRM might be awfully nice for literally building a website around those property listings. Why not tap into that CRM data as literal data to build website pages from?
That’s what I mean by a CRM and CMS being brothers-in-arms. Use them both! That’s why HubSpot can be an ideal home for websites like this.
To keep that tornado of synergy going, HubSpot can also help with marketing, customer service, and integrations. So there is a lot of power packed into one platform.
And with that power, also a lot of comfort and flexibility.
You’re still developing locally.
You’re still using Git.
You can use whatever framework or site-building tools you want.
You’ve got a CLI to control things.
There is a VS Code Extension for super useful auto-complete of your data.
There is a staging environment.
And the features just keep coming. HubSpot really has a robust set of tools to make sure you can do what you need to do.
Do you have to use some third-party thing for search? Nope, they got it.
As developer-focused as this all is, it doesn’t mean it’s developer-only. There are loads of tools for working with the website you build that require no coding at all: dashboards for content management, data wrangling, style control, and even literal drag-and-drop page builders.
It’s all part of a very learnable system.
Themes, templates, modules, and fields are the objects you’ll work with most in HubSpot CMS as a developer. Using these different objects effectively lets you give content creators the freedom to work and iterate on websites independently while staying inside style and layout guardrails you set.
When you load a file from an external server, you’re trusting that the content you request is what you expect it to be. Since you don’t manage the server yourself, you’re relying on the security of yet another third party and increasing the attack surface. Trusting a third party is not inherently bad, but it should certainly be taken into consideration in the context of your website’s security.
A real-world example
This isn’t a purely theoretical danger. Ignoring potential security issues can and has already resulted in serious consequences. On June 4th, 2019, Malwarebytes announced their discovery of a malicious skimmer on the website NBA.com. Due to a compromised Amazon S3 bucket, attackers were able to alter a JavaScript library to steal credit card information from customers.
It’s not only JavaScript that’s worth worrying about, either. CSS is another resource capable of performing dangerous actions, such as password stealing, and all it takes is a single compromised third-party server for disaster to strike. Still, third parties can provide invaluable services that we can’t simply go without, such as CDNs that reduce the total bandwidth usage of a site and serve files to the end user much faster thanks to location-based caching. So it’s established that we sometimes need to rely on a host we have no control over, while also ensuring that the content we receive from it is safe. What can we do?
Solution: Subresource Integrity (SRI)
SRI is a security policy that prevents the loading of resources that don’t match an expected hash. With it in place, if an attacker were to gain access to a file and modify its contents to contain malicious code, the file wouldn’t match the hash we were expecting and wouldn’t execute at all.
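In markup, SRI is a single attribute on the tag that loads the resource. A sketch (the URL and hash value are placeholders):

```
<!-- The browser hashes the downloaded file and compares it against the
     integrity value; on a mismatch, the file is refused. -->
<script src="https://cdn.example.com/library.min.js"
        integrity="sha384-(base64-encoded hash of the expected file)"
        crossorigin="anonymous"></script>
```
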
Doesn’t HTTPS do that already?
HTTPS is great for security and a must-have for any website, and while it does prevent similar problems (and much more), it only protects against tampering with data-in-transit. If a file were to be tampered with on the host itself, the malicious file would still be sent over HTTPS, doing nothing to prevent the attack.
How does hashing work?
A hashing function takes data of any size as input and returns data of a fixed size as output. Hashing functions would ideally have a uniform distribution. This means that for any input, x, the probability that the output, y, will be any specific possible value is similar to the probability of it being any other value within the range of outputs.
Here’s a metaphor:
Suppose you have a six-sided die and a list of names. The names, in this case, would be the hash function’s “input” and the number rolled would be the function’s “output.” For each name in the list, you’ll roll the die and keep track of which number each name corresponds to by writing the number next to the name. If a name is used as input more than once, its corresponding output will always be what it was the first time. For the first name, Alice, you roll 4. For the next, John, you roll 6. Then for Bob, Mary, William, Susan, and Joseph, you get 2, 2, 5, 1, and 1, respectively. If you use “John” as input again, the output will once again be 6. This metaphor describes how hash functions work in essence.
Name (input)
Number rolled (output)
Alice
4
John
6
Bob
2
Mary
2
William
5
Susan
1
Joseph
1
You may have noticed that, for example, Bob and Mary have the same output. For hashing functions, this is called a “collision.” For our example scenario, it inevitably happens. Since we have seven names as inputs and only six possible outputs, we’re guaranteed at least one collision.
A notable difference between this example and a hash function in practice is that practical hash functions are typically deterministic, meaning they don’t make use of randomness like our example does. Rather, a given input always maps to the same output, so hashing the same data twice predictably yields the same result.
SRI uses a family of hashing functions called the secure hash algorithm (SHA). This is a family of cryptographic hash functions that includes SHA-1 and the 256, 384, and 512-bit variants of SHA-2. A cryptographic hash function is a more specific kind of hash function with these properties: it is effectively impossible to reverse to find the original input (without already having the corresponding input or brute-forcing), it is collision-resistant, and it is designed so that a small change in the input alters the entire output. SRI supports the 256, 384, and 512-bit variants of the SHA family.
You’ll notice that the slightest change in the input will produce an output that is completely different. This is one of the properties of cryptographic hashes listed earlier.
The format you’ll see most frequently for hashes is hexadecimal, which consists of all the decimal digits (0-9) and the letters A through F. One of the benefits of this format is that every two characters represent a byte, and the evenness can be useful for purposes such as color formatting, where a byte represents each color. This means a color without an alpha channel can be represented with only six characters (e.g., red = ff0000)
This space efficiency is also why we use hashing instead of comparing the entirety of a file to the data we’re expecting each time. While 256 bits cannot uniquely represent all of the data in a file larger than 256 bits, the collision resistance of SHA-256 (and SHA-384, SHA-512) ensures that it’s virtually impossible to find two differing inputs whose hashes match. As for SHA-1, it’s no longer secure, as a collision has been found.
Interestingly, the appeal of compactness is likely one of the reasons that SRI hashes don’t use the hexadecimal format, and instead use base64. This may seem like a strange decision at first, but when we take into consideration the fact that these hashes will be included in the code and that base64 is capable of conveying the same amount of data as hexadecimal while being 33% shorter, it makes sense. A single character of base64 can be in 64 different states, which is 6 bits worth of data, whereas hex can only represent 16 states, or 4 bits worth of data. So if, for example, we want to represent 32 bytes of data (256 bits), we would need 64 characters in hex, but only 44 characters in base64. When we use longer hashes, such as SHA-384 or SHA-512, base64 saves a great deal of space.
Why does hashing work for SRI?
So let’s imagine there was a JavaScript file hosted on a third-party server that we included in our webpage and we had subresource integrity enabled for it. Now, if an attacker were to modify the file’s data with malicious code, the hash of it would no longer match the expected hash and the file would not execute. Recall that any small change in a file completely changes its corresponding SHA hash, and that hash collisions with SHA-256 and higher are, at the time of this writing, virtually impossible.
Our first SRI hash
So, there are a few methods you can use to compute the SRI hash of a file. One way (and perhaps the simplest) is to use srihash.org, but if you prefer a more programmatic way, a shell pipeline along these lines works:

shasum -b -a 384 file.js | awk '{ print $1 }' | xxd -r -p | base64

shasum -b -a 384 computes the SHA-384 hash of the file and prints it in hex, followed by the file name. awk '{print $1}' finds the first section of that string (separated by tab or space) and passes it to xxd; $1 represents the first segment of the string passed into it. xxd -r -p converts the hex back into raw bytes, and base64 encodes those bytes, producing the value to place after “sha384-” in the integrity attribute.
And if you’re running Windows:
@echo off
set bits=384
openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp
set /p a= < tmp
del tmp
echo sha%bits%-%a%
pause
@echo off prevents the commands that are running from being displayed. This is particularly helpful for ensuring the terminal doesn’t become cluttered.
set bits=384 sets a variable called bits to 384. This will be used a bit later in the script.
openssl dgst -sha%bits% -binary %1% | openssl base64 -A > tmp is more complex, so let’s break it down into parts.
openssl dgst computes a digest of an input file.
-sha%bits% uses the variable, bits, and combines it with the rest of the string to be one of the possible flag values, sha256, sha384, or sha512.
-binary outputs the hash as binary data instead of a string format, such as hexadecimal.
%1% is the first argument passed to the script when it’s run.
The first part of the command hashes the file provided as an argument to the script.
| openssl base64 -A > tmp converts the binary output piping through it into base64 and writes it to a file called tmp. -A outputs the base64 onto a single line.
set /p a= <tmp stores the contents of the file, tmp, in a variable, a.
del tmp deletes the tmp file.
echo sha%bits%-%a% will print out the type of SHA hash type, along with the base64 of the input file.
pause prevents the terminal from closing once the script finishes.
SRI in action
Now that we understand how hashing and SRI hashes work, let’s try a concrete example. We’ll create two files:
// file1.js
alert('Hello, world!');
and:
// file2.js
alert('Hi, world!');
Then we’ll compute the SHA-384 SRI hashes for both:
Place both script files, along with an HTML page that includes them using their integrity hashes, in the same folder and start a server within that folder (for example, run npx http-server inside the folder containing the files and then open one of the addresses provided by http-server or the server of your choice, such as 127.0.0.1:8080). You should get two alert dialog boxes. The first should say “Hello, world!” and the second, “Hi, world!”
If you modify the contents of the scripts, you’ll notice that they no longer execute. This is subresource integrity in effect. The browser notices that the hash of the requested file does not match the expected hash and refuses to run it.
We can also include multiple hashes for a resource and the strongest hash will be chosen, like so:
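The hashes are simply space-separated inside the integrity attribute. For example (the bracketed values are placeholders for real Base64 digests):

```html
<script src="file1.js"
        integrity="sha256-[sha256 hash] sha384-[sha384 hash] sha512-[sha512 hash]"
        crossorigin="anonymous"></script>
```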
The browser will choose the hash that is considered to be the strongest and check the file’s hash against it.
Why is there a “crossorigin” attribute?
The crossorigin attribute tells the browser when to send the user credentials with the request for the resource. There are two options to choose from:
| Value (`crossorigin=`) | Description |
| --- | --- |
| `anonymous` | The request will have its credentials mode set to `same-origin` and its mode set to `cors`. |
| `use-credentials` | The request will have its credentials mode set to `include` and its mode set to `cors`. |
Request credentials modes mentioned:

| Credentials mode | Description |
| --- | --- |
| `same-origin` | Credentials will be sent with requests to same-origin domains, and credentials sent from same-origin domains will be used. |
| `include` | Credentials will be sent to cross-origin domains as well, and credentials sent from cross-origin domains will be used. |
Request modes mentioned:

| Request mode | Description |
| --- | --- |
| `cors` | The request will be a CORS request, which requires the server to have a defined CORS policy. If it doesn't, the request will fail with an error. |
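In markup, the two values look like this (the URL and hash are placeholders):

```html
<!-- Request sent without credentials to cross-origin hosts -->
<script src="https://example.com/script.js"
        integrity="sha384-[hash]"
        crossorigin="anonymous"></script>

<!-- Request sent with credentials (cookies, client certificates) -->
<script src="https://example.com/script.js"
        integrity="sha384-[hash]"
        crossorigin="use-credentials"></script>
```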
Why is the “crossorigin” attribute required with subresource integrity?
By default, scripts and stylesheets can be loaded cross-origin. Since subresource integrity prevents a file from loading when its hash doesn't match the expected one, an attacker could load cross-origin resources en masse and test whether loading fails with specific hashes, thereby inferring information about a user that they otherwise wouldn't have access to.
When you include the crossorigin attribute, the cross-origin domain must explicitly allow requests from the requesting origin for the request to succeed. This prevents cross-origin attacks through subresource integrity.
Using subresource integrity with webpack
It probably sounds like a lot of work to recalculate the SRI hashes of each file every time they are updated, but luckily, there’s a way to automate it. Let’s walk through an example together. You’ll need a few things before you get started.
Node.js and npm
Node.js is a JavaScript runtime that, along with npm (its package manager), will allow us to use webpack. To install it, visit the Node.js website and choose the download that corresponds to your operating system.
Setting up the project
Create a folder and give it any name with mkdir [name of folder]. Then type cd [name of folder] to navigate into it. Now we need to set up the directory as a Node project, so type npm init. It will ask you a few questions, but you can press Enter to skip them since they’re not relevant to our example.
webpack
webpack is a library that allows you to automatically combine your files into one or more bundles. With webpack, we will no longer need to manually update the hashes. Instead, webpack will inject the resources into the HTML with integrity and crossorigin attributes included.
Installing webpack
You’ll need to install webpack and webpack-cli:
npm i --save-dev webpack webpack-cli
The difference between the two is that webpack contains the core functionalities whereas webpack-cli is for the command line interface.
We’ll edit our package.json to add a scripts section like so:
```json
{
  // ... rest of package.json ...
  "scripts": {
    "dev": "webpack --mode=development"
  }
  // ... rest of package.json ...
}
```
This enables us to run npm run dev to build our bundle.
Setting up webpack configuration
Next, let’s set up the webpack configuration. This is necessary to tell webpack what files it needs to deal with and how.
First, we’ll need to install four packages: html-webpack-plugin, webpack-subresource-integrity, style-loader, and css-loader:
npm i --save-dev html-webpack-plugin webpack-subresource-integrity style-loader css-loader
| Package name | Description |
| --- | --- |
| `html-webpack-plugin` | Creates an HTML file that resources can be injected into |
| `webpack-subresource-integrity` | Computes and inserts subresource integrity information into resources such as `<script>` and `<link rel=…>` |
| `style-loader` | Applies the CSS styles that we import |
| `css-loader` | Enables us to import CSS files into our JavaScript |
Setting up the configuration, in a webpack.config.js file (the default configuration file webpack looks for) in the project root:

```js
const path = require('path'),
  HTMLWebpackPlugin = require('html-webpack-plugin'),
  SriPlugin = require('webpack-subresource-integrity');

module.exports = {
  output: {
    // The output file's name
    filename: 'bundle.js',
    // Where the output file will be placed. Resolves to
    // the "dist" folder in the directory of the project
    path: path.resolve(__dirname, 'dist'),
    // Configures the "crossorigin" attribute for resources
    // with subresource integrity injected
    crossOriginLoading: 'anonymous'
  },
  // Used for configuring how various modules (files that
  // are imported) will be treated
  module: {
    // Configures how specific module types are handled
    rules: [
      {
        // Regular expression to test for the file extension.
        // These loaders will only be activated if they match
        // this expression.
        test: /\.css$/,
        // An array of loaders that will be applied to the file
        use: ['style-loader', 'css-loader'],
        // Prevents the accidental loading of files within the
        // "node_modules" folder
        exclude: /node_modules/
      }
    ]
  },
  // webpack plugins alter the function of webpack itself
  plugins: [
    // Plugin that will inject integrity hashes into index.html
    new SriPlugin({
      // The hash functions used (e.g.
      // <script integrity="sha256- ... sha384- ..." ...)
      hashFuncNames: ['sha384']
    }),
    // Creates an HTML file along with the bundle. We will
    // inject the subresource integrity information into
    // the resources using webpack-subresource-integrity
    new HTMLWebpackPlugin({
      // The file that will be injected into. We can use
      // EJS templating within this file, too
      template: path.resolve(__dirname, 'src', 'index.ejs'),
      // Whether or not to insert scripts and other resources
      // into the file dynamically. For our example, we will
      // enable this.
      inject: true
    })
  ]
};
```
Creating the template
We need to create a template to tell webpack what to inject the bundle and subresource integrity information into. Create a file named index.ejs inside a src folder (the configuration looks for it at src/index.ejs):
```html
<!DOCTYPE html>
<html>
  <body></body>
</html>
```
Now, create an index.js in the src folder (webpack's default entry point is src/index.js) with the following script:
```js
// Imports the CSS stylesheet
import './styles.css';

alert('Hello, world!');
```
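index.js imports ./styles.css, which we haven't created yet. Any stylesheet will do; here's a minimal placeholder (the rule below is our assumption, the original styles aren't specified):

```css
/* src/styles.css */
body {
  background: papayawhip;
}
```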
Building the bundle
Type npm run dev in the terminal (the script we defined earlier). You’ll notice that a folder called dist is created, and inside of it, a file called index.html that looks something like this:
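The exact markup and digest depend on the bundle's contents, so the hash below is a placeholder rather than a real value, but the generated file will look roughly like this:

```html
<!DOCTYPE html>
<html>
  <body>
    <script src="bundle.js"
            integrity="sha384-[hash of bundle.js]"
            crossorigin="anonymous"></script>
  </body>
</html>
```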
The CSS will be included as part of the bundle.js file.
This will not work for files loaded from external servers, nor should it, as cross-origin files that need to constantly update would break with subresource integrity enabled.
Thanks for reading!
That’s all for this one. Subresource integrity is a simple and effective addition that ensures you’re loading only what you expect, protecting your users in the process. And remember: security is never just one solution, so always be on the lookout for more ways to keep your website safe.