Zach did that thing where each of his blog posts has a special URL containing the design of its social image card, which gets screenshotted by a headless browser (like Puppeteer) and used as the actual Open Graph meta image, meaning it’s displayed on Twitter, Facebook, iMessage, Slack, Discord, and whatever else supports that card look.
I like it. Even though I’ve got a pretty good solution cooking now (for WordPress), the templates aren’t controlled with HTML/CSS like I wish they were.
As a bit of yin to the yang here, Jim has some thoughts on the not-so-great aspects of Open Graph images:
I feel like they’ve been hijacked by auto-generated computer imagery serving as attention-grabbing filler more than supportive expression.
It’s kinda like… we can add Open Graph images, and we essentially get a totally free massive clickable target for hungry fingers, so we do add Open Graph images — even when that image is, well, boring. Just auto-generated computer barf of title text with branding. Jim’s post has examples.
I get where Jim is coming from, and I suppose I’m guilty to some degree. I feel like we’re a cut above on CSS-Tricks though, if you’ll pardon a taste of defensiveness, because:
We have a variety of templates to choose from to switch it up, like a quote design.
We incorporate custom imagery into the final card, meaning most cards are somewhat visually unique.
We don’t just stamp our branding on the cards; we usually incorporate the author, as a little extra high five for the person rather than just the brand.
Putting images on websites is incredibly simple, yes? Actually, yes, it is. You use <img>, point its src attribute at a valid source, and you’re done. Except that there are (counts fingers) 927 things you could (and some you really should) do that often go overlooked. Let’s see…
Make sure you use sentence-format alt text on the image to describe what the image depicts.
Wrap it in a <figure> and use a <figcaption> if you want visible text that goes with it.
Use a <picture> element with a variety of <source> elements if you want to either…
Have different source images for different screen sizes and other conditions.
Serve differently formatted images, to take advantage of next-gen image formats.
Know enough about image formats off the bat, so that you can use formats like SVG when appropriate.
Know enough about your audience so that you can default to using the most modern format you can.
Use srcset and sizes attributes to serve differently sized images to be bandwidth efficient on different devices. Note that to be as efficient as possible, creating these image variations is dependent on the image itself, not pre-determined sizes.
Optimize the image and all its variations. Try lots of image optimization software, as they all yield different results. Pick the best. Make sure the optimized versions aren’t accidentally bigger in size or lower in quality than the original, because that can happen.
Lazy load the images so users don’t have to download images they won’t see. Natively, we’ve got loading="lazy" but this is so important that it’s probably worth polyfilling.
Make sure to include width and height attributes so that the image reserves the correct amount of space in a layout before it loads (even when the image is of fluid width).
Host your images on a fast, cookie-less, global CDN. Probably <link rel="preconnect"> and <link rel="dns-prefetch"> to the CDN domain(s). But don’t do any CDN stuff in dev, only staging or production. The canonical source for your images should be your own servers, so you can move CDNs as needed.
The largest image on the page could be preloaded entirely, like <link rel="preload" as="image">, for the best possible LCP.
Try to figure out when you should use decoding="async" (I have no idea).
Have fallback styling in place for when/if the image doesn’t load.
Consider more pleasing before-load or fail states for the images, like small blurred versions of the images.
Have performance monitoring in place that makes sure rogue giant images don’t slip through and mess with your performance budgets, and that watches for broken image errors.
I’m sure I’m forgetting some. Makes you sweat, doesn’t it?
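For a taste of what a handful of those items look like stacked together in markup, here is a rough sketch; the file names, pixel sizes, and CDN domain are all hypothetical:

```html
<!-- Hypothetical CDN domain; hint the connection early so the image request starts sooner -->
<link rel="preconnect" href="https://cdn.example.com">
<link rel="dns-prefetch" href="https://cdn.example.com">

<figure>
  <picture>
    <!-- Next-gen formats first; the browser uses the first type it supports -->
    <source
      type="image/avif"
      srcset="https://cdn.example.com/photo-800.avif 800w,
              https://cdn.example.com/photo-1600.avif 1600w"
      sizes="(min-width: 800px) 700px, 100vw">
    <source
      type="image/webp"
      srcset="https://cdn.example.com/photo-800.webp 800w,
              https://cdn.example.com/photo-1600.webp 1600w"
      sizes="(min-width: 800px) 700px, 100vw">
    <img
      src="https://cdn.example.com/photo-800.jpg"
      srcset="https://cdn.example.com/photo-800.jpg 800w,
              https://cdn.example.com/photo-1600.jpg 1600w"
      sizes="(min-width: 800px) 700px, 100vw"
      width="1600" height="900"
      loading="lazy"
      decoding="async"
      alt="A description of what the photo depicts.">
  </picture>
  <figcaption>Visible caption text that goes with the image.</figcaption>
</figure>
```

And that still leaves optimization, fallback styling, preloading the LCP image, and monitoring on the table.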
If you ask me, it can’t be done. At least, not all of it together while being practical with your time. Fortunately: computers. I know both Eleventy Image and gatsby-plugin-image are well-regarded in how well they automate this stuff and deliver as many of these best practices as they can.
Here’s the technique I use for most images on this blog: I take the maximum size the image can be displayed in CSS pixels, and I multiply that by two, and I encode it at a lower quality, as it’ll always be displayed at a 2x density or greater. Yep. That’s it.
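To put made-up numbers on that: if an image never renders wider than 700 CSS pixels, the file gets exported at 1400 pixels wide with the quality dialed way down, since the extra pixel density helps hide the compression artifacts:

```html
<!-- Hypothetical numbers: displayed at a max of 700 CSS pixels wide,
     so the source file is 1400px wide and heavily compressed -->
<img
  src="/images/example-photo-1400w.jpg"
  width="1400" height="933"
  style="max-width: 700px; height: auto;"
  alt="A description of what the photo depicts.">
```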
I feel we, the community, have to acknowledge that CSS is easy to get started with and hard to master. Let’s reflect on the language and find out what makes it hard.
Tim’s reasons CSS is hard (in my own words):
You can look at a matching ruleset and still not have the whole styling story. There might be multiple matching rulesets in disparate places, including places that only apply conditionally, like within @media queries (see the small CSS example after this list).
Even if you think you’ve got a complete handle on the styling information in the CSS, you still may not, because styling is DOM-dependent. You need information from both places to know how something will be styled.
You have no control over the device, browser, version, resolution, input mode, etc., all of which can be CSS concerns.
Making changes to CSS can be scary because it’s hard to understand everywhere it applies.
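A tiny, contrived illustration of that first point: both rulesets below match the same button, they could live in completely different files, and the second one only applies some of the time:

```css
/* buttons.css */
.button {
  padding: 1rem 2rem;
  background: royalblue;
}

/* layout.css, maybe written months later by someone else */
@media (max-width: 600px) {
  .sidebar .button {
    padding: 0.5rem;   /* wins on small screens, when the button is inside .sidebar */
    background: crimson;
  }
}
```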
I’m not sure people making sweeping generalizations about CSS either being too hard or too easy is helpful for anyone. It’s much more interesting to look at what can be straightforward about CSS and what can be tricky, like Tim has done here.
I was chatting with some front-end folks the other day about why so many companies struggle at making accessible websites. Why are accessible websites so hard to build? We learn about HTML, we make sure things are semantic and — voilà! — we have an accessible website. During the course of conversation, someone mentioned the Domino’s pizza legal case, which is perhaps the most public example of a company being sued because of a lack of accessibility.
Here’s an interesting tidbit from that link:
According to CNBC, the number of lawsuits over inaccessible websites jumped 58 percent last year over 2017, to more than 2,200.
Inaccessible websites are not just a consideration for designers and engineers but a serious problem for a company’s legal team as well. Thankfully, it seems more of these cases will be brought to trial and (my personal hope is) this will get folks to care more about semantics and front-end development best practices. Although I’d like to think that companies would do what’s best for the web and make websites that meet the baseline requirements without a legal threat, we absolutely need to make inaccessible websites illegal for folks to really pay attention to this issue.
However! I also worry about attributing what might simply be a lack of knowledge to malice. I reckon a lot of websites have bad accessibility not because folks don’t care, but because they don’t know there’s an issue in the first place. As my conversation with front-end engineers progressed, I realized that the reason accessibility isn’t tackled seriously probably doesn’t have anything to do with bandwidth, or experience, or money.
I reckon the problem is that the accessibility of a website can be invisibly and silently broken.
So, here’s an idea: what if our text editors caught accessibility issues and showed them to us during development, right alongside syntax errors?
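Some of this already exists in lint form; eslint-plugin-jsx-a11y, for example, surfaces problems right in the editor as you type. A rough sketch of the kind of thing it flags (the component and handler here are made up):

```jsx
import React from 'react';

const openMenu = () => {}; // hypothetical handler

function Hero() {
  return (
    <header>
      {/* flagged: "img elements must have an alt prop" (jsx-a11y/alt-text) */}
      <img src="/hero.jpg" />

      {/* flagged: click handler on a non-interactive element with no keyboard
          equivalent (jsx-a11y/click-events-have-key-events) */}
      <div onClick={openMenu}>Menu</div>
    </header>
  );
}

export default Hero;
```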
I’m sure there are a ton of other ways we can make accessibility issues more public and visible. There are tools such as Lighthouse and browser extensions that are already out there, but making accessibility (and even performance, another silent fail) a part of our minute-to-minute workflow ensures that we can’t ignore it. Something like this would encourage us to learn about the problems, give us links to potential solutions, and encourage us all to care for a relatively misunderstood part of front-end development.
Development is complicated. Our job is an ongoing battle between getting the job done and doing that job in a safe, long-lasting way.
Developers say things like, “I’m just going to do this quick and dirty first,” because it’s taken as fact that if you code anything quickly, it will not only be prone to mistakes, but you’ll also be deliberately skipping established conventions and the tasks that make for more solid code.
There is probably no practical way to make it impossible to write sloppy, bad code, but it is fascinating to consider how tooling has evolved to make it harder.
The obvious ones are automated code quality tools.
ESLint is configurable and those configurations can be enforced to a team’s liking. If you’d prefer to adopt some strong and established conventions, I believe the most popular out there is Airbnb’s configuration.
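Wiring that up is a small amount of config. A minimal sketch, assuming ESLint and eslint-config-airbnb (plus its peer dependencies) are installed:

```js
// .eslintrc.js
module.exports = {
  extends: ['airbnb'], // Airbnb's shareable config does the heavy lifting
  rules: {
    // team-specific tweaks layer on top of the shared conventions
    'no-console': 'warn',
  },
};
```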
There are alternatives to everything, of course. This post isn’t so much a comprehensive tooling list as it is a look at the types of tools that help push us toward writing better code. That said, stylelint is good for CSS, PHP_CodeSniffer is good for PHP, and RuboCop is good for Ruby.
Prettier is in a similar, but unique category. It is like a “beautifier” for your code, in that it helps you reformat it not only to look good but to follow team conventions (e.g. single quotes! Two-space indentation!) as well. The most common way to use Prettier is to have it run as you save the file. So perhaps you write quickly and don’t worry about formatting as much, because it happens for you the second you save. There is an interesting side benefit to quality here: Prettier can fail, and if it does, there’s a problem in the syntax of your code you need to fix. Super useful.
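Those conventions live in a tiny config file. For the single-quotes-and-two-spaces example above, it might look something like this:

```js
// .prettierrc.js
module.exports = {
  singleQuote: true, // single quotes!
  tabWidth: 2,       // two-space indentation!
  semi: true,
  trailingComma: 'es5',
};
```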
I’m intrigued by tools like SonarLint, Code Climate, and ReSharper that look, to me, essentially like linters, but deliver a best-practices analysis out of the box rather than making you configure everything yourself. They also claim to understand your code at a deeper level. Webhint and DeepScan look similarly interesting. Feel free to correct me if I have this wrong, because I haven’t gotten a chance to use any of them yet.
Taking linting a step further, you can make passing lint tests a requirement before files can even be committed into Git. Git hooks are the ticket here, and the most popular tool for managing them is Husky.
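One common setup, sketched here with the older style where Husky’s hooks live in package.json, plus lint-staged so only the files being committed get linted:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": "eslint --max-warnings=0",
    "*.css": "stylelint"
  }
}
```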
Similarly, actual tests are powerful preventers of bad code.
It’s always smart to write tests. Deploying code that breaks features is embarrassing, a waste of time, and can negatively impact your business. Yet we do it all too often. The whole point of tests is to prevent that.
Test-Driven Development (TDD) is a practice in which you write the test before you write the actual code that does the thing you’re trying to do. It’s a nice way to work if you can pull it off, as you’ve got code coverage from the get-go.
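A tiny illustration of that rhythm (Jest is an assumption here; any test runner works): the test is written first and fails, then just enough code is written to make it pass.

```js
// Step 1: the test comes first and fails, because slugify() doesn't exist yet
test('turns a post title into a URL slug', () => {
  expect(slugify('Images Are Hard!')).toBe('images-are-hard');
});

// Step 2: write just enough code to make it pass
function slugify(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse anything non-alphanumeric into a dash
    .replace(/(^-|-$)/g, '');    // trim leading/trailing dashes
}
```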
Another type of automated testing is integration (also known as end-to-end) testing. I’m a fan of Cypress for that. It simulates a user actually using a browser. Go to this URL! Click this! Fill out this field and submit the form! Does this thing exist now? Is the URL what it’s supposed to be? Is this other thing visible? That kind of testing is powerful in that a lot of things have to be going right for these to pass, so there is a ton of implied testing.
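That narration translates into Cypress code almost word for word (the URL and selectors here are hypothetical):

```js
// cypress/integration/contact.spec.js
describe('contact form', () => {
  it('submits and lands on the thank-you page', () => {
    cy.visit('/contact');                                   // Go to this URL!
    cy.get('[data-test="email"]').type('me@example.com');   // Fill out this field…
    cy.get('[data-test="message"]').type('Hello!');
    cy.get('form').submit();                                // …and submit the form!
    cy.url().should('include', '/thanks');                  // Is the URL what it's supposed to be?
    cy.contains('Thanks for reaching out').should('be.visible'); // Is this other thing visible?
  });
});
```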
As a CSS kinda guy, I’m also a fan of tests that watch to make sure the site looks how it’s supposed to look and there aren’t unintended consequences of styling changes. Percy is awesome for that (see our video).
Languages and language features that help us, wittingly or not
Take JSX, for example. It’s entirely possible to write bad HTML in JSX, but you can’t write broken HTML. The component will error out entirely and you’ll know as you’re working. That’s not even close to the reason JSX exists, but I find it an interesting side effect. I’ve fixed many bugs in my career that had to do with malformed HTML causing problems, ranging from tiny side effects to massive layout blunders.
Similarly, a tool like Emmet can help generate valid HTML. I use Emmet all the time, and didn’t even think of that until it was mentioned to me.
I also think of React features, like PropTypes, that throw errors when missing or unexpected data is thrown at them. Not to mention you can configure your linter to yell at you if you’re missing the PropType. That’s pretty powerful testing to be enforced for a fairly small amount of labor (compared to, say, writing a test). You can even force them to help with accessibility.
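The declaration itself is just a static property on the component, and React warns in the console the moment a prop is missing or the wrong type (this component is made up):

```jsx
import React from 'react';
import PropTypes from 'prop-types';

function AuthorCard({ name, postCount }) {
  return <p>{name} has written {postCount} posts.</p>;
}

AuthorCard.propTypes = {
  name: PropTypes.string.isRequired,      // console warning if it's missing
  postCount: PropTypes.number.isRequired, // console warning if, say, a string sneaks in
};

export default AuthorCard;
```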
It would be impossible to not mention TypeScript here. One of the major points of using TypeScript is code safety. The fact that it’s getting huge (listen to Laurie Voss on this) points to the fact that we want to enforce that safety. I remember when Angular 2 came out, there were long, solid explanations as to why it went all-in on TypeScript. People also talk about the tooling improvements you get with TypeScript: advanced autocompletion, navigation, and refactoring. They are all, in a way, also about code safety — having the editor help you write correct file names and function names. TypeScript or not, any sort of autocomplete/IntelliSense is great to have.
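The safety pitch in miniature: this hypothetical function won’t even compile with a bad call site, so the bug never ships.

```ts
function formatPrice(amount: number, currency: string): string {
  return `${currency}${amount.toFixed(2)}`;
}

formatPrice('12.50', 'USD');
// Error: Argument of type 'string' is not assignable to parameter of type 'number'.
// And along the way, the editor autocompletes formatPrice and its parameters for you.
```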
The whole idea of this post came from me thinking about how GraphQL has this “you can’t screw it up” quality to it. You can’t ask for data that isn’t there, as it will error right as you’re working with it — and then you’ll fix it. And you can’t get back data that you aren’t expecting, as you’ve described exactly what you want back and that’s what GraphQL does. It’s not that you can’t write bad code that uses GraphQL or write a bad GraphQL implementation, but the technology sort of encourages better code and I’m fascinated by that.
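A quick sketch against a made-up schema: you describe exactly the fields you want, you get exactly those fields back, and asking for something the schema doesn’t define fails immediately instead of handing you undefined later.

```graphql
query {
  post(slug: "images-are-hard") {
    title
    author {
      name
    }
    # Requesting a field the schema doesn't define (say, a typo like "titel")
    # errors right away with "Cannot query field…" instead of silently returning nothing.
  }
}
```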
CSS-in-JS, while that’s probably too broad a term generally, applies to this discussion. Most of the solutions on that spectrum involve some kind of style scoping, and style scoping delivers exactly the “you can’t screw it up” quality we’re focusing on. You can’t cause unintended side effects when the selector you’ve just written compiles to something you’d never hand-write, like .SpecificComponent_root_34lkj4x.
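CSS Modules is one concrete flavor of that scoping (the file names here are made up): you write a plain class name, the build step renames it into something generated, much like the .SpecificComponent_root_34lkj4x example, and collisions become impossible.

```jsx
// SpecificComponent.module.css contains:  .root { padding: 1rem; }

import React from 'react';
import styles from './SpecificComponent.module.css';

export default function SpecificComponent() {
  // styles.root compiles to a generated name like "SpecificComponent_root_34lkj4x",
  // so this ruleset can't accidentally style anyone else's .root
  return <div className={styles.root}>Scoped!</div>;
}
```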
Your co-workers are an awesome line of defense
First, give y’allselves a system. Nothing goes to the master branch directly, and everything has to be a Merge/Pull Request. That gives you a spot to talk about code quality — not to mention a place where you can run a suite of automated tests before the code is dangerously close to production.
GitLab has a concept of approvers for a Merge Request. You pick some people that have to approve the branch before it can be merged.
GitHub has the same concept with protected branches. Perhaps the best thing you can do to prevent bad code is to widen the responsibility. There is always a risk this just becomes a glance-at-the-code-for-two-seconds-and-give-it-a-👍 motion, but that’s on y’all to make sure reviews are taken seriously. I’ve seen lots of value in a requirement that many sets of eyeballs need to be on code before it goes out. “Given enough eyeballs, all bugs are shallow” and all that.
We’ll always be screwing up code, but we can also always be finding ways not to.