
What should someone learn about CSS if they last boned up during CSS3?

“CSS3” was a massive success for CSS. A whole bunch of stuff dropped essentially at once, and it was all terrific to get our hands on: gradients, animations/transitions, border-radius, box-shadow, transform… woot! And better, the banner name CSS3 (and the spiritual umbrella “HTML5”) took off, and the industry was saturated in learning material about it all. Just look at all the “CSS3”-dubbed material that’s been published around here at CSS-Tricks over the years.

No doubt loads of people boned up on these technologies during that time. I also think there is no doubt that lots of people haven’t learned much CSS since then.

So what would we tell them?

Some other folks have speculated similarly. Scott Vandehey in “Modern CSS in a Nutshell” wrote about his friend who hasn’t kept up with CSS since about 2015 and doesn’t really know what to learn. I’ll attempt to paraphrase Scott’s list and what’s changed since the days of CSS3.

  • Preprocessors are still widely used, but the reasons to use them have dwindled since the days of CSS3, so maybe don’t even bother. That includes more newfangled approaches like polyfilling future features, and it includes Autoprefixer.
  • CSS-in-JS is common, but only on projects where the entire workflow is already in JavaScript. You’ll know when you’re on a relevant project and can learn the syntax then if you need to.
  • You should learn Custom Properties, Flexbox, and Grid for sure.

Sounds about right to me. But allow me to make my own list of post-CSS3 goodies that expands upon that list a smidge.

What’s new since CSS3?

And by “CSS3” let’s say 2015 or so.


.card {
  display: grid;
  grid-template-columns: 150px 1fr;
  gap: 1rem;
}

.card .nav {
  display: flex;
  gap: 0.5rem;
}

Layout

You really gotta learn Flexbox and Grid if you haven’t — they are really cornerstones of CSS development these days. Even more so than any feature we got in CSS3.

Grid is extra powerful when you factor in subgrid and masonry, neither of which is reliable cross-browser yet but probably will be before too long.
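If you want a taste of subgrid, here’s a minimal sketch of the syntax using a made-up card layout (browser support is still limited, so treat it as a preview rather than something to ship):

.parent {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.parent .card {
  display: grid;
  grid-row: span 3;
  grid-template-rows: subgrid; /* the card's rows line up with the parent's row tracks */
}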

html {
  --bgColor: #70f1d9;

  --font-size-base: clamp(1.833rem, 2vw + 1rem, 3rem);
  --font-size-lrg: clamp(1.375rem, 2vw + 1rem, 2.25rem);
}

html.dark {
  --bgColor: #2d283e;
}

CSS Custom Properties

Custom properties are also a big deal for several reasons. They can be your home for design tokens on your project, making a project easier to maintain and keep consistent. Color theming is a big use case, like dark mode.

You can go so far as designing entire sites using mostly custom properties. And along those lines, you can’t ignore Tailwind these days. The approach of styling an entire site with classes in HTML strikes the right chord with a lot of people (and the wrong chord with a lot of people, so no worries if it doesn’t jive with you).

@media (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.001s !important;
  }
}

@media (prefers-color-scheme: dark) {
  :root {
    --bg: #222;
  }
}

Preference Queries

Preference queries are generally @media queries like we’ve been using to respond to different browser sizes for years, but they now include ways to detect specific user preferences at the OS level. Two examples are prefers-reduced-motion and prefers-color-scheme. These allow us to build interfaces that more closely respect a user’s ideal experience. Read Una’s post.

.block {
  background: hsl(0 33% 53% / 0.5);
  background: rgb(255 0 0);
  background:
    /* can display colors no other format can */
    color(display-p3 0.9176 0.2003 0.1386);
  background: lab(52.2345% 40.1645 59.9971 / 0.5);
  background: hwb(194 0% 0% / 0.5);
}

Color Changes

The color syntax is moving to functions that accept alpha (transparency) without having to change the function name. For example, if you wanted pure blue in the CSS3 days, you might do rgb(0, 0, 255). Today, however, you can do it no-comma style (both work): rgb(0 0 255), and then add alpha without using a different function: rgb(0 0 255 / 0.5). Same exact situation for hsl(). Just a small nicety, and the only way future color functions will work.
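To put that progression in one place, here’s the same blue written a few ways (just an illustrative snippet restating the examples above):

.blue {
  /* CSS3-era comma syntax */
  background: rgb(0, 0, 255);

  /* modern space-separated syntax (both still work) */
  background: rgb(0 0 255);

  /* alpha without switching to rgba() or hsla() */
  background: rgb(0 0 255 / 0.5);
  background: hsl(240deg 100% 50% / 0.5);
}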

Speaking of future color syntaxes:

body {
  font-family: 'Recursive', sans-serif;
  font-weight: 950;
  font-variation-settings: 'MONO' 1, 'CASL' 1;
}

Variable Fonts

Web fonts became a big thing in CSS3. Now there are variable fonts. You might as well know they exist. They both unlock some cool design possibilities and can sometimes be good for performance (like no longer needing to load different font files for bold and italic versions of the same font, for example). There is such a thing as color fonts too, but I’d say they haven’t seen much popularity on the web, despite the support.

.cut-out {
  clip-path: polygon(25% 0%, 75% 0%, 100% 50%, 75% 100%, 25% 100%, 0% 50%);
}

.mask {
  mask: url(mask.png) right bottom / 100px repeat-y;
}

.move-me {
  offset-path: path('M 5 5 m -4, 0 a 4,4 0 1,0 8,0 a 4,4 0 1,0 -8,0');
  animation: move 3s linear infinite;
}

@keyframes move {
  100% {
    offset-distance: 100%;
  }
}

Paths

SVG has also exploded since CSS3. You can straight up crop any element into shapes via clip-path, bringing SVG-like qualities to CSS. Not only that, but you can animate elements along paths, float elements along paths, and even update the paths of SVG elements.

These all feel kind of spiritually connected to me:

  • clip-path — allows us to literally crop elements into shapes.
  • masks — similar to clipping, but a mask can have other qualities like being based on the alpha channel of the mask.
  • offset-path — provides a path that an element can be placed on, generally for the purpose of animating it along that path.
  • shape-outside — provides a path on a floated element that other elements wrap around (sketched just after this list).
  • d — an SVG’s d attribute on a <path> can be updated via CSS.
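shape-outside is the only one of those without a snippet above, so here’s a minimal sketch of the idea (the class name is made up):

.float-me {
  float: left;
  width: 200px;
  height: 200px;
  shape-outside: circle(50%);
  clip-path: circle(50%); /* optional: make the visible shape match the wrapping shape */
}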

.disable {
  filter: blur(1px) grayscale(1);
}

.site-header {
  backdrop-filter: blur(10px);
}

.styled-quote {
  mix-blend-mode: exclusion;
}

CSS Filters

There is a lot of image manipulation (not to mention manipulation of other DOM elements) that is possible these days directly in CSS. There is quite literally filter, but it’s got friends, and they all have different uses.

These all feel kind of spiritually connected to me:

  • filter — all sorts of useful Photoshop-like effects like brightness, contrast, grayscale, saturation, etc. Blurring is a really unique power.
  • background-blend-mode — again, evocative of Photoshop in how you can blend layers (see the sketch just after this list). Multiply the layers to darken and combine. Overlay to mix the background and color. Lighten and darken are classic effects that have real utility in web design, and you never know when a more esoteric lighting effect will create a cool look.
  • backdrop-filter — the same abilities you have with filter, but they only apply to the background and not the entire element. Blurring just the background is a particularly useful effect.
  • mix-blend-mode — the same abilities you have with background-blend-mode, but for the entire element rather than being limited to backgrounds.
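background-blend-mode is the only one of those without a snippet above, so here’s a rough sketch (the image URL is hypothetical):

.hero {
  background-image:
    url(photo.jpg),
    linear-gradient(to bottom, #70f1d9, #2d283e);
  background-size: cover;
  background-blend-mode: multiply; /* darkens the photo with the gradient */
}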

import "https://unpkg.com/extra.css/confetti.js";
body {   background: paint(extra-confetti);   height: 100vh;   margin: 0; }

Houdini

Houdini is really a collection of technologies that are all essentially based around extending CSS with JavaScript, or at least at the intersection of CSS and JavaScript.

  • Paint API — returns an image that is built from <canvas> APIs and can be controlled through custom properties.
  • Properties & Values API / Typed OM — gives types to values (e.g. 10px) that would have otherwise been strings.
  • Layout API — create your own display properties.
  • Animation API

Combined, these make for some really awesome demos, though browser support is scattered. Part of the magic of Houdini is that it ships as Worklets that are pretty easy to import and use, so it has the potential to modularize powerful functionality while making it trivially easy to use.
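As a small CSS-only taste of the Properties & Values API, the @property at-rule registers a typed custom property. This is just a sketch with a made-up property name:

@property --progress {
  syntax: "<percentage>";
  inherits: false;
  initial-value: 0%;
}

.bar {
  width: var(--progress);
  transition: --progress 0.3s ease; /* typed, so the browser can actually interpolate it */
}

.bar:hover {
  --progress: 100%;
}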

my-component {
  --bg: lightgreen;
}

:host(.dark) {
  background: black;
}

my-component::part(foo) {
  border-bottom: 2px solid black;
}

Shadow DOM

The Shadow DOM comes up a bit if you’ve played with <svg> and the <use> element. The “cloned” element that comes through has a shadow DOM that has limitations on how you can select “through” it. Then, when you get into <web-components>, it’s the same ball of wax.

If you find yourself needing to style web components, know there are essentially four options from the “outside.” And you might be interested in knowing about native CSS modules and constructible stylesheets.

The CSS Working Group

It’s notable that the CSS Working Group has its own way of drawing lines in the sand year-to-year, noting where certain specs stand at a given point in time: the CSS Snapshot documents it publishes.

These are pretty dense, though. Sure, they’re great references that document where things stand and let us see what’s changed since CSS3. But there’s no way I’d send a casual front-end developer to them to choose what to learn.

Yeah — but what’s coming?

I’d say probably don’t worry about it. 😉

The point of this is catching up to useful things to know now since the CSS3 era. But if you’re curious about what the future of CSS holds in store…

  • Container queries will be a huge deal. You’ll be able to make styling choices based on the size of a container element rather than the browser size alone (there’s a rough sketch of the syntax just after this list). And it’s polyfillable today.
  • Container units will be useful for sizing things based on the size of a container element.
  • Independent transforms, e.g. scale: 1.2;, will feel more logical to use than always having to use transform.
  • Nesting is a feature that all CSS preprocessors have had forever and that developers clearly like using, particularly for media queries. It’s likely we’ll get it in native CSS soon.
  • Scoping will be a way to tell a block of CSS to only apply to a certain area (the same way CSS-in-JS libraries do), and helps with the tricky concept of proximity.
  • Cascade layers open up an entirely new concept of what styles “win” on elements. Styles on higher layers will beat styles on lower layers, regardless of specificity.
  • Viewport units will greatly improve with the introduction of “relative” viewport lengths. The super useful ones will be dvh and dvw, as they factor in the actual usable space in a browser window, preventing terrible issues like the browser UI overlapping a site’s UI.
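As mentioned in the first bullet above, here’s roughly what container queries look like. The syntax was still being finalized at the time of writing, so treat this as a sketch rather than gospel:

.card-wrapper {
  container-type: inline-size;
}

@container (min-width: 400px) {
  .card {
    display: flex;
    gap: 1rem;
  }
}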

Bramus Van Damme has a pretty good article covering these things and more in his “CSS in 2022” roundup. It looks like 2022 should be a real banner year for CSS. Perhaps more of a banner year than the CSS3 of 2015.



Help Users Accomplish What They Came For

From my perspective, the question of what one thing we can do to make a website better is not a technical one. The more I browse the internet, the more I realize that the biggest issue with a lot of websites is the fact that they don’t let me accomplish the task I am looking to get done. Whether it is the usability, the information architecture, or the performance, it doesn’t really matter. Over the years, new browser capabilities and the tech stack have made it possible to add more and more complexity to an average website. We see it everywhere: in pages presenting the product, in booking services, in portfolios, and in online shops. We try to delight the user instead of focusing on one simple task: helping the user accomplish what they came to do.

If I were to point out one thing that people can do to make their website better, it is to take a moment to think about the most crucial actions that we want our users to be able to do on a page and make them as easy and accessible as possible.

All visual effects, fancy graphics, beautiful interactions, and tracking scripts should come second.

I can give you an example from my own experience. A few years ago, I went on holiday in a remote area with very limited internet access. My luggage was lost and there were not many places where I could buy extra clothing or cosmetics. I could not find where my luggage was or when it would be delivered because the airline website did not load on my limited data—it would not even show a phone number I could call, and the email address I found someplace else turned out to be outdated. The website did not follow the rules of progressive enhancement and graceful degradation; it only allowed privileged users with a good enough internet connection to download a huge amount of JavaScript that’s responsible for building the whole experience. In their case, a simple form with two text inputs and basic text information as a fallback would have easily solved my issue. I can bet that the developers spent countless hours making the experience delightful, yet I was unable to even see it.

It is easy to get caught up in the moment and follow the milestones for the project as they are described through tickets in Jira or some other project management software. It is easy to reuse the solutions we are used to, and to copy and paste from previous projects or Stack Overflow. It’s also easy to assume that if something “works on my machine” it will also work for everyone else.

What’s difficult is taking a moment to look past the features that add new value to the project and to focus on parts of the app that may have been overlooked in the process. It is hard to stay on top of things as new features and browser APIs are released. It is hard to remember that someone might not have the same privilege as we do.

Take a moment to rethink what the page’s true value is to the user visiting it, and try to look at the page with a fresh eye.

It can be challenging, as we get used to the solutions we built. It is hard for us to imagine how people can fail to follow the instructions or clues we left for them on the screen, or how the page might feel to an unsighted user or someone who can only navigate using the keyboard. We forget to test edge cases and anything that goes beyond the user’s “happy path,” and we tend to overlook the fact that we are using a powerful MacBook with a sharp display and internet flowing at a steady pace. We forget that some people are not native English speakers, and that a word that is self-explanatory to us can mean nothing to a user who does not use it in everyday conversation.

I challenge you to take time to look at your website as if it was your first time around.

Use it in the production environment with the stream of third-party resources that might not be there when you are in development mode. Use it with a very poor internet connection and measure how long it takes to accomplish a simple task like filling out a form. Try using it on a device you might not have used before.

.

.

.

I challenge you to find a real user of your website and take a moment to watch how they use what you built during a user testing session.

You probably have some assumptions about what causes headaches for your users and what doesn’t. I could bet that some of these assumptions will be challenged, and that you will wind up creating a whole list of things to fix that you wouldn’t otherwise have considered.


I hope that progressive enhancement doesn’t become yet another buzzword and that you really take a moment to help the user accomplish what they came for. If you are interested in learning more on this topic I can recommend getting familiar with one of Jeremy Keith’s presentations on that topic or the article by Aaron Gustafson that popularized the idea.


I’m confused about Static Site Generators. Are they only for informational sites where I can’t log in or display any dynamic data?

(This is a sponsored post.)

I got this question from a listener the other day. Fair question, I’d say. The word “Static” in “Static Site Generator” is at odds with the word “Dynamic.” It seems to imply that a website created with a Static Site Generator (SSG) is locked in stone, only to be changed when it is run again at some future date. That’s a somewhat unfortunate implication, if an entirely understandable one.

“Static Sites,” in actuality, can be as dynamic as any other site because of one¹ thing baked right in and available to any website: JavaScript. JavaScript can, say, hit an API and update the otherwise statically generated markup of a page. Just think. JavaScript. APIs. Markup… J… A… M… Jamstack!

Part of the trick to understanding this Jamstack world (aka Static Sites that do Dynamic Things) is just looking at what Netlify offers. Netlify literally only offers static hosting. No server-side languages (think Ruby, PHP, or Python) serving up individual pages on Netlify. So SSGs and Netlify go hand in hand. But let’s go through it as a list:

  • Netlify runs your build process for you, which very likely includes an SSG. The point of which is largely that you keep your built site files out of your version control system, which would otherwise be a wasteful mess.
  • Netlify processes your forms. No need to run an always-on server just for this dynamic feature.
  • Netlify offers authentication. That’s right, reader: auth is a first-class citizen of the platform.
  • Netlify runs your server-side code in the form of cloud functions. Static hosting doesn’t mean you can’t do server-side things. It means you do server-side things with modern, cheap, secure, focused, fast, powerful cloud functions.
  • Netlify can build pages on-demand. Meaning you don’t have to pre-build all your pages if you don’t want to.

That’s just some of the feature set. Here’s a fun blog post from a little while ago with some of the staff’s favorite features, many of which aren’t in the list above. Jamstack is starting to literally mean that indeed dynamic things are happening to a static site.

I hope that answers the question for this particular reader and anyone else with the same confusion. SSGs can produce entirely static websites with zero dynamic data and that offer no special logged-in experience. But they can also be every bit as dynamic as any other site, just built from a more solid, static foundation.

  1. Well, let’s say two things. Dynamic things can be done via edge handlers as well, without the need for client-side JavaScript.


Merge Conflicts: What They Are and How to Deal with Them​

This article is part of our “Advanced Git” series. Be sure to follow Tower on Twitter or sign up for their newsletter to hear about the next articles.

Merge conflicts… Nobody likes them. Some of us even fear them. But they are a fact of life when you’re working with Git, especially when you’re teaming up with other developers. In most cases, merge conflicts aren’t as scary as you might think. In this fourth part of our “Advanced Git” series we’ll talk about when they can happen, what they actually are, and how to solve them.

Advanced Git series:

  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts (You are here!)
  5. Part 5: Rebase vs. Merge (Coming soon!)
  6. Part 6: Interactive Rebase
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits

How and when merge conflicts occur

The name gives it away: a merge conflict can occur when you integrate (or “merge”) changes from a different source into your current working branch. Keep in mind that integration is not limited to just merging branches. Conflicts can also happen during a rebase or an interactive rebase, when you’re cherry picking in Git (i.e. when you choose a commit from one branch and apply it to another), when you’re running git pull or even when reapplying a stash.

All of these actions perform some kind of integration, and that’s when merge conflicts can happen. Of course, this doesn’t mean that every one of those actions results in a merge conflict every time — thank goodness! But when exactly do conflicts occur?

Actually, Git’s merging capabilities are one of its greatest advantages: merging branches works flawlessly most of the time because Git is usually able to figure things out on its own and knows how to integrate changes.

But there are situations where contradictory changes are made — and that’s when technology simply cannot decide what’s right or wrong. These situations require a decision from a human being. For example, when the exact same line of code was changed in two commits, on two different branches, Git has no way of knowing which change you prefer. Another situation that is a bit less common: a file is modified in one branch and deleted in another one. Git will ask you what to do instead of just guessing what works best.

How to know when a merge conflict has occurred

So, how do you know a merge conflict has occurred? Don’t worry about that — Git will tell you and it will also make suggestions on how to resolve the problem. It will let you know immediately if a merge or rebase fails. For example, if you have committed changes that are in conflict with someone else’s changes, Git informs you about the problem in the terminal and tells you that the automatic merge failed:

$ git merge develop
CONFLICT (content): Merge conflict in index.html
Automatic merge failed; fix conflicts and then commit the result.

You can see that I ran into a conflict here and that Git tells me about the problem right away. Even if I had missed that message, I am reminded about the conflict the next time I type git status.

If you’re working with a Git desktop GUI like Tower, the app makes sure you don’t overlook any conflicts:

In any case: don’t worry about not noticing merge conflicts!

How to undo a merge conflict and start over

You can’t ignore a merge conflict — instead, you have to deal with it before you can continue your work. You basically have the following two options:

  • Resolve the conflict(s)
  • Abort or undo the action that caused the conflict(s)

Before we go into resolving conflicts, let’s briefly talk about how to undo and start over (it’s very reassuring to know this is possible). In many cases, this is as simple as using the --abort parameter, e.g. in commands like git merge --abort and git rebase --abort. This will undo the merge/rebase and bring back the state before the conflict occurred.

This also works when you’ve already started resolving the conflicted files and, even then, when you find yourself at a dead end, you can still undo the merge. This should give you the confidence that you really can’t mess up. You can always abort, return to a clean state, and start over.

What merge conflicts really look like in Git

Let’s see what a conflict really looks like under the hood. It’s time to demystify those little buggers and get to know them better. Once you understand a merge conflict, you can stop worrying.

As an example, let’s look at the contents of an index.html file that currently has a conflict:

[Screenshot: an open code editor with HTML markup for a navigation that contains an unordered list of links. Three lines of text have been injected by Git: the first says HEAD, the second is a line of equals signs, and the last says develop.]

Git is kind enough to mark the problematic areas in the file. They’re surrounded by <<<<<<< and >>>>>>>. The content after the first marker originates from our current working branch (HEAD). The line with seven equals signs (=======) separates the two conflicting changes. Finally, the changes from the other branch are displayed (develop in our example).
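If it helps to see those markers outside of a screenshot, here’s what they might look like in a tiny, made-up CSS file where both branches changed the same line:

.nav {
<<<<<<< HEAD
  gap: 1rem;
=======
  gap: 2rem;
>>>>>>> develop
}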

Your job is to clean up those lines and solve the conflict: in a text editor, in your preferred IDE, in a Git desktop GUI, or in a Diff & Merge Tool.

How to solve a conflict in Git

It doesn’t matter which tool or application you use to resolve a merge conflict — when you’re done, the file has to look exactly as you want it to look. If it’s just you, you can easily decide to get rid of a code change. But if the conflicting change comes from someone else, you might have to talk to them before you decide which code to keep. Maybe it’s yours, maybe it’s someone else’s, and maybe it’s a combination of those two.

The process of cleaning up the file and making sure it contains what you actually want doesn’t have to involve any magic. You can do this simply by opening your text editor or IDE and making your changes.

Sometimes, though, you’ll find that this is not the most efficient way. That’s when dedicated tools can save time and effort. For example, there are various Git desktop GUIs which can be helpful when you’re resolving merge conflicts.

Let’s take Tower as an example. It offers a dedicated “Conflict Wizard” that makes these otherwise abstract situations more visual. This helps to better understand where the changes are coming from, what type of modification occurred, and ultimately solve the situation:

Especially for more complicated conflicts, it can be great to have a dedicated Diff & Merge Tool at hand. It can help you understand diffs even better by offering advanced features like special formatting and different presentation modes (e.g. side-by-side, combined in a single column, etc.).

There are several Diff & Merge Tools on the market (here are some for Mac and for Windows). You can configure your tool of choice using the git config command. (Consult the tool’s documentation for detailed instructions.) In case of a conflict, you can invoke it by simply typing git mergetool. As an example, I’m using the Kaleidoscope app on my Mac:

After cleaning up the file — either manually in a text editor, in a Git desktop GUI, or with a Merge Tool — you can commit the file like any other change. By typing git add <filename>, you inform Git that the conflict has been resolved.

When all merge conflicts have been solved and added to the Staging Area, you simply create a regular commit. And this completes the conflict resolution.

Don’t panic!

As you can see, a merge conflict is nothing to worry about and certainly no reason to panic. Once you understand what actually happened to cause the conflict, you can decide to undo the changes or resolve the conflict. Remember that you can’t break things — even if you realize you made a mistake while resolving a conflict, you can still undo it: just roll back to the commit before the great catastrophe and start over again.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Advanced Git series:

  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts (You are here!)
  5. Part 5: Rebase vs. Merge (Coming soon!)
  6. Part 6: Interactive Rebase
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits


How They Fit Together: Transform, Translate, Rotate, Scale, and Offset

Firefox 72 was first out of the gate with “independent transforms.” That is, instead of having to combine transforms together, like:

.el {
  transform: rotate(10deg) scale(0.95) translate(10px, 10px);
}

…we can do:

.el {
  rotate: 10deg;
  scale: 0.95;
  translate: 10px 10px;
}

That’s extremely useful, as having to repeat the other transforms whenever you change a single one, lest they be removed, is tedious and prone to error.

But there is some nuance to know about here, and Dan Wilson digs in.

Little things to know:

  • Independent transforms happen first. So, if you also use a transform, that can override them if the same transform type is used.
  • They all share the same transform-origin.
  • The offset-* properties also effectively move/rotate elements. Those happen after independent transforms and before transform.



“All these things are quite easy to do, they just need somebody to sit down and just go through the website”

I saw a video posted on Twitter from Channel 5 News in the UK (I have no idea how credible they are; it’s an ocean away from me) with anchor Claudia Liza asking Glen Turner and Kristina Barrick questions about website accessibility.

Apparently, they often post videos with captions, but this particular video doesn’t (ironically). So, I’ve transcribed it here as I found them pretty well-spoken.

[Claudia Liza]: … you do have a visual impairment. How does that make it difficult for you to shop online?

[Glen Turner]: Well, I use various special features on my devices to shop online to make it easier. So, I enlarge the text, I’ll invert the colors to make the background dark so that I don’t have glare. I will zoom in on pictures, I will use speech to read things to me because it’s too difficult sometimes. But sometimes websites and apps aren’t designed in a way that is compatible with that. So sometimes the text will be poorly contrasted so you’ll have things like brown on black, or red on black, or yellow on white, something like that. Or the menu system won’t be very easy to navigate, or images won’t have descriptions for the visually impaired because images can have descriptions embedded that a speech reader will read back to them. So all these various factors make it difficult or impossible to shop on certain websites.

[Claudia Liza]: What do you need retailers to do? How do they need to change their technology on their websites and apps to make it easier?

[Glen Turner]: It’s quite easy to do a lot of these things, really. Check the colors on your website. Make sure you’ve got light against dark and there is a very clear distinctive contrast. Make sure there are descriptions for the visually impaired. Make sure there are captions on videos for the hearing impaired. Make sure your menus are easy to navigate and make it easy to get around. All these things are quite easy to do, they just need somebody to sit down and just go through the website and check that it’s all right and consult disabled people as well. Ideally, you’ve got disabled people in your organization you employ, but consult the wider disabled community as well. There is loads of us online there is loads of us spread all over the country. There is 14 million of us you can talk to, so come and talk to us and say, “You know, is our website accessible for you? What can we do to improve it?” Then act on it when we give you our advice.

[Claudia Liza]: It makes sense, doesn’t it, Glen? It sounds so simple. But Kristina, it is a bit tricky for retailers. Why is that? What do other people with disabilities tell you?

[Kristina Barrick]: So, we hear about content on websites being confusing in the way it’s written. There’s lots of information online about how to make an accessible website. There’s a global minimum legal standard called WCAG and there’s lots of resources online. Scope has their own, which has loads of information on how to make your website accessible.

[Kristina Barrick]: I think the problem really is generally lack of awareness. It doesn’t get spoken about a lot. I think that disabled consumers – there’s not a lot of places to complain. Sometimes they’ll go on a website and there isn’t even a way to contact that business to tell them that their website isn’t accessible. So what Scope is trying to do is raise the voices of disabled people. We have crowdsourced a lot of people’s feedback on where they experience inaccessible websites. We’re raising that profile and trying to get businesses to change.

[Claudia Liza]: So is it legal when retails aren’t making their websites accessible?

[Kristina Barrick]: Yeah, so, under the Equality Act 2010, it’s not legal to create an inaccessible website, but what we’ve found is that government isn’t generally enforcing that as a law.

[Claudia Liza]: Glenn, do you feel confident that one day you’ll be able to buy whatever you want online?

[Glen Turner]: I would certainly like to think that would be the case. As I say, you raise enough awareness and get the message out there and alert business to the fact that there is a huge consumer market among the disabled community, and we’ve got a 274 billion pound expenditure a year that we can give to them. Then if they are aware of that, then yeah, hopefully they will open their doors to us and let us spend our money with them.


Preloading Pages Just Before They are Needed

The typical journey for a person browsing a website: view a page, click a link, browser loads new page. That’s assuming no funny business like a Single Page App, which still follows that journey, but the browser doesn’t load a new page — the client fakes it for the sake of a snappier transition.

What if you could load that new page before the person clicks the link so that, when they do, the loading of that next page is much faster? There are two notable projects that try to help with that:

  • quicklink: detects visible links, waits for the browser to be idle, and, if it isn’t on a slow connection, prefetches those links.
  • instant.page: if you hover over a link for 65ms, it preloads that link. The new Version 2 allows you to configure the time delay, or whether to wait for a click or press before preloading.

Combine those things with technological improvements like paint holding, and building a SPA architecture just for the speed benefits may become unnecessary (though it may still be desirable for other reasons, like code-splitting, putting the onus of routing onto front-end developers, etc.).
