Chapter 3: The Website

Previously in web history…

Berners-Lee, motivated by his own curiosity, creates the World Wide Web at CERN. He releases its technologies to the public domain, which enables the development of several new browsers for every operating system. Mosaic proves to be the most popular, and its introduction of color images directly inline with content fundamentally changes the way people think about the web.

The very first website was about the web. That kind of thing is not all that unusual. The first email sent to another person was about email. As technology has progressed, we may have lost a bit of theatrics. The first telegraph message, for instance, read “WHAT HATH GOD WROUGHT.” However, in most cases, telecommunication firsts follow this meta template.

Anyway, the first website was instructive for a reason. If you were a brand new web user, it is the first thing you would see. If that page didn’t manage to convince you the web was worth sinking a bit of time into, then that was the end of the story. You’d go and check out Gopher instead. So, as a starting point for new web users, the first website was critical.

The URL was info.cern.ch. Its existence on the CERN server should come as no surprise. The first website was created by the web’s inventor, Tim Berners-Lee, while he was still working there.

It was a simple page. A list of headers and links — to download web browser code, find out more info about the web, and get all of the technical details — was divided only by short descriptions of each section. One link brought you to a list of websites. Berners-Lee collected a list of links that were sent to him, or plucked them from mailing lists whenever he found them. Every time he found a link he added it to the CERN website, loosely organized by category. It was a short list. In July of 1993, there were still only about 130 websites in the world.

(A few years back, some enterprising folks took it upon themselves to re-create the first website at CERN. So you can go and browse it now, just as it was then.)

As far as websites go, it was nothing spectacular. The language was plain enough, though a bit technical. The instructions were clear, as long as you had some background in programming or computers. The web was difficult to explain to people who had never seen it. The primary goal of the website was to prompt a bit of exploration from those who visited it. By that measure, it was successful.

But Berners-Lee never meant for the CERN website to be the most important page on the web. It was just there to serve as an example for others to recreate in their own image.

Tim Berners-Lee also created the first browser. It gave users the ability to both read — and crucially to publish — websites. In his conception, each consumer of the web would have their own personal homepage. The homepage could be anything. For most people, he thought, it would likely be a private place to store personal bookmarks or jot down notes. Others might choose to publish their site for the public, using it as an opportunity to introduce themselves, or explore some passion (similar to what services like Geocities would offer later). Berners-Lee imagined that when you opened your browser, any browser, your own homepage would be the first thing that you saw.

By the time other browsers hit the market, the publishing capabilities faded away. People were left to simply surf, and not to author, the web. For the earliest of web users, the CERN website remained a popular destination. With usage still growing, it was the best place to find a concise list of websites. But if the web was going to succeed — truly succeed — it was going to have to be more than links. The web was going to need to find its utility.

Fortunately Berners-Lee had created the URL. Anyone could create a website. Heck, he’d even post a link to it.


“Louise saw the web as a godsend,” Berners-Lee wrote in his personal retelling of the web’s history. The Louise in question is Louise Addis, librarian at SLAC for over 40 years before she retired in the mid-90s. Along with Paul Kunz, Tony Johnson, and several others, she helped create the first web server in the United States and one of the most influential websites of the early web. She would later put it a bit differently. “The Web was a revolution!” That may be true, but it wouldn’t have been a revolution if not for what she helped create.

As we found in the first chapter, Berners-Lee’s curiosity led him on a path to set information free. Louise Addis was also curious. Her curiosity led her to try to connect people to that information. She studied International Relations at Stanford University only to bounce around at a few jobs and land herself back at her alma mater working for a secret research lab known simply as Project M in 1960. Though she had no experience in the field, she worked there as a librarian, eventually moving up to head librarian. After a couple of years, the lab would go public and become formally known as the Stanford Linear Accelerator Center, or SLAC.

SLAC’s primary mission was to advance the research of American scientists in the wake of World War II. It houses a two-mile long linear accelerator, the longest in the world. SLAC recruits scientists across a broad set of fields, but its primary focus is particle physics. It has produced a number of Nobel prizes and has shared groundbreaking new discoveries across the world.

Research is at the center of the work done at SLAC. While she was there, Addis was relentless in her quest to connect her peers with research. When she learned that there wasn’t a good system for keeping track of the multitude of authors attributed to particle physics papers (some had over 1,000 authors on a single paper), she picked up a bit of programming with no formal training. “If I needed to know something, I asked someone to show me how to do a particular task. Then I went back to the Library and tried it on my own.”

A couple of years after she discovered the web, Addis would start the first unofficial tech support group for web newcomers known as the WWW Wizards. The Wizards worked — mostly in their spare time — to help new web users come online. They were a profoundly important resource for the early web. Addis continually made it her mission to help people find the information they needed.

She used her ad-hoc programming experience in the late 1960s to create the SPIRES-HEP database, a digital library with hundreds of thousands of bibliographic records for particle physics papers. It is still in use today, though its newest iteration is called INSPIRE-HEP. The SPIRES-HEP database was a foundational resource. If you were a particle physics researcher anywhere in the world, you would be accessing it frequently. It ran on an IBM mainframe that looked like this:

An IBM mainframe console from the 70's

The mainframe used a very specific programming language also developed by IBM, which has since gone into disuse. Locked inside was a very well organized bibliography of research papers. Accessing it was another thing entirely. There were a few ways to do that.

The first required a bit of programming knowledge. If you were savvy enough, you could log directly into the SPIRES-HEP database remotely and, using the database-specific SPIRES query language, pull the records you needed directly from the mainframe. This was the quickest option, but required the most technical know-how and a healthy dose of tenacity. Let’s consider this method the high bar.

The middle bar was an interface built by SLAC researcher Paul Kunz that let you email the server to pull out the records you needed. You still needed to know the SPIRES query language, but it solved the remote access part of the equation.

The low bar was to email or message a librarian at SLAC so they could pull the record for you and send it back. The easiest bar to clear, this was the method that most people used. Which meant that the most widely accessed particle physics database in the world was beset by a bottleneck of librarians at SLAC who needed to ferry bibliographic records back and forth from researchers.

The SPIRES-HEP database was invaluable, but widespread access remained its largest obstacle.


For a second time in the web’s history, the NeXT computer played an important role in its fate. For a computer that was short-lived, and largely unheard of, it is a key piece of the web’s history.

Like Tim Berners-Lee, SLAC physicist Paul Kunz, creator of the SPIRES-HEP instant messaging and email service, used a NeXT computer. On one of Kunz’s visits to CERN, Berners-Lee invited him into his office. The only reason Kunz agreed to go was to see how somebody else was using a NeXT computer. While he was there, Berners-Lee showed Kunz the web. And then Kunz went back to SLAC and showed the web to Addis.

Kunz and Addis were both enthusiastic purveyors of research at SLAC. They each played their part in advancing information discovery. When Kunz told Addis about the web, they both had the same idea about what to do with it. SLAC was going to need a website. Kunz built a web server at Stanford — the first in the United States. Addis, meanwhile, wrangled a few colleagues to help her build the SLAC website. The site launched on December 12, 1991, a year after Berners-Lee first published his own website at CERN.

Most of the programmers and researchers that began tinkering on the web in the early days were drawn by a nerdy fascination. They liked to play around with browsers, mess around with some code. The website was, in some cases, the mere after-effect of a technological experiment. That wasn’t the case for Addis. The draw of the web wasn’t its technology. It was what it enabled her to do.

The SLAC website started out with two links. The first one let you search through a list of phone numbers at SLAC. That link wasn’t all that interesting. (But it was a nice nod to the web’s origin. The most practical early use of the web was as an Internet-enabled phonebook at CERN.) The second link was far more interesting. It was labeled “HEP.” Clicking on it brought you to a simple page with a single text field. Type a query into that field, hit Enter, and you got live results of records directly from the SPIRES-HEP database. And that was the SLAC website. Its primary purpose was to act as an interface in front of the SPIRES-HEP database and pull down queried results.

When Berners-Lee demoed the SLAC website a couple of months later at a conference, it was met with wild applause, practically a standing ovation.

The importance was obviously not lost on that audience. No longer would researchers be forced to wrestle with complicated programming languages, or emails to SLAC librarians. The SLAC website took the low bar of access for the SPIRES-HEP database and dropped it all the way to the floor. It made searching the database easy (and within a couple of years, it would even add links to downloadable PDFs).

The SLAC website, nothing more than a searchable bibliography, was the beginning of something on the web. Physicists began using it, and it rebounded from one research lab to the next. The web’s first micro-explosion happened the day Berners-Lee demoed the site. It began reverberating around the physics community, and then outside of it.

SLAC was the website that showed what the web could do. GNN was going to be the first that made the web look good doing it.


Global Network Navigator was going to be exciting. A bold experiment on and with the web. The web was a wall of research notes and scientific diagrams; plain black text on stark white backgrounds as far as the eye could see. GNN would change that. It would be fun. Lively. Interactive.

That was the pitch made to designer Jennifer Robbins by O’Reilly co-founder Dale Dougherty in 1993. Robbins’ mind immediately jumped to the possibilities of this incredible, new, digital medium.

She met with another O’Reilly employee, Rob Raisch. A couple of years after that pitch, Raisch would propose one of the first examples of a stylesheet. At the time, he was just the person at the company who happened to know the most about the web, which had only recently cracked a hundred total sites. When Robbins walked into his office, the first thing he said to her was: “You know, you probably can’t do what you want.” He had a point. The language of the web was limiting. But the GNN team was going to find a way around that.

GNN was the brainchild of Dale Dougherty. By the early 90s, Dougherty had become a minor celebrity for experiments just like this one. From the early days of O’Reilly media, the book publisher he co-founded, he was always cooking up some project or another.

Wherever technology is going, Dougherty has a knack for being there first. At one conference early on in O’Reilly’s history, he sold self-printed copies of a Unix manual for $5 apiece just before Unix exploded on the scene. After spending decades in book publishing, he’s recently turned his attention to the maker culture. He has been called a godfather of the Maker movement.

That was no less true for the web. He became one of the web’s earliest adopters and its most prolific early champion. He brought together Tim Berners-Lee and the developers of NCSA Mosaic, including Marc Andreessen, for the first time in a meeting in Cambridge. That meeting would eventually lead to the creation of the W3C. He’d be responsible for early experiments with web advertising, basically on the first day advertising was allowed. He would later coin the term Web 2.0, in the wake of transformation after the dot-com boom. Dougherty loved the web.

But staring at the web for the first time in the early 90s, he didn’t exactly know what to do with it. His first thought was to put a book on the web. After all, O’Reilly had a gigantic back catalog, and the web was mostly text. But Dougherty knew that the web’s greatest asset was the hyperlink. He needed a book that could act as a springboard to bring people to different parts of the web. He found it in the newly-published bestseller by author Ed Krol, The Whole Internet User’s Guide and Catalog. The book was a guided tour through the technologies of the Internet. It had a paragraph on the web. Not exactly a lot, but enough for Dougherty to make the connection.

Dougherty had recruited Pei-Yuan Wei, creator of the popular ViolaWWW browser, to make an earlier version of an interactive Internet guide. But for GNN, he pulled together a production team — led by managing editor Gina Blaber — of writers, designers, programmers, and sales staff. They launched GNN, the web’s first true commercial website, in early 1993.

GNN was created before any other commercial websites, before blogs and online magazines. Digital publishing was something new altogether. As a result, GNN didn’t quite know what it wanted to be. It operated somewhere between a portal and a magazine. Navigating the site was an exercise in tumbling down one rabbit hole after another.

In one section, the site included the Whole Internet Catalog repurposed and ported to the web. Contained within were pages upon pages of best-of lists; collections of popular websites sorted into categories like finance, literature and cooking.

Another section, labeled GNN Magazine, jumped to a different group of sortable webpages known as metacenters. These were, in the website’s own description, “special-interest magazines that gather together the best Internet resources on topics such as travel, music, education, and computers. Each metacenter contains articles, columns, reference guides, and discussion groups.” Though conceptually similar to modern day media portals, the nickname “metacenter” never truly caught on. The site’s content and design were produced and maintained by the GNN staff. Not to be outdone by their print predecessors, GNN magazine contained interviews, features, biographies, and explainers. One hyperlink after another.

Over time, GNN would expand to affiliated publications. When the Mosaic team got too busy working on the web’s most popular browser, they handed off their browser homepage to the GNN team. The page was called What’s New, and it featured the most interesting links around the web for the day. The GNN team seized the opportunity to expand their platform even further.

Explaining what GNN was to someone who had never heard of the web, let alone a website, was an onerous task. Blaber explained GNN as giving “users a way to navigate through the information highway by providing insightful editorial content, easy point-and-click commands, and direct electronic links to information resources.” That’s a meaningful description of the site. It was a way into the web, one that wasn’t as fractured or unorganized as jumping in blind. It was also, however, the kind of thing you needed to see to understand.

And it was something to see. Years before stylesheets and armed with nothing but a handful of HTML tags, the GNN team set about creating the most ambitious project with the web medium yet. Browsers had only just begun allowing inline graphics, and GNN took full advantage. The homepage in particular featured big colorful graphics, including the hot air balloon that would endure for years as the GNN logo. They laid out their pages meticulously — most pages had a unique design. They used images as headers to break up the page. Most pages featured large graphics, and colored text and backgrounds. Wherever the envelope was, they’d push it a little further.

The result: a brand new kind of interactive experience. The web was a sea of plain websites with no design mostly coming from research institutions and colleges. Before Mosaic, bold graphics and colors weren’t even possible. And even after Mosaic’s release, the web was mostly filled with dense websites of scrolling text with nothing more than scientific diagrams to break it up, or sparse websites with a link, an email and a phone number. Most sites had nothing in the way of hierarchy or interactivity. Content was difficult to follow unless it was exactly what you were looking for. There was a ton of information on the web, but no one had thought to organize it to any meaningful degree. Imagine seeing all of that, day after day, and then one day you click a link and come to this:

Screenshot of what GNN looked like when it launched in 1993, with its famous hot air balloon logo

It looks dated now, but a splash page with bold colors and big graphics, organized into sections and layered with interesting content… that was something to see.

The GNN team was creating the rules of web design, a field that had yet to be invented. In the first few years of the web, there were some experiments. The Vatican had scanned a number of materials from its archives and put them on a website. The Exploratorium took that one step further, creating the first online museum, with downloadable sounds and pictures. But they were still very much constrained by the simplicity of the web experience. Click this link, download this file, and that was it. GNN began to take things further. Dale Dougherty recalls that their goal was to “shift from the Internet as command line retrieval to the internet as this more digital interface… like a book.” A perfectly reasonable goal for a book publisher but a tall order for the web.

To accomplish their goal, GNN’s staff used the rules of graphic design as a roadmap (as philosopher Marshall McLuhan once said, “the content of any medium is always another medium”). But the team was also writing a brand new rulebook, on the fly, as they went. There were open questions about how to handle web graphics, new patterns for designing user interfaces, and best practices for writing HTML. Once the team closed one loop, they moved on to the next one. It was as if they were writing the manual for flying a rocketship — while strapped to the wings and hurtling towards space.

As browsers got better, GNN evolved to take advantage of the latest design possibilities. They began to use image maps to make more complex navigation. They added font tags and frames. GNN was also the first site on the web with a sponsored link, and even that was careful and considered. Before the popup would plague our browsing experience, GNN created simple, unobtrusive, informational adverts inserted in between their other listings.

GNN provided a template for the commercial web. As soon as they launched, dozens of copycats quickly followed. Many adopted a similar style and tone. Within a few years, web portals and online magazines would become so common they were considered trite and uninteresting. But very few sites that followed it had the lasting impact GNN did on a new generation of digital designers.


Ranjit Bhatnagar has an offbeat sort of humor. He’s a philosopher and a musician. He’s smart. He’s a fan of the weird and the banal. He’s anti-consumerist, or at the very least, opposed to consumerist culture. I won’t go as far as to say he’s pedantic, but he certainly revels in the most minute of details. He enjoys lively debates and engaged discourse. He’s fascinated by dreams, and once had a dream where he was flying through the air with his mother taking in the sights.

I’ve never met Bhatnagar. I know all of this because I read it on his website. Anyone can. And his website started with lunch.

Bhatnagar’s website was called Ranjit’s HTTP Playground. Playground describes it rather well; hyperlinks are scattered across the homepage like so many children’s toys. One link takes you to a half-finished web experiment. Another takes you to a list of his favorite bookmarks arranged by category. Yet another might contain a rant about the web, or a long-winded tribute to Kinder eggs. If you’re in the mood for a debate you can post your own thoughts to a page devoted to the single question: Are nuts wood? There’s still no consensus on that one.

Browsing Ranjit’s HTTP Playground is like peeling back the layers of Bhatnagar’s brain. He added new entries to his site pretty regularly, never more than a sentence or two, arranged in a series of dated bullet points. Pages were laid out on garish backgrounds, scalding bright green on jet black, or surrounded by a dizzying dance of animated GIFs. Each page was littered with links to more pages, seemingly at random. Every time you think you’ve reached the end of a thread, there’s another link to click. And every once in a while, you’ll find yourself back on the homepage wondering how you got there and how much time had passed in the meantime. This was the magic of the early web.

Bhatnagar first published his website in late 1993, just a few months after the GNN website went up. The very first thing Bhatnagar posted to his website was what he ordered for lunch every day. It was arranged in reverse chronological order, his most recent lunch order right at the top.

SLAC captured the utility of the web. GNN realized its popular appeal. Bhatnagar, and others like him, made the web personal.

Claudio Pinhanez began adding daily entries to the MIT Media Lab website in 1994. He posted movie and book reviews, personal musings, and shared his favorite links. He followed the same format as Bhatnagar’s Lunch Server. Entries were arranged on the page in reverse chronological order. Each entry was short and to the point — no longer than a sentence or two. This movie was good. This meal was bad. Isn’t it interesting that… and so on.

In early 1995, Carolyn Burke began posting daily entries to her website in one of the earliest examples of an online diary. Each one was a small slice from her life. The posts were longer than the short bursts of Pinhanez and Bhatnagar. Burke took her time with narrative anecdotes and meandering asides. She was loquacious and insightful. Her writing was conversational, and she promised readers that she would be honest. “I notice now that I have held back in being frank. My academic analysis skills come out, and I write with them things that I’ve known for a long time,” she wrote in an entry from the first few months, “But this is therapy for me… honesty and freedom therapy. Wow, that’s a loaded word. freedom.”

Perhaps no site was more honest, or more free, as Burke put it, than Links from the Underground. Its creator, Swarthmore undergraduate Justin Hall, had turned inviting others into his life into an art form. What began as a simple link dump quickly transformed into a network of short stories and poems, diary entries, and personal details from his own life. The layout of the site matched that of Bhatnagar, scattered and unorganized. But his tone was closer to Burke’s, long and deeply, deeply personal. Just about every day, Hall would post to his website. It was his daily inner monologue made public.

Sometimes, he would cross a line. If you were a friend of Justin’s, he might share a secret that you told him in confidence, or disparage you on a fully public post. But he also shared the most intimate details from his own life, from dorm room drama to his greatest fears and inadequacies. He told stories from his troubled past, and publicly tried to come to terms with an alcoholic father. His good humor was often tinged with tragedy. He was clearly working through something emotional and personally profound, and he was using the web to do it out in the open.

But for Hall, this was all in the service of something far greater than himself. Describing the web to newcomers in a documentary about his experience on the web, Hall’s primary message was about its ability to create — not to tear down — connections.

What’s so great about the web is I was able to go out there and talk about what I care about, what I feel strongly about and people responded to it. Because every high school’s got a poet, whether it’s a rich high school or a poor high school, you know, they got somebody that’s in to writing, that’s in to getting people to tell their stories. You give them access to this technology and all of a sudden they’re telling stories to people in Israel, to people in Japan, to people in their own town that they never would have been able to talk to. And that’s, you know, that’s a revolution.

There’s that word again. Revolution. Though coming at the web from very different places, Addis and Hall agreed on at least one thing. I would venture to guess that they agreed on a whole lot more.

Justin Hall became a presence on the web not soon forgotten by those that came across him. He’s had two documentaries made about him (one of which he made himself). He’s appeared on talk shows. He’s toured the country. He’s had very public mental breakdowns. But he believed deeply that the web meant nothing at all unless it was a place for people to share their own stories.

When Tim Berners-Lee first imagined the web, he believed that everybody would have their own homepage. He designed his first browser with authoring capabilities for just that reason. That dream never came true. But Hall and Burke and Bhatnagar channeled a similar idea when they decided to make the web personal. They created their own homepages, even if it meant having to spend a few hours, or a few weeks, learning HTML.

Within a couple of years, the web filled up with these homepages. There were some notable breakthrough websites, like when David Farley began posting daily webcomics to Doctor Fun or VJ Adam Curry co-opted the MTV website to post his own personal brand of music entertainment. There were extreme examples. In 1996, Jennifer Ringley stuck a webcam in her room and beamed images every few seconds, so anyone could watch her entire life in real time. She called it Jennicam, a name that would ultimately lead to the moniker cam girl. Ringley appeared on talk shows and became an overnight sensation for her strange website that let others peer directly into her world.

But mostly, homepages acted as a creative outlet — short biographies, photo albums of families and pets, short stories, status updates. There were a lot of diaries. People posted their art, their “hot takes” and their deepest secrets and greatest passions. There were fan pages dedicated to discontinued television shows and boy bands. A dizzying array of style and personality with no purpose other than to simply exist.

Then came the links. At the bottom of a homepage: a list of links to other homepages. Scattered in diary posts, links to other websites. In one entry, Hall might post a link to Bhatnagar’s site, musing about the influence it had on his own website. Bhatnagar’s site had its own chaotic list of favorites. Eventually, so did Burke’s. Half the fun of a homepage was obsessing over which others to share.

As the web turned on a moment of connection, the process of discovery became its greatest asset. The fantastic intrigue of clicking on a link and being transported into the world and mind of another person was — in the end — the defining feature of the web. There would be plenty of opportunities to use the web to find something you want or need. The lesson of the homepage is that what people really wanted to find was each other. The web does that better than any technology that has come before it.


At the end of 1993, there were just over 600 websites. One year later, at the end of 1994, there were over 10,000. They no longer fit on a single page on the CERN website maintained by the web’s creator.

The personal website would become the cornerstone of the web. The web would be filled with more applications, like SLAC. And more businesses, like GNN. But it would mostly be filled with people. When the web’s next wave came crashing down, it would become truly social.



Every Website is an Essay

Every website that’s made me oooo and aaahhh lately has been of a special kind; they’re written and designed like essays. There’s an argument, a playfulness in the way that they’re not so much selling me something as they are trying to convince me of the thing. They use words and type and color in a way that makes me sit up and listen.

And I think that framing our work in this way lets us web designers explore exciting new possibilities. Instead of throwing a big carousel on the page and being done with it, thinking about making a website like an essay encourages us to focus on the tough questions. We need an introduction, we need to provide evidence for our statements, we need a conclusion, etc. This way we don’t have to get so caught up in the same old patterns that we’ve tried again and again in our work.

And by treating web design like an essay, we can be weird with the design. We can establish a distinct voice and make it sound like an honest-to-goodness human being wrote it, too.

One example of a website-as-an-essay is the Analogue Pocket site which uses real paragraphs to market their fancy new device.

Another example is the new email app Hey, in which the website is nothing but paragraphs — no screenshots, no fancy product information. It almost feels like a political manifesto hammered onto a giant wooden door.

Apple’s marketing sites are little essays, too. Take this one section from the iPad Pro all about the LiDAR Scanner. It’s not so much trying to sell you an iPad at this point so much as it is trying to argue the case for LiDAR. And as with all good essays it answers the who, what, why, when, and how.

Another example is Stripe’s recent beautiful redesign. What I love more than the outrageously gorgeous animated gradients is the argument that the website is making. What is Stripe? How can I trust them? How easy is it to get set up? Who, what, why, when, how.

To be my own devil’s advocate for a bit though, we’re all familiar with this line of reasoning: Why care about the writing so much when people don’t read? Folks skim through a website. They don’t persevere with the text, they don’t engage with the writing, and you only have half a millisecond to hit them with something flashy before they leave. They can’t handle complex words or sentences. They can’t grasp complex ideas. So keep those paragraphs short! Remove all text from the page!

The implication here is that users are dumb. They can’t focus and they don’t care. You have to shout at them. And I kinda sorta hate that.

Instead, I think the opposite is true. They’ve seen the same boring websites for years. Everyone is tired of lifeless, humorless copywriting. They’ve seen all the animations, witnessed all the cool fonts, and in the face of all that stuff, they yawn. They yawn because it supports a bad argument, or more precisely, a bad essay; one that doesn’t charm the reader, or give them a reason to care.

So what if we made our websites more like essays and less like billboards that dot the freeways? What would that look like?



In Defense of a Fussy Website

The other day I was doom-scrolling twitter, and I saw a delightful article titled “The Case for Fussy Breakfasts.” I love food and especially breakfast, and since the pandemic hit I’ve been using my breaks in between meetings (or sometimes on meetings, shh) to make a full bacon, poached egg, vegetable plate, so I really got into the article. This small joy of creating a bit of space for myself for the most important meal of the day has been meaningful to me — while everything else feels out of control, indulging in some ceremony has done a tiny part to offset the intensity of our collective situation.

It caused me to think of this “fussiness” as applied to other inconsequential joys. A walk. A bath. What about programming?

While we’re all laser-focused on shipping the newest feature with the hottest software and the best Lighthouse scores, I’ve been missing a bit of the joy on the web. Apps are currently conveying little care for UX, guidance, richness, and — well, for humans trying to communicate through a computer, we’re certainly bending a lot to… the computer.

I’m getting a little tired of the web being seen as a mere document reader, and though I do love me a healthy lighthouse score, some of these point matrixes seem to live and die more by our developer ego in this gamification than actually considering what we can do without incurring much weight. SVGs can be very small while still being impactful. Some effects are tiny bits of CSS. JS animations can be lazy-loaded. You can even dazzle with words, color, and layout if you’re willing to be a bit adventurous, no weight at all!

A few of my favorite developer sites lately have been Josh Comeau, Johnson Ogwuru and Cassie Evans. The small delights and touches, the little a-ha moments, make me STAY. I wander around the site, exploring, learning, feeling actually more connected to each of these humans rather than as if I’m glancing at a PDF of their resume. They flex their muscles, show me the pride they have in building things, and it intrigues me! These small bits are more than the fluff that many portray any “excess” as: they do the job that the web is intending. We are communicating using this tool, the computer, as an extension of ourselves.

Nuance can be challenging. It’s easy as programmers to get stuck in absolutes, and one of these of late has been that if you’re having any bit of fun, any bit of style, that must mean it’s “not useful.” Honestly, I’d make the case that the opposite is true. Emotions attach to the limbic system, making memories easier to recall. If your site is a flat bit of text, how will anyone remember it?

Don’t you want to build the site that teams in companies the world over remember and cite as an inspiration? I’ve been at four different companies where people have mentioned Stripe as a site they would aspire to be like. Stripe took chances. Stripe told stories. Stripe engaged the imagination of developers, spoke directly to us.

I’m sad acknowledging the irony that after thinking about how spot on Stripe was, most of those companies ignored much of what they learned while exploring it. Any creativity, risk, and intention was slowly, piece by piece, chipped away by the drumbeat of “usefulness,” missing the forest for the trees.

When a site is done with care and excitement you can tell. You feel it as you visit, the hum of intention. The craft, the cohesiveness, the attention to detail is obvious. And in turn, you meet them halfway. These are the sites with the low bounce rates, the best engagement metrics, the ones where they get questions like “can I contribute?” No gimmicks needed.

What if you don’t have the time? Of course, we all have to get things over the line. Perhaps a challenge: what small thing can you incorporate that someone might notice? Can you start with a single detail? I didn’t start with a poached egg in my breakfast, one day I made a goofy scrambled one. It went on from there. Can you challenge yourself to learn one small new technique? Can you outsource one graphic? Can you introduce a tiny easter egg? Say something just a little differently from the typical corporate lingo?

If something is meaningful to you, the audience you’ll gather will likely be the folks that find it meaningful, too.


How to Add Lunr Search to your Gatsby Website

The Jamstack way of thinking and building websites is becoming more and more popular.

Have you already tried Gatsby, Nuxt, or Gridsome (to cite only a few)? Chances are that your first contact was a “Wow!” moment — so many things are automatically set up and ready to use. 

There are some challenges, though, one of which is search functionality. If you’re working on any sort of content-driven site, you’ll likely run into search and how to handle it. Can it be done without any external server-side technology? 

Search is not one of those things that come out of the box with Jamstack. Some extra decisions and implementation are required.

Fortunately, we have a bunch of options that might be more or less adapted to a project. We could use Algolia’s powerful search-as-service API. It comes with a free plan that is restricted to non-commercial projects with  a limited capacity. If we were to use WordPress with WPGraphQL as a data source, we could take advantage of WordPress native search functionality and Apollo Client. Raymond Camden recently explored a few Jamstack search options, including pointing a search form directly at Google.

In this article, we will build a search index and add search functionality to a Gatsby website with Lunr, a lightweight JavaScript library providing an extensible and customizable search without the need for external, server-side services. We used it recently to add “Search by Tartan Name” to our Gatsby project tartanify.com. We absolutely wanted persistent search as-you-type functionality, which brought some extra challenges. But that’s what makes it interesting, right? I’ll discuss some of the difficulties we faced and how we dealt with them in the second half of this article.

Getting started

For the sake of simplicity, let’s use the official Gatsby blog starter. Using a generic starter lets us abstract many aspects of building a static website. If you’re following along, make sure to install and run it:

gatsby new gatsby-starter-blog https://github.com/gatsbyjs/gatsby-starter-blog
cd gatsby-starter-blog
gatsby develop

It’s a tiny blog with three posts. We can explore its data layer by opening up http://localhost:8000/___graphql in the browser.

Showing the GraphQL page on the localhost installation in the browser.

Inverting index with Lunr.js 🙃

Lunr uses a record-level inverted index as its data structure. The inverted index stores the mapping for each word found within a website to its location (basically a set of page paths). It’s on us to decide which fields (e.g. title, content, description, etc.) provide the keys (words) for the index.
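To make that more concrete, here is a rough, hypothetical sketch of the idea (not Lunr’s actual serialized format, which also stores field information and other metadata; the page paths are made up):

// A simplified, hypothetical inverted index: each term maps to the
// set of page paths (our document keys) where that term appears.
const invertedIndex = {
  gatsby: ["/adding-search/"],
  lunr: ["/adding-search/", "/what-is-jamstack/"],
  search: ["/adding-search/", "/what-is-jamstack/", "/site-news/"],
}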

For our blog example, I decided to include all titles and the content of each article. Dealing with titles is straightforward since they are composed uniquely of words. Indexing content is a little more complex. My first try was to use the rawMarkdownBody field. Unfortunately, rawMarkdownBody introduces some unwanted keys resulting from the markdown syntax.

Showing an attempt at using markdown syntax for links.

I obtained a “clean” index using the html field in conjunction with the striptags package (which, as the name suggests, strips out the HTML tags). Before we get into the details, let’s look into the Lunr documentation.
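As a quick aside before we do, here is roughly what striptags gives us (a minimal, hypothetical snippet, not taken from the project code):

const striptags = require("striptags")

// Hypothetical HTML, as produced from a Markdown post
const html = '<p>Post content with <a href="/some-page/">a link</a> and <em>emphasis</em>.</p>'

console.log(striptags(html))
// "Post content with a link and emphasis."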

Here’s how we create and populate the Lunr index. We will use this snippet in a moment, specifically in our gatsby-node.js file.

const index = lunr(function () {
  this.ref('slug')
  this.field('title')
  this.field('content')
  for (const doc of documents) {
    this.add(doc)
  }
})

 documents is an array of objects, each with a slug, title and content property:

{
  slug: '/post-slug/',
  title: 'Post Title',
  content: 'Post content with all HTML tags stripped out.'
}

We will define a unique document key (the slug) and two fields (the title and content, or the key providers). Finally, we will add all of the documents, one by one.

Let’s get started.

Creating an index in gatsby-node.js 

Let’s start by installing the libraries that we are going to use.

yarn add lunr graphql-type-json striptags

Next, we need to edit the gatsby-node.js file. The code from this file runs once in the process of building a site, and our aim is to add index creation to the tasks that Gatsby executes on build. 

createResolvers is one of the Gatsby APIs controlling the GraphQL data layer. In this particular case, we will use it to create a new root field; let’s call it LunrIndex.

Gatsby’s internal data store and query capabilities are exposed to GraphQL field resolvers on context.nodeModel. With getAllNodes, we can get all nodes of a specified type:

/* gatsby-node.js */
const { GraphQLJSONObject } = require(`graphql-type-json`)
const striptags = require(`striptags`)
const lunr = require(`lunr`)

exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve: (source, args, context, info) => {
          const blogNodes = context.nodeModel.getAllNodes({
            type: `MarkdownRemark`,
          })
          const type = info.schema.getType(`MarkdownRemark`)
          return createIndex(blogNodes, type, cache)
        },
      },
    },
  })
}

Now let’s focus on the createIndex function. That’s where we will use the Lunr snippet we mentioned in the last section. 

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  // Iterate over all posts
  for (const node of blogNodes) {
    const html = await type.getFields().html.resolve(node)
    // Once html is resolved, add a slug-title-content object to the documents array
    documents.push({
      slug: node.fields.slug,
      title: node.frontmatter.title,
      content: striptags(html),
    })
  }
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    this.field(`content`)
    for (const doc of documents) {
      this.add(doc)
    }
  })
  return index.toJSON()
}

Have you noticed that instead of accessing the HTML element directly with  const html = node.html, we’re using an  await expression? That’s because node.html isn’t available yet. The gatsby-transformer-remark plugin (used by our starter to parse Markdown files) does not generate HTML from markdown immediately when creating the MarkdownRemark nodes. Instead,  html is generated lazily when the html field resolver is called in a query. The same actually applies to the excerpt that we will need in just a bit.

Let’s look ahead and think about how we are going to display search results. Users expect to obtain a link to the matching post, with its title as the anchor text. Very likely, they wouldn’t mind a short excerpt as well.
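For reference, a Lunr result set looks roughly like this (a sketch with hypothetical refs and scores):

// Roughly what index.search("gatsby") returns:
// [
//   { ref: "/adding-search/", score: 0.81, matchData: { ... } },
//   { ref: "/what-is-jamstack/", score: 0.42, matchData: { ... } }
// ]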

Lunr’s search returns an array of objects representing matching documents by the ref property (which is the unique document key slug in our example). This array does not contain the document title nor the content. Therefore, we need to store somewhere the post title and excerpt corresponding to each slug. We can do that within our LunrIndex as below:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const documents = []
  const store = {}
  for (const node of blogNodes) {
    const { slug } = node.fields
    const title = node.frontmatter.title
    const [html, excerpt] = await Promise.all([
      type.getFields().html.resolve(node),
      type.getFields().excerpt.resolve(node, { pruneLength: 40 }),
    ])
    documents.push({
      // unchanged
    })
    store[slug] = {
      title,
      excerpt,
    }
  }
  const index = lunr(function() {
    // unchanged
  })
  return { index: index.toJSON(), store }
}

Our search index changes only if one of the posts is modified or a new post is added. We don’t need to rebuild the index each time we run gatsby develop. To avoid unnecessary builds, let’s take advantage of the cache API:

/* gatsby-node.js */
const createIndex = async (blogNodes, type, cache) => {
  const cacheKey = `IndexLunr`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  // unchanged
  const json = { index: index.toJSON(), store }
  await cache.set(cacheKey, json)
  return json
}

Enhancing pages with the search form component

We can now move on to the front end of our implementation. Let’s start by building a search form component.

touch src/components/search-form.js 

I opt for a straightforward solution: an input of type="search", coupled with a label and accompanied by a submit button, all wrapped within a form tag with the search landmark role.

We will add two event handlers, handleSubmit on form submit and handleChange on changes to the search input.

/* src/components/search-form.js */
import React, { useState, useRef } from "react"
import { navigate } from "@reach/router"

const SearchForm = ({ initialQuery = "" }) => {
  // Create a piece of state, and initialize it to initialQuery
  // query will hold the current value of the state,
  // and setQuery will let us change it
  const [query, setQuery] = useState(initialQuery)

  // We need to get reference to the search input element
  const inputEl = useRef(null)

  // On input change use the current value of the input field (e.target.value)
  // to update the state's query value
  const handleChange = e => {
    setQuery(e.target.value)
  }

  // When the form is submitted navigate to /search
  // with a query q parameter equal to the value within the input search
  const handleSubmit = e => {
    e.preventDefault()
    // `inputEl.current` points to the mounted search input element
    const q = inputEl.current.value
    navigate(`/search?q=${q}`)
  }
  return (
    <form role="search" onSubmit={handleSubmit}>
      <label htmlFor="search-input" style={{ display: "block" }}>
        Search for:
      </label>
      <input
        ref={inputEl}
        id="search-input"
        type="search"
        value={query}
        placeholder="e.g. duck"
        onChange={handleChange}
      />
      <button type="submit">Go</button>
    </form>
  )
}
export default SearchForm

Have you noticed that we’re importing navigate from the @reach/router package? That is necessary since neither Gatsby’s <Link/> nor navigate provide in-route navigation with a query parameter. Instead, we can import @reach/router — there’s no need to install it since Gatsby already includes it — and use its navigate function.

Now that we’ve built our component, let’s add it to our home page (as below) and 404 page.

/* src/pages/index.js */
// unchanged
import SearchForm from "../components/search-form"
const BlogIndex = ({ data, location }) => {
  // unchanged
  return (
    <Layout location={location} title={siteTitle}>
      <SEO title="All posts" />
      <Bio />
      <SearchForm />
      // unchanged

Search results page

Our SearchForm component navigates to the /search route when the form is submitted, but for the moment, there is nothing behind this URL. That means we need to add a new page:

touch src/pages/search.js 

I proceeded by copying and adapting the content of the index.js page. One of the essential modifications concerns the page query (see the very bottom of the file). We will replace allMarkdownRemark with the LunrIndex field.

/* src/pages/search.js */
import React from "react"
import { Link, graphql } from "gatsby"
import { Index } from "lunr"
import Layout from "../components/layout"
import SEO from "../components/seo"
import SearchForm from "../components/search-form"

// We can access the results of the page GraphQL query via the data props
const SearchPage = ({ data, location }) => {
  const siteTitle = data.site.siteMetadata.title

  // We can read what follows the ?q= here
  // URLSearchParams provides a native way to get URL params
  // location.search.slice(1) gets rid of the "?"
  const params = new URLSearchParams(location.search.slice(1))
  const q = params.get("q") || ""

  // LunrIndex is available via page query
  const { store } = data.LunrIndex
  // Lunr in action here
  const index = Index.load(data.LunrIndex.index)
  let results = []
  try {
    // Search is a lunr method
    results = index.search(q).map(({ ref }) => {
      // Map search results to an array of {slug, title, excerpt} objects
      return {
        slug: ref,
        ...store[ref],
      }
    })
  } catch (error) {
    console.log(error)
  }
  return (
    // We will take care of this part in a moment
  )
}
export default SearchPage
export const pageQuery = graphql`
  query {
    site {
      siteMetadata {
        title
      }
    }
    LunrIndex
  }
`

Now that we know how to retrieve the query value and the matching posts, let’s display the content of the page. Notice that on the search page we pass the query value to the <SearchForm /> component via the initialQuery props. When the user arrives at the search results page, their search query should remain in the input field.

return (
  <Layout location={location} title={siteTitle}>
    <SEO title="Search results" />
    {q ? <h1>Search results</h1> : <h1>What are you looking for?</h1>}
    <SearchForm initialQuery={q} />
    {results.length ? (
      results.map(result => {
        return (
          <article key={result.slug}>
            <h2>
              <Link to={result.slug}>
                {result.title || result.slug}
              </Link>
            </h2>
            <p>{result.excerpt}</p>
          </article>
        )
      })
    ) : (
      <p>Nothing found.</p>
    )}
  </Layout>
)

You can find the complete code in this gatsby-starter-blog fork and the live demo deployed on Netlify.

Instant search widget

Finding the most “logical” and user-friendly way of implementing search may be a challenge in and of itself. Let’s now switch to the real-life example of tartanify.com — a Gatsby-powered website gathering 5,000+ tartan patterns. Since tartans are often associated with clans or organizations, the possibility to search a tartan by name seems to make sense. 

We built tartanify.com as a side project where we feel absolutely free to experiment with things. We didn’t want a classic search results page but an instant search “widget.” Often, a given search keyword corresponds with a number of results — for example, “Ramsay” comes in six variations.  We imagined the search widget would be persistent, meaning it should stay in place when a user navigates from one matching tartan to another.

Let me show you how we made it work with Lunr.  The first step of building the index is very similar to the gatsby-starter-blog example, only simpler:

/* gatsby-node.js */
exports.createResolvers = ({ cache, createResolvers }) => {
  createResolvers({
    Query: {
      LunrIndex: {
        type: GraphQLJSONObject,
        resolve(source, args, context) {
          const siteNodes = context.nodeModel.getAllNodes({
            type: `TartansCsv`,
          })
          return createIndex(siteNodes, cache)
        },
      },
    },
  })
}
const createIndex = async (nodes, cache) => {
  const cacheKey = `LunrIndex`
  const cached = await cache.get(cacheKey)
  if (cached) {
    return cached
  }
  const store = {}
  const index = lunr(function() {
    this.ref(`slug`)
    this.field(`title`)
    for (const node of nodes) {
      const { slug } = node.fields
      const doc = {
        slug,
        title: node.fields.Unique_Name,
      }
      store[slug] = {
        title: doc.title,
      }
      this.add(doc)
    }
  })
  const json = { index: index.toJSON(), store }
  cache.set(cacheKey, json)
  return json
}

We opted for instant search, which means that search is triggered by any change in the search input instead of a form submission.

/* src/components/searchwidget.js */
import React, { useState } from "react"
import lunr, { Index } from "lunr"
import { graphql, useStaticQuery } from "gatsby"
import SearchResults from "./searchresults"

const SearchWidget = () => {
  const [value, setValue] = useState("")
  // results is now a state variable
  const [results, setResults] = useState([])

  // Since it's not a page component, useStaticQuery for querying data
  // https://www.gatsbyjs.org/docs/use-static-query/
  const { LunrIndex } = useStaticQuery(graphql`
    query {
      LunrIndex
    }
  `)
  const index = Index.load(LunrIndex.index)
  const { store } = LunrIndex
  const handleChange = e => {
    const query = e.target.value
    setValue(query)
    try {
      const search = index.search(query).map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
      setResults(search)
    } catch (error) {
      console.log(error)
    }
  }
  return (
    <div className="search-wrapper">
      {/* You can use a form tag as well, as long as we prevent the default submit behavior */}
      <div role="search">
        <label htmlFor="search-input" className="visually-hidden">
          Search Tartans by Name
        </label>
        <input
          id="search-input"
          type="search"
          value={value}
          onChange={handleChange}
          placeholder="Search Tartans by Name"
        />
      </div>
      <SearchResults results={results} />
    </div>
  )
}
export default SearchWidget

The SearchResults are structured like this:

/* src/components/searchresults.js */
import React from "react"
import { Link } from "gatsby"
const SearchResults = ({ results }) => (
  <div>
    {results.length ? (
      <>
        <h2>{results.length} tartan(s) matched your query</h2>
        <ul>
          {results.map(result => (
            <li key={result.slug}>
              <Link to={`/tartan/${result.slug}`}>{result.title}</Link>
            </li>
          ))}
        </ul>
      </>
    ) : (
      <p>Sorry, no matches found.</p>
    )}
  </div>
)
export default SearchResults

Making it persistent

Where should we use this component? We could add it to the Layout component. The problem is that our search form will get unmounted on page changes that way. If a user wants to browse all tartans associated with the “Ramsay” clan, they will have to retype their query several times. That’s not ideal.

Thomas Weibenfalk has written a great article on keeping state between pages with local state in Gatsby.js. We will use the same technique, where the wrapPageElement browser API sets persistent UI elements around pages. 

Let’s add the following code to the gatsby-browser.js. You might need to add this file to the root of your project.

/* gatsby-browser.js */
import React from "react"
import SearchWrapper from "./src/components/searchwrapper"

export const wrapPageElement = ({ element, props }) => (
  <SearchWrapper {...props}>{element}</SearchWrapper>
)

Now let’s add a new component file:

touch src/components/searchwrapper.js

Instead of adding SearchWidget component to the Layout, we will add it to the SearchWrapper and the magic happens. ✨

/* src/components/searchwrapper.js */
import React from "react"
import SearchWidget from "./searchwidget"

const SearchWrapper = ({ children }) => (
  <>
    {children}
    <SearchWidget />
  </>
)
export default SearchWrapper

Creating a custom search query

At this point, I started to try different keywords but very quickly realized that Lunr’s default search query might not be the best solution when used for instant search.

Why? Imagine that we are looking for tartans associated with the name MacCallum. While typing “MacCallum” letter-by-letter, this is the evolution of the results:

  • m – 2 matches (Lyon, Jeffrey M, Lyon, Jeffrey M (Hunting))
  • ma – no matches
  • mac – 1 match (Brighton Mac Dermotte)
  • macc – no matches
  • macca – no matches
  • maccal – 1 match (MacCall)
  • maccall – 1 match (MacCall)
  • maccallu – no matches
  • maccallum – 3 matches (MacCallum, MacCallum #2, MacCallum of Berwick)

Users will probably type the full name and hit the button if we make a button available. But with instant search, a user is likely to abandon early because they may expect the results to only narrow down as more letters are added to the query.

 That’s not the only problem. Here’s what we get with “Callum”:

  • c – 3 unrelated matches
  • ca – no matches
  • cal – no matches
  • call – no matches
  • callu – no matches
  • callum – one match 

You can see the trouble if someone gives up halfway into typing the full query.

Fortunately, Lunr supports more complex queries, including fuzzy matches, wildcards and boolean logic (e.g. AND, OR, NOT) for multiple terms. All of these are available via a special query syntax, for example:

index.search("+*callum mac*")
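
The boolean side of that syntax is worth a quick illustration too. These are sketches with made-up terms, not examples from the original article:

index.search("+mac +hunting") // AND: both terms are required
index.search("+mac -hunting") // NOT: "mac" is required, "hunting" is excluded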

Alternatively, we could reach for the index query method and handle it programmatically.

The first solution is not satisfying since it requires more effort from the user. I used the index.query method instead:

/* src/components/searchwidget.js */
const search = index
  .query(function(q) {
    // full term matching
    q.term(el)
    // OR (default)
    // trailing or leading wildcard
    q.term(el, {
      wildcard:
        lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
    })
  })
  .map(({ ref }) => {
    return {
      slug: ref,
      ...store[ref],
    }
  })

Why use full term matching with wildcard matching? That’s necessary for all keywords that “benefit” from the stemming process. For example, the stem of “different” is “differ.”  As a consequence, queries with wildcards — such as differe*, differen* or  different* — all result in no matches, while the full term queries differe, differen and different return matches.
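
To make the stemming point concrete, here is a quick sketch (assuming index is the Lunr index loaded earlier; Lunr skips the stemming pipeline for wildcard terms):

index.search("different")  // the query term is stemmed to "differ" and matches the indexed stem
index.search("different*") // wildcard terms are not stemmed, and no indexed term starts with "different", so nothing matches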

Fuzzy matches can be used as well. In our case, they are only allowed for terms longer than five characters:

q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
q.term(el, {
  wildcard:
    lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
})
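
For comparison, the same kind of tolerance can be expressed in Lunr’s string query syntax with the ~ operator. A sketch with an invented typo, not taken from the article:

index.search("maccalum~1") // one edit away from "maccallum", so the missing "l" still matches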

The handleChange function also “cleans up” user inputs and ignores single-character terms:

/* src/components/searchwidget.js */
const handleChange = e => {
  const query = e.target.value || ""
  setValue(query)
  if (!query.length) {
    setResults([])
  }
  const keywords = query
    .trim() // remove trailing and leading spaces
    .replace(/\*/g, "") // remove user's wildcards
    .toLowerCase()
    .split(/\s+/) // split by whitespace
  // do nothing if the last typed keyword is shorter than 2 characters
  if (keywords[keywords.length - 1].length < 2) {
    return
  }
  try {
    const search = index
      .query(function(q) {
        keywords
          // filter out keywords shorter than 2 characters
          .filter(el => el.length > 1)
          // loop over keywords
          .forEach(el => {
            q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
            q.term(el, {
              wildcard:
                lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
            })
          })
      })
      .map(({ ref }) => {
        return {
          slug: ref,
          ...store[ref],
        }
      })
    setResults(search)
  } catch (error) {
    console.log(error)
  }
}

Let’s check it in action:

  • m – pending
  • ma – 861 matches
  • mac – 600 matches
  • macc – 35 matches
  • macca – 12 matches
  • maccal – 9 matches
  • maccall – 9 matches
  • maccallu – 3 matches
  • maccallum – 3 matches

Searching for “Callum” works as well, resulting in four matches: Callum, MacCallum, MacCallum #2, and MacCallum of Berwick.

There is one more problem, though: multi-term queries. Say you’re looking for “Loch Ness.” There are two tartans associated with that term, but with the default OR logic, you get a grand total of 96 results. (There are plenty of other lakes in Scotland.)
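
Put in code terms, the default behavior looks like this (a sketch; the exact count depends on the index):

index.search("loch ness") // OR by default: every tartan containing "loch" or "ness" is returned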

I wound up deciding that an AND search would work better for this project. Unfortunately, Lunr does not support nested queries, and what we actually need is (keyword1 OR *keyword1*) AND (keyword2 OR *keyword2*).

To overcome this, I ended up moving the terms loop outside the query method and intersecting the results per term. (By intersecting, I mean finding all slugs that appear in all of the per-single-keyword results.)

/* src/components/searchwidget.js */
try {
  // andSearch stores the intersection of all per-term results
  let andSearch = []
  keywords
    .filter(el => el.length > 1)
    // loop over keywords
    .forEach((el, i) => {
      // per-single-keyword results
      const keywordSearch = index
        .query(function(q) {
          q.term(el, { editDistance: el.length > 5 ? 1 : 0 })
          q.term(el, {
            wildcard:
              lunr.Query.wildcard.LEADING | lunr.Query.wildcard.TRAILING,
          })
        })
        .map(({ ref }) => {
          return {
            slug: ref,
            ...store[ref],
          }
        })
      // intersect current keywordSearch with andSearch
      andSearch =
        i > 0
          ? andSearch.filter(x => keywordSearch.some(el => el.slug === x.slug))
          : keywordSearch
    })
  setResults(andSearch)
} catch (error) {
  console.log(error)
}

The source code for tartanify.com is published on GitHub. You can see the complete implementation of the Lunr search there.

Final thoughts

Search is often essential for finding content on a site, though how important it is will vary from one project to another. Either way, there is no reason to abandon it under the pretext that it doesn’t fit the static nature of Jamstack websites. There are many possibilities; we’ve just discussed one of them.

And, paradoxically, in this specific example the result was a better all-around user experience, precisely because implementing search was not an obvious task and required a lot of deliberation. We may not have been able to say the same with an off-the-shelf solution.

The post How to Add Lunr Search to your Gatsby Website appeared first on CSS-Tricks.


Let a website be a worry stone

Ethan Marcotte just redesigned his website and wrote about how the process was a distraction from the difficult things that are going on right now. Adding new features to your blog or your portfolio, tidying up performance issues, and improving things bit by bit can certainly relieve a lot of stress. Also? It’s fun!

What about adding a dark mode to our websites? Or playing around with Next.js? How about finally updating to that static site generator we’ve always wanted to experiment with? Or perhaps we could make the background color of our website animate slowly over time, like Robin’s? Or what about rolling up our sleeves and making a buck wild animation like the one on Sarah’s homepage?

Not so long ago, I felt a bout of intense anxiety hit me out of nowhere and I wound up updating my own website — it was nothing short of relaxing, like going to the spa for a day. I suddenly realized that I could just throw all that stress at my website and do something half-useful in the process. One evening I sat down to focus on my Lighthouse score, the next day was all about fonts, and after that I made a bunch of commits to update the layouts on my site.

This isn’t about being productive right now; with the state of things, I can barely focus on anything. And I’m certainly not trying to guilt trip anyone into cranking out more websites. We all need to take a breath and take each day at a time.

But! If treating your website like a worry stone can help, then I think it’s time to roll up our sleeves and make something weird for ourselves.


The post Let a website be a worry stone appeared first on CSS-Tricks.


Emergency Website Kit

Here’s an outstanding idea from Max Böck. He’s created a boilerplate project for building websites that fit within a single HTTP request. This is extremely important for websites that contain critical information for public safety. As Max writes:

In cases of emergency, many organizations need a quick way to publish critical information. But existing (CMS) websites are often unable to handle sudden spikes in traffic.

What’s so special about this boilerplate? Well, it does smart stuff like:

  • generates a static site using Eleventy,
  • uses minimal markup with inlined CSS,
  • aims to transmit everything in the first connection roundtrip (~14KB),
  • progressively enables offline-support with Service Workers,
  • uses Netlify CMS for easy content editing, and
  • provides one-click deployment via Netlify to get off the ground quickly

The example website that Max built with this boilerplate is shockingly fast, and I would go one step further and argue that all websites should feel this fast, not just those that are useful in an emergency.


The post Emergency Website Kit appeared first on CSS-Tricks.


15 Things to Improve Your Website Accessibility

This is a really great list from Bruce. There is a lot of directly actionable stuff here. Send it around to your team and make it something that you all go through together.

Here’s a little one that prodded me to finally fix…

Most screen readers allow the user to quickly see a list of links on a page [..] However, if every link has text saying “Click here” or “Read more”, with nothing else to distinguish them, this is useless. The easiest way to solve this is simply to write unique link text, but if that isn’t possible, you can over-ride the link text for assistive technology by using a unique aria-label attribute on each link.

I had links like that right here on CSS-Tricks. Some of them are automatically created by WordPress itself, not something I hand-coded into a template. When you show the_excerpt of a post, you get a “read more” link automatically, and aside from getting your hands dirty with some filters, you don’t have that much control over it.

DevTools showing the DOM of a "read more" link with no context.

Fortunately, I already use a cool plugin called Advanced Excerpt. I poked into the settings to see if I could do something about injecting the post title in there somehow. Lookie lookie:

A setting for Advanced Excerpt that does screen reader links.

That screen-reader-text class is exactly what I already used for that kind of stuff, so it was a one-click fix!

The DOM is much nicer now for those links.


The post 15 Things to Improve Your Website Accessibility appeared first on CSS-Tricks.


Considerations When Choosing Fonts for a Multilingual Website

As a front-end developer working for clients all over the world, I’ve always struggled to deal with multilingual websites — especially cases where both right-to-left (RTL) and left-to-right (LTR) are used. That said, I’ve learned a few things along the way and am going to share a few tips in this post.

Let’s work in Arabic and English, not just because Arabic is my native language, but because it’s a classic example of RTL in use.

Adding RTL support to a site

Before that, though, we’ll want to add support for an RTL language on our site. There are two ways we can go about this, neither of which is exactly ideal.

Not ideal: Use a specific font for each direction

The first way we could go about this is to rely on the direction (dir) attribute on any given element (which is usually the <html> element so it sets the direction globally):

/* For LTR, use Roboto */
[dir=ltr] body {
  font-family: "Roboto", sans-serif;
}

/* For RTL, use Amiri */
[dir=rtl] body {
  font-family: "Amiri", sans-serif;
}

PostCSS-RTL makes it even easier to generate styles for each direction, but the issue with this method is that only one font is applied at a time, which is not ideal when multiple languages appear in the same paragraph.

Here’s why: multilingual paragraphs mess up the UI because the Arabic glyphs fall back to a default font that doesn’t match the one applied to the rest of the element.

It might be worse in some browsers than in others.

Also not ideal: Use one single font that supports both languages

The second option could be using fonts that offer support for both directions. However, in my personal experience, using just one font for both languages limits creativity and freedom to use a different font for each direction. It might not be that bad, depending on the design requirements. But I’ve definitely worked on projects where it makes a difference.

OK, so what then?

We need a simpler solution. According to MDN:

Font selection does not simply stop at the first font in the list that is on the user’s system. Rather, font selection is done one character at a time, so that if an available font does not have a glyph for a needed character, the latter fonts are tried.

Meaning, we can still use the font-family property, relying on a fallback in cases where the first font doesn’t have a glyph for a character. This actually solves both of the issues above; two birds with one stone!

body {
  font-family: 'Roboto', 'Amiri', 'Galada', sans-serif;
}

Why does this work?

Just like the way flexbox and CSS grid make CSS layouts a lot more flexible, the font matching algorithm makes it easier to work with content in different languages. Here’s what the W3C says about how it matches characters to fonts:

When text contains characters such as combining marks, ideally the base character should be rendered using the same font as the mark, this assures proper placement of the mark. For this reason, the font matching algorithm for clusters is more specialized than the general case of matching a single character by itself. For sequences containing variation selectors, which indicate the precise glyph to be used for a given character, user agents always attempt system font fallback to find the appropriate glyph before using the default glyph of the base character.

(Emphasis mine)

And how are fonts matched? The spec outlines the steps the algorithm takes, which I’ll paraphrase here.

  • The browser looks at a cluster of text and tries to match it to the list of fonts that are declared in CSS.
  • If it finds a font that supports all of the characters, great! That’s what gets used.
  • If the browser doesn’t find a font that supports all of the characters, it re-reads the list of fonts to find one that supports the unmatched characters and applies it to those specific characters. 
  • If the browser doesn’t find a font in the list that matches either all of the characters in the cluster or the individual ones, then it reaches for the default system font and checks whether it supports all of the characters.
  • If the default system font matches, again, great! That’s what gets used.
  • If the system font doesn’t work, that’s where the browser renders a broken glyph.

Let’s talk performance

The sequence we just looked at could be taxing on a site’s performance. Imagine the browser having to loop through every defined fallback, match specific characters to glyphs, and download font files based on what it finds. That can add up to a lot of work, not to mention FOUT and other rendering weirdness.

The goal is to let the font matching algorithm decide which font to apply to each text instead of relying on one font for both languages or adding extra CSS to handle different directions. If a font is never applied to anything (say a particular page is in RTL and happens to not have any LTR text on it, or vice versa) the font further down the stack that isn’t used is never downloaded.

Making that happen requires selecting good multilingual fonts. Good multilingual fonts are ones that have glyphs for as many characters as you anticipate using on a page. And if you are unable to find one that supports them all, using one that supports most of them and then falling back to another font that covers the rest is an efficient way to go. If that happens to be the default system font, that’s even better, because it’s one less font file to download.


The good thing about letting the font-family property decide the font for each glyph (instead of making extra CSS selectors for each direction) is that the behavior is already there as we outlined earlier — we simply need to make use of it. 

The post Considerations When Choosing Fonts for a Multilingual Website appeared first on CSS-Tricks.


Do This to Improve Image Loading on Your Website

Jen Simmons explains how to improve image loading by simply using width and height attributes. The issue is that there’s a lot of jank when an image is first loaded, because an img will naturally have a height of 0 before the image asset has been successfully downloaded by the browser. Then the browser needs to repaint the page once it arrives, which pushes all the content around. I’ve definitely seen this problem a lot on big news websites.

Anyway, Jen is recommending that we should add height and width attributes to images like so:

<img src="dog.png" height="400" width="1000" alt="A cool dog" />

This is because Firefox will now take those values into consideration and remove all the jank before the image has loaded. That means content will always stay in the same position, even if the image hasn’t loaded yet. In the past, I’ve worked on a bunch of projects where I’ve placed images lower down the page simply because I want to prevent this sort of jank. I reckon this fixes that problem quite nicely.


The post Do This to Improve Image Loading on Your Website appeared first on CSS-Tricks.
