Tag: experience

The Case for ‘Developer Experience’

A good essay from Jean Yang.

What I mean by developer experience is the sum total of how developers interface with their tools, end-to-end, day-in and day-out. Sure, there’s more focus than ever on how developers use and adopt tools, and there are entire talks and panels devoted to the topic of so-called “DX” — yet large parts of developer experience are still largely ignored. With developers spending less than a third of their time actually writing code, developer experience includes all the other stuff: maintaining code, testing, security issues, addressing incidents, and more. And many of these aspects of developer experience continue getting ignored because they’re complex, they’re messy, and they don’t have “silver bullet” solutions.

She makes the case that DX has perhaps been generally oversimplified and that there are categories of tools with very different DX:

My major revelation was that there are actually two categories of tools — and therefore, two different categories of developer experience needs: abstraction tools (which assume we code in a vacuum) and complexity-exploring tools (which assume we work in complex environments). Most developer experience until now has been solely focused on the former category of abstraction, where there are more straightforward ways to understand good developer experience than the latter.

Reminds me of how Shawn thinks:

It’s time we look beyond the easy questions in developer experience and start addressing the uncomfortable ones.

Direct Link to Article



What Google’s New Page Experience Update Means for Images on Your Website

It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. By factoring these into its search engine rankings, Google hopes to provide users not only with great, relevant content but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for page experience in May 2020 and announced in November 2020 that it would begin rolling out in May 2021. However, Google has since pushed this back to a gradual rollout starting in mid-June 2021, in order to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: core web vitals.

Core web vitals (CWV) were introduced to help web developers deal with the challenges of trying to optimize for search engine rankings using testable metrics – something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): The largest element above the fold when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time it takes between when a user first interacts with the page and when the main thread is free for the browser to process the event. This could be clicking or tapping a button or link, or interacting with any other dynamic element. Delays when interacting with a page can obviously be frustrating to users, which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): This calculates the visual stability of a page when it first loads. The algorithm takes the size of the elements and the distance they move relative to the viewport into account. Pages that load with high instability can cause miscues by users, also leading to frustrating situations.

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.
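If you want to see these numbers for your own pages, Google publishes a small web-vitals JavaScript library for collecting them in the field. Here's a minimal sketch; the getCLS/getFID/getLCP names are from version 2 of that library, and newer releases may expose differently named functions:

import { getCLS, getFID, getLCP } from "web-vitals";

// Log each metric as it becomes available. In a real setup you would
// send these values to an analytics endpoint instead of the console.
function report(metric) {
  console.log(metric.name, metric.value);
}

getCLS(report);
getFID(report);
getLCP(report);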

This is important because images can play a significant role in how your website scores on CWVs. For example, the LCP element is more often than not an above-the-fold image or, at the very least, will have to compete with an image to be loaded first. Images that aren’t used correctly can also negatively impact CLS. Slow-loading images can also hurt FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings, which brings us to the latest update: Google now plans to incorporate page experience as a ranking factor, starting with a gradual rollout in mid-June 2021.

So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged core web vitals. Google has established guidelines and recommended thresholds for these. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role for eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in consumer usage of its Lighthouse and PageSpeed Insights tools, showing that website owners are catching on to the importance of optimizing their pages. This means that standards will only become higher and higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (e.g. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) have different formats for optimally rendering images. However, most images are still used in .JPG/.PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images and hard-coding responsive syntax, have inherent issues with scalability and with easily updating or adjusting to changes, and they bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts
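Several of these, particularly responsive sizing and explicit dimensions, can be handled at the markup level before any tooling gets involved. As a rough illustration (the file names and sizes here are made up), a below-the-fold image might look like this:

<!-- Explicit width/height let the browser reserve space (helps CLS),
     srcset/sizes let it pick an appropriately sized file (helps payload and LCP),
     and loading="lazy" defers images that start offscreen. -->
<img
  src="/images/product-800.jpg"
  srcset="/images/product-400.jpg 400w,
          /images/product-800.jpg 800w,
          /images/product-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 800px"
  width="1600" height="900"
  loading="lazy"
  alt="Product photo">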

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. Particularly, I want to focus on ImageEngine which has consistently outperformed other CDNs on page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). ImageEngine uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and optimize your image assets accordingly. Based on these criteria, it can optimize images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page but consists of a large number of high-quality images, along with some interactive elements such as buttons and links, as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If you have a website with enough traffic, you can get the FID by generating a Page Experience report in Google Search Console.

However, for lab results from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI and TBT.

Oh, yes, and I’ll also be focusing on the results of a mobile audit for a number of reasons:

  1. Google themselves are prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and image assets is often most challenging for mobile devices and general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. Because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It takes at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial. Just follow the simple 3-step signup process where you will need to:

  1. provide the website you want to optimize, 
  2. specify the web location where your images are stored, and then 
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:

From:

<img src="mywebsite.com/images/header-image.jpg"/>

To:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg"/>

That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.
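If images are referenced from many templates, you may prefer a tiny helper over editing every tag by hand. Here's a hedged sketch (the helper name is made up, and the delivery address is the example one from above):

// Hypothetical helper that prefixes site-relative image paths with the
// ImageEngine delivery address assigned to your account.
const IMAGE_ENGINE_HOST = "https://myimages.cdn.imgeng.in";

function imageUrl(path) {
  return `${IMAGE_ENGINE_HOST}/${path.replace(/^\/+/, "")}`;
}

// Usage in markup-generating code:
// <img src={imageUrl("images/header-image.jpg")} alt="Header image" />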

Here’s the results for my ImageEngine test page once it’s set up:

As you can see, the results are nearly flawless. All metrics were improved, scoring in the green – even Speed Index and LCP, which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:

Conclusion

To some extent, it’s more of the same from Google, which has consistently been moving in a mobile direction and employing a growing list of metrics to hone in on the best possible “page experience” for its search engine users. Along with reaffirming its move in this direction, Google has stated that it will continue to test and revise its list of signals.

Other metrics already surfaced in its tools, such as TTFB, TTI, FCP, and TBT, or possibly entirely new metrics, may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, as well as future-proof your website against upcoming updates.



Recipe websites, data modeling, and user experience

Simeon Griggs with some nice UX ideas for a recipe website:

  • No math. Swap between units and adjust servings on-the-fly.
  • Offer alternative ingredients.
  • Re-list the ingredient amounts when they’re referenced in the instructions.

I totally agree, especially on that last one:

Of all our improvements I think this is my favourite.

A typical recipe layout contains ingredients with amounts at the start. Then, a bullet point list of instructions to perform the method.

The method though typically does not reference those amounts again, so if you don’t prepare all your amounts ahead of time (which is what you’re probably supposed to do but come on who does that) you’ll have to keep scanning back and forward.

Part of what makes this stuff possible is how you set up the data model. For example, an ingredient might be an Array instead of a String so that you’re set up for offering alternatives right out of the gate.
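As a rough sketch of that idea (all field names here are made up), an ingredient entry might carry structured alternatives and amounts instead of one string:

// Hypothetical ingredient model: an array of interchangeable alternatives,
// each with its own amount, so the UI can offer swaps, convert units, and
// re-list amounts inside the method steps.
const ingredient = {
  note: "melted",
  alternatives: [
    { name: "unsalted butter", amount: 100, unit: "g" },
    { name: "coconut oil", amount: 90, unit: "g" },
  ],
};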

Direct Link to Article



Reconciling Editor Experience and Developer Experience in the CMS

Components are great, aren’t they? They are these reusable sources of truth that you can use to build rock-solid front-ends without duplicating code.

You know what else is super cool? Headless content management! Headless content management system (CMS) products offer a content editing experience while freeing that content in the form of data that can be ported, well, to any API-consuming front-end UI. You can structure your content however you’d like (depending on the product), and pull that content into your front-end applications.

Using these two things together — a distributed CMS solution with component-based front-end applications — is a core tenet of the Jamstack.

But, while components and headless CMSs are great on their own, it can be difficult to get them to play nicely together. I‘m not saying it‘s difficult to hook one up to the other. In a lot of cases, it’s actually quite painless. But, to craft a system of components that is reusable and consistent, and to have that system maintain parity with a well-designed CMS experience is a difficult thing to achieve. It’s that win-win combo of being able to freely write content and then have that content structured into predictable components that makes headless content management so appealing.

Achieving parity between a CMS and front-end components

My favorite example demonstrating this complexity is a simple component: a button. Let‘s say we’re working with React to build components and our button looks like this:

<Button to="/">Go Home</Button>

In the lovely land of React, that means the <Button> component has two props (i.e. properties, attributes, arguments, etc.) — to and children. (children is a React thing that holds all the content within the opening and closing tags, which is “Go Home” in this case.)

If we’re going to enable users in the content editor to add buttons to the site, we want a system for them that makes it easy to understand how their actions in the CMS affect what appears on screen in the front-end app. But we also want our developer(s) to work productively with component properties that make sense to them and within the framework they’re working (i.e. React in our example).

How do we do that?

We could…

…use fields in the CMS that match the components’ properties, though I’ve had little success with this approach. to and children don‘t make much sense to content editors trying to build a button. Believe me, I‘ve tried. I‘ve tried with beginners and experienced editors alike. I‘ve tried helper text. It doesn’t matter. It’s confusing.

What makes more sense is using words editors are more likely to understand, like label or text for children and url for to.

[Mockup: CMS text fields named after the props, "to" (placeholder: https) and "children" (placeholder: Buy Now)] 😕

[Mockup: CMS text fields with friendlier labels, "URL" (placeholder: https) and "Button Label" (placeholder: Buy Now)] 🤓

But then we’d be out of sync with our code.

Or what if we…

masked attributes in the CMS. Most headless CMS solutions enable you to have a different value for the label of the field than the name that is used when delivering content via an API.

We could label our fields Label and URL, but use children and to as the names. We could. But we probably shouldn’t. Remember what Ian Malcolm said?

On the surface, masking attributes makes sense. It’s a separation of concerns. The editors see something that makes them happy and productive, and the developers work with the names that make sense to them. I like it, but only in theory. In practice, it confuses developers. Debugging a content editor issue often requires digging through extra layers (i.e. time) to find the relationship between labels and field names.

Or why not …

…change the properties. Wouldn’t it be easier for developers to be flexible? They’re the ones designing the system, after all.

Yes, that’s true. But if you follow that rule exclusively, it’s inevitable that you’re going to run into some issue along the way. You’ll likely end up fighting against the framework, or props will just feel goofy.

In our example, using label and url as props for a button works totally fine for data that originates from the CMS. But that also means that any time our developers want to use a button within the code, it looks like this:

<Button label="Go Home" url="/" />

That may seem okay on the surface, but it significantly limits the power of the button. Let’s say I want to support some other feature, like adding an icon within the label. I’m going to need some additional logic or another property for it. If I had used React’s children approach instead, it would have just worked (likely after some custom styling support).

Okay, so… what do we do?

Introducing transformers

The best approach I’ve found is to separately optimize the editor and developer experiences. Craft a CMS experience that is catered to the editors. Build a codebase that is easy for developers to navigate, understand, and enhance.

The result is that the two experiences will not be in parity with one another. We need some set of utilities to transform the data from the CMS structure into something that can be used by the front-end, regardless of the framework and tooling you’re using.

I call these utilities transformers. (Aren’t I so good at naming things!?) Transformers are responsible for consuming data from your CMS and transforming it into a shape that can be easily consumed by your components.

[Diagram: Data Source → Transformer → Component, with the transformer mapping the CMS's label/url fields to the component's children/to props]

While I‘ve found that transforming data is the smoothest means to get great experiences in both the CMS and the codebase, I don‘t have an obvious solution for how (or perhaps where) those transformations should happen. I‘ve used three different approaches, all of which have their pros and cons. Let’s take a look at them.

1. Alongside components

One approach is to put transformers right alongside the components they are serving. This is the approach I typically take in organizing component-based projects — to keep related files close to one another.

[Diagram: the button's index.js imports transformer.js, which feeds the button's component.js]

That means that I often have a directory for every component with a predictable set of files. The index.js acts as the controller for the component. It is responsible for importing and exporting all other relevant files. That makes it trivial to wrap the component with some logic-based behavior. In other words, it could transform properties of the component before rendering it. Here’s what that might look like for our button example:

import React from "react"

import Component from "./component"
import transform from "./transformer"

const Button = props => <Component {...transform(props)} />

export default Button

The transformer.js file might look like this:

export default input => {
  return {
    ...input,
    children: input.children || input.label,
    to: input.to || input.url
  }
}

In this example, if to and children were properties sent to the component, it works just fine! But if label and url were used instead, they are transformed to children and to. That means the <Button> component (component.js) only has to worry about using children and to.

const Button = ({ children, to }) => <a href={to}>{children}</a>
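To illustrate, here's a hypothetical usage sketch. Both shapes now render the same link, because the transformer normalizes label/url into children/to before the component sees them:

const Examples = () => (
  <>
    {/* Developer-friendly shape, using React children */}
    <Button to="/">Go Home</Button>
    {/* CMS-friendly shape, normalized by the transformer */}
    <Button label="Go Home" url="/" />
  </>
);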

I personally love this approach. It keeps the logic tightly coupled with the component. The biggest downside I‘ve found thus far is that it’s a large number of files and transforms, when the entire dataset for any given page could be transformed earlier in the stack, which would be…

2. At the top of the funnel

The data has to be pulled into the application via some mechanism. Developers use this mechanism to retrieve as much data for the current page or view as possible. Often, the fewer number of queries/requests a page is required to make, the better its performance.

In other words, that mechanism often exists near the top of the funnel (or stack), as opposed to each component pulling its own data in dynamically. (When that’s necessary, I use adapters.)

[Diagram: a single import mechanism at the top of the funnel feeding transformed data down to component 1, component 2, and component 3]

The mechanism that retrieves the page data could also be responsible for transforming all the data for the given page before it renders any of its components.

In theory, this is a better approach than the first one. It decreases the amount of work the browser has to do, which should improve the front-end performance. That means the server has to do more work, but that’s often a better choice.

In practice, though, this is a lot of work. Data structures can be big, complex, and interwoven. It can take a heck of a lot of work to transform everything into the right format at the top of the funnel, and then pass the transformed data down to components. It’s also more difficult to test because of the potential complexity and variation of the giant data blob retrieved at the top of the stack. With the first approach, testing the transformer logic for the button is trivial. With this approach, you’d want to account for transforming button data anywhere that it might appear in the retrieved data object.

But, if you can pull it off, this is generally the better approach.
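As a rough sketch of this approach (the CMS endpoint and the callsToAction field are made-up examples), the page-level loader fetches everything once and runs each section's transformer before anything renders:

// Hypothetical page-level loader: fetch the whole page's data in one
// request, then transform each section's CMS shape into component-ready props.
import transformButton from "./components/button/transformer";

export async function getPageData(slug) {
  const raw = await fetch(`https://cms.example.com/api/pages/${slug}`).then((res) => res.json());

  return {
    ...raw,
    callsToAction: (raw.callsToAction || []).map(transformButton),
  };
}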

3. The middleman engine

The third and final (and magical) approach is to do all this work somewhere else. In this case, we could build an engine (i.e. a small application) that would do the transformations for us, and then make the content available for the application to consume.

[Diagram: an abstracted data source (the middleman engine) feeding the import mechanism, which feeds the same three components]

This is likely even more work than the second approach. And it has added cost and maintenance in running an additional application, which takes more effort to ensure it is rock solid.

The major upside to this approach is that we could build this as an abstracted engine. In other words, any time we bring in data to any front-end application, it goes through this middleman engine. That means if we have two projects that use the same CMS or data source, our work is cut down significantly for the second project.
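A minimal sketch of such an engine might look like this (Express is just one possible choice; the CMS URL, route, and field names are made up):

// Hypothetical middleman engine: a small service that pulls raw content
// from the CMS, runs the transformers, and serves component-ready data
// to any front end that asks for it.
import express from "express";
import transformButton from "./transformers/button.js";

const app = express();

app.get("/pages/:slug", async (req, res) => {
  const raw = await fetch(`https://cms.example.com/api/pages/${req.params.slug}`).then((r) => r.json());

  res.json({
    ...raw,
    callsToAction: (raw.callsToAction || []).map(transformButton),
  });
});

app.listen(4000);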


If you aren‘t doing any of this today and want to start, my advice is to treat these approaches like stepping stones. They grow in complexity and maintenance and power as the application grows. Start with the first approach and see how far that gets you. Then, if you feel like you could benefit from a jump to the second, do it! And if you’re feeling like living dangerously, go for the third!

In the end, what matters most is crafting an experience that both your editors and your developers understand and enjoy. If you can do that, you win!



What is Developer Experience (DX)?

Developer Experience¹ is a term² that has one somewhat obvious meaning — the experience of developers — but it eludes definition in the sense that people invoke it at different times for different reasons referring to different things. For instance, our own Sarah Drasner’s current job title is “VP of Developer Experience” at Netlify. But a job title is just one way the term is used. Let’s dig in a bit and apply it to the different ways people think about and use the term.

People think of specific companies.

I hear DX and Stripe together a lot. That makes sense. Stripe is a company almost exclusively for developers. They are serious about providing a good experience for their customers (developers), hence “developer experience.” Just listen to Suz Hinton talk about “friction journals”, which is this idea of using a product (like Stripe) and noting down every single little WTF moment, confusion, and frustration so that improvements can be made.

Netlify is like Stripe in this way, as is Heroku, CodePen, and any number of companies where the entire customer base is developers. For companies like this, it’s almost like DX is what user experience (UX) is for any other company.

People think of specific technologies.

It’s common to hear developer experience invoked when comparing technologies. For instance, some people will say that Vue offers a better developer experience than React. (I’m not trying to start anything, I don’t even have much of an opinion on this.) They are talking about things like APIs. Perhaps the state is more intuitive to manage in one vs. the other. Or they are talking about features. I know Vue and Svelte have animation helpers built-in while React does not. But React has hooks and people generally like those. These are aspects of the DX of these technologies.

Or they might be speaking about the feeling around the tools surrounding the core technology. I know create-react-app is widely beloved, but so is the Vue CLI. React Router is hugely popular, but Vue has a router that is blessed (and maintained) by the core team, which offers a certain feeling of trust.

> vue create hello-world

> npx create-react-app my-app

I’m not using JavaScript frameworks/libraries as just any random example. I hear people talk about developer experience as it relates to JavaScript more than anything else — which could just be due to the people who are in my circles, but it feels notable.

People think of the world around the technology.

Everybody thinks good docs are important. There is no such thing as a technology that is better than another but has much worse docs. The one with the better docs is better overall because it has better docs. That’s not the technology itself; that’s the world around it.

Have you ever seen a developer product with an API, and when you view the docs for the API while logged in, it uses API keys and data and settings from your own account to demonstrate? That’s extraordinary to me. That feels like DX to me.

Airtable docs showing me API usage with my own data.

“Make the right thing easy,” notes Jake Dohm.

That word, easy, feels highly related to DX. Technologies that make things easy are technologies with good DX. In usage as well as in understanding. How easily (and quickly) can I understand what your technology does and what I can do with it?

What the technology does is often only half of the story. The happy path might be great, but what happens when it breaks or errors? How is the error reporting and logging? I think of Apollo and GraphQL here in my own experience. It’s such a great technology, but the error reporting feels horrendous in that it’s very difficult to track down even stuff like typos triggering errors in development.

What is the debugging story like? Are there special tools for it? The same goes for testing. These things are fundamental DX issues.

People think of technology offerings.

For instance, a technology might be “good” already. Say it has an API that developers like. Then it starts offering a CLI. That’s (generally) a DX improvement, because it opens up doors for developers who prefer working in that world and who build processes around it.

I think of things like Netlify Dev here. They already have this great platform and then say, here, you can run it all on your own machine too. That’s taking DX seriously.

One aspect of Netlify Dev that is nice: The terminal command to start my local dev environment across all my sites on Netlify, regardless of what technology powers them, is the same: netlify dev

Having a dedicated CLI is almost always a good DX step, assuming it is well done and maintained. I remember WordPress before WP-CLI, and now lots of documentation just assumes you’re using it. I wasn’t even aware Cloudinary had a CLI until the other day when I needed it and was pleasantly surprised that it was there. I remember when npm scripts started taking over the world. (What would npm be without a CLI?) We used to have a variety of different task runners, but now it’s largely assumed a project has run commands built into the package.json that you use to do anything the project needs to do.

Melanie Sumner thinks of CLIs immediately as core DX.

People think of the literal experience of coding.

There is nothing more directly DX than the experience of typing code into code editing software and running it. That’s what “coding” is and that’s what developers do. It’s no wonder that developers take that experience seriously and are constantly trying to improve it for themselves and their teams. I think of things like VS Code in how it’s essentially the DX of it that has made it so dominant in the code editing space in such a short time. VS Code does all kinds of things that developers like, does them well, does them fast, and allows for a very wide degree of customization.

TypeScript keeps growing in popularity no doubt in part due to the experience it offers within VS Code. TypeScript literally helps you code better by showing you, for example, what functions need as parameters, and making it hard to do the wrong thing.

Then there is the experience outside the editor, which is in the browser itself. Years ago, I wrote Style Injection is for Winners, where my point was, as a CSS developer, the experience of saving CSS code and seeing the changes instantly in the browser is a DX you definitely want to have. That concept continues to live on, growing up to JavaScript as well, where “hot reloading” is goosebump-worthy.

The difference between a poor developer environment (no IDE help, slow saves, manual refreshes, slow pipelines) and a great developer environment (fancy editor assistance, hot reloading, fast everything) is startling. It essentially makes you a better and more productive programmer.

People compare it to user experience (UX).

There is a strong negative connotation to DX sometimes. It happens when people blame DX for existing at the cost of user experience.

I think of things like client-side developer-only libraries. Think of the classic library that everyone loves to dunk on: Moment.js. Moment allows you to manipulate dates in JavaScript, and is often used client-side to do that. Users don’t care if you have a fancy API available to manipulate dates because that is entirely a developer convenience. So, you ship this library for yourself (good DX) at the cost of slowing down the website (bad UX). Most client-side JavaScript is in this category.

Equally as often, people connect developer experience and user experience. If developers are empowered and effective, that will “trickle down” to produce good software, the theory goes.

Worst case, we’re in a situation where UX and DX are on a teeter totter. Pile on some DX and UX suffers on the other side. Best case, we find ways to disentangle DX and UX entirely, finding value in both and taking both seriously. Although if one has to win, certainly it should be the users. Like the HTML spec says:

In case of conflict, consider users over authors over implementors over specifiers over theoretical purity.

People think about time.

How long does a technology take to adopt? Good DX considers this. Can I take advantage of it without rewriting everything? How quickly can I spin it up? How well does it play with other technologies I use? What is my time investment?

This kind of thing makes me think of some recent experience with Cloudflare Workers. It’s really cool technology that we don’t have time to get all into right here, but suffice to say it gives you control over a website at a high level that we often don’t think about. Like what if you could manipulate a network request before it even gets to your web server? You don’t have to use it, but because of the level it operates on, new doors open up without caring about or interfering with whatever technologies you are using.

Not only is the technology itself positioned well, but the DX of using it, while there are some rough edges, is at least well-considered, providing a browser-based testing environment.

A powerful tool with a high investment cost, eh, that’s cool. But a powerful tool with low investment cost is good DX.

People don’t want to think about it.

They say the best typography goes unnoticed because all you see is the actual thing it’s telling you. That can be true of developer experience. The best DX is that you never notice the tools because they just work.

Good DX is just being able to do your job rather than fight with tools. The tools could be your developer environment, it could be build tooling, it could be hosting stuff, or it could even be whatever APIs you are interfacing with. Is the API intuitive and helpful, or obtuse and tricky?


Feel free to keep going on this in the comments. What is DX to you?

  1. Are we capitalizing Developer Experience? I’m just gonna go for it.
  2. Looks like Michael Mahemoff has a decent claim on coining the term.


Front-End Developers Have to Manage the Loading Experience

Web performance is a huge complicated topic. There are metrics like total requests, page weight, time to glass, time to interactive, first input delay, etc. There are things to think about like asynchronous requests, render blocking, and priority downloading. We often talk about performance budgets and performance culture.

How that first document comes down from the server is a hot topic. That is where most back-end related performance talk enters the picture. It gives rise to architectures like the JAMstack, where gosh, at least we don’t have to worry about index.html being slow.

Images have a performance story all to themselves (formats! responsive images!). Fonts also (FOUT’n’friends!). CSS also (talk about render blocking!). Service workers can be involved at every level. And, of course, JavaScript is perhaps the most talked about villain of performance. All of this is balanced with perhaps the most important general performance concept: perceived performance. Front-end developers already have a ton of stuff we’re responsible for regarding performance. 80% is the generally quoted number and that sounds about right to me.

For a moment, let’s assume we’re going to build a site and we’re not going to server-side render it. Instead, we’re going to load an empty document and kick off data API calls as quickly as we can, then render the site with that data. Not a terribly rare scenario these days. As you might imagine, we now have another major concern: handling the loading experience.

I mused about this the other day. Here’s an example:

I’d say that loading experience is pretty janky, and I’m on about the best hardware and internet connection money can buy. It’s not a disaster and surely many, many thousands of people use this particular site successfully every day. That said, it doesn’t feel fast, smooth, or particularly nice like you’d think a high-budget website would in these Future Times.

Part of the reason is probably because that page isn’t server-side rendered. For whatever reason (we can’t really know from the outside), that’s not the way they went. Could be developer efficiency, security, a temporary state during a re-write… who knows! (It probably isn’t ignorance.)

What are we to do? Well, I think this is a somewhat new problem in front-end development. We’ve told the browser: “Hey, we got this. We’re gonna load things all out of order depending on when our APIs cough stuff up to us and our front-end framework decides it’s time to do so.” I can see the perspective here where this isn’t ideal and we’ve given up something that browsers are incredibly good at only to do it less well ourselves. But hey, like I’ve laid out a bit here, the world is complicated.

What is actually happening is that these front-end frameworks are aware of this issue and are doing things to help manage it. Back in April of this year, Dan Abramov introduced React Suspense. It seems like a tool for helping front-end devs like us manage the idea that we now need to deal with more loading state stuff than we ever have before:

At about 14 minutes, he gets into fetching data with placeholder components, caching and such. This issue isn’t isolated to React, of course, but keeping in that theme, here’s a conference talk by Andrew Clark that hit home with me even more quickly (but ultimately uses the same demo and such):

Just the idea of waiting to show spinners for a little bit can go a long way in de-jankifying loading.
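A common way to do that is to only show a spinner once loading has already taken longer than some small threshold, so fast responses never flash one at all. Here's a rough sketch as a React hook (the hook name and delay value are arbitrary):

import { useEffect, useState } from "react";

// Returns true only after `delay` ms have passed while still loading,
// so quick loads never flash a spinner at all.
function useDelayedSpinner(isLoading, delay = 300) {
  const [showSpinner, setShowSpinner] = useState(false);

  useEffect(() => {
    if (!isLoading) {
      setShowSpinner(false);
      return;
    }
    const timer = setTimeout(() => setShowSpinner(true), delay);
    return () => clearTimeout(timer);
  }, [isLoading, delay]);

  return showSpinner;
}

// Usage: const showSpinner = useDelayedSpinner(isLoading);
// {showSpinner && <Spinner />}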

Mikael Ainalem puts a point on this in a recent article, A Brief History of Flickering Spinners. He explains more clearly what I was trying to say:

One reason behind this development is the change we’ve seen in asynchronous programming. Asynchronous programming is a lot easier than it used to be. Most modern languages have good support for loading data on the fly. Modern JavaScript has incorporated Promises and with ES7 comes the async and await keywords. With the async/await keywords one can easily fetch data and process it when needed. This means that we need to think a step further about how we show users that data is loading.

Plus, he offers some solutions!

See the Pen Flickering spinners by Mikael Ainalem (@ainalem) on CodePen.

We’ve got to get better at this.


How we made Carousell’s mobile web experience 3x faster

Both a sobering and interesting read from Stacey Tay on how the team at Carousell gathered the metrics to define a performance budget and, in turn, developed a better experience for their customers:

Our new PWA listing page loads 3x faster than our old listing page. After releasing this new page, we’ve had a 63% increase in organic traffic from Indonesia, compared to our all-time high week. Over a 3 week period, we also saw a 3x increase in ads click-through-rates and a 46% increase in anonymous users who initiated a chat on the listing page.

The team inlined critical CSS, reduced the number of resources the app was loading, and implemented a lazy loading strategy, among many other things. I think it’s interesting to note that they also changed the design of the app in certain ways to make things more performant. I reckon it’s easy to fall into the trap of thinking that performance is solely a task for developers, and posts like this prove that it’s more collaborative than that.
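Lazy loading in particular is easy to experiment with. Here's a minimal, generic sketch (not Carousell's actual implementation) using IntersectionObserver to defer images that carry their real URL in a data-src attribute:

// Generic lazy-loading sketch: swap in the real image URL only once the
// element approaches the viewport, then stop observing it.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;
    obs.unobserve(img);
  });
}, { rootMargin: "200px" });

document.querySelectorAll("img[data-src]").forEach((img) => observer.observe(img));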

Direct Link to Article
