
Reactive jQuery for Spaghetti-fied Legacy Codebases (or When You Can’t Have Nice Things)

I can hear you crying out now: “Why on Earth would you want to use jQuery when there are much better tools available? Madness! What sort of maniac are you?” These are reasonable questions, and I’ll answer them with a little bit of context.

In my current job, I am responsible for the care and feeding of a legacy website. It’s old. The front-end relies on jQuery, and like most old legacy systems, it’s not in the best shape. That alone isn’t the worst, but I’m working with additional constraints. For example, we’re working on a full rewrite of the system, so massive refactoring work isn’t being approved, and I’m also not permitted to add new dependencies to the existing system without a full security review, which historically can take up to a year. Effectively, jQuery is the only JavaScript library I can use, since it’s already there. 

My company has only recently come to realize that front-end developers might have important skills to contribute, so the entire front end of the app was written by developers unaware of best practices, and often contemptuous of their assignment. As a result, the code quality is wildly uneven and quite poor and unidiomatic overall.

Yeah, I work in that legacy codebase: quintessential jQuery spaghetti.

Someone has to do it, and since there will always be more legacy code in the world than greenfield projects, there will always be lots of us. I don’t want your sympathy, either. Dealing with this stuff, learning to cope with front-end spaghetti on such a massive scale has made me a better, if crankier, developer.

So how do you know if you’ve got spaghetti jQuery on your hands? One reliable code smell I’ve found is the absence of the venerable old .toggle(). In case you’ve managed to successfully not think about jQuery for a while: it’s a library that smooths over cross-browser compatibility issues while making DOM queries and mutations incredibly easy. There’s nothing inherently wrong with that, but direct DOM manipulation can be very hard to scale if you’re not careful.

The more DOM manipulation you write, the more defensive against DOM mutation you become. Eventually you can find yourself with an entire codebase written that way, and, combined with less-than-ideal scope management, you’re essentially working in an app where all of the state lives in the DOM and you can never trust what state the DOM will be in when you need to make changes; changes can swoop in from anywhere in the app whether you like it or not. Your code gets more procedural, bloated with explicit instructions that try to pull all the data you need out of the DOM itself and force it into the state you need it to be in.

This is why .toggle() is often the first thing to go: if you can’t be sure whether an element is visible or not, you have to use .show() and .hide() instead. I’m not saying .show() and .hide() should be Considered Harmful™, but I’ve found they’re a good indication that there might be bigger problems afoot.
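
To make that smell concrete, here’s a minimal sketch of the drift (the selector is hypothetical, not from my codebase): once you can’t trust the DOM, one call turns into a defensive branch.

// When you can trust the state of the page, flipping visibility is one call:
$("#detailsPanel").toggle();

// When you can't, you end up querying the DOM for the state you need
// and forcing it explicitly every time:
if ($("#detailsPanel").is(":visible")) {
  $("#detailsPanel").hide();
} else {
  $("#detailsPanel").show();
}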

What can you do to combat this? One solution my coworkers and I have found takes a hint directly from the reactive frameworks we’d rather be using: observables and state management. We’ve all found that hand-rolling state objects and event-driven update functions while treating our DOM like a one-way dataflow template leads to more predictable results that are easier to change over time.

We each approach the problem a little differently. My take on reactive jQuery is distinctly Vue-flavored, and it takes advantage of some “advanced” CSS.

If you check out the script, you’ll see there are two different things happening: a State object that holds all of the values for our page, and a big mess of events.

var State = {
  num: 0,
  firstName: "",
  lastName: "",
  titleColor: "black",
  updateState: function(key, value){
    this[key] = value;

    $("[data-text]").each(function(index, elem){
      var tag = $(elem).attr("data-tag");
      $(elem).text(State[tag]);
    });

    $("[data-color]").each(function(index, elem){
      var tag = $(elem).attr("data-tag");
      $(elem).attr("data-color", State[tag]);
    });
  }
};

I’ll admit it, I love custom HTML attributes, and I’ve applied them liberally throughout my solution. I’ve never liked how HTML classes often do double-duty as CSS hooks and JavaScript hooks, and how if you use a class for both purposes at once, you’ve introduced brittleness into your script. This problem goes away completely with HTML attributes. Classes become classes again, and the attributes become whatever metadata or styling hook I need.

If you look at the HTML, you’ll find that every element in the DOM that needs to display data has a data-tag attribute with a value that corresponds to a property in the State object that contains the data to be displayed, and an attribute with no value that describes the sort of transformation that needs to happen to the element it’s applied to. This example has two different sorts of transformations, text and color.

<h1 data-tag="titleColor" data-color>jDux is super cool!</h1>

On to the events. Every change we want to make to our data is fired by an event. In the script, you’ll find every event we’re concerned about listed with its own .on() method. Every event triggers the update method and sends two pieces of information: which property in the State object needs to be updated, and the new value it should be set to.

$("#inc").on("click", function(){
  State.updateState("num", State.num + 1);
});

$("#dec").on("click", function(){
  State.updateState("num", State.num - 1);
});

$("#firstNameInput").on("input", function(){
  State.updateState("firstName", $(this).val());
});

$("#lastNameInput").on("input", function(){
  State.updateState("lastName", $(this).val());
});

$('[class^=button]').on("click", function(e) {
  State.updateState('titleColor', e.target.innerText);
});

This brings us to State.updateState(), the update function that keeps your page in sync with your state object. Every time it runs, it updates all the tagged values on the page. It’s not the most efficient thing to redo everything on the page every time, but it’s a lot simpler, and as I hope I’ve already made clear, this is an imperfect solution for an imperfect codebase.

$(document).ready(function(){
  State.updateState();
});

The first thing the update function does is set the property it receives to the new value. Then it runs the two transformations I mentioned. For text elements, it grabs every data-text node, reads its data-tag value, and sets its text to whatever is in the tagged property. Color works a little differently: it sets the data-color attribute to the value of the tagged property, then relies on the CSS, which selects on data-color to apply the correct style.

I’ve also added a document.ready, so we can run the update function on load and display our default values. You can pull default values from the DOM, or an AJAX call, or just load the State object with them already entered as I’ve done here.
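
For instance, if the defaults were to come from an AJAX call instead, a minimal sketch might look like this (the endpoint and response shape are assumptions, not part of this demo):

$.getJSON("/api/defaults", function(data){
  // Hypothetical endpoint; copy the fetched values into State...
  State.firstName = data.firstName;
  State.lastName = data.lastName;
  // ...then run the update function once to render them.
  State.updateState();
});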

And that’s it! All we do is keep the state in the JavaScript, observe our events, and react to changes as they happen. Simple, right?

What’s the benefit here? Working with a pattern like this maintains a single source of truth in your state object, one that you control, that you can trust, and that you can enforce. If you ever lose trust that your DOM is correct, all you need to do is re-run the update function with no arguments and your values become consistent with the state object again.
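
In code, that recovery step is just a bare call to the update function, the same call the document.ready handler above makes:

// Some stray legacy code or third-party script mutated the page?
// Re-render every tagged element from the state object:
State.updateState();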

Is this kind of hokey and primitive? Absolutely. Would you want to build an entire system out of this? Certainly not. If you have better tools available to you, you should use them. But if you’re in a highly restrictive legacy codebase like I am, try writing your next feature with Reactive jQuery and see if it makes your code, and your life, simpler.



Innovation Can’t Keep the Web Fast

Every so often, innovation bears fruit in the form of improvements to the foundational layers of the web. In 2015, HTTP/2 became a published standard in an effort to update an aging protocol. This was both necessary and overdue, as HTTP/1’s limitations had turned web performance into an arcane sort of discipline built on strange workarounds. Though HTTP/2 proliferation isn’t absolute — and there are kinks yet to be worked out — I don’t think it’s a stretch to say the web is better off because of it.

Unfortunately, the rollout of HTTP/2 has presided over a 102% median increase in bytes transferred over mobile connections in the last four years. If we look at the 90th percentile of that same dataset — because it’s really the long tail of performance we need to optimize for — we see an increase of 239%. From 2016 (PDF warning) to 2019, the average mobile download speed in the U.S. increased by 73%. In Brazil and India, average mobile download speeds increased by 75% and 28%, respectively, over that same period.

While page weight alone doesn’t necessarily tell the whole story of the user experience, it is, at the very least, a loosely related phenomenon which threatens the collective user experience. The story that the HTTP Archive tells through data acquired from the Chrome User Experience Report (CrUX) can be interpreted a number of different ways, but this one fact is steadfast and unrelenting: most metrics gleaned from CrUX over the last couple of years show little, if any, improvement despite various improvements in browsers, the HTTP protocol, and the network itself.

Given these trends, all that can be said of the impact of these improvements at this point is that they have helped to stem the tide of our excesses, but have done precious little to reduce them. Despite every significant improvement to the underpinnings of the web and the networks we access it through, we continue to build for it in ways that suggest we’re content with the never-ending Jevons paradox in which we toil.

If we’re to make progress in making a faster web for everyone, we must recognize some of the impediments to that goal:

  1. The relentless desire to monetize every square inch of the web, as well as the army of third party vendors which fuel the research mandated by such fevered efforts.
  2. Workplace cultures that favor unrestrained feature-driven development. This practice adds to — but rarely takes away from — what we cram down the wire to users.
  3. Developer conveniences that make the job of the developer easier, but can place an increasing cost on the client.

Counter-intuitively, owners of mature codebases which embody some or all of these traits continue to take the same unsustainable path to profitability they always have. They do this at their own peril, rather than acknowledge the repeatedly established fact that performance-first development practices will do as much — or more — for their bottom line and the user experience.

It’s with this understanding that I’ve come to accept that our current approach to remedy poor performance largely consists of engineering techniques that stem from the ill effects of our business, product management, and engineering practices. We’re good at applying tourniquets, but not so good at sewing up deep wounds.

It’s becoming increasingly clear that web performance isn’t solely an engineering problem, but a problem of people. This is an unappealing assessment, in part because technical solutions are comparatively inarguable. Content compression works. Minification works. Tree shaking works. Code splitting works. They’re undeniably effective solutions to what may seem like entirely technical problems.

The intersection of web performance and people, on the other hand, is messy and inconvenient. Unlike a technical solution as clearly beneficial as HTTP/2, how do we qualify what successful performance cultures look like? How do we qualify successful approaches to get there? I don’t know exactly what that looks like, but I believe a good template is the following marriage of cultural and engineering tenets:

  1. An organization can’t be successful in prioritizing performance if it can’t secure the support of its leaders. Without that crucial element, it becomes extremely difficult for organizations to create a culture in which performance is the primary feature of their product.
  2. Even with leadership support, performance can’t be effectively prioritized if the telemetry isn’t in place to measure it. Without measurement, it becomes impossible to explain how product development affects performance. If you don’t have the numbers, no one will care about performance until it becomes an apparent crisis.
  3. When you have the support of leadership to make performance a priority and the telemetry in place to measure it, you still can’t get there unless your entire organization understands web performance. This is the time at which you develop and roll out training, documentation, best practices, and standards the organization can embrace. In some ways, this is the space which organizations have already spent a lot of time in, but the challenging work is in establishing feedback loops to assess how well they understand and have applied that knowledge.
  4. When all of the other pieces are finally in place, you can start to create accountability in the organization around performance. Accountability doesn’t come in the form of reprisals when your telemetry tells you performance has suffered over time, but rather in the form of guard rails put in place in the deployment process to alert you when thresholds have been crossed (a rough sketch of such a guard rail follows this list).
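
As an illustration of that last point (mine, not part of the original argument), a deployment guard rail can be as small as a script that compares measured numbers against agreed budgets; the file name, metric names, and budget values below are all hypothetical:

const fs = require("fs");

// Hypothetical budgets agreed on by the team.
const budgets = {
  totalBytes: 500 * 1024,   // 500 KB transferred on first load
  timeToInteractive: 5000   // 5 seconds on a throttled connection
};

// Hypothetical results file produced by whatever telemetry runs in CI.
const results = JSON.parse(fs.readFileSync("perf-results.json", "utf8"));

let failed = false;
Object.keys(budgets).forEach(function(metric) {
  if (results[metric] > budgets[metric]) {
    console.error("Budget exceeded: " + metric + " is " + results[metric] + ", budget is " + budgets[metric]);
    failed = true;
  }
});

// A non-zero exit code lets the deployment pipeline block (or just flag) the release.
process.exit(failed ? 1 : 0);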

Now comes the kicker: even if all of these things come together in your workplace, good outcomes aren’t guaranteed. Barring some regulation that forces us to address the poorly performing websites in our charge — akin to how the ADA keeps us on our toes with regard to accessibility — it’s going to take continuing evangelism and pressure to ensure performance remains a priority. Like so much of the work we do on the web, the work of maintaining a good user experience in evolving codebases is never done. I hope 2020 is the year that we meaningfully recognize that performance is about people, and adapt accordingly.

As technological innovations such as HTTP/3 and 5G emerge, we must take care not to rest on our laurels and simply assume they will heal our ills once and for all. If we do, we’ll certainly be having this discussion again when the successors to those technologies loom. Innovation alone can’t keep the web fast because making the web fast — and keeping it that way — is the hard work we can only accomplish by working together.


Why can’t we use Functional CSS and regular CSS at the same time?

Harry Nicholls recently wrote all about simplifying styles with functional CSS and you should definitely check it out. In short, functional CSS is another name for atomic CSS or using “helper” or “utility” classes that would just handle padding or margin, background-color or color, for example.

Harry completely adores this approach of stacking multiple utility classes on an element:

So what I’m trying to advocate here is taking advantage of the work that others have done in building functional CSS libraries. They’re built on solid foundations in design, people have spent many hours thinking about how these libraries should be built, and what the most useful classes will be.

And it’s not just the classes that are useful, but the fundamental design principles behind Tachyons.

This makes a ton of sense to me. However, Chris notes that he hasn’t heard much about the downsides of a functional/atomic CSS approach:

What happens with big redesigns? Is it about the same, time- and difficulty-wise, or do you spend more time tearing down all those classes? What happens when you need a style that isn’t available? Write your own? Or does that ruin the spirit of all this and put you in dangerous territory? How intense can all the class names get? I can think of areas I’ve styled that have three or more media queries that dramatically re-style an element. Putting all that information in HTML seems like it could get awfully messy. Is consistency harder or easier?

This also makes a ton of sense to me, but here’s the thing: I’m a big fan of both methods and even combine them in the same projects.

Before you get mad, hear me out

At Gusto, the company I work for today, I’ve been trying to design a system that uses both methods because I honestly believe that they can live in harmony with one another. Each one solves very different use cases for writing CSS.

Here’s an example: let’s imagine we’re working in a big ol’ React web app and our designer has handed off a page design where a paragraph and a button need more spacing beneath them. Our code looks like this:

<p>Item 1 description goes here</p>
<Button>Checkout item</Button>

This is just the sort of problem for functional CSS to tackle. At Gusto, we would do something like this:

<div class="margin-bottom-20px">
  <p>Item 1 description goes here</p>
  <button>Checkout item</button>
</div>

In other words, we use functional classes to make layout adjustments that might be specific to a particular feature that we’re working on. However! That Button component is made up of a regular ol’ CSS file. In btn.scss, we have code like this which is then imported into our btn.jsx component:

.btn {
  padding: 10px 15px;
  margin: 0 15px 10px;
  // rest of the styles go here
}
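
The component side of that pairing isn’t shown here, but a minimal sketch of btn.jsx might look something like this (the import path, default props, and export are assumptions on my part):

import React from "react";
import "./btn.scss";

// Lean on the .btn class from btn.scss for the component's own styling,
// while still accepting extra functional classes for one-off layout tweaks.
const Button = ({ onClick, className = "", children }) => (
  <button className={`btn ${className}`} onClick={onClick}>
    {children}
  </button>
);

export default Button;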

I think making brand new CSS files for custom components is way easier than trying to make these components out of a ton of classes like margin-*, padding-*, etc. Although, we could be using functional styles in our btn.jsx component instead like this:

const Button = ({ onClick, className, children }) => {
  return (
    <button
      className={`padding-top-10px padding-bottom-10px padding-left-15px padding-right-15px margin-bottom-none margin-right-15px margin-left-15px margin-bottom-10px ${className}`}
      onClick={onClick}
    >
      {children}
    </button>
  );
};

This isn’t a realistic example because we’re only dealing with two properties and we’d probably want to be styling this button’s background color, text color, hover states, etc. And, yes, I know these class names are a little convoluted but I think my point still stands even if you combine vertical and horizontal classes together.

So I reckon that, by writing our custom styles in a separate CSS file for this particular component, we avoid the following three issues with functional CSS:

  1. Readability
  2. Managing property dependencies
  3. Avoiding the painful fact that visual design doesn’t like math

As you can see in the earlier code example, it’s pretty difficult to read and immediately see which classes have been applied to the button. The more classes there are, the harder the markup is to scan.

Secondly, a lot of CSS property/value pairs are written in relation to one another. Say, for example, position: relative and position: absolute. In our stylesheets, I want to be able to see these dependencies and I believe it’s harder to do that with functional CSS. CSS often depends on other bits of CSS and it’s important to see those connections with comments or groupings of properties/values.

And, finally, visual design is an issue. A lot of visual design requires imperfect numbers that don’t properly scale. With a functional CSS system, you’ll probably want a system of base 10, or base 8, where each value is based on that scale. But when you’re aligning items together visually, you may need to do so in a way that it won’t align to those values. This is called optical adjustment and it’s because our brains are, well, super weird. What makes sense mathematically often doesn’t visually. So, in this case, we’d need to add more bottom padding to the button to make the text feel like it’s positioned in the center. With a functional CSS approach it’s harder to do stuff like that neatly, at least in my experience.

In those cases where you need to balance readability, dependencies, and optical adjustments, writing regular CSS in a regular old-fashioned stylesheet is still my favorite thing in the world. But functional CSS still solves a ton of other problems very elegantly.

For example, what we’re trying to prevent with functional classes at Gusto is creating tons of stylesheets that do a ton of very specific or custom stuff. Going back to that earlier example with the margin beneath those two elements for a second:

<div className='margin-bottom-20px'>
  <p>Item 1 description goes here</p>
  <Button>Checkout item</Button>
</div>

In the past our teams might have written something like this instead:

<div className='cool-feature-description-wrapper'>
  <p>Item 1 description goes here</p>
  <button>Checkout item</button>
</div>

A new CSS file called cool_feature_description_wrapper.scss would need to be created in our application like so:

.cool-feature-description-wrapper {
  margin-bottom: 20px;
}

I would argue that styles like this make our code harder to understand and harder to read, and they encourage diversions from our library of components. By replacing this with a class from our library of functional classes, the code suddenly becomes much easier to read and to change in the future. It also handles a custom need for this particular feature without forking our library of styles.

So, I haven’t read much about balancing both approaches this way, although I assume someone has covered this in depth already. I truly believe that a combination of these two methods is much more useful than trying to solve all problems with a single bag of tricks.

I know, right? Nuanced opinions are the worst.
