Tag: Comparing

Comparing Static Site Generator Build Times

There are so many static site generators (SSGs). It’s overwhelming trying to decide where to start. While an abundance of helpful articles may help wade through the (popular) options, they don’t magically make the decision easy.

I’ve been on a quest to help make that decision easier. A colleague of mine built a static site generator evaluation cheatsheet. It provides a really nice snapshot across numerous popular SSG choices. What’s missing is how they actually perform in action.

One feature every static site generator has in common is that it takes input data, runs it through a templating engine, and outputs HTML files. We typically refer to this process as The Build.

There’s too much nuance, context, and variability in how various SSGs perform during the build process to capture in a spreadsheet — and thus begins our quest to benchmark build times across popular static site generators.

This isn’t just to determine which SSG is fastest. Hugo already has that reputation. I mean, they say it on their website — The world’s fastest framework for building websites — so it must be true!

This is an in-depth comparison of build times across multiple popular SSGs and, more importantly, an analysis of why those build times look the way they do. Blindly choosing the fastest or discrediting the slowest would be a mistake. Let’s find out why.

The tests

The testing process is designed to start simple — with just a few popular SSGs and a simple data format. A foundation on which to expand to more SSGs and more nuanced data. For today, the test includes six popular SSG choices: Eleventy, Gatsby, Hugo, Jekyll, Next, and Nuxt.

Each test used the following approach and conditions:

  • The data source for each build is a set of Markdown files, each with a randomly generated title (as frontmatter) and a body containing three paragraphs of content. (A sample file is sketched right after this list.)
  • The content contains no images.
  • Tests are run in series on a single machine, making the actual values less relevant than the relative comparison among the lot.
  • The output is plain text on an HTML page, run through the default starter, following each SSG’s respective guide on getting started.
  • Each test is a cold run. Caches are cleared and Markdown files are regenerated for every test.
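To make that data shape concrete, here is a minimal sketch of what one generated source file might look like. The frontmatter field is the title mentioned above; the specific title and body text are placeholders standing in for the randomly generated content.

---
title: Example Post 0001
---

First randomly generated paragraph of filler text.

Second randomly generated paragraph of filler text.

Third randomly generated paragraph of filler text.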

These tests are considered benchmark tests. They use basic Markdown files as input and output unstyled HTML.

In other words, the output is technically a website that could be deployed to production, though it’s not really a real-world scenario. Instead, this provides a baseline comparison among these frameworks. The choices you make as a developer using one of these frameworks will adjust the build times in various ways (usually by slowing it down).

For example, one way in which this doesn’t represent the real world is that we’re testing cold builds. In the real world, if you have 10,000 Markdown files as your data source and are using Gatsby, you’re going to make use of Gatsby’s cache, which will greatly reduce the build times (by as much as half).

The same can be said for incremental builds, which are related to warm versus cold runs in that they only build the file that changed. We’re not testing the incremental approach in these tests (at this time).

The two tiers of static site generators

Before we get to the hypothesis and the results, let’s first consider that there are really two tiers of static site generators. Let’s call them basic and advanced.

  • Basic generators (which are not basic under the hood) are essentially a command-line interface (CLI) that takes in data and outputs HTML, and can often be extended to process assets (which we’re not doing here).
  • Advanced generators offer something in addition to outputting a static site, such as server-side rendering, serverless functions, and framework integration. They tend to be configured to be more dynamic right out of the box.

I intentionally chose three of each type of generator in this test. Falling into the basic bucket would be Eleventy, Hugo, and Jekyll. The other three are based on a front-end framework and ship with various amounts of tooling. Gatsby and Next are built on React, while Nuxt is built atop Vue.

Basic generators    Advanced generators
Eleventy            Gatsby
Hugo                Next
Jekyll              Nuxt

My hypothesis

Let’s apply the scientific method to this approach because science is fun (and useful)!

My hypothesis is that if an SSG is advanced, then it will perform slower than a basic SSG. I believe the results will reflect that because advanced SSGs have more overhead than basic SSGs. Thus, it’s likely that we’re going to see both groups of generators — basic and advanced — bundled together in the results, with the basic generators moving significantly quicker.

Let me expand on that hypothesis a bit.

Linear(ish) and fast

Hugo and Eleventy will fly with smaller datasets. They are (relatively) simple processes in Go and Node.js, respectively, and their build output will reflect that. While both SSGs will slow down as the number of files grows, I expect them to remain at the top of the class, though Eleventy may be a little less linear at scale, simply because Go tends to be more performant than Node.

Slow, then fast, but still slow

The advanced, or framework-bound, SSGs will start out slow and appear to stay slow. I suspect a single-file test will show a significant difference — milliseconds for the basic ones, compared to several seconds for Gatsby, Next, and Nuxt.

The framework-based SSGs are each built using webpack, which brings a significant amount of overhead along with it, regardless of the amount of content being processed. That’s the baggage we sign up for in using those tools (more on this later).

But, as we add thousands of files, I suspect we’ll see the gap between the buckets close, though the advanced SSG group will stay farther behind by some significant amount.

In the advanced SSG group, I expect Gatsby to be the fastest, only because it doesn’t have a server-side component to worry about — but that’s just a gut feeling. Next and Nuxt may have optimized this to the point where, if we’re not using that feature, it won’t affect build times. And I suspect Nuxt will beat out Next, only because there is a little less overhead with Vue, compared to React.

Jekyll: The odd child

Ruby is infamously slow. It’s gotten more performant over time, but I don’t expect it to scale with Node, and certainly not with Go. And yet, at the same time, it doesn’t have the baggage of a framework.

At first, I think we’ll see Jekyll as pretty speedy, perhaps even indistinguishable from Eleventy. But as we get to the thousands of files, the performance will take a hit. My gut feeling is that there may exist a point at which Jekyll becomes the slowest of all six. We’ll push up to the 100,000 mark to see for sure.

A hand-drawn line chart showing build time on the y-axis and number of files on the x-axis, where Next is a green line, Nuxt is yellow, Gatsby is pink, Jekyll is blue, Eleventy is teal, and Hugo is orange. All lines show build time increasing as the number of files increases, with Jekyll having the sharpest slope.

The results are in!

The code that powers these tests is on GitHub. There’s also a site that shows the relative results.

After many iterations of building out a foundation on which these tests could be run, I ended up with a series of 10 runs in three different datasets:

  • Base: A single file, to compare the base build times
  • Small sites: From 1 to 1,024 files, doubling each time (to make it easier to determine whether the SSGs scaled linearly)
  • Large sites: From 1,000 to 64,000 files, doubling on each run. I originally wanted to go up to 128,000 files, but hit some bottlenecks with a few of the frameworks. 64,000 ended up being enough to produce an idea of how the players would scale with even larger sites.


Summarizing the results

A few results were surprising to me, while others were expected. Here are the high-level points:

  • As expected, Hugo was the fastest, regardless of size. What I didn’t expect is that it wasn’t even close to any other generator, even at base builds (nor was it linear, but more on that below).
  • The basic and advanced groups of SSGs are quite obvious when looking at the results for small sites. That was expected, but it was surprising to see that Next was faster than Jekyll at 32,000 files, and faster than both Eleventy and Jekyll at 64,000 files. Also surprising is that Jekyll performed faster than Eleventy at 64,000 files.
  • None of the SSGs scale linearly. Next was the closest. Hugo has the appearance of being linear, but only because it’s so much faster than the rest.
  • I figured Gatsby would be the fastest among the advanced frameworks, and suspected it would be the one to get closest to the basic group. But Gatsby turned out to be the slowest, producing the most dramatic curve.
  • While it wasn’t specifically mentioned in the hypothesis, the scale of differences was larger than I would have imagined. At one file, Hugo was approximately 170 times faster than Gatsby. But at 64,000 files, it was closer — about 25 times faster. That means that, while Hugo remains the fastest, it actually has the most dramatic exponential growth shape among the lot. It just looks linear because of the scale of the chart.

What does it all mean?

When I shared my results with the creators and maintainers of these SSGs, I generally received the same message. To paraphrase:

The generators that take more time to build do so because they are doing more. They are bringing more to the table for developers to work with, whereas the faster tools (i.e. the “basic” ones) focus their efforts largely on converting templates into HTML files.

I agree.

To sum it up: Scaling Jamstack sites is hard.

The challenges that will present themselves to you, Developer, as you scale a site will vary depending on the site you’re trying to build. That data isn’t captured here because it can’t be — every project is unique in some way.

What it really comes down to is your level of tolerance for waiting in exchange for developer experience.

For example, if you’re going to build a large, image-heavy site with Gatsby, you’re going to pay for it with build times, but you’re also given an immense network of plugins and a foundation on which to build a solid, organized, component-based website. Do the same with Jekyll, and it’s going to take a lot more effort to stay organized and efficient throughout the process, though your builds may run faster.

At work, I typically build sites with Gatsby (or Next, depending on the level of dynamic interactivity required). We’ve worked with the Gatsby framework to build a core on which we can rapidly build highly-customized, image-rich websites, packed with an abundance of components. Our builds become slower as the sites scale, but that’s when we get creative by implementing micro front-ends, offloading image processing, implementing content previews, along with many other optimizations.

On the side, I tend to prefer working with Eleventy. It’s usually just me writing code, and my needs are much simpler. (I like to think of myself as a good client for myself.) I feel I have more control over the output files, which makes it easier for me to get 💯s on client-side performance, and that’s important to me.

In the end, this isn’t only about what is fast or slow. It’s about what works best for you and how long you’re willing to wait.

Wrapping up

This is just the beginning! The goal of this effort was to create a foundation on which we can, together, benchmark relative build times across popular static site generators.

What ideas do you have? What holes can you poke in the process? What can we do to tighten up these tests? How can we make them more like real-world scenarios? Should we offload the processing to a dedicated machine?

These are the questions I’d love for you to help me answer. Let’s talk about it.



Comparing Various Ways to Hide Things in CSS

You would think that hiding content with CSS is a straightforward and solved problem, but there are multiple solutions, each one being unique.

Developers most commonly use display: none to hide content on the page. Unfortunately, this way of hiding content isn’t bulletproof because that content is now “inaccessible” to screen readers. It’s tempting to use it, but especially in cases where something is only meant to be visually hidden, don’t reach for it.

The fact is that there are many ways to “hide” things in CSS, each with their pros and cons which greatly depend on how it’s being used. We’re going to review each technique here and cap things off with a summary that helps us decide which to use and when.

How to spot differences between the techniques

To see the differences between the various ways of hiding content, we must introduce some metrics to compare the methods against. I decided to break that down by asking questions focused on four particular areas that affect layout, performance, and accessibility:

  1. Accessibility: Is the hidden content read by a screen reader?
  2. Document flow: Will the hidden element affect the document layout?
  3. Rendering: Will the hidden element’s box model be rendered?
  4. Event triggers: Does the element detect clicks or focus?

Now that we have our criteria out of the way, let’s compare the methods. Again, we’ll put everything together at the end in a way that we can use it as a reference for making decisions when hiding things in CSS.

Method 1: The display property

We kicked off this post with a caution about using display to hide content. And as we established, using it to hide an element means that the element is not generated at all. It’s in the DOM, but never actually rendered.

The element will still show in the markup — if you inspect the page, you will be able to see the element — but its box model will not be generated or appear on the page, and the same applies to all of its children.

And what’s more, if the element has any event listeners — say a click or hover — they won’t register at all. And as we’ve discussed already, all the content will be ignored by screen readers. Here, we have two visible buttons and one hidden with display: none. All three buttons have click events but only the two visible buttons will render and register the clicks.
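A minimal sketch of that kind of demo might look something like this (these labels and handlers are placeholders, not the exact CodePen code):

<!-- Two visible buttons and one hidden with display: none -->
<button onclick="console.log('one')">One</button>
<button onclick="console.log('two')">Two</button>
<button onclick="console.log('three')" style="display: none;">Three</button>
<!-- The third button generates no box at all, so there is nothing to click
     and its handler can never fire. -->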

Display is the only property that will affect image request firing. If an image tag (or parent element) has a display property set to none either through inline CSS or by selector, the image will be downloaded. On the other hand, if the image is applied with a background property, it won’t be downloaded.

This is the case because, when the HTML document is parsed and the <img> tag is encountered, the CSS hasn’t been applied yet, so the request fires anyway. When the image is referenced from a background property instead, it is the applied CSS that triggers the request, and the browser skips it for the hidden element. This behavior is consistent across the latest browsers. The only exception is IE 11, which will download the image in both cases.
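Here’s a rough sketch of that difference (the file names are placeholders):

<!-- Hidden with display: none, but photo.jpg is still requested -->
<img src="photo.jpg" alt="A photo" style="display: none;">

<!-- Hidden element using a background image: bg.jpg is not requested -->
<div style="display: none; background-image: url('bg.jpg');"></div>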

Metric                                                Result
Is the hidden content read by a screen reader?        No
Will the hidden element affect the document layout?   No
Will the hidden element’s box model be rendered?      No
Does the element detect clicks or focus?              No

Method 2: The visibility property

If an element’s visibility property is set to hidden, then the element is “visually hidden.” Being “visually hidden” sounds a lot like what display: none does, but it’s incredibly different in that the element is generated and rendered, but invisible. This means that the element’s box model is present, giving it dimensions that continue to occupy space on the screen even though it doesn’t appear to be there.

Imagine you’re wearing an invisible cloak that makes you invisible to others, but you are still able to bump into things. You’re physically there, even if you’re invisible to the human eye.

But that’s where the differences between “visually hidden” and “not displayed” end. In fact, elements hidden with visibility and display behave the same in terms of accessibility and event triggers. Invisible elements are inaccessible to screen readers and won’t register events, as we see in the following demo that’s exactly the same as the last example, but merely swaps display: none with visibility: hidden.

Metric                                                Result
Is the hidden content read by a screen reader?        No
Will the hidden element affect the document layout?   Yes
Will the hidden element’s box model be rendered?      Yes
Does the element detect clicks or focus?              No

Method 3: The opacity property

The opacity property only affects the visual aspect of the element. If we set an element’s opacity to zero, the element will be fully transparent. Again, it’s a lot like visibility: hidden in that we’re draping an invisible cloak over the element: it’s invisible, but still physically present.

In other words, what we have is a hollow, transparent element that acts like any other element, only it’s invisible. Sounds a lot like the visibility method, right? The difference is that a fully transparent element is still accessible to a screen reader and can register events, like clicks, as we see in the following example.
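For reference, the rule itself is as small as it gets:

.hidden {
  opacity: 0;
}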

Metric                                                Result
Is the hidden content read by a screen reader?        Yes
Will the hidden element affect the document layout?   Yes
Will the hidden element’s box model be rendered?      Yes
Does the element detect clicks or focus?              Yes

Method 4: The position property

Pushing an element off-screen with absolute positioning is another way developers often hide things. Using top and left, we can push the element so far off the screen that there’s no way it will ever be seen. It’s like hiding the cookie jar outside of the house so the kids (or maybe you!) can’t find them.

“Absolute” is the key word here. If we set position to absolute, an element is taken out of the document flow, which is a way of saying it no longer adheres to its natural position in the DOM. In other words, the page doesn’t reserve any space for it, which knocks the element out of order visually, positioning it relative to its nearest positioned ancestor if there is one, or the document root otherwise.

We take advantage of absolute positioning by taking the “hidden” element out of the document flow and offsetting it toward the top-left with values of -9999px.

.hidden {
  position: absolute;
  top: -9999px;
  left: -9999px;
}
Metric                                                Result
Is the hidden content read by a screen reader?        Yes
Will the hidden element affect the document layout?   No
Will the hidden element’s box model be rendered?      Yes
Does the element detect clicks or focus?              Yes

If the hidden element contains focusable content, the page will scroll to the element when it is in focus, creating a sudden jump.

Method 5: The “visually hidden” class

So far, the position method is the closest we’ve seen to an accessibility-friendly way to hide things in CSS. But the problem with focusable content causing sudden page jumps isn’t great. Another approach to accessible hiding combines absolute positioning, the clip property and hidden overflow. Scott O’Hara blogged it back in 2017.

.visually-hidden:not(:focus):not(:active) {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

Let’s break that down.

We need to remove the element from the document flow. The best way to do this is by using position: absolute. This will remove the element, but we won’t push it off the screen.

.visually-hidden {
  position: absolute;
}

We could try to hide the element by setting its width and height to zero. Unfortunately, that won’t work because some screen readers will ignore elements with zero width and height. What we can do is set them to the next-lowest value, 1px. That means the content will easily overflow the space, so we also need overflow: hidden to make sure it doesn’t visually spill over.

.visually-hidden {
  height: 1px;
  overflow: hidden;
  position: absolute;
  width: 1px;
}

To hide that one-pixel square, we can use the CSS clipping property. It is perfect for this situation, as it doesn’t affect screen readers. The content is there but, again, is visually hidden. The thing to note is that clip was deprecated in favor of clip-path but is still needed if we need to support older versions of Internet Explorer.

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  width: 1px;
}

Another piece of the “visually hidden” class puzzle is to address smushed off-screen accessible text, an oddity that removes white-spacing between words, causing them to be read aloud like one big string of words. For example, “Welcome back home” will be read out as “Welcomebackhome.”

A simple solution to this problem is to set white-space: nowrap:

.visually-hidden {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}

And, finally! The last thing to consider is allowing certain elements with native focus and active states to display when they are in focus, while continuing to prevent other elements, like paragraphs, from displaying. We can use the :not pseudo-class for that.

.visually-hidden:not(:focus):not(:active) {
  clip: rect(0 0 0 0);
  clip-path: inset(50%);
  height: 1px;
  overflow: hidden;
  position: absolute;
  white-space: nowrap;
  width: 1px;
}
Metric                                                Result
Is the hidden content read by a screen reader?        Yes
Will the hidden element affect the document layout?   No
Will the hidden element’s box model be rendered?      Yes
Does the element detect clicks or focus?              Yes

Honorable mentions

There are even more methods than the five we’ve covered. For example, the text-indent property can push text off the screen like the position method:

.hidden {
  text-indent: -9999em;
}

Unfortunately, this approach doesn’t play nicely with RTL writing modes. That makes it less adaptable than the other solutions we’ve covered.

Another method is using transform to scale or move the element out of the way. Like opacity, it only affects the element visually.

.hidden {
  transform: scale(0);
}

Let’s put everything together!

We got to a solution that visually hides content while keeping it accessible. So, should you stop using display: none? No, it is still the best way to hide an element completely (both visually and from assistive technology).

That said, it is worth mentioning that if you want to achieve the opposite result — hiding something from screen readers while keeping it visible — the aria-hidden="true" attribute will do exactly that: it hides the content from screen readers, but not visually.
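For example, here’s a minimal sketch pairing aria-hidden with the visually-hidden class from above:

<!-- The star glyphs are decorative noise to a screen reader... -->
<span aria-hidden="true">★★★☆☆</span>
<!-- ...so an accessible text alternative is provided instead. -->
<span class="visually-hidden">3 out of 5 stars</span>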

With that, here is a complete table that compares all of the approaches. Use it to guide your decisions on how to hide content next time you find yourself in that situation.

Metric                                                Display   Visibility   Opacity   Position   Accessible Way
Is the hidden content read by a screen reader?        No        No           Yes       Yes        Yes
Will the hidden element affect the document layout?   No        Yes          Yes       No         No
Will the hidden element’s box model be rendered?      No        Yes          Yes       Yes        Yes
Does the element detect clicks or focus?              No        No           Yes       Yes        Yes


Comparing Styling Methods in 2020

Over on Smashing, Adebiyi Adedotun Lukman covers all these styling methods. It’s in the context of Next.js, which is somewhat important as Next.js has some specific ways you work with these tools, is React and, thus, is a components-based architecture. But the styling methods talked about transcend Next.js, and can apply broadly to lots of websites.

Here are my hot takes on the whole table-of-contents of styling possibilities these days.

  • Regular CSS. If you can, do. No build tooling is refreshing. It will age well. The only thing I really miss without any other tooling is nesting media queries.
  • Sass. Sass has been around the block and is still a hugely popular CSS preprocessor. Sass is built into tons of other tools, so choosing Sass doesn’t always mean the same thing. A simple Sass integration might be as easy as a sass --watch src/style.scss dist/style.css npm script (a minimal sketch of that follows this list). Then, once you’ve come to terms with the fact that you have a build process, you can start concatenating files, minifying, busting cache, and all this stuff that you’re probably going to end up doing somehow anyway.
  • Less & Stylus. I’m surprised they aren’t more popular, since they’ve always been Node-based and work great with the proliferation of Node-powered build processes. Not to mention they are nice, feature-rich preprocessors. I have nothing against either, but Sass is more ubiquitous, more actively developed, and canonical Sass now works fine in Node-land.
  • PostCSS. I’m not compelled by PostCSS because I don’t love having to cobble together the processing features that I want. That also has the bad side effect of making the process of writing CSS different across projects. Plus, I don’t like the idea of preprocessing away modern features, some of which can’t really be preprocessed (custom properties, for example). But I did love Autoprefixer when we really needed that, which is based on PostCSS.
  • CSS Modules. If you’re working with components in any technology, CSS modules give you the ability to scope CSS to that component, which is an incredibly great idea. I like this approach wherever I can get it. Your module CSS can be Sass too, so we can get the best of both worlds there.
  • CSS-in-JS. Let’s be real, this means “CSS-in-React.” If you’re writing Vue, you’re writing styles the way Vue helps you do it. Same with Svelte. Same with Angular. React is the un-opinionated one, leaving you to choose between things like styled-components, styled-jsx, Emotion… there are a lot of them. I have projects in React and I just use Sass+CSS Modules and think that’s fine, but a lot of people like CSS-in-JS approaches too. I get it. You get the scoping automatically and can do fancy stuff like incorporate props into styling decisions. Could be awesome for a design system.
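As for that simple Sass integration mentioned in the list above, here’s a minimal sketch of what the npm script might look like in package.json. The paths and the package version are placeholders, not a recommendation:

{
  "scripts": {
    "css": "sass --watch src/style.scss dist/style.css"
  },
  "devDependencies": {
    "sass": "^1.32.0"
  }
}

Run it with npm run css and Dart Sass recompiles the output file whenever the source changes.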

If you want to hear some other hot takes on this spectrum, the Syntax FM fellas sounded off on this recently.



Comparing Browsers for Responsive Design

There are a number of these desktop apps where the goal is showing your site at different dimensions all at the same time. So you can, for example, be writing CSS and making sure it’s working across all the viewports in a single glance.

They are all very similar. For example, they do “event mirroring” meaning if you scroll in one window or device, then all the others do too, along with clicks, typing, etc. You can also zoom in and out to see many devices at once, just scaled down. Let’s see if we can root out any differences.

Sizzy

  • Windows, Mac, and Linux
  • “Solo” plan starts at $5/month and they have plans up from there

There are loads of little cool developer-focused features like:

  • Kill a port just by typing in the port number.
  • There’s a universal inspect mode but, while you can’t apply a change in DevTools that affects all windows and devices at the same time, you can at least inspect across all of them, and when you click, it activates the correct DevTools session.
  • Throttle or go offline in a click.
  • Turn off JavaScript with a click.
  • Turn on Design Mode with a click (e.g. every element has contenteditable).
  • Toggles for hiding images, turning off all styles, outlining all elements, etc.
  • Override fonts with Google Font choices.

Responsively App

  • Universal inspect mode that selects the correct DevTools context
  • The option to “Disable SSL Validation” is clever, should you run into issues with local HTTPS.
  • One-click dark mode toggle

Blisk

  • Windows and Mac
  • Free, with premium upgrades ($10/month). Some of the features, like scroll syncing and auto refreshing, are listed as premium features, which makes me think the free version limits them in some way.
  • Autorefresh is a neat idea. You set up a “watcher” for certain file types in certain folders, and if they change, it refreshes the page. I imagine most dev environments have some kind of style injection or hot module reloading, but having it available anyway is useful for ones that don’t.
  • There is no universal DevTools inspector, but you can open the DevTools individually and they do have a custom universal inspection tool for showing the box model dimensions of elements.
  • There’s a custom error report screen.
  • You can enable “Browsing Mode” to turn off all the fancy device stuff and just use it as a semi-regular browser.

Polypane

  • Windows, Mac, and Linux
  • Free, with premium plans starting at $10/month. Signing up is going to get you a good handful of onboarding emails over a week (with the option to opt out).
  • It has browser extensions for other browsers to pop your current tab over to Polypane
  • The universal inspect mode seems the most seamless of the bunch to me, but it doesn’t go so far as to propagate changes across windows and devices. Someone needs to do this! It does have a “Live CSS” pane that will inject additional CSS into all the open devices though, which is cool.
  • It can open devices based on breakpoints in your own CSS — and it actually works!

Duo

  • It’s on the Mac App Store for $5, but its website is offline, which makes it seem kinda dead.
  • It has zero fancy features. As the name implies, it simply shows the same site side-by-side in two columns that can be resized.

Re:view

  • It’s not a separate browser app, but a browser extension. I kind of like this as I can stay in a canonical browser that I’m already comfortable with that’s getting regular updates.
  • The “breakpoints” view is a clever idea. I believe it should show your site at the breakpoints in your CSS, but it seems broken to me. I’m not sure this is an actively developed project. (My guess is that it is not.)

So?

What, you want me to pick a winner?

While I was turned off a little by Polypane’s hoop jumping and onboarding, I think it has the most well-considered feature set. Sizzy is close, but the interface is more cluttered in a way that doesn’t seem necessary. I admit I like how Blisk is really focused on “just look at the mobile view and then we’ll fill the rest of the space with a larger view” because that’s closer to how I actually work. (I rarely need to see a “device wall” of trivially different mobile screens.)

The fact that Responsively is free and open source is very cool, but is that sustainable? I think I feel safer digging into apps that are run as a business. The fact that I just stay in my normal browser with Re:View means I actually have the highest chance of actually using it, but it feels like a dead project at the moment so I probably won’t. So, for now, I guess I’ll have to crown Polypane.



Comparing Data in Google and Netlify Analytics

Jim Nielsen:

the datasets weren’t even close for me.

Google Analytics works by putting a client-side bit of JavaScript on your site. Netlify Analytics works by parsing server logs server-side. They are not exactly apples to apples, feature-wise. Google Analytics is, I think it’s fair to say, far more robust. You can do things like track custom events which might be very important analytics data to a site. But they both have the basics. They both want to tell you how many pageviews your homepage got, for instance.

There are two huge things that affect these numbers:

  • Client-side JavaScript is blockable and tons of people use content blockers, particularly for third-party scripts from Google. Server-side logs are not blockable.
  • Netlify doesn’t filter things out of that log, meaning bots are counted in addition to regular people visiting.

So I’d say: Netlify probably has more accurate numbers, but a bit inflated from the bots.

Also worth noting, you can do server-side Google Analytics. I’ve never seen anyone actually do it but it seems like a good idea.

One bit of advice from Jim:

Never assume too much from a single set of data. In other words, don’t draw all your data-driven insights from one basket.

Direct Link to ArticlePermalink



Comparing Social Media Outlets for Developer Tips

As a little experiment, I shared a development tip on three different social networks. I also tried to post it in a format that was most suitable for that particular social network:

How did each of them “do”? Let’s take a look. But bear in mind… this ain’t scientific. This is just me having a glance at one isolated example to get a feel for things across different social media sites.

The Twitter Thread

The Tweet

Twitter is probably our largest social media outlet. Despite the fact that I’ve done absolutely nothing with it this year other than auto-tweeting posts from this site (via our Jetpack integration), those tweets do just about as well as they ever did when I was writing each one by hand. These numbers are bound to change, but at the time of writing:

  • Views: 102,501
  • Followers: ~446,000
  • Retweets: 108
  • Engagements: 3,753
  • Likes: 428 (first tweet)

Tweet Analytics showing 102,501 Impressions, 3,753 engagements and a few other more fine-grained stats.
Twitter provides analytics on tweets

Going off that engagements number, a little bit less than 1% of the followers had anything to do with it. I’d say this was a very average tweet for us, if not on the low side.

The Instagram Post

The Post

Instagram is by far the smallest of our social media outlets, being newer and not something I stay particularly active or consistent on. No auto-posting there just yet.

  • Followers: ~2,800
  • Likes: 308
  • Reached: 2,685

Instagram provides analytics (“insights”) on posts.

Using Reach, that’s 96% of the followers. That’s pretty incredible compared to the roughly 1% engagement on Twitter. Although, on Twitter, I can easily put URLs in tweets and send people places, whereas my only options on Instagram are “check out the link in my profile” or the swipe-up thing in an Instagram Story. So, despite the high engagement of Instagram, I’m mostly just getting the satisfaction of teaching something as well as a little brand awareness. It’s much harder for me to get you to directly do something from Instagram.

The YouTube Video

The Video

YouTube is in the middle for us, much bigger than Instagram but not as big as Twitter. YouTube is a little unique in that there can be (and are) ads directly on the videos, and the channel gets a “revenue share” from YouTube. That’s very much not a driving motivation for using YouTube (I make about 50 cents a day), but it is unique compared to the others.

  • Subscribers: 51,300
  • Likes: 116
  • Views: 2,455

YouTube analytics page showing 2.4K views, 192.8 hours of watch time, and a graph showing that this video has more views than typical over time.
YouTube provides video analytics

Facebook?

We do have a Facebook page but it’s the most neglected of all of them. We auto-post new articles to it, but this experiment didn’t really have a blog post. I published the video to our site, but that doesn’t get auto-posted to Facebook, so the tip never made it there.

I used to feel a little guilty about not taking as much advantage of Facebook as I could, but whenever I look at overall analytics, I’m reminded that all of our social media accounts combined drive ~2% of traffic to this site. Spending any more time on this stuff is foolish for me, when that time could be spent on content for this site and information architecture for what we already have. And for Facebook specifically, whatever time we have spent there has never seemed to pan out. Just not a hive for developers.

CodePen?

I probably should have factored CodePen into this more, since it’s something of a social network itself with similar metrics. I worked on the examples in CodePen and the whole video was done in CodePen. But in this case, it was more about the journey than the destination. I did ultimately link to a demo at the end of the Twitter thread, but Instagram can’t link to it, and I wasn’t as compelled to link to it on YouTube since, to me, the video itself was the important information.

If I was trying to compare CodePen stats here, I would have created the Pen in a step-by-step educational format so I could deliver the same idea. That actually sounds fun and I should probably still do that!

Winner?

Eh.

The problem is that there isn’t anything particularly useful to measure. What would have been way more interesting is if I had some really important call to action in each one where I’m like trying to sell you something or get you to sign up for something or whatever. I feel like that’s the real world of developer marketing. You gotta do 100 things for someone for free if you want them to do something for you on that 101st time. And on the 101st time, you should probably measure it somehow to see if the effort is worth it.

Here’s the very basic data together though…

            Followers   Engagements   %
Twitter     ~446,000    3,753         0.8%
Instagram   ~2,800      2,685         96%
YouTube     ~51,300     2,455         5%

One interesting thing is that I find the effort was about equal for all of them. You’d think a video would be hardest, but at least that’s just hit-record-hit-stop and minor editing. The other formats take longer to craft with custom text and graphics.

These would be my takeaways from this limited experiment:

  • You need big numbers on Twitter to do much. That’s because the engagement is pretty low. Still, it’s probably our best outlet for getting people to click a link and do something.
  • Instagram has amazing engagement, but it’s hard to send anyone anywhere. It’s still no wonder why people use it. You really do reach your audience there. If you had a strong call to action, I bet you could still get people to do it even with the absence of links (since people know how to search for stuff on the web).
  • While I mentioned that for this example the effort level was fairly even, in general, YouTube is going to require much higher effort. Video production just isn’t the same as farting out a couple of words or a screenshot. With that, and knowing that you’d need absolutely massive numbers to earn anything directly from YouTube, it’s pretty similar to other social networks in that you need to derive value from it abstractly.
  • This was not an idea that “went viral” in any sense. This is just standard-grade engagement, which was good for this experiment. I’m always super surprised at the type of developer tips that go viral. It’s always something I don’t expect, and often something I’m like awwwww we have an article about that too! I’d never bet on or expect anything going viral. Making stuff that your normal audience likes is the ticket.
  • Being active is pretty important. Any chart I’ve seen has big peaks when posts go out regularly and valleys when they don’t. Post regularly = riding the peaks.
  • None of this compares anywhere close to the real jewel of making things: blogging. Blogging is where you have full control and full benefit. The most important thing social media can do is get people over to your own site.


Comparing the Different Types of Native JavaScript Popups

JavaScript has a variety of built-in popup APIs that display special UI for user interaction. Famously:

alert("Hello, World!");

The UI for this varies from browser to browser, but generally you’ll see a little window pop up front and center in a very show-stopping way that contains the message you just passed. Here’s Firefox and Chrome:

Native popups in Firefox (left) and Chrome (right). Note the additional UI in Firefox that prevents the page from triggering additional dialogs more than once. You can also see how Chrome pins the dialog to the top of the window.

There is one big problem you should know about up front

JavaScript popups are blocking.

The entire page essentially stops when a popup is open. You can’t interact with anything on the page while one is open — that’s kind of the point of a “modal” but it’s still a UX consideration you should be keenly aware of. And crucially, no other main-thread JavaScript is running while the popup is open, which could (and probably is) unnecessarily preventing your site from doing things it needs to do.

Nine times out of ten, you’d be better off architecting things so that you don’t have to use such heavy-handed stop-everything behavior. Native JavaScript alerts are also implemented by browsers in such a way that you have zero design control. You can’t control *where* they appear on the page or what they look like when they get there. Unless you absolutely need the complete blocking nature of them, it’s almost always better to use a custom user interface that you can design to tailor the experience for the user.

With that out of the way, let’s look at each one of the native popups.

window.alert();

window.alert("Hello World");

<button onclick="alert('Hello, World!');">Show Message</button>

const button = document.querySelector("button");
button.addEventListener("click", () => {
  alert("Text of button: " + button.innerText);
});

See the Pen alert("Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: Displaying a simple message or debugging the value of a variable.

How it works: This function takes a string and presents it to the user in a popup with a button with an “OK” label. You can only change the message and not any other aspect, like what the button says.

The Alternative: Like the other alerts, if you have to present a message to the user, it’s probably better to do it in a way that’s tailor-made for what you’re trying to do.

If you’re trying to debug the value of a variable, consider console.log("Value of variable:", variable); and looking in the console.

window.confirm();

window.confirm("Are you sure?");

<button onclick="confirm('Would you like to play a game?');">Ask Question</button>

let answer = window.confirm("Do you like cats?");
if (answer) {
  // User clicked OK
} else {
  // User clicked Cancel
}

See the Pen confirm("Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: “Are you sure?”-style messages to see if the user really wants to complete the action they’ve initiated.

How it works: You can provide a custom message, and the popup will give the user the option of “OK” or “Cancel,” a value you can then check to see what was returned.

The Alternative: This is a very intrusive way to prompt the user. As Aza Raskin puts it:

…maybe you don’t want to use a warning at all.

There are any number of ways to ask a user to confirm something. Probably a clear UI with a <button>Confirm</button> wired up to do what you need it to do.
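As a bare-bones sketch of that idea (the IDs are made up, and deleteItem() and closeConfirmUI() are hypothetical functions standing in for whatever your app actually needs to do):

<p>Delete this item?</p>
<button id="confirm-delete">Confirm</button>
<button id="cancel-delete">Cancel</button>

<script>
  // deleteItem() and closeConfirmUI() are hypothetical, for illustration only.
  document.querySelector("#confirm-delete").addEventListener("click", deleteItem);
  document.querySelector("#cancel-delete").addEventListener("click", closeConfirmUI);
</script>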

window.prompt();

window.prompt("What’s your name?");

let answer = window.prompt("What is your favorite color?");
// answer is what the user typed in, if anything

See the Pen prompt("Example?", "Default Example"); by Elliot KG (@ElliotKG) on CodePen.

What it’s for: Prompting the user for an input. You provide a string (probably formatted like a question) and the user sees a popup with that string, an input they can type into, and “OK” and “Cancel” buttons.

How it works: If the user clicks OK, you’ll get what they entered into the input. If they enter nothing and click OK, you’ll get an empty string. If they choose Cancel, the return value will be null.

The Alternative: Like all of the other native JavaScript alerts, this doesn’t allow you to style or position the alert box. It’s probably better to use a <form> to get information from the user. That way you can provide more context and purposeful design.
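A minimal sketch of that approach (the field names are just for illustration):

<form id="color-form">
  <label for="favorite-color">What is your favorite color?</label>
  <input id="favorite-color" name="favoriteColor" type="text">
  <button type="submit">Submit</button>
</form>

<script>
  document.querySelector("#color-form").addEventListener("submit", (event) => {
    event.preventDefault(); // keep the page from reloading
    const answer = event.target.elements.favoriteColor.value;
    console.log("Favorite color:", answer);
  });
</script>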

window.onbeforeunload();

window.addEventListener("beforeunload", (event) => {
  // Standard requires the default to be cancelled.
  event.preventDefault();
  // Chrome requires returnValue to be set (via MDN)
  event.returnValue = '';
});

See the Pen Example of beforeunload event by Chris Coyier (@chriscoyier) on CodePen.

What it’s for: Warning the user before they leave the page. That sounds like it could be very obnoxious, but it isn’t often used obnoxiously. It’s used on sites where you can be doing work and need to explicitly save it. If the user hasn’t saved their work and is about to navigate away, you can use this to warn them. If they *have* saved their work, you should remove the listener so they can leave freely.

How it works: If you’ve attached the beforeunload event to the window (and done the extra things as shown in the snippet above), users will see a popup asking them to confirm if they would like to “Leave” or “Cancel” when attempting to leave the page. Leaving the site may be because the user clicked a link, but it could also be the result of clicking the browser’s refresh or back buttons. You cannot customize the message.

MDN warns that some browsers require the page to be interacted with for it to work at all:

To combat unwanted pop-ups, some browsers don’t display prompts created in beforeunload event handlers unless the page has been interacted with. Moreover, some don’t display them at all.

The Alternative: Nothing that comes to mind. If this is a matter of a user losing work or not, you kinda have to use this. And if they choose to stay, you should be clear about what they should do to make sure it’s safe to leave.

Accessibility

Native JavaScript alerts used to be frowned upon in the accessibility world, but it seems that screen readers have since become smarter in how they deal with them. According to Penn State Accessibility:

The use of an alert box was once discouraged, but they are actually accessible in modern screen readers.

It’s important to take accessibility into account when making your own modals, but there are some great resources like this post by Ire Aderinokun to point you in the right direction.

General alternatives

There are a number of alternatives to native JavaScript popups such as writing your own, using modal window libraries, and using alert libraries. Keep in mind that nothing we’ve covered can fully block JavaScript execution and user interaction, but some can come close by greying out the background and forcing the user to interact with the modal before moving forward.

You may want to look at HTML’s native <dialog> element. Chris recently took a hands-on look at it. It’s compelling, but apparently suffers from some significant accessibility issues. I’m not entirely sure if building your own would end up better or worse, since handling modals is an extremely non-trivial interactive element to dabble in. Some UI libraries, like Bootstrap, offer modals, but the accessibility is still largely in your hands. You might want to peek at projects like a11y-dialog.
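If you do experiment with <dialog>, a minimal sketch looks something like this (the ID and messages are placeholders):

<dialog id="confirm-dialog">
  <p>Are you sure?</p>
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="ok">OK</button>
  </form>
</dialog>

<script>
  const dialog = document.querySelector("#confirm-dialog");
  dialog.showModal(); // makes the rest of the page inert and dims it via ::backdrop
  dialog.addEventListener("close", () => {
    console.log("User chose:", dialog.returnValue); // "ok" or "cancel"
  });
</script>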

Wrapping up

Using built-in APIs of the web platform can seem like you’re doing the right thing — instead of shipping buckets of JavaScript to replicate things, you’re using what we already have built-in. But there are serious limitations, UX concerns, and performance considerations at play here, none of which land particularly in favor of using the native JavaScript popups. It’s important to know what they are and how they can be used, but you probably won’t need them a heck of a lot in production web sites.


Filtering Data Client-Side: Comparing CSS, jQuery, and React

Say you have a list of 100 names:

<ul>
  <li>Randy Hilpert</li>
  <li>Peggie Jacobi</li>
  <li>Ethelyn Nolan Sr.</li>
  <!-- and then some -->
</ul>

…or file names, or phone numbers, or whatever. And you want to filter them client-side, meaning you aren’t making a server-side request to search through data and return results. You just want to type “rand” and have it filter the list to include “Randy Hilpert” and “Danika Randall” because they both have that string of characters in them. Everything else isn’t included in the results.

Let’s look at how we might do that with different technologies.

CSS can sorta do it, with a little help.

CSS can’t select things based on the content they contain, but it can select on attributes and the values of those attributes. So let’s move the names into attributes as well.

<ul>
  <li data-name="Randy Hilpert">Randy Hilpert</li>
  <li data-name="Peggie Jacobi">Peggie Jacobi</li>
  <li data-name="Ethelyn Nolan Sr.">Ethelyn Nolan Sr.</li>
  ...
</ul>

Now to filter that list for names that contain “rand”, it’s very easy:

li {
  display: none;
}
li[data-name*="rand" i] {
  display: list-item;
}

Note the i on Line 4. That means “case insensitive” which is very useful here.

To make this work dynamically with a filter <input>, we’ll need to get JavaScript involved to not only react to the filter being typed in, but generate CSS that matches what is being searched.

Say we have a <style> block sitting on the page:

<style id="cssFilter">
  /* dynamically generated CSS will be put in here */
</style>

We can watch for changes on our filter input and generate that CSS:

filterElement.addEventListener("input", e => {
  let filter = e.target.value;
  let css = filter ? `
    li {
      display: none;
    }
    li[data-name*="${filter}" i] {
      display: list-item;
    }
  ` : ``;
  window.cssFilter.innerHTML = css;
});

Note that we’re emptying out the style block when the filter is empty, so all results show.

See the Pen Filtering Technique: CSS by Chris Coyier (@chriscoyier) on CodePen.

I’ll admit it’s a smidge weird to leverage CSS for this, but Tim Carry once took it way further if you’re interested in the concept.

jQuery makes it even easier.

Since we need JavaScript anyway, perhaps jQuery is an acceptable tool. There are two notable changes here:

  • jQuery can select items based on the content they contain. It has a selector API just for this. We don’t need the extra attribute anymore.
  • This keeps all the filtering to a single technology.

We still watch the input for typing, then if we have a filter term, we hide all the list items and reveal the ones that contain our filter term. Otherwise, we reveal them all again:

const listItems = $("li");

$("#filter").on("input", function() {
  let filter = $(this).val();
  if (filter) {
    listItems.hide();
    $(`li:contains('${filter}')`).show();
  } else {
    listItems.show();
  }
});

It takes more fiddling to make the filter case-insensitive than it does in CSS, but we can do it by overriding the default method:

jQuery.expr[':'].contains = function(a, i, m) {
  return jQuery(a).text().toUpperCase()
    .indexOf(m[3].toUpperCase()) >= 0;
};

See the Pen Filtering Technique: jQuery by Chris Coyier (@chriscoyier) on CodePen.

React can do it with state and rendering only what it needs.

There is no one-true-way to do this in React, but I would think it’s React-y to keep the list of names as data (like an Array), map over them, and only render what you need. Changes in the input filter the data itself and React re-renders as necessary.

If we have an array like names = [array, of, names], we can filter it pretty easily:

filteredNames = names.filter(name => {
  return name.includes(filter);
});

This time, case insensitivity can be handled like this:

filteredNames = names.filter(name => {
  return name.toUpperCase().includes(filter.toUpperCase());
});

Then we’d do the typical .map() thing in JSX to loop over our array and output the names.
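Putting those pieces together, a stripped-down version of that component might look like this (the component and variable names are mine, not the exact demo code):

function FilterableList({ names }) {
  const [filter, setFilter] = React.useState("");

  // Case-insensitive filter, as above
  const filteredNames = names.filter(name =>
    name.toUpperCase().includes(filter.toUpperCase())
  );

  return (
    <>
      <input value={filter} onChange={e => setFilter(e.target.value)} />
      <ul>
        {filteredNames.map(name => (
          <li key={name}>{name}</li>
        ))}
      </ul>
    </>
  );
}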

See the Pen Filtering Technique: React by Chris Coyier (@chriscoyier) on CodePen.

I don’t have any particular preference

This isn’t the kind of thing you choose a technology for. You do it in whatever technology you already have. I also don’t think any one approach is particularly heavier than the rest in terms of technical debt.
