
Reliably Send an HTTP Request as a User Leaves a Page

On several occasions, I’ve needed to send off an HTTP request with some data to log when a user does something like navigate to a different page or submit a form. Consider this contrived example of sending some information to an external service when a link is clicked:

<a href="/some-other-page" id="link">Go to Page</a>  <script> document.getElementById('link').addEventListener('click', (e) => {   fetch("/log", {     method: "POST",     headers: {       "Content-Type": "application/json"     },      body: JSON.stringify({       some: "data"     })   }); }); </script>

There’s nothing terribly complicated going on here. The link is permitted to behave as it normally would (I’m not using e.preventDefault()), but before that behavior occurs, a POST request is triggered on click. There’s no need to wait for any sort of response. I just want it to be sent to whatever service I’m hitting.

At first glance, you might expect the dispatch of that request to be synchronous, after which we’d continue navigating away from the page while some other server successfully handles that request. But as it turns out, that’s not always what happens.

Browsers don’t guarantee to preserve open HTTP requests

When something occurs to terminate a page in the browser, there’s no guarantee that an in-process HTTP request will be successful (see more about the “terminated” and other states of a page’s lifecycle). The reliability of those requests may depend on several things — network connection, application performance, and even the configuration of the external service itself.

As a result, sending data at those moments can be anything but reliable, which presents a potentially significant problem if you’re relying on those logs to make data-sensitive business decisions.

To help illustrate this unreliability, I set up a small Express application with a page using the code included above. When the link is clicked, the browser navigates to /other, but before that happens, a POST request is fired off.

While everything happens, I have the browser’s Network tab open, and I’m using a “Slow 3G” connection speed. Once the page loads and I’ve cleared the log out, things look pretty quiet:

Viewing HTTP request in the network tab

But as soon as the link is clicked, things go awry. When navigation occurs, the request is cancelled.

Viewing HTTP request fail in the network tab

And that leaves us with little confidence that the external service was actually able to process the request. Just to verify this behavior, the same cancellation occurs when we navigate programmatically with window.location:

document.getElementById('link').addEventListener('click', (e) => {
+ e.preventDefault();

  // Request is queued, but cancelled as soon as navigation occurs.
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      some: 'data'
    }),
  });

+ window.location = e.target.href;
});

Regardless of how or when navigation occurs and the active page is terminated, those unfinished requests are at risk of being abandoned.

But why are they cancelled?

The root of the issue is that, by default, XHR requests (via fetch or XMLHttpRequest) are asynchronous and non-blocking. As soon as the request is queued, the actual work of the request is handed off to a browser-level API behind the scenes.

As it relates to performance, this is good — you don’t want requests hogging the main thread. But it also means there’s a risk of them being deserted when a page enters into that “terminated” state, leaving no guarantee that any of that behind-the-scenes work reaches completion. Here’s how Google summarizes that specific lifecycle state:

A page is in the terminated state once it has started being unloaded and cleared from memory by the browser. No new tasks can start in this state, and in-progress tasks may be killed if they run too long.

In short, the browser is designed with the assumption that when a page is dismissed, there’s no need to continue processing any background tasks it queued.

So, what are our options?

Perhaps the most obvious approach to avoid this problem is, as much as possible, to delay the user action until the request returns a response. In the past, this has been done the wrong way by use of the synchronous flag supported within XMLHttpRequest. But using it completely blocks the main thread, causing a host of performance issues — I’ve written about some of this in the past — so the idea shouldn’t even be entertained. In fact, it’s on its way out of the platform (Chrome v80+ has already removed it).
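For context, the discouraged synchronous pattern looked roughly like the sketch below. It blocks the main thread until the response arrives, which is exactly what makes it such a bad idea (shown for illustration only):

// Passing `false` as the third argument to open() makes the request
// synchronous, freezing the main thread until a response comes back.
// Don't use this; it's shown only to illustrate the old approach.
const xhr = new XMLHttpRequest();
xhr.open('POST', '/log', false); // false = synchronous
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ some: 'data' }));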

Instead, if you’re going to take this type of approach, it’s better to wait for a Promise to resolve as a response is returned. After it’s back, you can safely perform the behavior. Using our snippet from earlier, that might look something like this:

document.getElementById('link').addEventListener('click', async (e) => {
  e.preventDefault();

  // Wait for response to come back...
  await fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      some: 'data'
    }),
  });

  // ...and THEN navigate away.
  window.location = e.target.href;
});

That gets the job done, but there are some non-trivial drawbacks.

First, it compromises the user’s experience by delaying the desired behavior from occurring. Collecting analytics data certainly benefits the business (and hopefully future users), but it’s less than ideal to make your present users pay the cost to realize those benefits. Not to mention, as an external dependency, any latency or other performance issues within the service itself will be surfaced to the user. If timeouts from your analytics service keep a customer from completing a high-value action, everyone loses.

Second, this approach isn’t as reliable as it initially sounds, since some termination behaviors can’t be programmatically delayed. For example, e.preventDefault() is useless in delaying someone from closing a browser tab. So, at best, it’ll cover collecting data for some user actions, but not enough to be able to trust it comprehensively.

Instructing the browser to preserve outstanding requests

Thankfully, there are options to preserve outstanding HTTP requests that are built into the vast majority of browsers, and that don’t require the user experience to be compromised.

Using Fetch’s keepalive flag

If the keepalive flag is set to true when using fetch(), the corresponding request will remain open, even if the page that initiated that request is terminated. Using our initial example, that’d make for an implementation that looks like this:

<a href="/some-other-page" id="link">Go to Page</a>  <script>   document.getElementById('link').addEventListener('click', (e) => {     fetch("/log", {       method: "POST",       headers: {         "Content-Type": "application/json"       },        body: JSON.stringify({         some: "data"       }),        keepalive: true     });   }); </script>

When that link is clicked and page navigation occurs, no request cancellation occurs:

Viewing HTTP request succeed in the network tab

Instead, we’re left with an (unknown) status, simply because the active page never waited around to receive any sort of response.

A one-liner like this is an easy fix, especially when it’s part of a commonly used browser API. But if you’re looking for a more focused option with a simpler interface, there’s another way with virtually the same browser support.

Using Navigator.sendBeacon()

The Navigator.sendBeacon() function is specifically intended for sending one-way requests (beacons). A basic implementation looks like this, sending a POST with stringified JSON and a “text/plain” Content-Type:

navigator.sendBeacon('/log', JSON.stringify({
  some: "data"
}));

But this API doesn’t permit you to send custom headers. So, in order for us to send our data as “application/json”, we’ll need to make a small tweak and use a Blob:

<a href="/some-other-page" id="link">Go to Page</a>  <script>   document.getElementById('link').addEventListener('click', (e) => {     const blob = new Blob([JSON.stringify({ some: "data" })], { type: 'application/json; charset=UTF-8' });     navigator.sendBeacon('/log', blob));   }); </script>

In the end, we get the same result — a request that’s allowed to complete even after page navigation. But there’s something more going on that may give it an edge over fetch(): beacons are sent with a low priority.

To demonstrate, here’s what’s shown in the Network tab when both fetch() with keepalive and sendBeacon() are used at the same time:

Viewing HTTP request in the network tab

By default, fetch() gets a “High” priority, while the beacon (noted as the “ping” type above) has the “Lowest” priority. For requests that aren’t critical to the functionality of the page, this is a good thing. Taken straight from the Beacon specification:

This specification defines an interface that […] minimizes resource contention with other time-critical operations, while ensuring that such requests are still processed and delivered to destination.

Put another way, sendBeacon() ensures its requests stay out of the way of those that really matter for your application and your user’s experience.

An honorable mention for the ping attribute

It’s worth mentioning that a growing number of browsers support the ping attribute. When attached to links, it’ll fire off a small POST request:

<a href="http://localhost:3000/other" ping="http://localhost:3000/log">   Go to Other Page </a>

And that request’s headers will contain the page on which the link was clicked (ping-from), as well as the href value of that link (ping-to):

headers: {
  'ping-from': 'http://localhost:3000/',
  'ping-to': 'http://localhost:3000/other',
  'content-type': 'text/ping',
  // ...other headers
},

It’s technically similar to sending a beacon, but has a few notable limitations:

  1. It’s strictly limited for use on links, which makes it a non-starter if you need to track data associated with other interactions, like button clicks or form submissions.
  2. Browser support is good, but not great. At the time of this writing, Firefox specifically doesn’t have it enabled by default.
  3. You’re unable to send any custom data along with the request. As mentioned, the most you’ll get is a couple of ping-* headers, along with whatever other headers are along for the ride.

All things considered, ping is a good tool if you’re fine with sending simple requests and don’t want to write any custom JavaScript. But if you need to send anything of more substance, it might not be the best thing to reach for.

So, which one should I reach for?

There are definitely tradeoffs to using either fetch with keepalive or sendBeacon() to send your last-second requests. To help discern which is the most appropriate for different circumstances, here are some things to consider:

You might go with fetch() + keepalive if:

  • You need to easily pass custom headers with the request.
  • You want to make a GET request to a service, rather than a POST.
  • You’re supporting older browsers (like IE) and already have a fetch polyfill being loaded.

But sendBeacon() might be a better choice if:

  • You’re making simple service requests that don’t need much customization.
  • You prefer the cleaner, more elegant API.
  • You want to guarantee that your requests don’t compete with other high-priority requests being sent in the application.

Avoid repeating my mistakes

There’s a reason I chose to do a deep dive into the nature of how browsers handle in-process requests as a page is terminated. A while back, my team saw a sudden change in the frequency of a particular type of analytics log after we began firing the request just as a form was being submitted. The change was abrupt and significant — a ~30% drop from what we had been seeing historically.

Digging into the reasons this problem arose, as well as the tools that are available to avoid it again, saved the day. So, if anything, I’m hoping that understanding the nuances of these challenges helps someone avoid some of the pain we ran into. Happy logging!



Generate a Pull Request of Static Content With a Simple HTML Form

Jamstack has been in the website world for years. Static Site Generators (SSGs) — which often have content that lives right within a GitHub repo itself — are a big part of that story. That opens up the idea of having contributors that can open pull requests to add, change, or edit content. Very useful!

Here are a few examples of sites built this way:

Why build with a static site approach?

When we need to build content-based sites like this, it’s common to think about what database to use. Keeping content in a database is a time-honored good idea. But it’s not the only approach! SSGs can be a great alternative because…

  • They are cheap and easy to deploy. SSGs are usually free, making them great for an MVP or a proof of concept.
  • They have great security. There is nothing to hack through the browser, as all the site contains is often just static files.
  • You’re ready to scale. The host you’re already on can handle it.

There is another advantage for us when it comes to a content site. The content of the site itself can be written in static files right in the repo. That means that adding and updating content can happen right from pull requests on GitHub, for example. Even for the non-technically inclined, it opens the door to things like Netlify CMS and the concept of open authoring, allowing for community contributions.

But let’s go the super lo-fi route and embrace the idea of pull requests for content, using nothing more than basic HTML.

The challenge

How people contribute by adding or updating a resource isn’t always perfectly straightforward. People need to understand how to fork your repository, how and where to add their content, content formatting standards, required fields, and all sorts of stuff. They might even need to “spin up” the site themselves locally to ensure the content looks right.

People who genuinely want to help the site will sometimes back off because the process of contributing is a technological hurdle with a learning curve, which is a shame.

You know what anybody can do? Use a <form>

Just like on a normal website, the easy way for people to contribute is to fill out a form with the content they want to add and submit it.

What if we could give users a way to contribute content to our sites with nothing more than an HTML <form> designed to take exactly the content we need? But instead of the form posting to a database, it goes the route of a pull request against our static site generator? There is a trick!

The trick: Create a GitHub pull request with query parameters

Here’s a little-known trick: we can pre-fill a pull request against our repository by adding query parameters to a special GitHub URL. This comes right from the GitHub docs themselves.

Let’s reverse engineer this.

If we know we can pre-fill a link, then we need to generate the link. We’re trying to make this easy, remember. To generate this dynamic, data-filled link, we’ll use a touch of JavaScript.

So now, how do we generate this link after the user submits the form?

Demo time!

Let’s take the Serverless site from CSS-Tricks as an example. Currently, the only way to add a new resource is by forking the repo on GitHub and adding a new Markdown file. But let’s see how we can do it with a form instead of jumping through those hoops.

The Serverless site itself has many categories (e.g. for forms) we can contribute to. For the sake of simplicity, let’s focus on the “Resources” category. People can add articles about things related to Serverless or Jamstack from there.

The Resources page of the CSS-Tricks Serverless site. The site is primarily purple in varying shades with accents of orange. The page shows a couple of serverless resources in the main area and a list of categories in the right sidebar.

All of the resource files are in this folder in the repo.

Showing the main page of the CSS-Tricks Serverless repo in GitHub, displaying all the files.

Just picking a random file from there to explore the structure…

---
title: "How to deploy a custom domain with the Amplify Console"
url: "https://read.acloud.guru/how-to-deploy-a-custom-domain-with-the-amplify-console-a884b6a3c0fc"
author: "Nader Dabit"
tags: ["hosting", "amplify"]
---

In this tutorial, we’ll learn how to add a custom domain to an Amplify Console deployment in just a couple of minutes.

Looking over that content, our form must have these fields:

  • Title
  • URL
  • Author
  • Tags
  • Snippet or description of the link.

So let’s build an HTML form for all those fields:

<div class="columns container my-2">   <div class="column is-half is-offset-one-quarter">   <h1 class="title">Contribute to Serverless Resources</h1>    <div class="field">     <label class="label" for="title">Title</label>     <div class="control">       <input id="title" name="title" class="input" type="text">     </div>   </div>      <div class="field">     <label class="label" for="url">URL</label>     <div class="control">       <input id="url" name="url" class="input" type="url">     </div>   </div>        <div class="field">     <label class="label" for="author">Author</label>     <div class="control">       <input id="author" class="input" type="text" name="author">     </div>   </div>      <div class="field">     <label class="label" for="tags">Tags (comma separated)</label>     <div class="control">       <input id="tags" class="input" type="text" name="tags">     </div>   </div>        <div class="field">     <label class="label" for="description">Description</label>     <div class="control">       <textarea id="description" class="textarea" name="description"></textarea>     </div>   </div>       <!-- Prepare the JavaScript function for later -->   <div class="control">     <button onclick="validateSubmission();" class="button is-link is-fullwidth">Submit</button>   </div>        </div> </div>

I’m using Bulma for styling, so the class names in use here are from that.

Now we write a JavaScript function that transforms the user’s input into a URL we can attach as GitHub query parameters to our pull request. Here is the step by step (with a rough sketch of the function after the list):

  • Get the user’s input about the content they want to add
  • Generate a string from all that content
  • Encode the string so it can safely be included as part of a URL
  • Attach the encoded string to a complete URL pointing to GitHub’s page for new pull requests
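
Putting those steps together, a minimal sketch of that function might look like the following. It assumes GitHub’s “create new file” page accepts filename and value query parameters for pre-filling (the pattern described in the GitHub docs mentioned above), and the repository path and target folder here are placeholders, not the real ones:

// A rough sketch of the link-building function wired to the form above.
// The repo path and target folder are placeholders; `filename` and `value`
// are the query parameters GitHub's new-file page accepts for pre-filling.
function validateSubmission() {
  const title = document.getElementById('title').value;
  const url = document.getElementById('url').value;
  const author = document.getElementById('author').value;
  const tags = document.getElementById('tags').value;
  const description = document.getElementById('description').value;

  // Build the Markdown file content, matching the front matter structure shown earlier.
  const content = `---
title: "${title}"
url: "${url}"
author: "${author}"
tags: [${tags.split(',').map((tag) => `"${tag.trim()}"`).join(', ')}]
---

${description}`;

  // Turn the title into a simple slug for the new file's name.
  const slug = title.toLowerCase().trim().replace(/\s+/g, '-');

  // Point at GitHub's new-file page for the repo and pre-fill it via query parameters.
  const githubUrl = 'https://github.com/OWNER/REPO/new/main' + // placeholder repo path
    `?filename=resources/${encodeURIComponent(slug)}.md` +     // placeholder folder
    `&value=${encodeURIComponent(content)}`;

  // Send the user to GitHub, where committing the file is proposed as a
  // pull request when they don't have write access to the repository.
  window.open(githubUrl, '_blank');
}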

Here is the Pen:

After pressing the Submit button, the user is taken right to GitHub with an open pull request for this new file in the right location.

GitHub pull request screen showing a new file with content.

Quick caveat: Users still need a GitHub account to contribute. But this is still much easier than having to know how to fork a repo and create a pull request from that fork.

Other benefits of this approach

Well, for one, this is a form that lives on our site. We can style it however we want. That sort of control is always nice to have.

Secondly, since we’ve already written the JavaScript, we can use the same basic idea to talk with other services or APIs in order to process the input first. For example, if we need information from a website (like the title, meta description, or favicon) we can fetch this information just by providing the URL.

Taking things further

Let’s have a play with that second point above. We could pre-fill our form by fetching information from the URL the user provides, rather than making them enter it by hand.

With that in mind, let’s now only ask the user for two inputs (rather than four) — just the URL and tags.

How does this work? We can fetch meta information from a website with JavaScript just by having its URL. There are many APIs that fetch information from a website, but you might try the one that I built for this project. Try hitting any URL like this:

https://metadata-api.vercel.app/api?url=https://css-tricks.com

The demo above uses that as an API to pre-fill data based on the URL the user provides. Easier for the user!
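
Here’s a minimal sketch of how that kind of pre-filling could work. It assumes the metadata API responds with JSON containing title and description fields; those field names are assumptions for illustration, not guarantees about the API:

// Fetch metadata for a URL and use it to pre-fill the form fields the user
// no longer has to type by hand. The `title` and `description` response
// fields are assumed for illustration.
async function prefillFromUrl(url) {
  const response = await fetch(`https://metadata-api.vercel.app/api?url=${encodeURIComponent(url)}`);
  const metadata = await response.json();

  document.getElementById('title').value = metadata.title || '';
  document.getElementById('description').value = metadata.description || '';
}

// For example, run it once the URL field loses focus.
document.getElementById('url').addEventListener('blur', (e) => {
  if (e.target.value) prefillFromUrl(e.target.value);
});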

Wrapping up

You could think of this as a very minimal CMS for any kind of Static Site Generator. All you need to do is customize the form and update the pre-filled query parameters to match the data formats you need.

How will you use this sort of thing? The four sites we saw at the very beginning are good examples. But there are so many other times where you might need to do something with a user submission, and this might be a low-lift way to do it.

