Tag: Workers

The Difference Between Web Sockets, Web Workers, and Service Workers

Web Sockets, Web Workers, Service Workers… these are terms you may have read or overheard. Maybe not all of them, but likely at least one of them. And even if you have a good handle on front-end development, there’s a good chance you need to look up what they mean. Or maybe you’re like me and mix them up from time to time. The terms all look and sound awfully similar, and it’s really easy to get them confused.

So, let’s break them down together and distinguish Web Sockets, Web Workers, and Service Workers. Not in the nitty-gritty sense where we do a deep dive and get hands-on experience with each one — more like a little helper to bookmark the next time you need a refresher.

Quick reference

We’ll start with a high-level overview for a quick compare and contrast.

Feature | What it is
Web Socket | Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events.
Web Worker | Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread.
Service Worker | A type of Web Worker that creates a background service acting as middleware for handling network requests between the browser and server, even in offline situations.

Web Sockets

A Web Socket is a two-way communication protocol. Think of this like an ongoing call between you and your friend that won’t end unless one of you decides to hang up. The only difference is that you are the browser and your friend is the server. The client sends a request to the server and the server responds by processing the client’s request and vice-versa.

Illustration of two women representing the browser and server, respectively. Arrows between them show the flow of communication in an active connection.

The communication is based on events. A WebSocket object is created and connects to a server, and messages passed between the client and server trigger events that send and receive them.

This means that when the initial connection is made, we have a client-server communication where a connection is initiated and kept alive until either the client or server chooses to terminate it by sending a CloseEvent. That makes Web Sockets ideal for applications that require continuous and direct communication between a client and a server. Most definitions I’ve seen call out chat apps as a common use case — you type a message, send it to the server, trigger an event, and the server responds with data without having to ping the server over and again.
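To make that concrete, here’s a minimal sketch of the event-driven flow using the browser’s built-in WebSocket API. The wss://example.com/chat endpoint is a made-up placeholder, not a real service:

// Open a persistent, two-way connection to the server
const socket = new WebSocket("wss://example.com/chat");

// Fires once the connection is established
socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "chat", text: "Hello, server!" }));
});

// Fires every time the server pushes a message over the open connection
socket.addEventListener("message", (event) => {
  console.log("Message from server:", event.data);
});

// Fires when either side hangs up (a CloseEvent)
socket.addEventListener("close", (event) => {
  console.log(`Connection closed with code ${event.code}`);
});

Notice there is no polling anywhere: once the connection is open, messages flow in both directions until one side closes it.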

Consider this scenario: You’re on your way out and you decide to switch on Google Maps. You probably already know how Google Maps works, but if you don’t, it finds your location automatically after you connect to the app and keeps track of it wherever you go. It uses real-time data transmission to keep track of your location as long as this connection is alive. That’s a Web Socket establishing a persistent two-way conversation between the browser and server to keep that data up to date. A sports app with real-time scores might also make use of Web Sockets this way.

The big difference between Web Sockets and Web Workers (and, by extension as we’ll see, Service Workers) is that Web Sockets have direct access to the DOM. Whereas Web Workers (and Service Workers) run on separate threads, Web Sockets are part of the main thread, which gives them the ability to manipulate the DOM.

There are tools and services to help establish and maintain Web Socket connections, including: SocketCluster, AsyncAPI, cowboy, WebSocket King, Channels, and Gorilla WebSocket. MDN has a running list that includes other services.

More Web Sockets information

Web Workers

Consider a scenario where you need to perform a bunch of complex calculations while at the same time making changes to the DOM. JavaScript is single-threaded, and running heavy scripts on that one thread can disrupt the user interface you are trying to change as well as the complex calculation being performed.

This is where the Web Workers come into play.

Web Workers allow scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. That makes them great for enhancing the performance of applications that require intensive operations, since those operations can be performed in the background on separate threads without preventing the user interface from rendering. But they’re not so great at accessing the DOM because, unlike Web Sockets, a web worker runs outside the main thread in its own thread.

A Web Worker is created with a Worker object that executes a separate script file to carry out its tasks. And when we talk about workers, they tend to fall into one of three types:

  • Dedicated Workers: A dedicated worker is only reachable by the script that created it. It still executes the tasks of a typical web worker, such as running scripts on a separate thread.
  • Shared Workers: A shared worker is the opposite of a dedicated worker. It can be accessed by multiple scripts and can practically perform any task that a web worker executes, as long as those scripts are served from the same domain as the worker.
  • Service Workers: A service worker acts as a network proxy between an app, the browser, and the server, allowing scripts to run even when the network goes offline. We’re going to get to this in the next section.
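Here’s a minimal sketch of a dedicated worker offloading a calculation so the main thread (and the DOM) stays responsive. The file name heavy-math.js and the message shape are made up for illustration:

// main.js — spin up the worker and talk to it with messages
const worker = new Worker("heavy-math.js");

worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

worker.addEventListener("message", (event) => {
  // The UI kept rendering while the worker crunched the numbers
  console.log("Sum from worker:", event.data);
});

// heavy-math.js — runs on its own thread, with no DOM access
self.addEventListener("message", (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
});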

More Web Workers information

Service Workers

There are some things we have no control over as developers, and one of those things is a user’s network connection. Whatever network a user connects to is what it is. We can only do our best to optimize our apps so they perform the best they can on any connection that happens to be used.

Service Workers are one of the things we can do to progressively enhance an app’s performance. A service worker sits between the app, the browser, and the server, providing a secure connection that runs in the background on a separate thread, thanks to — you guessed it — Web Workers. As we learned in the last section, Service Workers are one of three types of Web Workers.

So, why would you ever need a service worker sitting between your app and the user’s browser? Again, we have no control over the user’s network connection. Say the connection gives out for some unknown reason. That would break communication between the browser and the server, preventing data from being passed back and forth. A service worker maintains the connection, acting as an async proxy that is capable of intercepting requests and executing tasks — even after the network connection is lost.

A gear cog icon labeled Service Worker in between a browser icon labeled client and a cloud icon labeled server.

This is the main driver of what’s often referred to as “offline-first” development. We can store assets in the local cache instead of the network, provide critical information if the user goes offline, prefetch things so they’re ready when the user needs them, and provide fallbacks in response to network errors. They’re fully asynchronous but, unlike Web Sockets, they have no access to the DOM since they run on their own threads.
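As a rough sketch of that offline-first idea, a service worker script can pre-cache critical assets at install time and answer requests from the cache before falling back to the network. The cache name and asset list below are hypothetical:

// sw.js
const CACHE_NAME = "app-cache-v1";

self.addEventListener("install", (event) => {
  // Pre-cache critical assets so they’re available even when offline
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(["/", "/styles.css", "/app.js"])
    )
  );
});

self.addEventListener("fetch", (event) => {
  // Intercept requests: serve from the cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});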

The other big thing to know about Service Workers is that they intercept every single request and response from your app. As such, they have some security implications, most notably that they follow a same-origin policy. So, that means no running a service worker from a CDN or third-party service. They also require a secure HTTPS connection, which means you’ll need an SSL certificate for them to run.
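Registering the worker reflects those constraints: the script has to live on your own origin, and the page has to be served over HTTPS (localhost is the usual exception during development). A minimal sketch, assuming the worker file is at /sw.js:

// main.js
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((registration) => {
      console.log("Service worker registered with scope:", registration.scope);
    })
    .catch((error) => {
      console.error("Service worker registration failed:", error);
    });
}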

More Service Workers information

Wrapping up

That’s a super high-level explanation of the differences (and similarities) between Web Sockets, Web Workers, and Service Workers. Again, the terminology and concepts are similar enough to mix one up with another, but hopefully, this gives you a better idea of how to distinguish them.

We kicked things off with a quick reference table. Here’s the same thing, but slightly expanded to draw thicker comparisons.

Feature | What it is | Multithreaded? | HTTPS? | DOM access?
Web Socket | Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events. | Runs on the main thread | Not required | Yes
Web Worker | Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. | Runs on a separate thread | Not required | No
Service Worker | A type of Web Worker that creates a background service acting as middleware for handling network requests between the browser and server, even in offline situations. | Runs on a separate thread | Required | No

The Difference Between Web Sockets, Web Workers, and Service Workers originally published on CSS-Tricks, which is part of the DigitalOcean family.


Learn How to Build True Edge Apps With Cloudflare Workers and Fauna

(This is a sponsored post.)

In web development, there is a lot of buzz around apps running on the edge instead of on a centralized server. Running your app on the edge allows your code to be closer to your users, which makes it faster. However, there is a spectrum of edge apps. Many apps only have some parts, usually static content, on the edge. But you can move even more to the edge, like computing and databases. This article describes how to do that.

Intro to the edge

First, let’s look at what the edge really is.

The “edge” refers to locations designed to be close to users instead of being at one centralized place. Edge servers are smaller servers put on the edge. Traditionally, servers have been centralized so that there was only one server available. This made websites slower and less reliable. They were slower because the server could often be far away from the user. Say you have two users, one in Singapore and one in the U.S., and your server is in the U.S. For the customer in the U.S., the server would be close, but for the person in Singapore, the signal would have to travel across the entire Pacific. This adds latency, which makes your app slower and less responsive for the user. Placing your servers on the edge mitigates this latency problem.

Normal server architecture

With an edge server design, lighter-weight versions of your server run in multiple different areas, so a user in Singapore would be able to access a server in Singapore, and a user in the U.S. would also be able to access a nearby server. Multiple servers on the edge also make an app more reliable, because if the server in Singapore went offline, the user in Singapore would still be able to access the U.S. server.

Edge architecture

Many apps have more than 100 different server locations on the edge. However, multiple server locations can add significant cost. To make it cheaper and easier for developers to harness the power of the edge, many services offer the ability to easily deploy to the edge without having to spend a lot of money or time managing multiple servers. There are many different types of these. The most basic and widely used is an edge Content Delivery Network (CDN), which allows static content to be served from the edge. However, CDNs cannot do anything more complicated than serving content. If you need databases or custom code on the edge, you will need a more advanced service.

Introducing edge functions and edge databases

Luckily, there are solutions to this. The first, for running code on the edge, is edge functions. These are small pieces of code, automatically provisioned when needed, that are designed to respond to HTTP requests. They are also commonly called serverless functions. However, not all serverless functions run on the edge. Some edge function providers are Lambda@Edge, Cloudflare Workers, and Deno Deploy. In this article, we will focus on Cloudflare Workers.

We can also take databases to the edge to ensure that our serverless functions run fast even when querying a database. There are solutions for this too, the easiest of which is Fauna. With traditional databases, it is very hard or almost impossible to scale to multiple different regions. You have to manage different servers and how database updates are replicated between them. Fauna, however, abstracts all of that away, allowing you to use a cross-region database with a click of a button. It also provides an easy-to-use GraphQL interface and its own query language if you need more. By using Cloudflare Workers and Fauna, we can build a true edge app where everything is run on the edge.
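For a taste of what an edge function looks like, here is a minimal Cloudflare Worker written in the service-worker-style syntax this article’s tooling uses. It answers every HTTP request directly from whichever Cloudflare location is closest to the user; the response body is just a placeholder:

addEventListener("fetch", (event) => {
  event.respondWith(
    new Response("Hello from the edge!", {
      headers: { "content-type": "text/plain" },
    })
  );
});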

Using Cloudflare Workers and Fauna to build a URL shortener

Setting up Cloudflare Workers and the code

URL shorteners need to be fast, which makes Cloudflare Workers and Fauna perfect for this. To get started, clone the repository at github.com/AsyncBanana/url-shortener and change into the generated folder.

git clone https://github.com/AsyncBanana/url-shortener.git
cd url-shortener

Then, install wrangler, the CLI needed for Cloudflare Workers. After that, install all npm dependencies.

npm install -g @cloudflare/wrangler
npm install

Then, sign up for Cloudflare Workers at https://dash.cloudflare.com/sign-up/workers and run wrangler login. Finally, to finish off the Cloudflare Workers setup, run wrangler whoami, take the account ID from there, and put it inside wrangler.toml, which is in the URL shortener.

Setting up Fauna

Good job, now we need to set up Fauna, which will provide the edge database for our URL shortener.

First, register for a Fauna account. Once you have finished that, create a new database by clicking “create database” on the dashboard. Enter URL-Shortener for the name, click classic for the region, and uncheck use demo data.

This is what it should look like

Once you create the database, click Collections on the dashboard sidebar and click “create new collection.” Name the collection urls (matching the lowercase name the code queries) and click save.

Next, click the Security tab on the sidebar and click “New key.” Next, click Save on the modal and copy the resulting API key. You can also name the key, but it is not required. Finally, copy the key into the file named .env in the code under FAUNA_KEY.

This is what the .env file should look like, except with API_KEY_HERE replaced with your key

Good job! Now we can start coding.

Create the URL shortener

There are two main folders in the code, public and src. The public folder is where all of the user-facing files are stored. src is the folder where the server code is. You can look through and edit the HTML, CSS, and client-side JavaScript if you want, but we will be focusing on the server-side code right now. If you look in src, you should see a file called urlManager.js. This is where our URL Shortening code will go.

This is the URL manager

First, we need to write the code that creates shortened URLs. In the function createUrl, create a database query by running FaunaClient.query(). Now, we will use Fauna Query Language (FQL) to structure the query. Fauna Query Language is structured using functions, which are all available under q in this case. When you execute a query, you put all of the functions as arguments in FaunaClient.query(). Inside FaunaClient.query(), add:

q.Create(q.Collection("urls"), {
  data: {
    url: url
  }
})

This creates a new document in the urls collection and puts in an object containing the URL to redirect to. Now, we need to get the id of the document so we can return it as a redirection point. First, to get the document id in the Fauna query, put q.Create in the second argument of q.Select, with the first argument being ["ref", "id"]. This will get the id of the new document. Then, return the value returned by awaiting FaunaClient.query(). The function should now look like this:

return await FaunaClient.query(
    q.Select(
      ["ref", "id"],
      q.Create(q.Collection("urls"), {
        data: {
          url: url,
        },
      })
    )
  );
}

Now, if you run wrangler dev and go to localhost:8787, you should see the URL shortener page. You can enter a URL and click submit, and you should see another URL generated. However, if you go to that URL it will not do anything. Now we need to add the second part of this, the URL redirect.

Look back in urlManager.js. You should see a function called processUrl. In that function, put:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));

This executes a Fauna query that gets the document in the urls collection with the specified id, which is how we look up the destination URL for the id in the shortened link. Next, return res.data.url.url:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));
return res.data.url.url;

Now you should be all set! Just run wrangler publish, go to your workers.dev domain, and try it out!

Conclusion

Now you have a URL shortener that runs entirely on the edge! If you want to add more features or learn more about Fauna or Cloudflare Workers, look below. I hope you have learned something from this, and thank you for reading.

Next steps

  • Further improve the speed of your URL shortener by adding caching
  • Add analytics
  • Read more about Fauna

  • Read more about Cloudflare Workers


The post Learn How to Build True Edge Apps With Cloudflare Workers and Fauna appeared first on CSS-Tricks.


The State Of Web Workers In 2021

You gotta appreciate the tenacity of Surma. He’s been advocating for Web Workers as a path forward to better-feeling websites for a lot of years now. He’s at it again making sure we all understand the landscape:

[…] regardless of where you look, multithreading is used everywhere. iOS empowers developers to easily parallelize code using Grand Central Dispatch, Android does this via their new, unified task scheduler WorkManager and game engines like Unity have job systems. The reason for any of these platforms to not only support multithreading, but making it as easy as possible is always the same: Ensure your app feels great.

Surma, “The State Of Web Workers In 2021”

So pretty much every platform has its own version of multi-threading, including the web. It’s just that on the web we have to sort of “fight” against the single-threaded nature of JavaScript by using Web Workers (which are “universally supported” if you’re wondering about that). The question is: use them how and for what? For the latter, Surma shows off an example of a game where “the entire app state and game logic is running in a worker.” For the former, the helper library comlink looks like a big reduction in toil.
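For a sense of what comlink buys you, here is a rough sketch (comlink is assumed to be installed from npm, and the worker file name is made up). Instead of hand-rolling postMessage plumbing, you expose an object in the worker and call it from the main thread as if it were an async local object:

// main.js
import * as Comlink from "comlink";

const worker = new Worker(new URL("./math-worker.js", import.meta.url), {
  type: "module",
});
const api = Comlink.wrap(worker);

// Each call is proxied to the worker thread and comes back as a promise
const sum = await api.add(2, 3);
console.log(sum); // 5

// math-worker.js — runs off the main thread
import * as Comlink from "comlink";

Comlink.expose({
  add(a, b) {
    return a + b;
  },
});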

Personally, I wish popular tooling would just kinda… do it. I don’t know what that really looks like, but it kinda feels like developer outreach isn’t really moving the needle on this. What if popular tooling like Apollo — which is in charge of a lot of “app state” — were to magically handle all of that off the main thread. Does that make sense? Is it possible?

Direct Link to Article


The post The State Of Web Workers In 2021 appeared first on CSS-Tricks.


WordPress-Powered Landing Pages on a Totally Different Site via Cloudflare Workers

What if you have some content on one site and want to display that content on another site? We can do this in the browser no problem. We can fetch it, and plunk it onto the page.

Ajax, right? Ugh. Now we’re in client-side rendered site territory, which isn’t great for performance, speed, or resiliency.

What if we could fetch that content and stitch it into the main page on the server side? Server side isn’t the right word for it though. What if we could do it at the global CDN level? Do it at the edge, as they say. That’s what we’ve been doing at CodePen, so we can build pages with the lovely WordPress block editor but serve them on our main site.

Direct Link to Article


The post WordPress-Powered Landing Pages on a Totally Different Site via Cloudflare Workers appeared first on CSS-Tricks.



Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you’re building a really simple website, you can get away with literally writing raw HTML. It doesn’t take long to need a bit more abstraction than that. Even if you’re building a three-page site, that’s three HTML files, and your programmer’s mind will be looking for ways to not repeat yourself. You’ll probably find a way to “include” all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I’m feeling much more jamstacky and I’d probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too) is that they “hydrate” from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency’s sake.

Here’s Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil’s idea is that a service worker (a native browser technology) can save a cache of a site’s header and footer. Then server requests only need to be made for the “guts” while the full HTML document can be created on the fly.
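A rough sketch of the idea (not Phil’s actual implementation) might look like this. It assumes the shared header and footer partials were already put in a cache named shell-v1 at install time, and the ?partial=true query string is a made-up convention for asking the server for just the guts of a page:

// sw.js
self.addEventListener("fetch", (event) => {
  // Only page navigations get stitched together; other requests pass through
  if (event.request.mode === "navigate") {
    event.respondWith(buildPage(event.request));
  }
});

async function buildPage(request) {
  const cache = await caches.open("shell-v1");
  const [header, footer, content] = await Promise.all([
    cache.match("/partials/header.html").then((res) => res.text()),
    cache.match("/partials/footer.html").then((res) => res.text()),
    fetch(request.url + "?partial=true").then((res) => res.text()),
  ]);
  // Assemble a full HTML document on the fly from the cached shell + fresh guts
  return new Response(header + content + footer, {
    headers: { "content-type": "text/html" },
  });
}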

It’s a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil’s site:

[…] on this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I’d think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.

Direct Link to Article

The post Smaller HTML Payloads with Service Workers appeared first on CSS-Tricks.
