
The Difference Between Web Sockets, Web Workers, and Service Workers

Web Sockets, Web Workers, Service Workers… these are terms you may have read or overheard. Maybe not all of them, but likely at least one of them. And even if you have a good handle on front-end development, there’s a good chance you need to look up what they mean. Or maybe you’re like me and mix them up from time to time. The terms all look and sound awfully similar, and it’s really easy to get them confused.

So, let’s break them down together and distinguish Web Sockets, Web Workers, and Service Workers. Not in the nitty-gritty sense where we do a deep dive and get hands-on experience with each one — more like a little helper to bookmark for the next time you (or I) need a refresher.

Quick reference

We’ll start with a high-level overview for a quick compare and contrast.

  • Web Socket: Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection triggered by events.
  • Web Worker: Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread.
  • Service Worker: A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations.

Web Sockets

A Web Socket is a two-way communication protocol. Think of this like an ongoing call between you and your friend that won’t end unless one of you decides to hang up. The only difference is that you are the browser and your friend is the server. The client sends a request to the server and the server responds by processing the client’s request and vice-versa.

[Illustration: two women representing the browser and server, respectively, with arrows between them showing the flow of communication in an active connection.]

The communication is based on events. A WebSocket object is established and connects to a server, and messages sent between the client and server trigger events that handle sending and receiving them.

This means that when the initial connection is made, we have a client-server communication where a connection is initiated and kept alive until either the client or server chooses to terminate it by sending a CloseEvent. That makes Web Sockets ideal for applications that require continuous and direct communication between a client and a server. Most definitions I’ve seen call out chat apps as a common use case — you type a message, send it to the server, trigger an event, and the server responds with data without having to ping the server over and again.
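Here’s a minimal sketch of that event-driven flow in the browser. The URL is a placeholder for your own WebSocket server:

// Establish a connection (the URL is a placeholder)
const socket = new WebSocket('wss://example.com/chat');

// The open event fires once the connection is established
socket.addEventListener('open', function () {
  socket.send('Hello from the browser!');
});

// The message event fires whenever the server pushes data
socket.addEventListener('message', function (event) {
  console.log('Message from server:', event.data);
});

// The close event fires when either side hangs up
socket.addEventListener('close', function (event) {
  console.log('Connection closed with code', event.code);
});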

Consider this scenario: You’re on your way out and you decide to switch on Google Maps. You probably already know how Google Maps works, but if you don’t, it finds your location automatically after you connect to the app and keeps track of it wherever you go. It uses real-time data transmission to keep track of your location as long as this connection is alive. That’s a Web Socket establishing a persistent two-way conversation between the browser and server to keep that data up to date. A sports app with real-time scores might also make use of Web Sockets this way.

The big difference between Web Sockets and Web Workers (and, by extension as we’ll see, Service Workers) is that Web Sockets have direct access to the DOM. Whereas Web Workers (and Service Workers) run on separate threads, Web Sockets are part of the main thread, which gives them the ability to manipulate the DOM.

There are tools and services to help establish and maintain Web Socket connections, including: SocketCluster, AsyncAPI, cowboy, WebSocket King, Channels, and Gorilla WebSocket. MDN has a running list that includes other services.

More Web Sockets information

Web Workers

Consider a scenario where you need to perform a bunch of complex calculations while at the same time making changes to the DOM. JavaScript is single-threaded, so running more than one script might disrupt the user interface you are trying to make changes to, as well as the complex calculation being performed.

This is where Web Workers come into play.

Web Workers allow scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. That makes them great for enhancing the performance of applications that require intensive operations, since those operations can be performed in the background on separate threads without blocking the user interface from rendering. But they’re no good at accessing the DOM because, unlike Web Sockets, a Web Worker runs outside the main thread in its own thread.

A Web Worker is created with the Worker object, which executes a script file in a separate thread to carry out its tasks. And when we talk about workers, they tend to fall into one of three types (a minimal sketch of a dedicated worker follows the list):

  • Dedicated Workers: A dedicated worker is only within reach of the script that calls it. It still executes the tasks of a typical web worker, such as running scripts in a separate thread.
  • Shared Workers: A shared worker is the opposite of a dedicated worker. It can be accessed by multiple scripts and can perform practically any task that a web worker executes, as long as those scripts exist in the same domain as the worker.
  • Service Workers: A service worker acts as a network proxy between an app, the browser, and the server, allowing scripts to run even in the event when the network goes offline. We’re going to get to this in the next section.
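Here’s what that first type looks like in practice — a minimal sketch of a dedicated worker, where worker.js is a placeholder for your own script:

// main.js: create the worker and hand it some data
const worker = new Worker('worker.js');

// Send the worker numbers to crunch off the main thread
worker.postMessage({ numbers: [1, 2, 3, 4, 5] });

// Receive the result without blocking the user interface
worker.addEventListener('message', function (event) {
  console.log('Sum from worker:', event.data);
});

// worker.js: runs in its own thread and has no DOM access
self.addEventListener('message', function (event) {
  const sum = event.data.numbers.reduce(function (a, b) {
    return a + b;
  }, 0);
  self.postMessage(sum);
});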

More Web Workers information

Service Workers

There are some things we have no control over as developers, and one of those things is a user’s network connection. Whatever network a user connects to is what it is. We can only do our best to optimize our apps so they perform the best they can on any connection that happens to be used.

Service Workers are one of the things we can use to progressively enhance an app’s performance. A service worker sits between the app, the browser, and the server, providing a secure connection that runs in the background on a separate thread, thanks to — you guessed it — Web Workers. As we learned in the last section, Service Workers are one of three types of Web Workers.

So, why would you ever need a service worker sitting between your app and the user’s browser? Again, we have no control over the user’s network connection. Say the connection gives out for some unknown reason. That would break communication between the browser and the server, preventing data from being passed back and forth. A service worker maintains the connection, acting as an async proxy that is capable of intercepting requests and executing tasks — even after the network connection is lost.

[Illustration: a gear cog labeled Service Worker between a browser icon labeled client and a cloud icon labeled server.]

This is the main driver of what’s often referred to as “offline-first” development. We can store assets in the local cache instead of the network, provide critical information if the user goes offline, prefetch things so they’re ready when the user needs them, and provide fallbacks in response to network errors. They’re fully asynchronous but, unlike Web Sockets, they have no access to the DOM since they run on their own threads.

The other big thing to know about Service Workers is that they intercept every single request and response from your app. As such, they have some security implications, most notably that they follow a same-origin policy. So, that means no running a service worker from a CDN or third-party service. They also require a secure HTTPS connection, which means you’ll need an SSL certificate for them to run.

More Service Workers information

Wrapping up

That’s a super high-level explanation of the differences (and similarities) between Web Sockets, Web Workers, and Service Workers. Again, the terminology and concepts are similar enough to mix one up with another, but hopefully, this gives you a better idea of how to distinguish them.

We kicked things off with a quick reference table. Here’s the same thing, but slightly expanded to draw thicker comparisons.

  • Web Socket: Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection triggered by events. Multithreaded? Runs on the main thread. HTTPS? Not required. DOM access? Yes.
  • Web Worker: Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. Multithreaded? Runs on a separate thread. HTTPS? Required. DOM access? No.
  • Service Worker: A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations. Multithreaded? Runs on a separate thread. HTTPS? Required. DOM access? No.



Add a Service Worker to Your Site

One of the best things you can do for your website in 2022 is add a service worker, if you don’t have one in place already. Service workers give your website super powers. Today, I want to show you some of the amazing things that they can do, and give you a paint-by-numbers boilerplate that you can use to start using them on your site right away.

What are service workers?

A service worker is a special type of JavaScript file that acts like middleware for your site. Any request that comes from the site, and any response it gets back, first goes through the service worker file. Service workers also have access to a special cache where they can save responses and assets locally.

Together, these features allow you to…

  • Serve frequently accessed assets from your local cache instead of the network, reducing data usage and improving performance.
  • Provide access to critical information (or even your entire site or app) when the visitor goes offline.
  • Prefetch important assets and API responses so they’re ready when the user needs them.
  • Provide fallback assets in response to HTTP errors.

In short, service workers allow you to build faster and more resilient web experiences.

Unlike regular JavaScript files, service workers do not have access to the DOM. They also run on their own thread, and as a result, don’t block other JavaScript from running. Service workers are designed to be fully asynchronous.

Security

Because service workers intercept every request and response for your site or app, they have some important security limitations.

Service workers follow a same-origin policy.

You can’t run your service worker from a CDN or third party. It has to be hosted at the same domain as where it will be run.

Service workers only work on sites with an installed SSL certificate.

Many web hosts provide SSL certificates at no cost or for a small fee. If you’re comfortable with the command line, you can also install one for free using Let’s Encrypt.

There is an exception to the SSL certificate requirement for localhost testing, but you can’t run your service worker from the file:// protocol. You need to have a local server running.

Adding a service worker to your site or web app

To use a service worker, the first thing we need to do is register it with the browser. You can register a service worker using the navigator.serviceWorker.register() method. Pass in the path to the service worker file as an argument.

navigator.serviceWorker.register('sw.js');

You can run this in an external JavaScript file, but I prefer to run it directly in a script element inline in my HTML so that it runs as soon as possible.

Unlike other types of JavaScript files, service workers only work for the directory in which they exist (and any of its sub-directories). A service worker file located at /js/sw.js would only work for files in the /js directory. As a result, you should place your service worker file inside the root directory of your site.

While service workers have fantastic browser support, it’s a good idea to make sure the browser supports them before running your registration script.

if (navigator && navigator.serviceWorker) {
  navigator.serviceWorker.register('sw.js');
}

After the service worker installs, the browser can activate it. Typically, this only happens when…

  • there is no service worker currently active, or
  • the user refreshes the page.

The service worker won’t run or intercept requests until it’s activated.

Listening for requests in a service worker

Once the service worker is active, it can start intercepting requests and running other tasks. We can listen for requests with self.addEventListener() and the fetch event.

// Listen for request events
self.addEventListener('fetch', function (event) {
  // Do stuff...
});

Inside the event listener, the event.request property is the request object itself. For ease, we can save it to the request variable.

Certain versions of the Chromium browser have a bug that throws an error if the page is opened in a new tab. Fortunately, there’s a simple fix from Paul Irish that I include in all of my service workers, just in case:

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

});

Once your service worker is active, every single request is sent through it, and will be intercepted with the fetch event.

Service worker strategies

Once your service worker is installed and activated, you can intercept requests and responses, and handle them in various ways. There are two primary strategies you can use in your service worker:

  1. Network-first. With a network-first approach, you pass along requests to the network. If the request isn’t found, or there’s no network connectivity, you then look for the request in the service worker cache.
  2. Offline-first. With an offline-first approach, you check for a requested asset in the service worker cache first. If it’s not found, you send the request to the network.

Network-first and offline-first approaches work in tandem. You will likely mix-and-match approaches depending on the type of asset being requested.

Offline-first is great for large assets that don’t change very often: CSS, JavaScript, images, and fonts. Network-first is a better fit for frequently updated assets like HTML and API requests.

Strategies for caching assets

How do you get assets into your browser’s cache? You’ll typically use two different approaches, depending on the types of assets.

  1. Pre-cache on install. Every site and web app has a set of core assets that are used on almost every page: CSS, JavaScript, a logo, favicon, and fonts. You can pre-cache these during the install event, and serve them using an offline-first approach whenever they’re requested.
  2. Cache as you browse. Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them.

You can then serve those cached assets, either by default or as a fallback, depending on your approach.

Implementing network-first and offline-first strategies in your service worker

Inside a fetch event in your service worker, the request.headers.get('Accept') method returns the MIME type for the content. We can use that to determine what type of file the request is for. MDN has a list of common files and their MIME types. For example, HTML files have a MIME type of text/html.

We can pass the type of file we’re looking for into the String.includes() method as an argument, and use if statements to respond in different ways based on the file type.

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

  // HTML files
  // Network-first
  if (request.headers.get('Accept').includes('text/html')) {
    // Handle HTML files...
    return;
  }

  // CSS & JavaScript
  // Offline-first
  if (request.headers.get('Accept').includes('text/css') || request.headers.get('Accept').includes('text/javascript')) {
    // Handle CSS and JavaScript files...
    return;
  }

  // Images
  // Offline-first
  if (request.headers.get('Accept').includes('image')) {
    // Handle images...
  }

});

Network-first

Inside each if statement, we use the event.respondWith() method to modify the response that’s sent back to the browser.

For assets that use a network-first approach, we use the fetch() method, passing in the request, to send the request for the HTML file along to the network. If it returns successfully, we’ll return the response in our callback function. This is the same behavior as not having a service worker at all.

If there’s an error, we can use Promise.catch() to modify the response instead of showing the default browser error message. We can use the caches.match() method to look for that page, and return it instead of the network response.

// Send the request to the network first
// If it's not found, look in the cache
event.respondWith(
  fetch(request).then(function (response) {
    return response;
  }).catch(function (error) {
    return caches.match(request).then(function (response) {
      return response;
    });
  })
);

Offline-first

For assets that use an offline-first approach, we’ll first check inside the browser cache using the caches.match() method. If a match is found, we’ll return it. Otherwise, we’ll use the fetch() method to pass the request along to the network.

// Check the cache first
// If it's not found, send the request to the network
event.respondWith(
  caches.match(request).then(function (response) {
    return response || fetch(request).then(function (response) {
      return response;
    });
  })
);

Pre-caching core assets

Inside an install event listener in the service worker, we can use the caches.open() method to open a service worker cache. We pass in the name we want to use for the cache, app, as an argument.

The cache is scoped and restricted to your domain. Other sites can’t access it, and if they have a cache with the same name the contents are kept entirely separate.

The caches.open() method returns a Promise. If a cache already exists with this name, the Promise will resolve with it. If not, it will create the cache first, then resolve.

// Listen for the install event
self.addEventListener('install', function (event) {
  event.waitUntil(caches.open('app'));
});

Next, we can chain a then() method to our caches.open() method with a callback function.

In order to add files to the cache, we need to request them, which we can do with the new Request() constructor. We can use the cache.add() method to add the file to the service worker cache. Then, we return the cache object.

We want the install event to wait until we’ve cached our file before completing, so let’s wrap our code in the event.waitUntil() method:

// Listen for the install event
self.addEventListener('install', function (event) {

  // Cache the offline.html page
  event.waitUntil(caches.open('app').then(function (cache) {
    cache.add(new Request('offline.html'));
    return cache;
  }));

});

I find it helpful to create an array with the paths to all of my core files. Then, inside the install event listener, after I open my cache, I can loop through each item and add it.

let coreAssets = [
  '/css/main.css',
  '/js/main.js',
  '/img/logo.svg',
  '/img/favicon.ico'
];

// On install, cache some stuff
self.addEventListener('install', function (event) {

  // Cache core assets
  event.waitUntil(caches.open('app').then(function (cache) {
    for (let asset of coreAssets) {
      cache.add(new Request(asset));
    }
    return cache;
  }));

});

Cache as you browse

Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them. On subsequent visits, you can load them directly from cache (with an offline-first approach) or serve them as a fallback if the network fails (using a network-first approach).

When a fetch() method returns a successful response, we can use the Response.clone() method to create a copy of it.

Next, we can use the caches.open() method to open our cache. Then, we’ll use the cache.put() method to save the copied response to the cache, passing in the request and copy of the response as arguments. Because this is an asynchronous function, we’ll wrap our code in the event.waitUntil() method. This prevents the event from ending before we’ve saved our copy to cache. Once the copy is saved, we can return the response as normal.

Note that we use cache.put() instead of cache.add() because we already have a response. Using cache.add() would make another network call.

// HTML files
// Network-first
if (request.headers.get('Accept').includes('text/html')) {
  event.respondWith(
    fetch(request).then(function (response) {

      // Create a copy of the response and save it to the cache
      let copy = response.clone();
      event.waitUntil(caches.open('app').then(function (cache) {
        return cache.put(request, copy);
      }));

      // Return the response
      return response;

    }).catch(function (error) {
      return caches.match(request).then(function (response) {
        return response;
      });
    })
  );
}

Putting it all together

I’ve put together a copy-paste boilerplate for you on GitHub. Add your core assets to the coreAssets array, and register it on your site to get started.

If you do nothing else, this will be a huge boost to your site in 2022.

But there’s so much more you can do with service workers. There are advanced caching strategies for APIs. You can provide an offline page with critical information if a visitor loses their network connection. You can clean up bloated caches as the user browses.
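To give you a taste of that last one, here’s a sketch of clearing out old caches during the activate event. It assumes you version your cache name — app-v2 here is a placeholder — and bump it whenever your cached assets change:

// A sketch of cache cleanup, assuming versioned cache names.
// 'app-v2' is a placeholder; bump it when your assets change.
let cacheVersion = 'app-v2';

// On activate, delete any cache that doesn't match the current version
self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys().then(function (keys) {
      return Promise.all(keys.filter(function (key) {
        return key !== cacheVersion;
      }).map(function (key) {
        return caches.delete(key);
      }));
    })
  );
});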

Jeremy Keith’s book, Going Offline, is a great primer on service workers. If you want to take things to the next level and dig into progressive web apps, Jason Grigsby’s book dives into the various strategies you can use.

And for a pragmatic deep dive you can complete in about an hour, I also have a course and ebook on service workers with lots of code examples and a project you can work on.


Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces it can be daunting, and frustratingly hard to see how they fit seamlessly into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; if you don’t, the guide covers creating one as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from the terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
#   The following are a few example events you can configure
#   NOTE: Please make sure to change your handler code to work with those events
#   Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

In the http event we just configured, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the gateway API that serverless created for us, then find the Invoke URL. It would look something like this.

[Screenshot: the AWS console Settings tab, with a blue notice above it containing the Invoke URL.]

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

[Screenshot: console output showing Hello World.]

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
    events:
      - http:
          path: upload
          method: post
          cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names of the sized resources, before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the upload files, resizes them with Jimp (if needed) and uploads the result to S3. The final result is below.

'use strict';
const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      // Resize only if the image is wider than our max
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
      }
      image.getBuffer(image.getMIME(), (err, body) => {
        if (err) {
          return res(CorsResponse({ error: true, message: err }));
        }
        return res(uploadToS3(newName, body));
      });
    });
  });
};

I’m sorry to dump so much code on you but — this being a post about Amazon Lambda and serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
  .then(resp => resp.json())
  .then(res => {
    if (res.error) {
      // handle errors
    } else {
      // success - woo hoo - update state as needed
    }
  });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

[Screenshot: the AWS buckets interface showing the file uploaded by the Lambda function.]

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire services folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time, data usage, cold start performance, etc. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");

module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] is to tell webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
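As a quick sketch of what that could look like, here’s an environment block under the provider section; the variable name and value are placeholders:

# A sketch of environment variables in serverless.yaml.
# BUCKET_NAME and its value are placeholders.
provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name

Inside the Lambda, you’d then read the value with process.env.BUCKET_NAME.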

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.


Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you’re building a really simple website, you can get away with literally writing raw HTML. It doesn’t take long to need a bit more abstraction than that. Even if you’re building a three-page site, that’s three HTML files, and your programmer’s mind will be looking for ways to not repeat yourself. You’ll probably find a way to “include” all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I’m feeling much more jamstacky and I’d probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too), is that they “hydrate” from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency’s sake.

Here’s Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil’s idea is that a service worker (a native browser technology) can save a cache of a site’s header and footer. Then server requests only need to be made for the “guts” while the full HTML document can be created on the fly.
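To make that concrete, here’s a rough sketch of the idea — not Phil’s actual implementation. It assumes the header and footer partials were pre-cached during the service worker’s install event, and that the server can return a content-only partial for any page path; the /partials paths are placeholders:

// A rough sketch of the idea, not Phil's actual implementation.
// Assumes '/partials/header.html' and '/partials/footer.html' were
// pre-cached on install, and that the server returns a content-only
// partial at '/partials' + pathname (placeholder paths).
self.addEventListener('fetch', function (event) {
  // Only assemble full pages for navigation requests
  if (event.request.mode !== 'navigate') return;

  let pathname = new URL(event.request.url).pathname;

  event.respondWith(
    Promise.all([
      caches.match('/partials/header.html').then(function (r) { return r.text(); }),
      fetch('/partials' + pathname).then(function (r) { return r.text(); }),
      caches.match('/partials/footer.html').then(function (r) { return r.text(); })
    ]).then(function (parts) {
      // Stitch the cached shell and the fetched guts into one document
      return new Response(parts.join(''), {
        headers: { 'Content-Type': 'text/html' }
      });
    })
  );
});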

It’s a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil’s site:

on this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I’d think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.



Workflow Considerations for Using an Image Management Service

There are all these sites out there that want to help you with your images. They do things like optimize your images and help you serve them performantly.

Here’s the type of service I mean.

That’s a very good thing. By any metric, images are a major slice of the resources on websites, and we’re notoriously bad at optimizing them and doing all the things we could to lower the performance hit from them. I’m sitting at a conference right now and Dave just bet everyone in the audience $100 that he could find an unoptimized image on their site. I wasn’t about to take him up on it.

So you use some service to help you deliver images better. Smart. Many of them will make managing and optimizing images a lot easier. But I don’t consider them a no-brainer. There is a lot to think about, like making choices that don’t paint you into a corner.

I should be able to upload images from my own CMS.

I don’t want to go to your site to upload my assets. I want to use the media management in my own CMS. So, the service should have an API at a minimum, and possibly even officially maintained CMS plugins.

This site uses WordPress. I can drag and drop images into the media library and posts very easily. I can search my media library for images I’ve uploaded before. I like that, and I want to take advantage of it today, and as it evolves.

The images should be uploaded to my own server.

If it also has to be uploaded to the image service, that’s fine. But it should go to my server first, then to the service. That way, I still maintain ownership of the source file.

Images within content should use functional, semantic markup in my CMS.

I’d prefer that the images within content are stored as totally functional HTML in my database:

<img src="/images/flower.jpg" alt="a blue flower">

It could be fancier than that, like using srcset (but probably not sizes as that will change as the design changes), or be contained within <picture> or <figure> elements… whatever you like that makes sense as semantic HTML. The most important thing being that the content in my database has fully functional HTML with a src on the image that points to a real image on my real server.
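For illustration, the stored markup might reasonably grow into something like this; the file paths and widths here are made up:

<figure>
  <img src="/images/flower.jpg"
       srcset="/images/flower-480.jpg 480w,
               /images/flower-960.jpg 960w,
               /images/flower-1600.jpg 1600w"
       alt="a blue flower">
  <figcaption>A blue flower</figcaption>
</figure>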

The implementation of the image service will involve filtering that HTML to do whatever it needs to do, like replace the URLs to generate fancier responsive image markup and whatnot.

Between having functional HTML and images on my server, that enables me to turn off the image service if I need to. Services have a habit of coming and going, or changing in ways that make them more or less palatable. I don’t want to be locked in; I want freedom. I want to be able to turn off the service and have a perfectly functional site with perfectly functional images, and not be obstructed from moving to a different service — or no service at all.

Even if I didn’t use the service in the past, I want all my images to benefit from it.

I just mentioned filtering the HTML for images in my database. That should happen for all the images on my site, even if they were uploaded and used before I started using the image service.

This probably means the service offers a URL-based “get” API to optimize images on-the-fly, pulled from their canonical locations.

I shouldn’t have to think about format or size.

I want to upload whatever I have. Probably some huge un-optimized screenshot I just took. If I think about it at all, I want to upload something much too big and much too high-quality so that I know I have a great original version available. The service will create optimized, sized, and formatted images as needed.

I also want to upload SVG and have it stay SVG (that’s also optimized).

The images will ultimately be served on a CDN.

CDNs are vital for speed. Australians get images from servers hosted in Australia. Canadians get images from servers hosted in Canada. The servers are configured to be fast and cookie-less and all the fancy over-my-head things that make an asset CDN scream.

The images should serve in the right format.

If you serve images in WebP format to browsers that support it, you’ll probably get as much or more performance out of that optimization than serving re-sized images with responsive images syntax. It’s a big deal.

I want the service to know what the best possible format for any particular image for any particular browser and serve the image in that format. This is going to change over time, so I want the service to stay on top of this so I don’t have to.

I know that, three years ago, that involved formats like JPEG-XR and JPEG 2000. Is that still the case? I have no idea. This is a core value proposition for the service.

It should optimize the images and handle quality.

This is perhaps the most obvious feature and the reason you reach for an image service in the first place. Images need optimization. There are perhaps dozens of image optimization tools/algorithms that aim to squeeze every last byte out of images. The image service probably uses those or even has its own fancy tech for it. Ideally, the default is to optimize an image the most it possibly can be without noticeably hurting the quality, but still allowing me to ratchet it down even more if I want to.

Don’t shame me for using high-pixel density images.

A lot of image services have some sort of tester tool where you drop in a URL and it tells you how bad you’re doing with images. Many of them test the size of the image on the rendered page and compare the dimensions of the original image. If the original image is larger, they tell you could have had savings by sizing it down. That’s obnoxious to me. High-pixel density displays have been around for a long time and it’s no crime to serve them.

It should help me serve the right size for the device it’s on and the perfect responsive syntax if needed.

Not all images benefit from the same responsive breakpoints. Check out the site Responsive Image Breakpoints. It generates versions of the image that are best depending on the image itself. That’s the kind of help I like to see from an image service. Take something hard and automate it for me.

I know I’ll probably need to bring my own sizes attribute because that is very dependent on my own CSS and how the design of the site plays out. It’s still important, and makes me wonder if an image service could step up and help me figure out what my optimal sizes attribute should be for certain images. Like loading my site at different sizes and seeing how large the image renders with my CSS and calculating it from there to use later.

Just me.

This is just my own list of requirements. I feel like it’s fairly reflective of “normal” sites that have a bunch of images and want to do the right thing to serve them.

I didn’t go into all the fancy features image services offer, like being able to tell you that an image contains a giraffe facing west and hasn’t eaten since Thursday while offering to recolor its retinas. I know those things are vital to some companies. This is more about what seems to me the widest and most common use case of just hosting and delivering images in the best way current technology allows.


Social Cards as a Service

I love the idea of programmatically generated images. That power is close at hand these days for us front-end developers, thanks to the concept of headless browsers. Take Puppeteer, the library for controlling headless Chrome. Generating images from URLs is their default use case:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();

That ought to get the ol’ mind grape going. What if we had URLs on our site that — with the power of our HTML and CSS skills — created perfect little designs for sharing using dynamic data… then turned them into images and used them for our meta tags?

The first I saw of this idea was Drew McLellan’s Dynamic Social Sharing Images. Drew wrote a script to fire up Puppeteer and get the job done.

Since the design part is entirely an HTML/CSS adventure, I’m sure you could imagine a setup where the URL passed in parameters that did things like set copy and typography, colors, sizes, etc. Zeit built exactly that!

The URL is like this:

https://og-image.now.sh/I%20am%20Chris%20and%20I%20am%20**cool**%20la%20tee%20ding%20dong%20da..png?theme=light&md=1&fontSize=100px&images=https%3A%2F%2Fassets.zeit.co%2Fimage%2Fupload%2Ffront%2Fassets%2Fdesign%2Fhyper-color-logo.svg

Kind of amazing that you can spin up an entire browser in a cloud function! Netlify also offers cloud functions, and when I mentioned this to Phil Hawksworth, he told me he was already doing this for his blog!

So on a blog post like this one, an image like this is automatically generated:

Which is inserted as meta:

<meta property="og:image" content="https://www.hawksworx.com/card-image/-blog-find-that-at-card.png">

I dug through Phil’s repos, naturally, and found his little machine for doing it.

I’m madly envious of all this and need to get one set up for myself.
