
Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces, it can be daunting and frustratingly hard to see how they fit into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and, most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post piques your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda, you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes. And if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from the terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
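It looks roughly like this (the exact boilerplate varies a bit between framework versions, so treat this as a sketch rather than a guaranteed listing):

'use strict';

// Boilerplate handler generated by the aws-nodejs template
module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};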

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With Serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Below the configuration above, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via fetch. But… where is it? We could crack open our AWS console, find the API Gateway instance that Serverless created for us, then find the Invoke URL. It would look something like this.

[Screenshot: the AWS console’s Settings tab, with the Invoke URL shown in a blue notice above the Cache Settings.]

Fortunately, there is an easier way, which is to type sls info into our terminal:
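The output will look roughly like this (exact fields vary by framework version; the values here mirror the example URL below):

service: hello-world
stage: dev
region: us-east-1
stack: hello-world-dev
endpoints:
  GET - https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg
functions:
  hello: hello-world-dev-hello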

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our fetch call will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

[Screenshot: the dev console logging “HELLO, WORLD!”]

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
    events:
      - http:
          path: upload
          method: post
          cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to grant access to the S3 bucket we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.
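If you ever want to be more granular than s3:*, it helps to know that bucket-level and object-level actions target different ARNs. A sketch (the action list here is illustrative, not what this post requires):

iamRoleStatements:
  # Bucket-level actions target the bucket ARN itself…
  - Effect: "Allow"
    Action:
      - "s3:ListBucket"
    Resource: "arn:aws:s3:::your-bucket-name"
  # …while object-level actions target paths inside the bucket
  - Effect: "Allow"
    Action:
      - "s3:PutObject"
      - "s3:GetObject"
    Resource: "arn:aws:s3:::your-bucket-name/*"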

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function a more sensible name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid is for creating new, unique file names for the resized images before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.
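For reference, here’s the rough shape the parser resolves to, inferred from how we use it below (treat the exact field names as an assumption and check the library’s docs):

// Approximate shape of what awsMultiPartParser.parse(event) resolves
// to — inferred from usage in this post, not official docs.
const formPayload = {
  files: [
    {
      filename: "photo.png",     // original file name
      content: Buffer.from([]),  // the file's bytes, as a Buffer
      contentType: "image/png",
      fieldname: "fileUploaded"  // the FormData field it arrived under
    }
  ]
  // non-file form fields show up as additional top-level properties
};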

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: fileName, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the uploaded files, resizes them with Jimp (if needed), and uploads the result to S3. The final result is below.

'use strict';

const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: fileName, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;

  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;

      // Only resize when the image is wider than our max
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
      }
      image.getBuffer(image.getMIME(), (err, body) => {
        if (err) {
          return res(CorsResponse({ error: true, message: err }));
        }
        return res(uploadToS3(newName, body));
      });
    });
  });
};

I’m sorry to dump so much code on you, but — this being a post about AWS Lambda and Serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
    .then(resp => resp.json())
    .then(res => {
      if (res.error) {
        // handle errors
      } else {
        // success - woo hoo - update state as needed
      }
    });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

[Screenshot: the S3 bucket interface in the AWS console, showing the file uploaded by the Lambda function.]

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire service folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler cuts deploy time and data usage, and improves cold start performance. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before

plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");

module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it leaves the const AWS = require("aws-sdk"); call alone, to be resolved by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, mainFields: ["main"] tells webpack to resolve packages via their CommonJS main field rather than any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d append what you want to the FormData request object from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token or other file info.
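On the Lambda side, reading it looks something like this sketch (assuming lambda-multipart-parser exposes non-file fields as top-level properties, consistent with its usage above):

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);

  // Non-file FormData fields arrive as top-level properties
  console.log(formPayload.xyz); // "Hi there"

  // ...rest of the handler as before
};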

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
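A minimal sketch of what that looks like in serverless.yaml (the variable name here is hypothetical), which the Lambda then reads with process.env.BUCKET_NAME:

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name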

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.


Smaller HTML Payloads with Service Workers

Short story: Philip Walton has a clever idea for using service workers to cache the top and bottom of HTML files, reducing a lot of network weight.

Longer thoughts: When you’re building a really simple website, you can get away with literally writing raw HTML. It doesn’t take long to need a bit more abstraction than that. Even if you’re building a three-page site, that’s three HTML files, and your programmer’s mind will be looking for ways to not repeat yourself. You’ll probably find a way to “include” all the stuff at the top and bottom of the HTML, and just change the content in the middle.

I have tended to reach for PHP for that sort of thing in the past (<?php include('header.php'); ?>), although these days I’m feeling much more jamstacky and I’d probably do it with Eleventy and Nunjucks.

Or, you could go down the SPA (Single Page App) route just for this basic abstraction if you want. Next and Nuxt are perhaps a little heavy-handed for a few includes, but hey, at least they are easy to work with and the result is a nice static site. The thing about these JavaScript-powered SPA frameworks (Gatsby is in here, too), is that they “hydrate” from static sites into SPAs as the JavaScript loads. Part of the reason for that is speed. No longer does the browser need to reload and request a whole big HTML page again to render; it just asks for whatever smaller amount of data it needs and replaces it on the fly.

So, in a sense, you might build a SPA because you have a common header and footer and just want to replace the guts, for efficiency’s sake.

Here’s Phil:

In a traditional client-server setup, the server always needs to send a full HTML page to the client for every request (otherwise the response would be invalid). But when you think about it, that’s pretty wasteful. Most sites on the internet have a lot of repetition in their HTML payloads because their pages share a lot of common elements (e.g. the <head>, navigation bars, banners, sidebars, footers etc.). But in an ideal world, you wouldn’t have to send so much of the same HTML, over and over again, with every single page request.

With service workers, there’s a solution to this problem. A service worker can request just the bare minimum of data it needs from the server (e.g. an HTML content partial, a Markdown file, JSON data, etc.), and then it can programmatically transform that data into a full HTML document.

So rather than PHP, Eleventy, a JavaScript framework, or any other solution, Phil’s idea is that a service worker (a native browser technology) can save a cache of a site’s header and footer. Then server requests only need to be made for the “guts” while the full HTML document can be created on the fly.
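In sketch form, the idea looks something like this. To be clear, this is not Phil’s actual implementation — the ?partial=true query param and the hard-coded shell strings are assumptions for illustration, and the real version caches the shell and streams the response:

// sw.js — a toy sketch of the idea, not Philip Walton's implementation.
// Assumes the server can return a content-only partial for any page
// when asked via a (hypothetical) ?partial=true query param.
const SHELL_START = '<!DOCTYPE html><html><head><title>…</title></head><body><header>…</header><main>';
const SHELL_END = '</main><footer>…</footer></body></html>';

self.addEventListener('fetch', event => {
  // Only intercept full-page navigations
  if (event.request.mode !== 'navigate') return;

  event.respondWith((async () => {
    const url = new URL(event.request.url);
    url.searchParams.set('partial', 'true');

    // Fetch just the page's "guts" and wrap them in the cached shell
    const partial = await fetch(url).then(resp => resp.text());
    return new Response(SHELL_START + partial + SHELL_END, {
      headers: { 'Content-Type': 'text/html' }
    });
  })());
});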

It’s a super fancy idea, and no joke to implement, but the fact that it could be done with less tooling might be appealing to some. On Phil’s site:

on this site over the past 30 days, page loads from a service worker had 47.6% smaller network payloads, and a median First Contentful Paint (FCP) that was 52.3% faster than page loads without a service worker (416ms vs. 851ms).

Aside from configuring a service worker, I’d think the most finicky part is having to configure your server/API to deliver a content-only version of your stuff or build two flat file versions of everything.



Workflow Considerations for Using an Image Management Service

There are all these sites out there that want to help you with your images. They do things like optimize your images and help you serve them performantly.

Here’s the type of service I mean.

That’s a very good thing. By any metric, images are a major slice of the resources on websites, and we’re notoriously bad at optimizing them and doing all the things we could to lower the performance hit from them. I’m sitting at a conference right now and Dave just bet everyone in the audience $100 that he could find an unoptimized image on their site. I wasn’t about to take him up on it.

So you use some service to help you deliver images better. Smart. Many of them will make managing and optimizing images a lot easier. But I don’t consider them a no-brainer. There is a lot to think about, like making choices that don’t paint you into a corner.

I should be able to upload images from my own CMS.

I don’t want to go to your site to upload my assets. I want to use the media management in my own CMS. So, the service should have an API at a minimum, and possibly even officially maintained CMS plugins.

This site uses WordPress. I can drag and drop images into the media library and posts very easily. I can search my media library for images I’ve uploaded before. I like that, and I want to take advantage of it today, and as it evolves.

The images should be uploaded to my own server.

If it also has to be uploaded to the image service, that’s fine. But it should go to my server first, then to the service. That way, I still maintain ownership of the source file.

Images within content should use functional, semantic markup in my CMS.

I’d prefer that the images within content are stored as totally functional HTML in my database:

<img src="/images/flower.jpg" alt="a blue flower">

It could be fancier than that, like using srcset (but probably not sizes, as that will change as the design changes), or be contained within <picture> or <figure> elements… whatever you like that makes sense as semantic HTML. The most important thing is that the content in my database has fully functional HTML with a src on the image that points to a real image on my real server.
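For instance, a fancier-but-still-functional version might look like this (a sketch; the file names are made up):

<figure>
  <img
    src="/images/flower.jpg"
    srcset="/images/flower-800.jpg 800w,
            /images/flower-1600.jpg 1600w"
    alt="a blue flower">
  <figcaption>A blue flower</figcaption>
</figure>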

The implementation of the image service will involve filtering that HTML to do whatever it needs to do, like replace the URLs to generate fancier responsive image markup and whatnot.
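A minimal sketch of that kind of filter, assuming a hypothetical service with a URL-based fetch API (the service domain and transform params here are made up):

// Rewrites local image URLs to point at a hypothetical image service's
// URL-based fetch API. Purely illustrative — real services differ.
const rewriteImages = html =>
  html.replace(/src="(\/images\/[^"]+)"/g, (_, src) =>
    `src="https://img.example-service.com/fetch/f_auto,q_auto/https://my-site.com${src}"`
  );

// <img src="/images/flower.jpg" alt="a blue flower"> becomes:
// <img src="https://img.example-service.com/fetch/f_auto,q_auto/https://my-site.com/images/flower.jpg" alt="a blue flower">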

Between having functional HTML and images on my server, that enables me to turn off the image service if I need to. Services have a habit of coming and going, or changing in ways that make them more or less palatable. I don’t want to be locked in; I want freedom. I want to be able to turn off the service and have a perfectly functional site with perfectly functional images, and not be obstructed from moving to a different service — or no service at all.

Even if I didn’t use the service in the past, I want all my images to benefit from it.

I just mentioned filtering the HTML for images in my database. That should happen for all the images on my site, even if they were uploaded and used before I started using the image service.

This probably means the service offers a URL-based “get” API that optimizes images on the fly, pulled from their canonical locations.

I shouldn’t have to think about format or size.

I want to upload whatever I have. Probably some huge un-optimized screenshot I just took. If I think about it at all, I want to upload something much too big and much too high-quality so that I know I have a great original version available. The service will create optimized, sized, and formatted images as needed.

I also want to upload SVG and have it stay SVG (that’s also optimized).

The images will ultimately be served on a CDN.

CDNs are vital for speed. Australians get images from servers hosted in Australia. Canadians get images from servers hosted in Canada. The servers are configured to be fast and cookie-less and all the fancy over-my-head things that make an asset CDN scream.

The images should serve in the right format.

If you serve images in WebP format to browsers that support it, you’ll probably get as much or more performance out of that optimization than serving re-sized images with responsive images syntax. It’s a big deal.

I want the service to know what the best possible format for any particular image for any particular browser and serve the image in that format. This is going to change over time, so I want the service to stay on top of this so I don’t have to.

I know that, three years ago, that involved formats like JPEG-XR and JPEG-2000. Is that still the case? I have no idea. This is a core value proposition for the service.
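Under the hood, this kind of negotiation usually keys off the request’s Accept header. Something like this toy sketch (illustrative only; a real service is far more thorough):

// The browser's Accept header advertises which image formats it
// supports; the service picks the best one it can produce.
function pickFormat(acceptHeader = "") {
  if (acceptHeader.includes("image/webp")) return "webp";
  return "jpg"; // universally supported fallback
}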

It should optimize the images and handle quality.

This is perhaps the most obvious feature and the reason you reach for an image service in the first place. Images need optimization. There are perhaps dozens of image optimization tools/algorithms that aim to squeeze every last byte out of images. The image service probably uses those or even has its own fancy tech for it. Ideally, the default is to optimize an image the most it possibly can be without noticeably hurting the quality, but still allowing me to ratchet it down even more if I want to.

Don’t shame me for using high-pixel density images.

A lot of image services have some sort of tester tool where you drop in a URL and it tells you how badly you’re doing with images. Many of them test the size of the image on the rendered page and compare it to the dimensions of the original image. If the original image is larger, they tell you that you could have had savings by sizing it down. That’s obnoxious to me. High-pixel-density displays have been around for a long time, and it’s no crime to serve them.

It should help me serve the right size for the device it’s on and the perfect responsive syntax if needed.

Not all images benefit from the same responsive breakpoints. Check out the site Responsive Image Breakpoints. It generates the set of image versions that works best for each particular image. That’s the kind of help I like to see from an image service. Take something hard and automate it for me.

I know I’ll probably need to bring my own sizes attribute because that is very dependent on my own CSS and how the design of the site plays out. It’s still important, and it makes me wonder if an image service could step up and help me figure out what my optimal sizes attribute should be for certain images. For example, it could load my site at different sizes, see how large the image renders with my CSS, and calculate it from there to use later.

Just me.

This is just my own list of requirements. I feel like it’s fairly reflective of “normal” sites that have a bunch of images and want to do the right thing to serve them.

I didn’t go into all the fancy features image services offer, like being able to tell you that an image contains a giraffe facing west and hasn’t eaten since Thursday while offering to recolor its retinas. I know those things are vital to some companies. This is more about what seems to me the widest and most common use case of just hosting and delivering images in the best way current technology allows.


Social Cards as a Service

I love the idea of programmatically generated images. That power is close at hand these days for us front-end developers, thanks to the concept of headless browsers. Take Puppeteer, the library for controlling headless Chrome. Generating images from URLs is its default use case:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();

That ought to get the ol’ mind grape going. What if we had URLs on our site that — with the power of our HTML and CSS skills — created perfect little designs for sharing using dynamic data… then turned them into images and used them for our meta tags?
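In sketch form, the rendering half of that idea might look something like this (the markup, sizes, and function name are illustrative, not anyone’s production code):

const puppeteer = require('puppeteer');

// Render a hand-designed card with HTML/CSS, then screenshot it at
// the standard social card size. Everything here is illustrative.
async function renderCard(title) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 }); // common og:image size
  await page.setContent(`
    <style>
      body { margin: 0; display: grid; place-items: center; font-family: sans-serif; }
      h1 { font-size: 72px; padding: 0 80px; }
    </style>
    <h1>${title}</h1>
  `);
  const buffer = await page.screenshot({ type: 'png' });
  await browser.close();
  return buffer;
}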

The first I saw of this idea was Drew McLellan’s Dynamic Social Sharing Images. Drew wrote a script to fire up Puppeteer and get the job done.

Since the design part is entirely an HTML/CSS adventure, I’m sure you could imagine a setup where the URL passed in parameters that did things like set copy and typography, colors, sizes, etc. Zeit built exactly that!

The URL is like this:

https://og-image.now.sh/I%20am%20Chris%20and%20I%20am%20**cool**%20la%20tee%20ding%20dong%20da..png?theme=light&md=1&fontSize=100px&images=https%3A%2F%2Fassets.zeit.co%2Fimage%2Fupload%2Ffront%2Fassets%2Fdesign%2Fhyper-color-logo.svg

Kind of amazing that you can spin up an entire browser in a cloud function! Netlify also offers cloud functions, and when I mentioned this to Phil Hawksworth, he told me he was already doing this for his blog!

So on a blog post like this one, an image like this is automatically generated:

Which is inserted as meta:

<meta property="og:image" content="https://www.hawksworx.com/card-image/-blog-find-that-at-card.png">

I dug through Phil’s repos, naturally, and found his little machine for doing it.

I’m madly envious of all this and need to get one set up for myself.
