Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces it can be daunting, and frustratingly hard to see how they fit seamlessly into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from the terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
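
If you’re curious, the generated handler looks roughly like this (the exact message text varies by template version):

'use strict';

module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };
};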

Working with AWS manually, we’d normally need to go into AWS API Gateway, create an endpoint, create a stage, and tell it to proxy to our Lambda. With Serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Below the configuration above, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open the AWS console, find the API that Serverless created for us in API Gateway, then find the Invoke URL. It would look something like this.

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
events:
  - http:
      path: upload
      method: post
      cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.
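
If you’d rather not grant blanket s3:* access, a tighter variant might look like the sketch below (these are standard S3 IAM action names; trim the list to what your function actually does). Object-level actions target the /* resource, while bucket-level actions like listing target the bucket ARN itself:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:PutObject"   # writing the resized image
      - "s3:GetObject"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]
  - Effect: "Allow"
    Action:
      - "s3:ListBucket"  # bucket-level action: note there's no /* here
    Resource: ["arn:aws:s3:::your-bucket-name"]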

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names for the resized images before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the uploaded files, resizes them with Jimp (if needed) and uploads the result to S3. The final result is below.

'use strict';

const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      // only shrink the image if it's wider than our max
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
      }
      image.getBuffer(image.getMIME(), (err, body) => {
        if (err) {
          return res(CorsResponse({ error: true, message: err }));
        }
        return res(uploadToS3(newName, body));
      });
    });
  });
};

I’m sorry to dump so much code on you but — this being a post about AWS Lambda and Serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
    .then(resp => resp.json())
    .then(res => {
      if (res.error) {
        // handle errors
      } else {
        // success - woo hoo - update state as needed
      }
    });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire service folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time and data usage, and improve cold start performance. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");

module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, to be resolved by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] tells webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
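
To make that concrete, here’s a minimal sketch of the Lambda side (xyz being the made-up field name from above):

const awsMultiPartParser = require("lambda-multipart-parser");

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  // files land in formPayload.files; any other appended fields
  // show up as plain properties on the parsed payload
  console.log(formPayload.xyz); // "Hi there"
  // ...
};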

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
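
As a quick sketch (the variable names here are made up):

provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name
    # or pull secrets from a file kept out of git:
    # API_TOKEN: ${file(./env.yml):API_TOKEN}

Inside the Lambda, those arrive as plain process.env.BUCKET_NAME-style values.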

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.


A First Look at `aspect-ratio`

Oh hey! A brand new property that affects how a box is sized! That’s a big deal. There are lots of ways already to make an aspect-ratio sized box (and I’d say this custom properties based solution is the best), but none of them are particularly intuitive and certainly not as straightforward as declaring a single property.

So, with the impending arrival of aspect-ratio (MDN, and not to be confused with the media query version), I thought I’d take a look at how it works and try to wrap my mind around it.

Shout out to Una where I first saw this. Boy howdy did it strike interest in folks:

https://twitter.com/Una/status/1260980901934137345

Just dropping aspect-ratio on an element alone will calculate a height based on the auto width.

Without setting a width, an element will still have a natural auto width. So the height can be calculated from the aspect ratio and the rendered width.

.el {
  aspect-ratio: 16 / 9;
}

If the content breaks out of the aspect ratio, the element will still expand.

The aspect ratio is ignored in that situation, which is actually nice. Better to avoid potential data loss. If you prefer it doesn’t do this, you can always use the padding hack style.

If the element has either a height or width, the other is calculated from the aspect ratio.

So aspect-ratio is basically a way of setting the other dimension when you only have one (demo).
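
For example, give it only a width and the height falls out of the ratio:

.el {
  aspect-ratio: 16 / 9;
  width: 320px; /* height computes to 180px */
}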

If the element has both a height and width, aspect-ratio is ignored.

The combination of an explicit height and width is “stronger” than the aspect ratio.
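
So something like this renders as a 300px square; the declared 2:1 ratio never kicks in:

.el {
  aspect-ratio: 2 / 1;
  width: 300px;
  height: 300px; /* both dimensions set, so the ratio is ignored */
}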

Factoring in min-* and max-*

There is always a little tension between width, min-width, and max-width (or the height versions). One of them always “wins.” It’s generally pretty intuitive.

If you set width: 100px; and min-width: 200px; then min-width will win. So, min-width is either ignored because you’re already over it, or wins. Same deal with max-width: if you set width: 100px; and max-width: 50px; then max-width will win. So, max-width is either ignored because you’re already under it, or wins.

It looks like that general intuitiveness carries on here: the min-* and max-* properties will either win or are irrelevant. And if they win, they break the aspect-ratio.

.el {
  aspect-ratio: 1 / 4;
  height: 500px;

  /* Ignored, because width is calculated to be 125px */
  /* min-width: 100px; */

  /* Wins, making the aspect ratio 1 / 2 */
  /* min-width: 250px; */
}

With value functions

Aspect ratios are always most useful in fluid situations, or anytime you essentially don’t know one of the dimensions ahead of time. But even when you don’t know, you’re often putting constraints on things. Say 50% wide is cool, but you only want it to shrink as far as 200px. You might do width: max(50%, 200px);. Or constrain on both sides with clamp(200px, 50%, 400px);.

This seems to work intuitively:

.el {
  aspect-ratio: 4 / 3;
  width: clamp(200px, 50%, 400px);
}

But say you run into that minimum 200px, and then apply a min-width of 300px? The min-width wins. It’s still intuitive, but it gets brain-bending because of how many properties, functions, and values can be involved.
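
In code, that brain-bender looks something like:

.el {
  aspect-ratio: 4 / 3;
  width: clamp(200px, 50%, 400px);
  min-width: 300px; /* wins over clamp()'s 200px lower bound */
}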

Maybe it’s helpful to think of aspect-ratio as the weakest way to size an element?

It will never beat any other sizing information out, but it will always do its sizing if there is no other information available for that dimension.


First Steps into a Possible CSS Masonry Layout

It’s not in as much demand as, say, container queries, but being able to make “masonry” layouts in CSS has been a big ask for CSS developers for a long time. Masonry being that kind of layout where unevenly-sized elements are laid out in ragged rows. Sorta like a typical brick wall turned sideways.

The layout is already achievable in CSS, but with one big caveat: the items aren’t arranged in rows, they are arranged in columns, which is often a deal-breaker for folks.

/* People usually don't want this */

1  4  6  8
2     7
3  5     9

/* They want this */

1  2  3  4
5  6     7
8     9

If you want that ragged row thing and horizontal source order, you’re in JavaScript territory. Until now, that is, as Firefox rolled this out under a feature flag in Firefox Nightly, as part of CSS grid.

Mats Palmgren:

An implementation of this proposal is now available in Firefox Nightly. It is disabled by default, so you need to load about:config and set the preference layout.css.grid-template-masonry-value.enabled to true to enable it (type “masonry” in the search box on that page and it will show you that pref).
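
Once the flag is on, the experimental syntax hangs off grid-template-rows. A minimal sketch (remember, this could all change):

.masonry {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-template-rows: masonry; /* the experimental value */
}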

Jen Simmons has created some demos already:

Is this really a grid?

A bit of pushback from Rachel Andrew:

Grid isn’t Masonry, because it’s a grid with strict rows and columns. If you take another look at the layout created by Masonry, we don’t have strict rows and columns. Typically we have defined rows, but the columns act more like a flex layout, or Multicol. The key difference between the layout you get with Multicol and a Masonry layout, is that in Multicol the items are displayed by column. Typically in a Masonry layout you want them displayed row-wise.

[…]

Speaking personally, I am not a huge fan of this being part of the Grid specification. It is certainly compelling at first glance, however I feel that this is a relatively specialist layout mode and actually isn’t a grid at all. It is more akin to flex layout than grid layout.

By placing this layout method into the Grid spec I worry that we then tie ourselves to needing to support the Masonry functionality with any other additions to Grid.

None of this is final yet, and there is active CSS Working Group discussion about it.

As Jen said:

This is an experimental implementation — being discussed as a possible CSS specification. It is NOT yet official, and likely will change. Do not write blog posts saying this is definitely a thing. It’s not a thing. Not yet. It’s an experiment. A prototype. If you have thoughts, chime in at the CSSWG.

Houdini?

Last time there was chatter about native masonry, it was mingled with the idea that the CSS Layout API, as part of Houdini, could do this. That is a thing, as you can see by opening this demo (repo) in Chrome Canary.

I’m not totally up to speed on whether Houdini is intended to be a thing so that ideas like this can be prototyped in the browser and ultimately moved out of Houdini, or if the ideas should just stay in Houdini, or what.


Firefox 71: First Out of the Gate With Subgrid

A great release from Firefox this week! See the whole roundup post from Chris Mills. I’m personally stoked to see clip-path: path(); go live, which we’ve been tracking as it’s so clearly useful. We also get column-span: all; which is nice in case you’re one of the few taking advantage of CSS columns.

But there are two other things I think are a very big deal:

  1. If you have fluid images (most sites do) via flexible-width containers and img { max-width: 100%; }, you’re subject to somewhat janky loading as those images load in because the browser doesn’t know how tall the space to reserve needs to be until it knows the dimensions of the image. But now, if you put width/height attributes (e.g. <img width="500" height="250" src="...">), Firefox (and Chrome) will calculate the aspect ratio from those and reserve the correct amount of space. It seems like a small thing, but it really isn’t. It will improve the perceived loading for a zillion sites.
  2. Now we’ve got subgrid! Remember Eric Meyer called them essential years ago. They allow an element to share the grid lines of its parent instead of needing to establish new ones. The result might be less markup flattening and cleaner designs. A grid of “cards” is a great example here, which Miriam covers in this video showing how you can get much more logical space distribution. It must be in the water, as Anton Ball covers the same concept in this post. I’m a fan of how this is progressive-enhancement friendly. You can still set grid columns/rows on an element for browsers that don’t support subgrid, but then use grid-template-columns: subgrid; (or the rows equivalent) in supporting browsers to have it inherit the parent’s lines instead (see the sketch after this list).
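
Here’s a minimal sketch of that card idea (hypothetical class names; the point is that the inner grid reuses the parent’s row tracks):

.cards {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.card {
  grid-row: span 3; /* each card spans three of the parent's rows */
  display: grid;
  grid-template-rows: subgrid; /* header, body, and footer align across all cards */
}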


Weekly Platform News: Tracking via Web Storage, First Input Delay, Navigating by Headings

In this week’s roundup, Safari takes on cross-site tracking, the delay between load and user interaction is greater on mobile, and a new survey says headings are a popular way for screen readers to navigate a webpage.

Let’s get into the news.

Safari’s tracking prevention limits web storage

Some social networks and other websites that engage in cross-site tracking add a query string or fragment to external links for tracking purposes. (Apple calls this “abuse of link decoration.”)

When people navigate from websites with tracking abilities to other websites via such “decorated” links, Safari will expire the cookies that are created on the loaded web pages after 24 hours. This has led some trackers to start using other types of storage (e.g. localStorage) to track people on websites.

As a new countermeasure, Safari will now delete all non-cookie website data in these scenarios if the user hasn’t interacted with the website for seven days.

The reason why we cap the lifetime of script-writable storage is simple. Site owners have been convinced to deploy third-party scripts on their websites for years. Now those scripts are being repurposed to circumvent browsers’ protections against third-party tracking. By limiting the ability to use any script-writable storage for cross-site tracking purposes, [Safari’s tracking prevention] makes sure that third-party scripts cannot leverage the storage powers they have gained over all these websites.

(via John Wilander)

First Input Delay is much worse on mobile

First Input Delay (FID), the delay until the page is able to respond to the user, is much worse on mobile: Only 13% of websites have a fast FID on mobile, compared to 70% on desktop.

Tip: If your website is popular enough to be included in the Chrome UX Report, you can check your site’s mobile vs. desktop FID data on PageSpeed Insights.

(via Rick Viscomi)

Screen reader users navigate web pages by headings

According to WebAIM’s recent screen reader user survey, the most popular screen readers are NVDA (41%) and JAWS (40%) on desktop (primary screen reader) and VoiceOver (71%) and TalkBack (33%) on mobile (commonly used screen readers).

When trying to find information on a web page, most screen reader users navigate the page through the headings (<h1>, <h2>, <h3>, etc.).

The usefulness of proper heading structures is very high, with 86.1% of respondents finding heading levels very or somewhat useful.

Tip: You can check a web page’s heading structure with W3C’s Nu Html Checker (enable the “outline” option).

(via WebAIM)

More news…

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.

More News →


Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback

You might be seeing the term JAMstack popping up more and more frequently. I’ve been a fan of it as an approach for some time.

One of the principles of JAMstack is that of pre-rendering. In other words, it generates your site into a collection of static assets in advance, so that it can be served to your visitors with maximum speed and minimum overhead from a CDN or other optimized static hosting environment.

But if we are going to pre-generate our sites ahead of time, how do we make them feel dynamic? How do we build sites that need to change often? How do we work with things like user generated content?

As it happens, this can be a great use case for serverless functions. JAMstack and serverless are the best of friends. They complement each other wonderfully.

In this article, we’ll look at a pattern of using serverless functions as a fallback for pre-generated pages in a site that is comprised almost entirely of user generated content. We’ll use a technique of optimistic URL routing where the 404 page is a serverless function to add serverless rendering on the fly.

Buzzwordy? Perhaps. Effective? Most certainly!

You can go and have a play with the demo site to help you imagine this use case. But only if you promise to come back.

https://vlolly.net

Is that you? You came back? Great. Let’s dig in.

The idea behind this little example site is that it lets you create a nice, happy message and virtual pick-me-up to send to a friend. You can write a message, customize a lollipop (or a popsicle, for my American friends) and get a URL to share with your intended recipient. And just like that, you’ve brightened up their day. What’s not to love?

Traditionally, we’d build this site using some server-side scripting to handle the form submissions, add new lollies (our user generated content) to a database and generate a unique URL. Then we’d use some more server-side logic to parse requests for these pages, query the database to get the data needed to populate a page view, render it with a suitable template, and return it to the user.

That all seems logical.

But how much will it cost to scale?

Technical architects and tech leads often get this question when scoping a project. They need to plan, pay for, and provision enough horsepower in case of success.

This virtual lollipop site is no mere trinket. This thing is going to make me a gazillionaire due to all the positive messages we all want to send each other! Traffic levels are going to spike as the word gets out. I had better have a good strategy of ensuring that the servers can handle the hefty load. I might add some caching layers, some load balancers, and I’ll design my database and database servers to be able to share the load without groaning from the demand to make and serve all these lollies.

Except… I don’t know how to do that stuff.

And I don’t know how much it would cost to add that infrastructure and keep it all humming. It’s complicated.

This is why I love to simplify my hosting by pre-rendering as much as I can.

Serving static pages is significantly simpler and cheaper than serving pages dynamically from a web server which needs to perform some logic to generate views on demand for every visitor.

Since we are working with lots of user generated content, it still makes sense to use a database, but I’m not going to manage that myself. Instead, I’ll choose one of the many database options available as a service. And I’ll talk to it via its APIs.

I might choose Firebase, or MongoDB, or any number of others. Chris compiled a few of these on an excellent site about serverless resources which is well worth exploring.

In this case, I selected Fauna to use as my data store. Fauna has a nice API for stashing and querying data. It is a NoSQL-flavored data store and gives me just what I need.

https://fauna.com

Critically, Fauna have made an entire business out of providing database services. They have the deep domain knowledge that I’ll never have. By using a database-as-a-service provider, I just inherited an expert data service team for my project, complete with high availability infrastructure, capacity and compliance peace of mind, skilled support engineers, and rich documentation.

Such are the advantages of using a third-party service like this rather than rolling your own.

Architecture TL;DR

I often find myself doodling the logical flow of things when I’m working on a proof of concept. Here’s my doodle for this site:

And a little explanation:

  1. A user creates a new lollipop by completing a regular old HTML form.
  2. The new content is saved in a database, and its submission triggers a new site generation and deployment.
  3. Once the site deployment is complete, the new lollipop will be available on a unique URL. It will be a static page served very rapidly from the CDN with no dependency on a database query or a server.
  4. Until the site generation is complete, any new lollipops will not be available as static pages. Unsuccessful requests for lollipop pages fall back to a page which dynamically generates the lollipop page by querying the database API on the fly.

This kind of approach, which first assumes static/pre-generated assets and only falls back to a dynamic render when a static view is not available, was usefully described by Markus Schork of Unilever as “Static First”, which I rather like.

In a little more detail

You could just dive into the code for this site, which is open source and available for you to explore, or we could talk some more.

You want to dig in a little further and explore the implementation of this example? OK, I’ll explain in a little more detail:

  • Getting data from the database to generate each page
  • Posting data to a database API with a serverless function
  • Triggering a full site re-generation
  • Rendering on demand when pages are yet to be generated

Generating pages from a database

In a moment, we’ll talk about how we post data into the database, but first, let’s assume that there are some entries in the database already. We are going to want to generate a site which includes a page for each and every one of those.

Static site generators are great at this. They chomp through data, apply it to templates, and output HTML files ready to be served. We could use any generator for this example. I chose Eleventy due to its relative simplicity and the speed of its site generation.

To feed Eleventy some data, we have a number of options. One is to give it some JavaScript which returns structured data. This is perfect for querying a database API.

Our Eleventy data file will look something like this:

// Set up a connection with the Fauna database.
// Use an environment variable to authenticate
// and get access to the database.
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

module.exports = () => {
  return new Promise((resolve, reject) => {
    // get the most recent 100,000 entries (for the sake of our example)
    client.query(
      q.Paginate(q.Match(q.Ref("indexes/all_lollies")), { size: 100000 })
    ).then((response) => {
      // get all data for each entry
      const lollies = response.data;
      const getAllDataQuery = lollies.map((ref) => {
        return q.Get(ref);
      });
      return client.query(getAllDataQuery).then((ret) => {
        // send the data back to Eleventy for use in the site build
        resolve(ret);
      });
    }).catch((error) => {
      console.log("error", error);
      reject(error);
    });
  });
};

I named this file lollies.js which will make all the data it returns available to Eleventy in a collection called lollies.

We can now use that data in our templates. If you’d like to see the code which takes that and generates a page for each item, you can see it in the code repository.
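
For a flavor of what that looks like, Eleventy’s pagination feature can emit one page per item in a collection. Here’s a sketch of the template front matter, assuming the lollyPath field we store with each lolly later in this post (the real template lives in the repo):

---
pagination:
  data: lollies
  size: 1
  alias: lolly
permalink: "/lolly/{{ lolly.data.lollyPath }}/"
---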

Submitting and storing data without a server

When we create a new lolly page we need to capture user content in the database so that it can be used to populate a page at a given URL in the future. For this, we are using a traditional HTML form which posts data to a suitable form handler.

The form looks something like this (or see the full code in the repo):

<form name="new-lolly" action="/new" method="POST">    <!-- Default "flavors": 3 bands of colors with color pickers -->   <input type="color" id="flavourTop" name="flavourTop" value="#d52358" />   <input type="color" id="flavourMiddle" name="flavourMiddle" value="#e95946" />   <input type="color" id="flavourBottom" name="flavourBottom" value="#deaa43" />    <!-- Message fields -->   <label for="recipientName">To</label>   <input type="text" id="recipientName" name="recipientName" />    <label for="message">Say something nice</label>   <textarea name="message" id="message" cols="30" rows="10"></textarea>    <label for="sendersName">From</label>   <input type="text" id="sendersName" name="sendersName" />    <!-- A descriptive submit button -->   <input type="submit" value="Freeze this lolly and get a link">  </form>

We have no web servers in our hosting scenario, so we will need to devise somewhere to handle the HTTP posts being submitted from this form. This is a perfect use case for a serverless function. I’m using Netlify Functions for this. You could use AWS Lambda, Google Cloud, or Azure Functions if you prefer, but I like the simplicity of the workflow with Netlify Functions, and the fact that this will keep my serverless API and my UI all together in one code repository.

It is good practice to avoid leaking back-end implementation details into your front-end. A clear separation helps to keep things more portable and tidy. Take a look at the action attribute of the form element above. It posts data to a path on my site called /new which doesn’t really hint at what service this will be talking to.

We can use redirects to route that to any service we like. I’ll send it to a serverless function which I’ll be provisioning as part of this project, but it could easily be customized to send the data elsewhere if we wished. Netlify gives us a simple and highly optimized redirects engine which directs our traffic out at the CDN level, so users are very quickly routed to the correct place.

The redirect rule below (which lives in my project’s netlify.toml file) will proxy requests to /new through to a serverless function hosted by Netlify Functions called newLolly.js.

# resolve the "new" URL to a function [[redirects]]   from = "/new"   to = "/.netlify/functions/newLolly"   status = 200

Let’s look at that serverless function which:

  • stores the new data in the database,
  • creates a new URL for the new page and
  • redirects the user to the newly created page so that they can see the result.

First, we’ll require the various utilities we’ll need to parse the form data, connect to the Fauna database and create readably short unique IDs for new lollies.

const faunadb = require('faunadb');          // For accessing FaunaDB
const shortid = require('shortid');          // Generate short unique URLs
const querystring = require('querystring');  // Help us parse the form data

// First we set up a new connection with our database.
// An environment variable helps us connect securely
// to the correct database.
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

Now we’ll add some code to handle requests to the serverless function. The handler function will parse the request to get the data we need from the form submission, generate a unique ID for the new lolly, and then create it as a new record in the database.

// Handle requests to our serverless function
exports.handler = (event, context, callback) => {

  // get the form data
  const data = querystring.parse(event.body);
  // add a unique path id. And make a note of it - we'll send the user to it later
  const uniquePath = shortid.generate();
  data.lollyPath = uniquePath;

  // assemble the data ready to send to our database
  const lolly = {
    data: data
  };

  // Create the lolly entry in the fauna db
  client.query(q.Create(q.Ref('classes/lollies'), lolly))
    .then((response) => {
      // Success! Redirect the user to the unique URL for this new lolly page
      return callback(null, {
        statusCode: 302,
        headers: {
          Location: `/lolly/${uniquePath}`,
        }
      });
    }).catch((error) => {
      console.log('error', error);
      // Error! Return the error with statusCode 400
      return callback(null, {
        statusCode: 400,
        body: JSON.stringify(error)
      });
    });

};

Let’s check our progress. We have a way to create new lolly pages in the database. And we’ve got an automated build which generates a page for every one of our lollies.

To ensure that there is a complete set of pre-generated pages for every lolly, we should trigger a rebuild whenever a new one is successfully added to the database. That is delightfully simple to do. Our build is already automated thanks to our static site generator. We just need a way to trigger it. With Netlify, we can define as many build hooks as we like. They are webhooks which will rebuild and deploy our site if they receive an HTTP POST request. Here’s the one I created in the site’s admin console in Netlify:

Netlify build hook

To regenerate the site, including a page for each lolly recorded in the database, we can make an HTTP POST request to this build hook as soon as we have saved our new data to the database.

This is the code to do that:

const axios = require('axios'); // Simplify making HTTP POST requests

// Trigger a new build to freeze this lolly forever
axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX')
  .then(function (response) {
    // Report back in the serverless function's logs
    console.log(response);
  })
  .catch(function (error) {
    // Describe any errors in the serverless function's logs
    console.log(error);
  });

You can see it in context, added to the success handler for the database insertion in the full code.

This is all great if we are happy to wait for the build and deployment to complete before we share the URL of our new lolly with its intended recipient. But we are not a patient lot, and when we get that nice new URL for the lolly we just created, we’ll want to share it right away.

Sadly, if we hit that URL before the site has finished regenerating to include the new page, we’ll get a 404. But happily, we can use that 404 to our advantage.

Optimistic URL routing and serverless fallbacks

With custom 404 routing, we can choose to send every failed request for a lolly page to a page which can look for the lolly data directly in the database. We could do that with client-side JavaScript if we wanted, but even better would be to generate a ready-to-view page dynamically from a serverless function.

Here’s how:

Firstly, we need to tell all those hopeful requests for a lolly page that come back empty to go instead to our serverless function. We do that with another rule in our Netlify redirects configuration:

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

This rule will only be applied if the request for a lolly page did not find a static page ready to be served. It creates a temporary redirect (HTTP 302) to our serverless function, which looks something like this:

const faunadb = require('faunadb');                  // For accessing FaunaDB
const pageTemplate = require('./lollyTemplate.js');  // A JS template literal

// setup and auth the Fauna DB client
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

exports.handler = (event, context, callback) => {

  // get the lolly ID from the request
  const path = event.queryStringParameters.id.replace("/", "");

  // find the lolly data in the DB
  client.query(
    q.Get(q.Match(q.Index("lolly_by_path"), path))
  ).then((response) => {
    // if found return a view
    return callback(null, {
      statusCode: 200,
      body: pageTemplate(response.data)
    });

  }).catch((error) => {
    // not found or an error, send the sad user to the generic error page
    console.log('Error:', error);
    return callback(null, {
      body: JSON.stringify(error),
      statusCode: 301,
      headers: {
        Location: `/melted/index.html`,
      }
    });
  });
};
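
The post doesn’t show lollyTemplate.js, but as a JS template literal it might look roughly like this (a sketch built from the form field names above, not the repo’s actual markup):

// lollyTemplate.js - renders a full HTML page for a lolly record (sketch)
module.exports = data => `<!DOCTYPE html>
<html>
  <head><title>A lolly for ${data.recipientName}</title></head>
  <body>
    <h1>To ${data.recipientName}</h1>
    <p>${data.message}</p>
    <p>From ${data.sendersName}</p>
  </body>
</html>`;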

If a request for any other page (not within the /lolly/ path of the site) should 404, we won’t send that request to our serverless function to check for a lolly. We can just send the user directly to a 404 page. Our netlify.toml config lets us define as many levels of 404 routing as we’d like by adding fallback rules further down in the file. The first successful match in the file will be honored.

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

# Real 404s can just go directly here:
[[redirects]]
  from = "/*"
  to = "/melted/index.html"
  status = 404

And we’re done! We’ve now got a site which is static first, and which will try to render content on the fly with a serverless function if a URL has not yet been generated as a static file.

Pretty snappy!

Supporting larger scale

Our technique of triggering a build to regenerate the lollipop pages every single time a new entry is created might not be optimal forever. While it’s true that the automation of the build means it is trivial to redeploy the site, we might want to start throttling and optimizing things when we start to get very popular. (Which can only be a matter of time, right?)

That’s fine. Here are a couple of things to consider when we have very many pages to create, and more frequent additions to the database:

  • Instead of triggering a rebuild for each new entry, we could rebuild the site as a scheduled job. Perhaps this could happen once an hour or once a day.
  • If building once per day, we might decide to only generate the pages for new lollies submitted in the last day, and cache the pages generated each day for future use. This kind of logic in the build would help us support massive numbers of lolly pages without the build getting prohibitively long. But I’ll not go into intra-build caching here. If you are curious, you could ask about it over in the Netlify Community forum.

By combining static, pre-generated assets with serverless fallbacks that give dynamic rendering, we can satisfy a surprisingly broad set of use cases — all while avoiding the need to provision and maintain lots of dynamic infrastructure.

What other use cases might you be able to satisfy with this “static first” approach?


Your first performance budget with Lighthouse

Ire Aderinokun writes about a new way to set a performance budget (and stick to it) with Lighthouse, Google’s suite of tools that help developers see how performant and accessible their websites are:

Until recently, I also hadn’t setup an official performance budget and enforced it. This isn’t to say that I never did performance audits. I frequently use tools like PageSpeed Insights and take the feedback to make improvements. But what I had never done was set a list of metrics that I needed the site to meet, and enforce them using some automated tool.

The reasons for this were a combination of not knowing what exact numbers I should use for budgets as well as there being a disconnect between setting a budget and testing/enforcing it. This is why I was really excited when, at Google I/O this year, the Lighthouse team announced support for performance budgets that can be integrated with Lighthouse. We can now define a simple performance budget in a JSON file, which will be tested as part of the lighthouse audit!
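
For a sense of that JSON file’s shape, a minimal budget.json might look like this (resource sizes are in kilobytes; the numbers are arbitrary examples, not recommendations):

[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 125 },
      { "resourceType": "total", "budget": 300 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]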

I completely agree with Ire, and much in the same way I’ve tended to neglect sticking to a performance budget simply because the process of testing was so manual and tedious. But no more! As Ire shows in this post, you can even set Lighthouse up to test your budget with every PR in GitHub. That tool is called lighthousebot and it’s just what I’ve been looking for – an automated and predictable way to integrate a performance budget into every change that I make to a codebase.

Today lighthousebot will comment on your PR after a test is complete and it will show you the before and after score:

How neat is that? This reminds me of Gareth Clubb’s recent post about improving web performance and building a culture around budgets in an organization. What better way to remind everyone about performance than right in GitHub after each and every change that they make?


A Quick Look at the First Public Working Draft for Color Adjust Module 1

We’ve been talking a lot about Dark Mode around here ever since Apple released it as a system setting in macOS 10.14 and subsequently as part of Safari. It’s interesting both for the design opportunities it opens up and for tailoring the user experience based on actual user preferences.

This week, we got an Editor’s Draft for the Color Adjust Module Level 1 specification and the First Public Working Draft of it. All of this is a work-in-progress, but the progression of it has been interesting to track. The spec introduces three new CSS properties that help inform how much control the user agent should have when determining the visual appearance of a rendered page based on user preferences.

color-scheme is the first property defined in the spec and perhaps the centerpiece of it. It accepts light and dark values which — as you may have guessed — correspond to Light Mode and Dark Mode preferences for operating systems that support them. And, for what it’s worth, we could be dealing with labels other than “Light” and “Dark” (e.g. “Day” and “Night”) but what we’re dealing with boils down to a light color scheme versus a dark one.

Source: developer.apple.com

This single property carries some important implications. For one, the idea is that it allows us to set styles based on a user’s system preferences which gives us fine-grained control over that experience.

Another possible implication is that declaring the property at all enables the user agent to take some responsibility for determining an element’s colors, where declaring light or dark informs the user agent that an element is “aware” of color schemes and should be styled according to a preference setting matching the value. On the other hand, we can give the browser full control to determine what color scheme to use based on the user’s system preferences by using the auto value. That tells the browser that an element is “unaware” of color schemes and that the browser can determine how to proceed using the user preferences and a systems’s default styling as a guide.
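
As a rough sketch of how that might look in practice (the draft syntax, which could still change):

:root {
  color-scheme: light dark; /* this page supports both schemes; the browser picks per user preference */
}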

It’s worth noting at this point that we may also have a prefers-color-scheme media feature (currently in the Editor’s Draft for the Media Queries Level 5 specification) that also serves to let us detect a user’s preference and help gives us greater control of the user experience based on system preferences. Robin has a nice overview of it. The Color Adjust Module Level 1 Working Draft also makes mention of possibly using a color scheme value in a <meta> element to indicate color scheme support.
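
That media feature lets us write the scheme-specific styles ourselves, something like:

@media (prefers-color-scheme: dark) {
  body {
    background: #1b1b1b;
    color: #f0f0f0;
  }
}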

There’s more to the property, of course, including an only keyword, chaining values to indicate an order of preference, and even an open-ended custom ident keyword. So definitely dig in there because there’s a lot to take in.

Pretty interesting, right? Hopefully you’re starting to see how this draft could open up new possibilities and even impacts how we make design decisions. And that’s only the start because there are two more properties!

  • forced-color-adjust: This is used when we want to support color schemes but override the user agent’s default stylesheet with our own CSS. This includes a note about possibly merging this into color-adjust.
  • color-adjust: Unlike forcing CSS overrides onto the user agent, this property provides a hint to browsers that they can change color values based on both the user’s preferences and other factors, such as screen quality, bandwidth, or whatever is “deem[ed] necessary and prudent for the output device.” Eric Bailey wrote up the possibilities this property could open up as far as use cases, enhanced accessibility, and general implementations.

The current draft is sure to expand but, hey, this is where we get to be aware of the awesome work that W3C authors are doing, gain context for the challenges they face, and even contribute to the work. (See Rachel Andrew’s advice on making contributions.)
