Tag: Functions

Creating a Clock with the New CSS sin() and cos() Trigonometry Functions

CSS trigonometry functions are here! Well, they are if you’re using the latest versions of Firefox and Safari, that is. Having this sort of mathematical power in CSS opens up a whole bunch of possibilities. In this tutorial, I thought we’d dip our toes in the water to get a feel for a couple of the newer functions: sin() and cos().

There are other trigonometry functions in the pipeline — including tan() — so why focus just on sin() and cos()? They happen to be perfect for the idea I have in mind, which is to place text along the edge of a circle. That’s been covered here on CSS-Tricks when Chris shared an approach that uses a Sass mixin. That was six years ago, so let’s give it the bleeding edge treatment.

Here’s what I have in mind. Again, it’s only supported in Firefox and Safari at the moment:

So, it’s not exactly like words forming a circular shape, but we are placing text characters along the circle to form a clock face. Here’s some markup we can use to kick things off:

<div class="clock">   <div class="clock-face">     <time datetime="12:00">12</time>     <time datetime="1:00">1</time>     <time datetime="2:00">2</time>     <time datetime="3:00">3</time>     <time datetime="4:00">4</time>     <time datetime="5:00">5</time>     <time datetime="6:00">6</time>     <time datetime="7:00">7</time>     <time datetime="8:00">8</time>     <time datetime="9:00">9</time>     <time datetime="10:00">10</time>     <time datetime="11:00">11</time>   </div> </div>

I decided to use <time> tags, each with a datetime attribute. Next, here are some super basic styles for the .clock container:

.clock {
  --_ow: clamp(5rem, 60vw, 40rem);
  --_w: 88cqi;
  aspect-ratio: 1;
  background-color: tomato;
  border-radius: 50%;
  container-type: inline-size;
  display: grid;
  height: var(--_ow);
  place-content: center;
  position: relative;
  width: var(--_ow);
}

I decorated things a bit in there, but only to get the basic shape and background color to help us see what we’re doing. Notice how we save the width value in a CSS variable. We’ll use that later. Not much to look at so far:

Large tomato colored circle with a vertical list of numbers 1-12 on the left.

It looks like some sort of modern art experiment, right? Let’s introduce a new variable, --_r, to store the circle’s radius, which is equal to half of the circle’s width. This way, if the width (--_w) changes, the radius value (--_r) will also update — thanks to another CSS math function, calc():

.clock {
  --_w: 300px;
  --_r: calc(var(--_w) / 2);
  /* rest of styles */
}

Now, a bit of math. A circle is 360 degrees. We have 12 labels on our clock, so we want to place the numbers every 30 degrees (360 / 12). In math-land, a circle begins at 3 o’clock, so noon is actually minus 90 degrees from that, which is 270 degrees (360 - 90).

Let’s add another variable, --_d, that we can use to set a degree value for each number on the clock face. We’re going to increment the values by 30 degrees to complete our circle:

.clock time:nth-child(1) { --_d: 270deg; }
.clock time:nth-child(2) { --_d: 300deg; }
.clock time:nth-child(3) { --_d: 330deg; }
.clock time:nth-child(4) { --_d: 0deg; }
.clock time:nth-child(5) { --_d: 30deg; }
.clock time:nth-child(6) { --_d: 60deg; }
.clock time:nth-child(7) { --_d: 90deg; }
.clock time:nth-child(8) { --_d: 120deg; }
.clock time:nth-child(9) { --_d: 150deg; }
.clock time:nth-child(10) { --_d: 180deg; }
.clock time:nth-child(11) { --_d: 210deg; }
.clock time:nth-child(12) { --_d: 240deg; }

OK, now’s the time to get our hands dirty with the sin() and cos() functions! What we want to do is use them to get the X and Y coordinates for each number so we can place them properly around the clock face.

The formula for the X coordinate is radius + (radius * cos(degree)). Let’s plug that into our new --_x variable:

--_x: calc(var(--_r) + (var(--_r) * cos(var(--_d))));

The formula for the Y coordinate is radius + (radius * sin(degree)). We have what we need to calculate that:

--_y: calc(var(--_r) + (var(--_r) * sin(var(--_d))));
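
To make that concrete: for the 12 o’clock label (--_d: 270deg), cos(270deg) is 0 and sin(270deg) is -1. That makes --_x resolve to r + r * 0 = r and --_y resolve to r + r * (-1) = 0, which puts the number at the horizontal center and the very top of the circle, exactly where noon belongs.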

There are a few housekeeping things we need to do to set up the numbers, so let’s put some basic styling on them to make sure they are absolutely positioned and placed with our coordinates:

.clock-face time {
  --_x: calc(var(--_r) + (var(--_r) * cos(var(--_d))));
  --_y: calc(var(--_r) + (var(--_r) * sin(var(--_d))));
  --_sz: 12cqi;
  display: grid;
  height: var(--_sz);
  left: var(--_x);
  place-content: center;
  position: absolute;
  top: var(--_y);
  width: var(--_sz);
}

Notice --_sz, which we’ll use for the width and height of the numbers in a moment. Let’s see what we have so far.

Large tomato colored circle with off-centered hour number labels along its edge.

This definitely looks more like a clock! See how the top-left corner of each number is positioned at the correct place around the circle? We need to “shrink” the radius when calculating the positions for each number. We can deduct the size of a number (--_sz) from the size of the circle (--_w), before we calculate the radius:

--_r: calc((var(--_w) - var(--_sz)) / 2);
Large tomato colored circle with hour number labels along its rounded edge.

Much better! Let’s change the colors, so it looks more elegant:

A white clock face with numbers against a dark gray background. The clock has no arms.

We could stop right here! We accomplished the goal of placing text around a circle, right? But what’s a clock without arms to show hours, minutes, and seconds?

Let’s use a single CSS animation for that. First, let’s add a few more elements to our markup: one for each arm, plus a center pin:

<div class="clock">   <!-- after <time>-tags -->   <span class="arm seconds"></span>   <span class="arm minutes"></span>   <span class="arm hours"></span>   <span class="arm center"></span> </div>

Then some common styles for all three arms. Again, most of this just makes sure the arms are absolutely positioned and placed accordingly:

.arm {
  background-color: var(--_abg);
  border-radius: calc(var(--_aw) * 2);
  display: block;
  height: var(--_ah);
  left: calc((var(--_w) - var(--_aw)) / 2);
  position: absolute;
  top: calc((var(--_w) / 2) - var(--_ah));
  transform: rotate(0deg);
  transform-origin: bottom;
  width: var(--_aw);
}

We’ll use the same animation for all three arms:

@keyframes turn {
  to {
    transform: rotate(1turn);
  }
}

The only difference is the time the individual arms take to make a full turn. For the hours arm, it takes 12 hours to make a full turn. The animation-duration property only accepts values in milliseconds and seconds. Let’s stick with seconds, which is 43,200 seconds (60 seconds * 60 minutes * 12 hours).

animation: turn 43200s infinite;

It takes 1 hour for the minutes arm to make a full turn. But we want this to be a multi-step animation so the arm’s movement is staggered rather than linear. We’ll need 60 steps, one for each minute:

animation: turn 3600s steps(60, end) infinite;

The seconds arm is almost the same as the minutes arm, but the duration is 60 seconds instead of 60 minutes:

animation: turn 60s steps(60, end) infinite;

Let’s update the properties we created in the common styles:

.seconds {
  --_abg: hsl(0, 5%, 40%);
  --_ah: 145px;
  --_aw: 2px;
  animation: turn 60s steps(60, end) infinite;
}
.minutes {
  --_abg: #333;
  --_ah: 145px;
  --_aw: 6px;
  animation: turn 3600s steps(60, end) infinite;
}
.hours {
  --_abg: #333;
  --_ah: 110px;
  --_aw: 6px;
  animation: turn 43200s linear infinite;
}

What if we want to start at the current time? We need a little bit of JavaScript:

const time = new Date();
const hour = -3600 * (time.getHours() % 12);
const mins = -60 * time.getMinutes();
app.style.setProperty('--_dm', `${mins}s`);
app.style.setProperty('--_dh', `${hour + mins}s`);

I’ve added id="app" to the clockface and set two new custom properties on it that set a negative animation-delay, as Mate Marschalko did when he shared a CSS-only clock. The getHours() method of JavaScipt’s Date object is using the 24-hour format, so we use the remainder operator to convert it into 12-hour format.

In the CSS, we need to add the animation-delay as well:

.minutes {
  animation-delay: var(--_dm, 0s);
  /* other styles */
}

.hours {
  animation-delay: var(--_dh, 0s);
  /* other styles */
}

Just one more thing. Using CSS @supports and the properties we’ve already created, we can provide a fallback for browsers that do not support sin() and cos(). (Thank you, Temani Afif!):

@supports not (left: calc(1px * cos(45deg))) {
  time {
    left: 50% !important;
    top: 50% !important;
    transform: translate(-50%, -50%) rotate(var(--_d)) translate(var(--_r)) rotate(calc(-1 * var(--_d)));
  }
}

And, voilà! Our clock is done! Here’s the final demo one more time. Again, it’s only supported in Firefox and Safari at the moment.

What else can we do?

Just messing around here, but we can quickly turn our clock into a circular image gallery by replacing the <time> tags with <img> then updating the width (--_w) and radius (--_r) values:

Let’s try one more. I mentioned earlier how the clock looked kind of like a modern art experiment. We can lean into that and re-create a pattern I saw on a poster (that I unfortunately didn’t buy) in an art gallery the other day. As I recall, it was called “Moon” and consisted of a bunch of dots forming a circle.

A large circle formed out of a bunch of smaller filled circles of various earthtone colors.

We’ll use an unordered list this time since the circles don’t follow a particular order. We’re not even going to put all the list items in the markup. Instead, let’s inject them with JavaScript and add a few controls we can use to manipulate the final result.

The controls are range inputs (<input type="range">) which we’ll wrap in a <form> and listen for the input event.

<form id="controls">   <fieldset>     <label>Number of rings       <input type="range" min="2" max="12" value="10" id="rings" />     </label>     <label>Dots per ring       <input type="range" min="5" max="12" value="7" id="dots" />     </label>     <label>Spread       <input type="range" min="10" max="40" value="40" id="spread" />     </label>   </fieldset> </form>

We’ll run a method on the form’s input event that creates a bunch of <li> elements, applying the degree (--_d) variable we used earlier to each one. We can also repurpose our radius variable (--_r).

I also want the dots to be different colors. So, let’s randomize (well, not completely randomize) the HSL color value for each list item and store it as a new CSS variable, --_bgc:

const update = () => {
  let s = "";
  for (let i = 1; i <= rings.valueAsNumber; i++) {
    const r = spread.valueAsNumber * i;
    const theta = coords(dots.valueAsNumber * i);
    for (let j = 0; j < theta.length; j++) {
      s += `<li style="--_d:${theta[j]};--_r:${r}px;--_bgc:hsl(${random(50, 25)},${random(90, 50)}%,${random(90, 60)}%)"></li>`;
    }
  }
  app.innerHTML = s;
}

The random() method picks a value within a defined range of numbers:

const random = (max, min = 0, f = true) =>
  f ? Math.floor(Math.random() * (max - min) + min) : Math.random() * max;
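
The coords() helper that update() relies on isn’t included in this excerpt. Here’s a minimal sketch of what it could look like, assuming it simply returns an array of evenly spaced degree strings for a given number of dots, plus the line that wires the form’s input event to update():

const coords = (count) => {
  // One degree value per dot, evenly spaced around the full 360deg circle
  return [...Array(count)].map((_, i) => `${(360 / count) * i}deg`);
};

controls.addEventListener("input", update);
update();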

And that’s it. We use JavaScript to render the markup, but as soon as it’s rendered, we don’t really need it. The sin() and cos() functions help us position all the dots in the right spots.

Final thoughts

Placing things around a circle is a pretty basic example to demonstrate the powers of trigonometry functions like sin() and cos(). But it’s really cool that we are getting modern CSS features that provide new solutions for old workarounds. I’m sure we’ll see way more interesting, complex, and creative use cases, especially as browser support comes to Chrome and Edge.


Creating a Clock with the New CSS sin() and cos() Trigonometry Functions originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Inline Image Previews with Sharp, BlurHash, and Lambda Functions

Don’t you hate it when you load a website or web app, some content displays and then some images load — causing content to shift around? That’s called content reflow and can lead to an incredibly annoying user experience for visitors.

I’ve previously written about solving this with React’s Suspense, which prevents the UI from loading until the images come in. This solves the content reflow problem but at the expense of performance. The user is blocked from seeing any content until the images come in.

Wouldn’t it be nice if we could have the best of both worlds: prevent content reflow while also not making the user wait for the images? This post will walk through generating blurry image previews and displaying them immediately, with the real images rendering over the preview whenever they happen to come in.

So you mean progressive JPEGs?

You might be wondering if I’m about to talk about progressive JPEGs, which are an alternate encoding that causes images to initially render — full size and blurry — and then gradually refine as the data come in until everything renders correctly.

This seems like a great solution until you get into some of the details. Re-encoding your images as progressive JPEGs is reasonably straightforward; there are plugins for Sharp that will handle that for you. Unfortunately, you still need to wait for some of your images’ bytes to come over the wire until even a blurry preview of your image displays, at which point your content will reflow, adjusting to the size of the image’s preview.

You might look for some sort of event to indicate that an initial preview of the image has loaded, but none currently exists, and the workarounds are … not ideal.

Let’s look at two alternatives for this.

The libraries we’ll be using

Before we start, I’d like to call out the versions of the libraries I’ll be using for this post:

Making our own previews

Most of us are used to using <img /> tags by providing a src attribute that’s a URL to some place on the internet where our image exists. But we can also provide a Base64 encoding of an image and just set that inline. We wouldn’t usually want to do that since those Base64 strings can get huge for images and embedding them in our JavaScript bundles can cause some serious bloat.
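
For instance, an inline image can look something like this, where the Base64 payload is truncated and purely illustrative:

<img alt="A tiny inline preview" src="data:image/jpeg;base64,/9j/4AAQSkZJRg..." />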

But what if, when we’re processing our images (to resize, adjust the quality, etc.), we also make a low quality, blurry version of our image and take the Base64 encoding of that? The size of that Base64 image preview will be significantly smaller. We could save that preview string, put it in our JavaScript bundle, and display that inline until our real image is done loading. This will cause a blurry preview of our image to show immediately while the image loads. When the real image is done loading, we can hide the preview and show the real image.

Let’s see how.

Generating our preview

For now, let’s look at Jimp, which has no dependencies on things like node-gyp and can be installed and used in a Lambda.

Here’s a function (stripped of error handling and logging) that uses Jimp to process an image, resize it, and then create a blurry preview of it:

function resizeImage(src, maxWidth, quality) {
  return new Promise<ResizeImageResult>(res => {
    Jimp.read(src, async function (err, image) {
      if (image.bitmap.width > maxWidth) {
        image.resize(maxWidth, Jimp.AUTO);
      }
      image.quality(quality);

      const previewImage = image.clone();
      previewImage.quality(25).blur(8);
      const preview = await previewImage.getBase64Async(previewImage.getMIME());

      res({ STATUS: "success", image, preview });
    });
  });
}

For this post, I’ll be using this image provided by Flickr Commons:

Photo of the Big Boy statue holding a burger.

And here’s what the preview looks like:

Blurry version of the Big Boy statue.

If you’d like to take a closer look, here’s the same preview in a CodeSandbox.

Obviously, this preview encoding isn’t small, but then again, neither is our image; smaller images will produce smaller previews. Measure and profile for your own use case to see how viable this solution is.

Now we can send that image preview down from our data layer, along with the actual image URL, and any other related data. We can immediately display the image preview, and when the actual image loads, swap it out. Here’s some (simplified) React code to do that:

const Landmark = ({ url, preview = "" }) => {
  const [loaded, setLoaded] = useState(false);
  const imgRef = useRef<HTMLImageElement>(null);

  useEffect(() => {
    // make sure the image src is added after the onload handler
    if (imgRef.current) {
      imgRef.current.src = url;
    }
  }, [url, imgRef, preview]);

  return (
    <>
      <Preview loaded={loaded} preview={preview} />
      <img
        ref={imgRef}
        onLoad={() => setTimeout(() => setLoaded(true), 3000)}
        style={{ display: loaded ? "block" : "none" }}
      />
    </>
  );
};

const Preview: FunctionComponent<LandmarkPreviewProps> = ({ preview, loaded }) => {
  if (loaded) {
    return null;
  } else if (typeof preview === "string") {
    return <img key="landmark-preview" alt="Landmark preview" src={preview} style={{ display: "block" }} />;
  } else {
    return <PreviewCanvas preview={preview} loaded={loaded} />;
  }
};

Don’t worry about the PreviewCanvas component yet. And don’t worry about the fact that things like a changing URL aren’t accounted for.

Note that we set the image component’s src after the onLoad handler to ensure it fires. We show the preview, and when the real image loads, we swap it in.

Improving things with BlurHash

The image preview we saw before might not be small enough to send down with our JavaScript bundle. And these Base64 strings will not gzip well. Depending on how many of these images you have, this may or may not be good enough. But if you’d like to compress things even smaller and you’re willing to do a bit more work, there’s a wonderful library called BlurHash.

BlurHash generates incredibly small previews using Base83 encoding. Base83 encoding allows it to squeeze more information into fewer bytes, which is part of how it keeps the previews so small. 83 might seem like an arbitrary number, but the README sheds some light on this:

First, 83 seems to be about how many low-ASCII characters you can find that are safe for use in all of JSON, HTML and shells.

Secondly, 83 * 83 is very close to, and a little more than, 19 * 19 * 19, making it ideal for encoding three AC components in two characters.

The README also states how Signal and Mastodon use BlurHash.

Let’s see it in action.

Generating blurhash previews

For this, we’ll need to use the Sharp library.


Note

To generate your blurhash previews, you’ll likely want to run some sort of serverless function to process your images and generate the previews. I’ll be using AWS Lambda, but any alternative should work.

Just be careful about maximum size limitations. The binaries Sharp installs add about 9 MB to the serverless function’s size.

To run this code in an AWS Lambda, you’ll need to install the library like this:

"install-deps": "npm i && SHARP_IGNORE_GLOBAL_LIBVIPS=1 npm i --arch=x64 --platform=linux sharp"

And make sure you’re not doing any sort of bundling to ensure all of the binaries are sent to your Lambda. This will affect the size of the Lambda deploy. Sharp alone will wind up being about 9 MB, which won’t be great for cold start times. The code you’ll see below is in a Lambda that just runs periodically (without any UI waiting on it), generating blurhash previews.


This code will look at the size of the image and create a blurhash preview:

import { encode, isBlurhashValid } from "blurhash";
const sharp = require("sharp");

export async function getBlurhashPreview(src) {
  const image = sharp(src);
  const dimensions = await image.metadata();

  return new Promise(res => {
    const { width, height } = dimensions;

    image
      .raw()
      .ensureAlpha()
      .toBuffer((err, buffer) => {
        const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);
        if (isBlurhashValid(blurhash)) {
          return res({ blurhash, w: width, h: height });
        } else {
          return res(null);
        }
      });
  });
}

Again, I’ve removed all error handling and logging for clarity. Worth noting is the call to ensureAlpha. This ensures that each pixel has 4 bytes, one each for red, green, blue, and alpha.

Jimp lacks this method, which is why we’re using Sharp; if anyone knows otherwise, please drop a comment.

Also, note that we’re saving not only the preview string but also the dimensions of the image, which will make sense in a bit.

The real work happens here:

const blurhash = encode(new Uint8ClampedArray(buffer), width, height, 4, 4);

We’re calling blurhash’s encode method, passing it our image and the image’s dimensions. The last two arguments are componentX and componentY which, from my understanding of the documentation, seem to control how many passes blurhash does on our image, adding more and more detail. The acceptable values are 1 to 9 (inclusive). From my own testing, 4 is a sweet spot that produces the best results.

Let’s see what this produces for that same image:

{   "blurhash" : "UAA]{ox^0eRiO_bJjdn~9#M_=|oLIUnzxtNG",   "w" : 276,   "h" : 400 }

That’s incredibly small! The tradeoff is that using this preview is a bit more involved.

Basically, we need to call blurhash’s decode method and render our image preview in a canvas tag. This is what the PreviewCanvas component was doing before and why we were rendering it if the type of our preview was not a string: our blurhash previews use an entire object — containing not only the preview string but also the image dimensions.

Let’s look at our PreviewCanvas component:

const PreviewCanvas: FunctionComponent<CanvasPreviewProps> = ({ preview }) => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  useLayoutEffect(() => {
    const pixels = decode(preview.blurhash, preview.w, preview.h);
    const ctx = canvasRef.current.getContext("2d");
    const imageData = ctx.createImageData(preview.w, preview.h);
    imageData.data.set(pixels);
    ctx.putImageData(imageData, 0, 0);
  }, [preview]);

  return <canvas ref={canvasRef} width={preview.w} height={preview.h} />;
};

Not too terribly much going on here. We’re decoding our preview and then calling some fairly specific Canvas APIs.

Let’s see what the image previews look like:

In a sense, it’s less detailed than our previous previews. But I’ve also found them to be a bit smoother and less pixelated. And they take up a tiny fraction of the size.

Test and use what works best for you.

Wrapping up

There are many ways to prevent content reflow as your images load on the web. One approach is to prevent your UI from rendering until the images come in. The downside is that your user winds up waiting longer for content.

A good middle-ground is to immediately show a preview of the image and swap the real thing in when it’s loaded. This post walked you through two ways of accomplishing that: generating degraded, blurry versions of an image using a tool like Sharp and using BlurHash to generate an extremely small, Base83 encoded preview.

Happy coding!


Inline Image Previews with Sharp, BlurHash, and Lambda Functions originally published on CSS-Tricks. You should get the newsletter.

Netlify Has Scheduled Functions

(This is a sponsored post.)

Hey! Scheduled Functions are cool! Think of them like a CRON job. I want this code to run every Monday at 2pm. I want this code to run every hour on the hour. That kind of thing. Why would you want to do that? There are tons of reasons! Perhaps something like “send my newsletter” where you write it on your site in Markdown, it gets processed into an email template and sent out via a Netlify Function. Now you could make that happen on a set schedule. Or something like “send all my new blog posts out, if there are any.”

This is pretty near and dear to me, because I’ve reached for paid outside services to do this for me in the past!

See, I have a little mini site right here on CSS-Tricks that is very time-based in that it lists upcoming conferences. It’s a totally static site, so once a date is passed, it, uh, kinda doesn’t matter, the site just stays how it is. But there is code that, during the build process, only builds out conferences in the future, not the past. So the trick is to run the build process every day.

Before Scheduled Functions, I used Zapier to do this, which has been humming along doing this just fine for years:

But the knowledge of how that works is basically locked up in my head. Plus, I’m doing it on a non-free third-party service, and there is always a little bit of Rube Goldberg-y technical debt to that.

I’m literally switching up how I’m doing it right this second as I type out this blog post. I’m just going to write the dumbest function ever that kicks a POST request to the URL that Netlify gives me to trigger builds and do it once a day. That’s it.

Might as well keep that URL as an “Environment Variable” like process.env.BUILD_SECRET or whatever, as in the sketch below.
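
Something along these lines would do it. To be clear, this is just a sketch: the schedule() helper comes from the @netlify/functions package, the cron string is whatever cadence you want, and it assumes a runtime with a global fetch:

const { schedule } = require("@netlify/functions");

// "0 0 * * *" is plain cron syntax for once a day at midnight UTC;
// every Monday at 2pm would be "0 14 * * 1".
const handler = schedule("0 0 * * *", async () => {
  // BUILD_SECRET holds the private build hook URL Netlify gives you
  await fetch(process.env.BUILD_SECRET, { method: "POST" });
  return { statusCode: 200 };
});

module.exports = { handler };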

With this in place, I’m gonna switch off my Zap and just rest easy knowing all this functionality is now shored up in one place.

This is a Beta feature, for the record. Netlify doesn’t recommend it for production just quiiiiite yet, as per the Labs documentation. But my thing isn’t super mission-critical so I’m giving it a shot.

What else might you use them for? The blog post about the new feature has some ideas:

• Invoke a set of APIs to collate data for a report at the end of every week

• Back up data from one data store to another at the end of every night

• Build and deploy all your static content every hour instead of for every authored or merged pull request, or

• Anything else you can imagine you might want to invoke on a regular basis!


Netlify Has Scheduled Functions originally published on CSS-Tricks. You should get the newsletter and become a supporter.

Accessing Your Data With Netlify Functions and React

(This is a sponsored post.)

Static site generators are popular for their speed, security, and user experience. However, sometimes your application needs data that is not available when the site is built. React is a library for building user interfaces that helps you retrieve and store dynamic data in your client application. 

Fauna is a flexible, serverless database delivered as an API that completely eliminates operational overhead such as capacity planning, data replication, and scheduled maintenance. Fauna allows you to model your data as documents, making it a natural fit for web applications written with React. Although you can access Fauna directly via a JavaScript driver, this requires a custom implementation for each client that connects to your database. By placing your Fauna database behind an API, you can enable any authorized client to connect, regardless of the programming language.

Netlify Functions allow you to build scalable, dynamic applications by deploying server-side code that works as API endpoints. In this tutorial, you build a serverless application using React, Netlify Functions, and Fauna. You learn the basics of storing and retrieving your data with Fauna. You create and deploy Netlify Functions to access your data in Fauna securely. Finally, you deploy your React application to Netlify.

Getting started with Fauna

Fauna is a distributed, strongly consistent OLTP NoSQL serverless database that is ACID-compliant and offers a multi-model interface. Fauna also supports document, relational, graph, and temporal data sets from a single query. First, we will start by creating a database in the Fauna console by selecting the Database tab and clicking on the Create Database button.

Next, you will need to create a Collection. For this, you will need to select a database, and under the Collections tab, click on Create Collection.

Fauna uses a particular structure when it comes to persisting data. The design consists of attributes like the example below.

{   "ref": Ref(Collection("avengers"), "299221087899615749"),   "ts": 1623215668240000,   "data": {     "id": "db7bd11d-29c5-4877-b30d-dfc4dfb2b90e",     "name": "Captain America",     "power": "High Strength",     "description": "Shield"   } }

Notice that Fauna keeps a ref field, which is a unique identifier for a particular document. The ts attribute is a timestamp recording when the record was created, and the data attribute holds the document’s data.

Why creating an index is important

Next, let’s create two indexes for our avengers collection. This will be pretty valuable in the latter part of the project. You can create an index from the Index tab or from the Shell tab, which provides a console to execute scripts. Fauna supports two querying techniques: Fauna Query Language (FQL) and GraphQL. FQL operates on the schema of Fauna, which includes documents, collections, indexes, sets, and databases.

Let’s create the indexes from the shell.
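
The exact commands aren’t reproduced here, but in FQL they could look something like the following sketch (the index names match the ones used later in this post; the shapes are an assumption rather than the article’s exact code):

CreateIndex({
  name: "avenger_by_id",
  source: Collection("avengers"),
  terms: [{ field: ["data", "id"] }]
})

CreateIndex({
  name: "avenger_by_name",
  source: Collection("avengers"),
  terms: [{ field: ["data", "name"] }]
})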

The first command creates an index on the collection keyed by the id field inside the data object; matching against that index returns a ref to the document. The second creates another index for the name attribute, named avenger_by_name.

Creating a server key

To create a server key, we need to navigate to the Security tab and click on the New Key button. This section will prompt you to create a key for a selected database and user role.

Getting started with Netlify functions and React

In this section, we’ll see how to create Netlify functions with React. We will be using create-react-app to create the React app.

npx create-react-app avengers-faunadb

After creating the react app, let’s install some dependencies, including Fauna and Netlify dependencies.

yarn add axios bootstrap node-sass uuid faunadb react-netlify-identity react-netlify-identity-widget

Now let’s create our first Netlify function. To make the functions, we first need to install the Netlify CLI globally.

npm install netlify-cli -g

Now that the CLI is installed, let’s create a .env file on our project root with the following fields.

FAUNADB_SERVER_SECRET= <FaunaDB secret key>
REACT_APP_NETLIFY= <Netlify app url>

Next, let’s see how to start creating Netlify functions. For this, we will need to create a directory in our project root called functions and a file called netlify.toml, which will be responsible for maintaining the configuration for our Netlify project. This file defines our functions directory, build directory, and commands to execute.

[build]
command = "npm run build"
functions = "functions/"
publish = "build"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
  force = true

We will do some additional configuration in the Netlify configuration file, like the redirects section in this example. Notice that we are changing the default path of Netlify functions from /.netlify/functions/* to /api/*. This configuration is mainly to improve the look and feel of the API URL. So to trigger or call our function, we can use the path:

https://domain.com/api/getPokemons

 …instead of:

https://domain.com/.netlify/functions/getPokemons

Next, let’s create our Netlify function in the functions directory. But, first, let’s make a connection file for Fauna called util/connection.js, returning a Fauna connection object.

const faunadb = require('faunadb');
const q = faunadb.query;

const clientQuery = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET,
});

module.exports = { clientQuery, q };

Next, let’s create a couple of helper functions for building responses and parsing request data, since we will need to do that on several occasions throughout the application. This file will be util/helper.js.

const responseObj = (statusCode, data) => {
  return {
    statusCode: statusCode,
    headers: {
      /* Required for CORS support to work */
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Headers": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    },
    body: JSON.stringify(data)
  };
};

const requestObj = (data) => {
  return JSON.parse(data);
}

module.exports = { responseObj: responseObj, requestObj: requestObj }

Notice that the above helper functions handle the CORS issues, stringifying and parsing of JSON data. Let’s create our first function, getAvengers, which will return all the data.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  try {
    let avengers = await clientQuery.query(
      q.Map(
        q.Paginate(q.Documents(q.Collection('avengers'))),
        q.Lambda(x => q.Get(x))
      )
    )
    return responseObj(200, avengers)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

In the above code example, you can see that we have used several FQL commands like Map, Paginate, and Lambda. The Map function is used to iterate through the array, and it takes two arguments: an array and a Lambda. We passed Paginate as the first parameter, which checks for references and returns a page of results (an array). Next, we used a Lambda statement, an anonymous function that is quite similar to an anonymous arrow function in ES6.

Next, let’s create our AddAvenger function, responsible for creating/inserting data into the collection.

const { requestObj, responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  let data = requestObj(event.body);

  try {
    let avenger = await clientQuery.query(
      q.Create(
        q.Collection('avengers'),
        {
          data: {
            id: data.id,
            name: data.name,
            power: data.power,
            description: data.description
          }
        }
      )
    );

    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

To save data in a particular collection, we have to pass our data to the data: {} object, like in the above code example. Then we pass that to the Create function and point it at the collection we want. So, let’s run our code and see how it works with the netlify dev command.

Let’s trigger the GetAvengers function through the browser through the URL http://localhost:8888/api/GetAvengers.
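
The SearchAvenger function below relies on a user-defined function (UDF) in Fauna named GetAvengerByName, and the step that creates it isn’t shown in this excerpt. As a rough sketch, and this is an assumption rather than the article’s exact code, creating it from the Fauna shell could look like:

CreateFunction({
  name: "GetAvengerByName",
  body: Query(
    Lambda("name", Get(Match(Index("avenger_by_name"), Var("name"))))
  )
})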

The above function will fetch the avenger object by the name property, searching the avenger_by_name index. Now, let’s invoke the GetAvengerByName function through a Netlify function. For that, let’s create a function called SearchAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  const {
    queryStringParameters: { name },
  } = event;

  try {
    let avenger = await clientQuery.query(
      q.Call(q.Function("GetAvengerByName"), [name])
    );
    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

Notice that the Call function takes two arguments: the first is a reference to the FQL function we created, and the second is the data we need to pass to that function.

Calling the Netlify function through React

Now that several functions are available let’s consume those functions through React. Since the functions are REST APIs, let’s consume them via Axios, and for state management, let’s use React’s Context API. Let’s start with the Application context called AppContext.js.

import { createContext, useReducer } from "react";
import AppReducer from "./AppReducer"

const initialState = {
    isEditing: false,
    avenger: { name: '', description: '', power: '' },
    avengers: [],
    user: null,
    isLoggedIn: false
};

export const AppContext = createContext(initialState);

export const AppContextProvider = ({ children }) => {
    const [state, dispatch] = useReducer(AppReducer, initialState);

    const login = (data) => { dispatch({ type: 'LOGIN', payload: data }) }
    const logout = (data) => { dispatch({ type: 'LOGOUT', payload: data }) }
    const getAvenger = (data) => { dispatch({ type: 'GET_AVENGER', payload: data }) }
    const updateAvenger = (data) => { dispatch({ type: 'UPDATE_AVENGER', payload: data }) }
    const clearAvenger = (data) => { dispatch({ type: 'CLEAR_AVENGER', payload: data }) }
    const selectAvenger = (data) => { dispatch({ type: 'SELECT_AVENGER', payload: data }) }
    const getAvengers = (data) => { dispatch({ type: 'GET_AVENGERS', payload: data }) }
    const createAvenger = (data) => { dispatch({ type: 'CREATE_AVENGER', payload: data }) }
    const deleteAvengers = (data) => { dispatch({ type: 'DELETE_AVENGER', payload: data }) }

    return <AppContext.Provider value={{
        ...state,
        login,
        logout,
        selectAvenger,
        updateAvenger,
        clearAvenger,
        getAvenger,
        getAvengers,
        createAvenger,
        deleteAvengers
    }}>{children}</AppContext.Provider>
}

export default AppContextProvider;

Let’s create the reducers for this context in the AppReducer.js file, which will consist of a reducer function for each operation in the application context.

const updateItem = (avengers, data) => {
    let avenger = avengers.find((avenger) => avenger.id === data.id);
    let updatedAvenger = { ...avenger, ...data };
    let avengerIndex = avengers.findIndex((avenger) => avenger.id === data.id);
    return [
        ...avengers.slice(0, avengerIndex),
        updatedAvenger,
        ...avengers.slice(++avengerIndex),
    ];
}

const deleteItem = (avengers, id) => {
    return avengers.filter((avenger) => avenger.data.id !== id)
}

const AppReducer = (state, action) => {
    switch (action.type) {
        case 'SELECT_AVENGER':
            return {
                ...state,
                isEditing: true,
                avenger: action.payload
            }
        case 'CLEAR_AVENGER':
            return {
                ...state,
                isEditing: false,
                avenger: { name: '', description: '', power: '' }
            }
        case 'UPDATE_AVENGER':
            return {
                ...state,
                isEditing: false,
                avengers: updateItem(state.avengers, action.payload)
            }
        case 'GET_AVENGER':
            return {
                ...state,
                avenger: action.payload.data
            }
        case 'GET_AVENGERS':
            return {
                ...state,
                avengers: Array.isArray(action.payload && action.payload.data) ? action.payload.data : [{ ...action.payload }]
            };
        case 'CREATE_AVENGER':
            return {
                ...state,
                avengers: [{ data: action.payload }, ...state.avengers]
            };
        case 'DELETE_AVENGER':
            return {
                ...state,
                avengers: deleteItem(state.avengers, action.payload)
            };
        case 'LOGIN':
            return {
                ...state,
                user: action.payload,
                isLoggedIn: true
            };
        case 'LOGOUT':
            return {
                ...state,
                user: null,
                isLoggedIn: false
            };
        default:
            return state
    }
}

export default AppReducer;

Since the application context is now available, we can fetch data from the Netlify functions that we have created and persist them in our application context. So let’s see how to call one of these functions.

const { avengers, getAvengers } = useContext(AppContext);

const GetAvengers = async () => {
  let { data } = await axios.get('/api/GetAvengers');
  getAvengers(data)
}

To get the data into the application context, let’s import the getAvengers function from our application context and pass it the data fetched by the GET call. This function calls the reducer, which keeps the data in the context. To access that data, we can use the context attribute called avengers. Next, let’s see how we could save data in the avengers collection.

const { createAvenger } = useContext(AppContext);

const CreateAvenger = async (e) => {
  e.preventDefault();
  let new_avenger = { id: uuid(), ...newAvenger }
  await axios.post('/api/AddAvenger', new_avenger);
  clear();
  createAvenger(new_avenger)
}

The above newAvenger object is the state object that keeps the form data. Notice that we pass a new id, of type uuid, to each of our documents. Once the data is saved in Fauna, we use the createAvenger function from the application context to store it in our context as well. Similarly, we can invoke all the Netlify functions for CRUD operations like this via Axios.

How to deploy the application to Netlify

Now that we have a working application, we can deploy this app to Netlify. There are several ways that we can deploy this application:

  1. Connecting and deploying the application through GitHub
  2. Deploying the application through the Netlify CLI

Using the CLI will prompt you to enter specific details and selections, and the CLI will handle the rest. But in this example, we will deploy the application through GitHub. So first, let’s log in to the Netlify dashboard and click on the New Site from Git button. Next, it will prompt you to select the repo you need to deploy and the configuration for your site, like the build command, build folder, etc.

How to authenticate and authorize functions by Netlify Identity

Netlify Identity provides a full suite of authentication functionality for your application, which will help us manage authenticated users throughout the application. Netlify Identity can be integrated into the application easily without any other third-party service or library. To enable Netlify Identity, we need to log in to our Netlify dashboard, and under our deployed site, move to the Identity tab and enable the Identity feature.

Enabling Identity will provide a link to your Netlify Identity instance. You will have to copy that URL and add it to the .env file of your application as REACT_APP_NETLIFY. Next, we need to add Netlify Identity to our React application through the netlify-identity-widget and the Netlify functions. But, first, let’s add the REACT_APP_NETLIFY property for the Identity Context Provider component in the index.js file.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import "react-netlify-identity-widget/styles.css"
import 'bootstrap/dist/css/bootstrap.css';
import App from './App';
import { IdentityContextProvider } from "react-netlify-identity-widget"

const url = process.env.REACT_APP_NETLIFY;

ReactDOM.render(
  <IdentityContextProvider url={url}>
    <App />
  </IdentityContextProvider>,
  document.getElementById('root')
);

The Navigation bar component we use in this application sits on top of all the other components, making it the ideal place to handle authentication. The react-netlify-identity-widget adds another component that handles user sign in and sign up.

Next, let’s use the Identity in our Netlify functions. Identity will introduce some minor modifications to our functions, like the below function GetAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    if (context.clientContext.user) {
        const {
            queryStringParameters: { id },
        } = event;
        try {
            const avenger = await clientQuery.query(
                q.Get(
                    q.Match(q.Index('avenger_by_id'), id)
                )
            );
            return responseObj(200, avenger)
        } catch (error) {
            console.log(error)
            return responseObj(500, error);
        }
    } else {
        return responseObj(401, 'Unauthorized');
    }
};

The context of each request will consist of a property called clientContext, which will consist of authenticated user details. In the above example, we use a simple if condition to check the user context. 

To get the clientContext in each of our requests, we need to pass the user token through the Authorization Headers. 

const { user } = useIdentityContext();

const GetAvenger = async (id) => {
  let { data } = await axios.get('/api/GetAvenger/?id=' + id, user && {
    headers: {
      Authorization: `Bearer ${user.token.access_token}`
    }
  });
  getAvenger(data)
}

This user token will be available in the user context once logged in to the application through the netlify identity widget.

As you can see, Netlify functions and Fauna look to be a promising duo for building serverless applications. You can follow this GitHub repo for the complete code and refer to this URL for the working demo.

Conclusion

In conclusion, Fauna and Netlify look to be a promising duo for building serverless applications. Netlify also provides the flexibility to extend its functionality through plugins that enhance the experience. The pay-as-you-go pricing plan is ideal for developers getting started with Fauna. Fauna is extremely fast and it auto-scales, so developers have more time than ever to focus on their development. Fauna can handle the complex database operations you would find in relational, document, graph, and temporal databases. Fauna drivers support all the major languages, such as Android, C#, Go, Java, JavaScript, Python, Ruby, Scala, and Swift. With all these excellent features, Fauna looks to be one of the best serverless databases. For more information, go through the Fauna documentation.


The post Accessing Your Data With Netlify Functions and React appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Serverless Functions as Proxies

The first time cloud functions / serverless functions clicked for me was when I saw and tried Auth0’s (now defunct) Webtask. It was a little CodePen-like IDE but you didn’t really see anything aside from code and logs. The point was to write little bits of Node that ran when you hit the function’s URL (that’s literally exactly what a serverless function is). It would even store your secrets for you, meaning that you could use the serverless function as a proxy. You hit the function, the function hits the API using your unexposed API Key secrets, the API returns data, the function then returns data, and that data is available to the client side for you to work with. The entire point was 1) you can snag data from an otherwise totally static website, and 2) your API keys are protected. Brilliant.
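
As a rough sketch, the whole pattern fits in a few lines. Everything here is illustrative rather than any particular service’s real code: the API URL, the SOME_API_KEY environment variable, and the response shape are stand-ins, and it assumes a Netlify/AWS-style handler on a Node runtime with a global fetch:

// A serverless function that proxies a third-party API without exposing the key
exports.handler = async (event) => {
  const query = encodeURIComponent(event.queryStringParameters.q || "");
  const response = await fetch(`https://api.example.com/data?q=${query}`, {
    headers: { Authorization: `Bearer ${process.env.SOME_API_KEY}` },
  });
  const data = await response.json();

  // The client only ever sees this response, never the API key
  return {
    statusCode: 200,
    body: JSON.stringify(data),
  };
};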

I still miss Webtask, but I’m sure there are better and fancier things these days. I don’t have a solid handle on the whole landscape. Even AWS has an online editor for lambdas (a “lambda” is AWS’s standards-setting implementation of what a serverless function is), but using the AWS console directly for anything isn’t usually… very good. Functions in AWS Amplify are probably a better bet there.

My guess is the proper modern way of building these things are things like…

But there are all sorts of other tools that seem pretty modern that I just can’t speak to as well, but seem good:

But what makes me think of all this, and is also in the category of things I don’t have any personal experience with, is Pipedream. I heard about it via Raymond, who has a similar story to mine:

One of the first things that intrigued me about serverless, and honestly it’s not really that novel, is the ability to build proxies to other APIs. So for example, imagine a cool API that requires authentication of some sort to use, like an API key. If you use this in client-side JavaScript, anyone can look at your code and get your key. Better services let you lock a key to a domain, but if you don’t have that option, then a simple use of serverless is to simply give you an endpoint that makes the call to the API with your key.

Raymond Camden, “Using Pipedream to Proxy Other APIs”

Pipedream looks pretty fancy:

Not only is it a web-based IDE for crafting functions, but I can trigger it a bunch of ways—a URL of course, but also on a CRON, or things like via email or RSS. Neat. Look at all the other options too. Slack? GitHub? Twitter? It’s kinda like how Zapier looks in that way, only where Zapier is entirely no-code (I think). Pipedream makes code a first-class citizen.

And it does secrets by way of account-level environment variables.

The Pipedream screen for environment variables. A dark gray sidebar is on the left with different menu options where Settings is the currently selected item in the vertical list. The right side of the screen contains the environment variable information with a paragraph defining them, an example, then a bright blue button to create a new environment variable.

So, it’s perfect for being a serverless proxy. Read Raymond’s post for actual implementation and code examples.


The post Serverless Functions as Proxies appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Building a Headless CMS with Fauna and Vercel Functions

This article introduces the concept of the headless CMS, a backend-only content management system that allows developers to create, store, manage, and publish content over an API, using Fauna and Vercel functions. This improves the frontend-backend workflow and enables developers to build excellent user experiences quickly.

In this tutorial, we will learn about and use a headless CMS, Fauna, and Vercel functions to build a blogging platform, Blogify🚀. After that, you can easily build any web application using a headless CMS, Fauna, and Vercel functions.

Introduction

According to MDN, a content management system (CMS) is computer software used to manage the creation and modification of digital content. A CMS typically has two major components: a content management application (CMA), the front-end user interface that allows a user, even one with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster; and a content delivery application (CDA), which compiles the content and updates the website.

The Pros And Cons Of Traditional vs Headless CMS

Choosing between these two can be quite confusing and complicated. But they both have potential advantages and drawbacks.

Traditional CMS Pros

  • Setting up your content on a traditional CMS is much easier, as everything you need (content management, design, etc.) is made available to you.
  • A lot of traditional CMSs have drag and drop, making it easy for a person with no programming experience to work with them. They also support easy customization with zero to little coding knowledge.

Traditional CMS Cons

  • The plugins and themes a traditional CMS relies on may contain malicious code or bugs and slow down the website or blog.
  • The traditional coupling of the front end and back end definitely demands more time and money for maintenance and customization.

Headless CMS Pros

  • There’s flexibility in the choice of frontend framework. Since the frontend and backend are separated from each other, you can pick whichever front-end technology suits your needs. It gives you the freedom to choose the tools needed to build the frontend, meaning flexibility during the development stage.
  • Deployment works more easily with a headless CMS. Applications (blogs, websites, etc.) built with a headless CMS can easily be deployed to work on various displays, such as web devices, mobile devices, and AR/VR devices.

Headless CMS Cons

  • You are left with the worries of managing your back-end infrastructure and setting up the UI components of your site or app.
  • Implementations of a headless CMS are known to be more costly than a traditional CMS, and building a headless CMS application that embodies analytics is not cost-effective.

Fauna uses preexisting infrastructure to build web applications without the usual step of setting up a custom API server. This efficiently saves time for developers, and the stress of choosing regions and configuring storage that exists with other databases is nonexistent with Fauna, which is global/multi-region by default. All the maintenance we need is actively taken care of by engineers and automated DevOps at Fauna. We will use Fauna as our backend-only content management system.

Pros Of Using Fauna

  • It is easy to create a Fauna database instance from within the development environment of hosting platforms like Netlify or Vercel.
  • Great support for querying data via GraphQL, or via Fauna’s own query language, Fauna Query Language (FQL), for complex functions.
  • Access data in multiple models, including relational, document, graph, and temporal.
  • Capabilities like built-in authentication, transparent scalability, and multi-tenancy are fully available on Fauna.
  • Add-ons through the Fauna Console as well as the Fauna Shell make it easy to manage a database instance.

Vercel Functions, also known as Serverless Functions, are, according to the docs, pieces of code written with backend languages that take an HTTP request and provide a response.
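
In their simplest Node form they look something like this (a minimal sketch; the file name and response body are illustrative, and on Vercel the file would live in the project’s api/ directory):

// api/hello.js
export default function handler(req, res) {
  // Vercel passes Node-style request and response objects to the function
  res.status(200).json({ message: "Hello from a Vercel function" });
}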

Prerequisites

To take full advantage of this tutorial, ensure the following tools are available or installed on your local development environment:

  • Access to Fauna dashboard
  • Basic knowledge of React and React Hooks
  • Have create-react-app installed as a global package or use npx to bootstrap the project.
  • Node.js version >= 12.x.x installed on your local machine.
  • Ensure that npm or yarn is also installed as a package manager

Database Setup With Fauna

Sign in to your Fauna account to get started, or register a new account using either email credentials or an existing GitHub account. You can register for a new account here. Once you have created a new account or signed in, you will be welcomed by the dashboard screen. We can also make use of the Fauna shell if you love the shell environment; it easily allows you to create and/or modify resources on Fauna through the terminal.

Using the Fauna shell, the commands are:

npm install --global fauna-shell
fauna cloud-login

But we will use the website throughout this tutorial. Once signed in, the dashboard screen welcomes you:

Now that we are logged in or have our account created, we can go ahead and create our Fauna database. We’ll go through the following simple steps to create the new database using Fauna services. We start by naming our database, which we’ll use as our content management system. In this tutorial, we will name our database blogify.

With the database created, the next step is to create a new data collection from the Fauna dashboard. Navigate to the Collections tab on the side menu and create a new collection by clicking on the NEW COLLECTION button.

We’ll then go ahead and give our collection a suitable name. Here we will call it blogify_posts.

The next step in getting our database ready is to create a new index. Navigate to the Indexes tab to create an index. Searching documents in Fauna can be done by using indexes, specifically by matching inputs against an index’s terms field. Click on the NEW INDEX button to create an index. Once on the create index screen, fill out the form: select the collection we created previously, then give a name to our index. In this tutorial, we will name ours all_posts. We can now save our index.

After creating an index, it’s time to create our DOCUMENT. This will contain the content/data we want to use for our CMS website. Click on the NEW DOCUMENT button to get started. In the text editor, we’ll create a data object that serves the needs of our website.
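
The document itself isn’t reproduced in this excerpt; a hypothetical post object along these lines would do (every field name and value here is purely illustrative):

{
  "id": "0001",
  "title": "Hello, Blogify🚀",
  "published_at": "2021-07-01",
  "author": "Blogify Team",
  "body": "A first post stored in Fauna and served through Vercel functions."
}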

The above post object represents the unit of data we need to create a blog post. Your choice of data can be quite different from what we have here, serving whatever purpose you want within your website. You can create as many documents as you need for your CMS website. To keep things simple, we just have three blog posts.

Now that our database setup is complete, we can move on to creating our React app, the frontend.

Create A New React App And Install Dependencies

For the frontend development, we will need dependencies such as the Fauna SDK, styled-components, and vercel in our React app. We will use styled-components for the UI styling and the vercel CLI from our terminal to host our application. The Fauna SDK will be used to access the contents of the database we set up. You can always swap styled-components for whatever library you prefer for your UI styling, and use any other UI framework or library you like.

npx create-react-app blogify

# install dependencies once the directory has been created
yarn add faunadb styled-components

# install vercel globally
yarn global add vercel

The faunadb package is the Fauna JavaScript driver for Fauna. The styled-components library allows you to write actual CSS code to style your components. Once done with all the installation of the project dependencies, check the package.json file to confirm that everything was installed successfully.

Now let’s start actually building our blog website UI. We’ll begin with the header section, creating a Navigation component within the components folder inside the src folder, src/components, to contain our blog name, Blogify🚀.

import styled from "styled-components"; function Navigation() {   return (     <Wrapper>       <h1>Blogify🚀</h1>     </Wrapper>   ); } const Wrapper = styled.div`   background-color: #23001e;   color: #f3e0ec;   padding: 1.5rem 5rem;   & > h1 {     margin: 0px;   } `; export default Navigation;

After being imported within the App component, the above code, coupled with the styling from the styled-components library, will turn out to look like the below UI:

Now it’s time to create the body of the website that will contain the post data from our database. We’ll structure a component called Posts, which will contain the blog posts created on the backend.

import styled from "styled-components"; function Posts() {   return (     <Wrapper>       <h3>My Recent Articles</h3>       <div className="container"></div>     </Wrapper>   ); } const Wrapper = styled.div`   margin-top: 3rem;   padding-left: 5rem;   color: #23001e;   & > .container {     display: flex;     flex-wrap: wrap;   }   & > .container > div {     width: 50%;     padding: 1rem;     border: 2px dotted #ca9ce1;     margin-bottom: 1rem;     border-radius: 0.2rem;   }   & > .container > div > h4 {     margin: 0px 0px 5px 0px;   }   & > .container > div > button {     padding: 0.4rem 0.5rem;     border: 1px solid #f2befc;     border-radius: 0.35rem;     background-color: #23001e;     color: #ffffff;     font-weight: medium;     margin-top: 1rem;     cursor: pointer;   }   & > .container > div > article {     margin-top: 1rem;   } `; export default Posts;

The above code contains the styles for the JSX that we’ll flesh out once we start querying for data from the backend.

Integrate Fauna SDK Into Our React App

To integrate the Fauna client with the React app, you have to make an initial connection from the app. Create a new file db.js at the directory path src/config/. Then import the Fauna driver and define a new client.
The secret passed as the argument to the fauna.Client() method is going to hold the access key from the .env file:

import fauna from 'faunadb'; const client = new fauna.Client({   secret: process.env.REACT_APP_DB_KEY, }); const q = fauna.query; export { client, q };

Inside the Posts component, create a state variable called posts using the useState React Hook, with a default value of an empty array. It is going to store the content we get back from our database via the setPosts function. Then define a second state variable, visible, with a default value of false, that we’ll use to hide or show more post content through the handleDisplay function, which will be triggered by a button we’ll add later in the tutorial.

function Posts() {
  const [posts, setPosts] = useState([]);
  const [visible, setVisibility] = useState(false);
  const handleDisplay = () => setVisibility(!visible);
  // ...
}

Creating A Serverless Function By Writing Queries

Since our blog website is going to perform only one operation, that is, getting the data/contents we created in the database, let’s create a new directory called src/api/ and, inside it, a new file called index.js. Making the request with ES6, we’ll use import to bring in the client and the query instance from the config/db.js file:

export const getAllPosts = client   .query(q.Paginate(q.Match(q.Ref('indexes/all_posts'))))     .then(response => {       const expenseRef = response.data;       const getAllDataQuery = expenseRef.map(ref => {         return q.Get(ref);       });      return client.query(getAllDataQuery).then(data => data);    })    .catch(error => console.error('Error: ', error.message));

The query above is going to return a ref that we can map over to get the actual results needed for the application. We make sure to append a catch that will check for errors while querying the database, so we can log them out.

Next is to display all the data returned from our CMS database, the Fauna collection. We’ll do so by invoking the getAllPosts query from the ./api/index.js file inside the useEffect Hook in our Posts component, so that when the Posts component renders for the first time, it fetches and iterates over the data, checking whether there are any posts in the database:

useEffect(() => {   getAllPosts.then((res) => {     setPosts(res);     console.log(res);   }); }, []);

Open the browser’s console to inspect the data returned from the database. If everything went right, and you’re following closely, the returned data should look like the below:
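Roughly speaking, it’s an array of Fauna documents, each carrying a ref (with a unique id), a ts timestamp, and the data object we created earlier; the values here are placeholders standing in for whatever you stored:

[
  {
    ref: Ref(Collection("blogify_posts"), "<document id>"),
    ts: <timestamp>,
    data: {
      post: {
        title: "...",
        date: "...",
        mainContent: "...",
        subContent: "..."
      }
    }
  },
  // ...one entry per document in the collection
]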

With this data successfully returned from the database, we can now complete our Posts component, adding all the necessary JSX elements that we’ve styled using the styled-components library. We’ll use JavaScript’s map to loop over the posts state array, but only when the array is not empty:

import { useEffect, useState } from "react"; import styled from "styled-components"; import { getAllPosts } from "../api";  function Posts() {   useEffect(() => {     getAllPosts.then((res) => {       setPosts(res);       console.log(res);     });   }, []);    const [posts, setPosts] = useState([]);   const [visible, setVisibility] = useState(false);   const handleDisplay = () => setVisibility(!visible);    return (     <Wrapper>       <h3>My Recent Articles</h3>       <div className="container">         {posts &&           posts.map((post) => (             <div key={post.ref.id} id={post.ref.id}>               <h4>{post.data.post.title}</h4>               <em>{post.data.post.date}</em>               <article>                 {post.data.post.mainContent}                 <p style={{ display: visible ? "block" : "none" }}>                   {post.data.post.subContent}                 </p>               </article>               <button onClick={handleDisplay}>                 {visible ? "Show less" : "Show more"}               </button>             </div>           ))}       </div>     </Wrapper>   ); }  const Wrapper = styled.div`   margin-top: 3rem;   padding-left: 5rem;   color: #23001e;   & > .container {     display: flex;     flex-wrap: wrap;   }   & > .container > div {     width: 50%;     padding: 1rem;     border: 2px dotted #ca9ce1;     margin-bottom: 1rem;     border-radius: 0.2rem;   }   & > .container > div > h4 {     margin: 0px 0px 5px 0px;   }   & > .container > div > button {     padding: 0.4rem 0.5rem;     border: 1px solid #f2befc;     border-radius: 0.35rem;     background-color: #23001e;     color: #ffffff;     font-weight: medium;     margin-top: 1rem;     cursor: pointer;   }   & > .container > div > article {     margin-top: 1rem;   } `;  export default Posts;

With the complete code structure above, our blog website, Blogify🚀, will look like the below UI:

Deploying To Vercel

The Vercel CLI provides a set of commands that allow you to deploy and manage your projects. The following steps will get your project hosted on the Vercel platform from your terminal, quickly and easily:

vercel login

Follow the instructions to log in to your Vercel account in the terminal.

vercel

Run the vercel command from the root of the project directory. It will prompt questions that we answer depending on what’s asked.

vercel
? Set up and deploy “~/Projects/JavaScript/React JS/blogify”? [Y/n]
? Which scope do you want to deploy to? ikehakinyemi
? Link to existing project? [y/N] n
? What’s your project’s name? (blogify)
  # press Enter if you don't want to change the name of the project
? In which directory is your code located? ./
  # press Enter if you're running this deployment from the root directory
? Want to override the settings? [y/N] n

This will deploy your project to Vercel. Visit your Vercel account to complete any other setup needed for CI/CD purposes.

Conclusion

I’m glad you followed the tutorial to this point; I hope you’ve learned how to use Fauna as a headless CMS. By combining Fauna with headless CMS concepts, you can build great web applications, from e-commerce applications to note-keeping applications, or any web application that needs data stored and retrieved for use on the frontend. Here’s the GitHub link to the code sample we used within our tutorial, and the live demo, which is hosted on Vercel.


The post Building a Headless CMS with Fauna and Vercel Functions appeared first on CSS-Tricks.


Serverless Functions: The Secret to Ultra-Productive Front-End Teams

Modern apps place high demands on front-end developers. Web apps require complex functionality, and the lion’s share of that work is falling to front-end devs:

  • building modern, accessible user interfaces
  • creating interactive elements and complex animations
  • managing complex application state
  • meta-programming: build scripts, transpilers, bundlers, linters, etc.
  • reading from REST, GraphQL, and other APIs
  • middle-tier programming: proxies, redirects, routing, middleware, auth, etc.

This list is daunting on its own, but it gets really rough if your tech stack doesn’t optimize for simplicity. A complex infrastructure introduces hidden responsibilities that create risk, slowdowns, and frustration.

Depending on the infrastructure we choose, we may also inadvertently add server configuration, release management, and other DevOps duties to a front-end developer’s plate.

Software architecture has a direct impact on team productivity. Choose tools that avoid hidden complexity to help your teams accomplish more and feel less overloaded.

The sneaky middle tier — where front-end tasks can balloon in complexity

Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API to combine data from a few services into a single request for the frontend. If you just yelled at your computer, “But that’s not a frontend task!” — I agree! But who am I to let facts hinder the backlog?

An API that’s only needed by the frontend falls into middle-tier programming. For example, if the front end combines the data from several backend services and derives a few additional fields, a common approach is to add a proxy API so the frontend isn’t making multiple API calls and doing a bunch of business logic on the client side.
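To make that concrete, here’s a rough sketch of the kind of proxy endpoint we’re talking about, written as an Express route and assuming a Node version with a global fetch; the upstream URLs and the derived orderCount field are made up purely for illustration:

// Hypothetical middle-tier proxy: combine two backend calls into one response
app.get('/api/users/:id/summary', async (req, res) => {
  const [user, orders] = await Promise.all([
    fetch(`https://users.internal.example.com/${req.params.id}`).then((r) => r.json()),
    fetch(`https://orders.internal.example.com/?user=${req.params.id}`).then((r) => r.json()),
  ]);

  // Derive the extra field here so the client doesn't have to
  res.json({ ...user, orderCount: orders.length });
});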

There’s no clear line as to which back-end team should own an API like this. Getting it onto another team’s backlog — and getting updates made in the future — can be a bureaucratic nightmare, so the front-end team ends up with the responsibility.

This is a story that ends differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:

  • Build an Express app on Node to create the REST API
  • Use serverless functions to create the REST API

Express + Node comes with a surprising amount of hidden complexity and overhead. Serverless lets front-end developers deploy and scale the API quickly so they can get back to their other front-end tasks.

Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)

Earlier in my career, the standard operating procedure was to use Node and Express to stand up a REST API. On the surface, this seems relatively straightforward. We can create the whole REST API in a file called server.js:

const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

app.use(express.static('site'));

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});

This code isn’t too far removed from front-end JavaScript. There’s a decent amount of boilerplate in here that will trip up a front-end dev if they’ve never seen it before, but it’s manageable.

If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details for the movie with the slug some-movie (assuming you’ve defined that in data.json).
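The shape of data.json is entirely up to you; something as small as this, saved next to server.js, is enough to test with (the movie details are placeholders):

[
  { "slug": "some-movie", "title": "Some Movie", "year": 2021 },
  { "slug": "another-movie", "title": "Another Movie", "year": 2019 }
]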

Deployment introduces a ton of extra overhead

Building the API is only the beginning, however. We need to get this API deployed in a way that can handle a decent amount of traffic without falling down. Suddenly, things get a lot more complicated.

We need several more tools:

  • somewhere to deploy this (e.g. DigitalOcean, Google Cloud Platform, AWS)
  • a container to keep local dev and production consistent (i.e. Docker)
  • a way to make sure the deployment stays live and can handle traffic spikes (i.e. Kubernetes)

At this point, we’re way outside front-end territory. I’ve done this kind of work before, but my solution was to copy-paste from a tutorial or Stack Overflow answer.

The Docker config is somewhat comprehensible, but I have no idea if it’s secure or optimized:

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]

Next, we need to figure out how to deploy the Docker container into Kubernetes. Why? I’m not really sure, but that’s what the back end teams at the company use, so we should follow best practices.

This requires more configuration (all copy-and-pasted). We entrust our fate to Google and come up with Docker’s instructions for deploying a container to Kubernetes.

Our initial task of “stand up a quick Node API” has ballooned into a suite of tasks that don’t line up with our core skill set. The first time I got handed a task like this, I lost several days getting things configured and waiting on feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.

Some companies have a DevOps team to check this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.

With this approach, things start out manageable with some Node code, but quickly spiral out into multiple layers of config spanning areas of expertise that are well beyond what we should expect a frontend developer to know.

Solution 2: Build the same REST API using serverless functions

If we choose serverless functions, the story can be dramatically different. Serverless is a great companion to Jamstack web apps, giving front-end developers the ability to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.

There are multiple frameworks and platforms that make deploying serverless functions painless. My preferred solution is to use Netlify since it enables automated continuous delivery of both the front end and serverless functions. For this example, we’ll use Netlify Functions to manage our serverless API.

Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means that we can focus only on the business logic and know that our middle tier service can handle huge amounts of traffic without falling down. We don’t need to deal with Docker containers or Kubernetes or even the boilerplate of a Node server — it Just Works™ so we can ship a solution and move on to our next task.

First, we can define our REST API in a serverless function at netlify/functions/movie-by-slug.js:

const movies = require('./data.json');  exports.handler = async (event) => {   const slug = event.path.replace('/api/movies/', '');   const movie = movies.find((m) => m.slug === slug);    return {     statusCode: 200,     body: JSON.stringify(movie),   }; };

To add the proper routing, we can create a netlify.toml at the root of the project:

[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200

This is significantly less configuration than we’d need for the Node/Express approach. What I prefer about this approach is that the config here is stripped down to only what we care about: the specific paths our API should handle. The rest — build commands, ports, and so on — is handled for us with good defaults.

If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.

Visiting http://localhost:8888/api/movies/booper will show a JSON object containing details about the “booper” movie.

So far, this doesn’t feel too different from the Node and Express setup. However, when we go to deploy, the difference is huge. Here’s what it takes to deploy this site to production:

  1. Commit the serverless function and netlify.toml to repo and push it up on GitHub, Bitbucket, or GitLab
  2. Use the Netlify CLI to create a new site connected to your git repo: ntl init

That’s it! The API is now deployed and capable of scaling on demand to millions of hits. Changes will be automatically deployed whenever they’re pushed to the main repo branch.

You can see this in action at https://serverless-rest-api.netlify.app and check out the source code on GitHub.

Serverless unlocks a huge amount of potential for front-end developers

Serverless functions are not a replacement for all back-ends, but they’re an extremely powerful option for handling middle-tier development. Serverless avoids the unintentional complexity that can cause organizational bottlenecks and severe efficiency problems.

Using serverless functions allows front-end developers to complete middle-tier programming tasks without taking on the additional boilerplate and DevOps overhead that creates risk and decreases productivity.

If our goal is to empower frontend teams to quickly and confidently ship software, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default Jamstack starter, I’ve been able to ship faster than ever, whether I’m working alone, with other front-end devs, or cross-functionally with teams across a company.


The post Serverless Functions: The Secret to Ultra-Productive Front-End Teams appeared first on CSS-Tricks.


Netlify Background Functions

As quickly as I can:

  • AWS Lambda is great: it allows you to run server-side code without really running a server. This is what “serverless” largely means.
  • Netlify Functions run on AWS Lambda and make them way easier to use. For example, you just chuck some scripts into a folder and they deploy when you push to your main branch. Plus you get logs.
  • Netlify Functions used to be limited to a 10-second execution time, even though Lambdas can run for 15 minutes.
  • Now, you can run 15-minute functions on Netlify also, by appending -background to the filename like my-function-background.js. (You can write in Go also.)
  • This means you can do long-ish running tasks, like spin up a headless browser and scrape some data, process images to build into a PDF and email it, sync data across systems with batch API requests… or anything else that takes a lot longer than 10 seconds to do (see the sketch just below this list).
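Here’s a rough idea of what one of these might look like, assuming a functions runtime with a global fetch; the endpoint being hit and the work being done are placeholders for whatever slow job you actually have:

// netlify/functions/sync-data-background.js
// The "-background" suffix is what tells Netlify to run this as a background function.
exports.handler = async (event) => {
  const items = JSON.parse(event.body || "[]");

  // Stand-in for a long-running job: call a hypothetical API once per item,
  // pausing between requests, which would blow well past a 10-second limit.
  for (const item of items) {
    await fetch(`https://api.example.com/sync/${item.id}`, { method: "POST" });
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }

  // Netlify answers the caller with a 202 right away, so this return value
  // is effectively ignored.
  return { statusCode: 200 };
};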

The post Netlify Background Functions appeared first on CSS-Tricks.


Create an FAQ Slack app with Netlify functions and FaunaDB

Sometimes, when you’re looking for a quick answer, it’s really useful to have an FAQ system in place, rather than waiting for someone to respond to a question. Wouldn’t it be great if Slack could just answer these FAQs for us? In this tutorial, we’re going to be making just that: a slash command for Slack that will answer user FAQs. We’ll be storing our answers in FaunaDB, using FQL to search the database, and utilising a Netlify function to provide a serverless endpoint to connect Slack and FaunaDB.

Prerequisites

This tutorial assumes you have the following requirements:

  • Github account, used to log in to Netlify and Fauna, as well as storing our code
  • Slack workspace with permission to create and install new apps
  • Node.js v12

Create npm package

To get started, create a new folder and initialise an npm package with your package manager of choice by running npm init -y from inside the folder. After the package has been created, we have a few npm packages to install.

Run this to install all the packages we will need for this tutorial:

npm install express body-parser faunadb encoding serverless-http netlify-lambda

These packages are explained below, but if you are already familiar with them, feel free to skip ahead.

Encoding has been installed due to a plugin error occurring in @netlify/plugin-functions-core at the time of writing and may not be needed when you follow this tutorial.

Packages

Express is a web application framework that will allow us to simplify writing multiple endpoints for our function. Netlify functions require handlers for each endpoint, but express combined with serverless-http will allow us to write the endpoints all in one place.

Body-parser is an express middleware which will take care of the application/x-www-form-urlencoded data Slack will be sending to our function.

Faunadb is an npm module that allows us to interact with the database through the FaunaDB JavaScript driver. It allows us to pass queries from our function to the database, in order to get the answers.

Serverless-http is a module that wraps Express applications to the format expected by Netlify functions, meaning we won’t have to rewrite our code when we shift from local development to Netlify.

Netlify-lambda is a tool which will allow us to build and serve our functions locally, in the same way they will be built and deployed on Netlify. This means we can develop locally before pushing our code to Netlify, increasing the speed of our workflow.

Create a function

With our npm packages installed, it’s time to begin work on the function. We’ll be using serverless to wrap an express app, which will allow us to deploy it to Netlify later. To get started, create a file called netlify.toml, and add the following into it:

[build]   functions = "functions"

We will use a .gitignore file, to prevent our node_modules and functions folders from being added to git later. Create a file called .gitignore, and add the following:

functions/

node_modules/

We will also need a folder called src, and a file inside it called server.js. Your final file structure should look like:
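If you’re following along, that structure is roughly the following (node_modules and the functions build output will appear once you install dependencies and run a build, and the project folder name is whatever you chose):

.
├── .gitignore
├── netlify.toml
├── package.json
├── node_modules/
└── src/
    └── server.js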

With this in place, create a basic express app by inserting the code below into server.js:

const express = require("express"); const bodyParser = require("body-parser"); const fauna = require("faunadb"); const serverless = require("serverless-http");   const app = express();   module.exports.handler = serverless(app);

Check out the final line; it looks a little different to a regular express app. Rather than listening on a port, we’re passing our app into serverless and using this as our handler, so that Netlify can invoke our function.

Let’s set up our body parser to use application/x-www-form-urlencoded data, as well as putting a router in place. Add the following to server.js after defining app: 

const router = express.Router(); app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use("/.netlify/functions/server", router);

Notice that the router is using /.netlify/functions/server as an endpoint. This is so that Netlify will be able to correctly deploy the function later in the tutorial. This means we will need to add this to any base URLs, in order to invoke the function.

Create a test route

With a basic app in place, let’s create a test route to check everything is working. Insert the following code to create a simple GET route, that returns a simple json object:

router.get("/test", (req, res) => {  res.json({ hello: "world" }); });

With this route in place, let’s spin up our function on localhost, and check that we get a response. We’ll be using netlify-lambda to serve our app, so that we can imitate a Netlify function locally on port 9000. In our package.json, add the following lines into the scripts section:

"start": "./node_modules/.bin/netlify-lambda serve src",    "build": "./node_modules/.bin/netlify-lambda build src"

With this in place, after saving the file, we can run npm start to begin netlify-lambda on port 9000.

The build command will be used when we deploy to Netlify later.

Once it is up and running, we can visit http://localhost:9000/.netlify/functions/server/test to check our function is working as expected.

The great thing about netlify-lambda is it will listen for changes to our code, and automatically recompile whenever we update something, so we can leave it running for the duration of this tutorial.

Start ngrok URL

Now that we have a test route working on our local machine, let’s make it available online. To do this, we’ll be using ngrok, an npm package that provides a public URL for our function. If you don’t have ngrok installed already, first run npm install -g ngrok to globally install it on your machine. Then run ngrok http 9000, which will automatically direct traffic to our function running on port 9000.

After starting ngrok, you should see a forwarding URL in the terminal, which we can visit to confirm our server is available online. Copy this base URL to your browser, and follow it with /.netlify/functions/server/test. You should see the same result as when we made our calls on localhost, which means we can now use this URL as an endpoint for Slack!

Each time you restart ngrok, it creates a new URL, so if you need to stop it at any point, you will need to update your URL endpoint in Slack.

Setting up Slack

Now that we have a function in place, it’s time to move to Slack to create the app and slash command. We will have to deploy this app to our workspace, as well as make a few updates to our code to connect our function. For a more in-depth set of instructions on how to create a new slash command, you can follow the official Slack documentation. For a streamlined set of instructions, follow along below:

Create a new Slack app

First off, let’s create our new Slack app for these FAQs. Visit https://api.slack.com/apps and select Create New App to begin. Give your app a name (I used Fauna FAQ), and select a development workspace for the app.

Create a slash command

After creating the app, we need to add a slash command to it, so that we can interact with the app. Select slash commands from the menu after the app has been created, then create a new command. Fill in the following form with the name of your command (I used /faq) as well as providing the URL from ngrok. Don’t forget to add /.netlify/functions/server/ to the end!

Install app to workspace

Once you have created your slash command, click on basic information in the sidebar on the left to return to the app’s main page. From here, select the dropdown “Install app to your workspace” and click the button to install it.

Once you have allowed access, the app will be installed, and you’ll be able to start using the slash command in your workspace.

Update the function

With our new app in place, we’ll need to create a new endpoint for Slack to send the requests to. For this, we’ll use the root endpoint for simplicity. The endpoint will need to be able to take a post request with application/x-www-form-urlencoded data, then return a 200 status response with a message. To do this, let’s create a new post route at the root by adding the following code to server.js:

router.post("/", async (req, res) => {   });

Now that we have our endpoint, we can also extract and view the text that has been sent by Slack by adding the following line before we set the status:

const text = req.body.text; console.log(`Input text: ${text}`);

For now, we’ll just pass this text into the response and send it back instantly, to ensure the slack app and function are communicating.

res.status(200); res.send(text);

Now, when you type /faq <somequestion> in a Slack channel, you should get back the same message from the Slack slash command.

Formatting the response

Rather than just sending back plaintext, we can make use of Slack’s Block Kit to use specialised UI elements to improve the look of our answers. If you want to create a more complex layout, Slack provides a Block Kit builder to visually design your layout.

For now, we’re going to keep things simple, and just provide a response where each answer is separated by a divider. Add the following function to your server.js file after the post route:

const format = (answers) => {  if (answers.length == 0) {    answers = ["No answers found"];  }    let formatted = {    blocks: [],  };    for (answer of answers) {    formatted["blocks"].push({      type: "divider",    });    formatted["blocks"].push({      type: "section",      text: {        type: "mrkdwn",        text: answer,      },    });  }    return formatted; };

With this in place, we now need to pass our answers into this function, to format the answers before returning them to Slack. Update the following in the root post route:

let answers = [text]; const formattedAnswers = format(answers);

Now when we enter the same command to the slash app, we should get back the same message, but this time in a formatted version!

Setting up Fauna

With our slack app in place, and a function to connect to it, we now need to start working on the database to store our answers. If you’ve never set up a database with FaunaDB before, there is some great documentation on how to quickly get started. A brief step-by-step overview for the database and collection is included below:

Create database

First, we’ll need to create a new database. After logging into the Fauna dashboard online, click New Database. Give your new database a name you’ll remember (I used “slack-faq”) and save the database.

Create collection

With this database in place, we now need a collection. Click the “New Collection” button that should appear on your dashboard, and give your collection a name (I used “faq”). The history days and TTL values can be left as their defaults, but you should ensure you don’t add a value to the TTL field, as we don’t want our documents to be removed automatically after a certain time.

Add question / answer documents

Now we have a database and collection in place, we can start adding some documents to it. Each document should follow the structure:

{    question: "a question string",    answer: "an answer string",    qTokens: [        "first token",        "second token",        "third token"    ] }

The qToken values should be key terms in the question, as we will use them for a tokenized search when we can’t match a question exactly. You can add as many qTokens as you like for each question. The more relevant the tokens are, the more accurate results will be. For example, if our question is “where are the bathrooms”, we should include the qTokens “bathroom”, “bathrooms”, “toilet”, “toilets” and any other terms you may think people will search for when trying to find information about a bathroom.

The questions I used to develop a proof of concept are as follows:

{   question: "where is the lobby",   answer: "On the third floor",   qTokens: ["lobby", "reception"], }, {   question: "when is payday",   answer: "On the first Monday of each month",   qTokens: ["payday", "pay", "paid"], }, {   question: "when is lunch",   answer: "Lunch break is *12 - 1pm*",   qTokens: ["lunch", "break", "eat"], }, {   question: "where are the bathrooms",   answer: "Next to the elevators on each floor",   qTokens: ["toilet", "bathroom", "toilets", "bathrooms"], }, {   question: "when are my breaks",   answer: "You can take a break whenever you want",   qTokens: ["break", "breaks"], }

Feel free to take this time to add as many documents as you like, and as many qTokens as you think each question needs, then we’ll move on to the next step.

Creating Indexes

With these questions in place, we will create two indexes to allow us to search the database. First, create an index called “answers_by_question”, selecting question as the term and answer as the value. This will allow us to search all answers by their associated question.

Then, create an index called “answers_by_qTokens”, selecting qTokens as the term and answer as the value. We will use this index to allow us to search through the qTokens of all items in the database.

Searching the database

To run a search in our database, we will do two things. First, we’ll run a search for an exact match to the question, so we can provide a single answer to the user. Second, if this search doesn’t find a result, we’ll do a search on the qTokens each answer has, returning any results that provide a match. We’ll use Fauna’s online shell to demonstrate and explain these queries, before using them in our function.

Exact Match

Before searching the tokens, we’ll test whether we can match the input question exactly, as this will allow for the best answer to what the user has asked. To search our questions, we will match against the “answers_by_question” index, then paginate our answers. Copy the following code into the online Fauna shell to see this in action:

q.Paginate(q.Match(q.Index("answers_by_question"), "where is the lobby"))

If you have a question matching the “where is the lobby” example above, you should see the expected answer of “On the third floor” as a result.

Searching the tokens

For cases where there is no exact match on the database, we will have to use our qTokens to find any relevant answers. For this, we will match against the “answers_by_qTokens” index we created and again paginate our answers. Copy the following into the online shell to see how this works:

q.Paginate(q.Match(q.Index("answers_by_qTokens"), "break"))

If you have any questions with the qToken “break” from the example questions, you should see all answers returned as a result.

Connect function to Fauna

We have our searches figured out, but currently we can only run them from the online shell. To use these in our function, there is some configuration required, as well as an update to our function’s code.

Function configuration

To connect to Fauna from our function, we will need to create a server key. From your database’s dashboard, select security in the left hand sidebar, and create a new key. Give your new key a name you will recognise, and ensure that the dropdown has Server selected, not Admin. Finally, once the key has been created, add the following code to server.js before the test route, replacing the <secretKey> value with the secret provided by Fauna.

const q = fauna.query; const client = new fauna.Client({  secret: "<secretKey>", });

It would be preferred to store this key in an environment variable in Netlify, rather than directly in the code, but that is beyond the scope of this tutorial. If you would like to use environment variables, this Netlify post explains how to do so.

Update function code

To include our new search queries in the function, copy the following code into server.js after the post route:

const searchText = async (text) => {  console.log("Beginning searchText");  const answer = await client.query(    q.Paginate(q.Match(q.Index("answers_by_question"), text))  );  console.log(`searchText response: ${answer.data}`);  return answer.data; };   const getTokenResponse = async (text) => {  console.log("Beginning getTokenResponse");  let answers = [];  const questionTokens = text.split(/[ ]+/);  console.log(`Tokens: ${questionTokens}`);  for (token of questionTokens) {    const tokenResponse = await client.query(      q.Paginate(q.Match(q.Index("answers_by_qTokens"), token))    );    answers = [...answers, ...tokenResponse.data];  }  console.log(`Token answers: ${answers}`);  return answers; };

These functions replicate the same functionality as the queries we previously ran in the online Fauna shell, but now we can utilise them from our function.
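With those helpers in place, one way to wire them into the root post route is to try the exact match first, fall back to the token search when nothing comes back, and then pass the result through format() before replying to Slack. A sketch of how that could look:

router.post("/", async (req, res) => {
  const text = req.body.text;
  console.log(`Input text: ${text}`);

  // Try an exact question match first, then fall back to the token search
  let answers = await searchText(text);
  if (!answers || answers.length === 0) {
    answers = await getTokenResponse(text);
  }

  const formattedAnswers = format(answers);

  res.status(200);
  res.send(formattedAnswers);
});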

Deploy to Netlify

Now the function is searching the database, the only thing left to do is put it on the cloud, rather than a local machine. To do this, we’ll be making use of a Netlify function deployed from a GitHub repository.

First things first, add a new repo on GitHub, and push your code to it. Once the code is there, go to Netlify and either sign up or log in using your GitHub profile. From the home page of Netlify, select “New site from git” to deploy a new site, using the repo you’ve just created on GitHub.

If you have never deployed a site in Netlify before, this post explains the process to deploy from git.

Ensure while you are creating the new site, that your build command is set to npm run build, to have Netlify build the function before deployment. The publish directory can be left blank, as we are only deploying a function, rather than any pages.

Netlify will now build and deploy your repo, generating a unique URL for the site deployment. We can use this base URL to access the test endpoint of our function from earlier, to ensure things are working.

The last thing to do is update the Slack endpoint to our new URL! Navigate to your app, then select ‘slash commands’ in the left sidebar. Click on the pencil icon to edit the slash command and paste in the new URL for the function. Finally, you can use your new slash command in any authorised Slack channels!

Conclusion

There you have it: an entirely serverless, functional Slack slash command. We have used FaunaDB to store our answers and connected to it through a Netlify function. Also, by using Express, we have the flexibility to add further endpoints to the function for adding new questions, or anything else you can think up to further extend this project! Hopefully now, instead of waiting around for someone to answer your questions, you can just use /faq and get the answer instantly!


Matthew Williams is a software engineer from Melbourne, Australia who believes the future of technology is serverless. If you’re interested in more from him, check out his Medium articles, or his GitHub repos.


The post Create an FAQ Slack app with Netlify functions and FaunaDB appeared first on CSS-Tricks.


Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces it can be daunting, and frustratingly hard to see how they fit seamlessly into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new, or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
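For reference, the handler.js that the template generates is a single exported async function along these lines (the exact message text varies between versions of the template):

'use strict';

module.exports.hello = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};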

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Below the configuration above, add cors: true, like this

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s the section! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({   statusCode: 200,   headers: {     "Access-Control-Allow-Origin": "*",     "Access-Control-Allow-Headers": "*",     "Access-Control-Allow-Methods": "*"   },   body: JSON.stringify(obj) });

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict'; const CorsResponse = obj => ({   statusCode: 200,   headers: {     "Access-Control-Allow-Origin": "*",     "Access-Control-Allow-Headers": "*",     "Access-Control-Allow-Methods": "*"   },   body: JSON.stringify(obj) }); 
 module.exports.hello = async event => {   return CorsResponse("HELLO, WORLD!"); };

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the gateway API that serverless created for us, then find the Invoke URL. It would look something like this.

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
    events:
      - http:
          path: upload
          method: post
          cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
 - Effect: "Allow"
   Action:
     - "s3:*"
   Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names of the sized resources, before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {   const s3 = new S3({});   const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
   return new Promise(res => {     s3.upload(params, function(err, data) {       if (err) {         return res(CorsResponse({ error: true, message: err }));       }       res(CorsResponse({          success: true,          url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`        }));     });   }); };

Lastly, we’ll plug in some code that reads the upload files, resizes them with Jimp (if needed) and uploads the result to S3. The final result is below.

'use strict'; const AWS = require("aws-sdk"); const { S3 } = AWS; const path = require("path"); const Jimp = require("jimp"); const uuid = require("uuid/v4"); const awsMultiPartParser = require("lambda-multipart-parser"); 
 const CorsResponse = obj => ({   statusCode: 200,   headers: {     "Access-Control-Allow-Origin": "*",     "Access-Control-Allow-Headers": "*",     "Access-Control-Allow-Methods": "*"   },   body: JSON.stringify(obj) }); 
 const uploadToS3 = (fileName, body) => {   const s3 = new S3({});   var params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };   return new Promise(res => {     s3.upload(params, function(err, data) {       if (err) {         return res(CorsResponse({ error: true, message: err }));       }       res(CorsResponse({          success: true,          url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`        }));     });   }); }; 
 module.exports.upload = async event => {   const formPayload = await awsMultiPartParser.parse(event);   const MAX_WIDTH = 50;   return new Promise(res => {     Jimp.read(formPayload.files[0].content, function(err, image) {       if (err || !image) {         return res(CorsResponse({ error: true, message: err }));       }       const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;       if (image.bitmap.width > MAX_WIDTH) {         image.resize(MAX_WIDTH, Jimp.AUTO);         image.getBuffer(image.getMIME(), (err, body) => {           if (err) {             return res(CorsResponse({ error: true, message: err }));           }           return res(uploadToS3(newName, body));         });       } else {         image.getBuffer(image.getMIME(), (err, body) => {           if (err) {             return res(CorsResponse({ error: true, message: err }));           }           return res(uploadToS3(newName, body));         });       }     });   }); };

I’m sorry to dump so much code on you but — this being a post about Amazon Lambda and serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone   onDrop={files => onDrop(files)}   multiple={false} >   <div>Click or drag to upload a new cover</div> </Dropzone>

The onDrop function looks like this:

const onDrop = files => {   let request = new FormData();   request.append("fileUploaded", files[0]); 
   fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {     method: "POST",     mode: "cors",     body: request     })   .then(resp => resp.json())   .then(res => {     if (res.error) {       // handle errors     } else {       // success - woo hoo - update state as needed     }   }); };

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire services folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time, data usage, cold start performance, etc. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path"); module.exports = {   entry: "./handler.js",   output: {     libraryTarget: 'commonjs2',     path: path.join(__dirname, '.webpack'),     filename: 'handler.js',   },   target: "node",   mode: "production",   externals: ["aws-sdk"],   resolve: {     mainFields: ["main"]   } };

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] is to tell webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
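On the Lambda side, extra FormData fields show up alongside the files on the parsed payload, so reading them is just a property access:

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);

  // Text fields appended to the FormData arrive as plain properties
  console.log(formPayload.xyz); // "Hi there"

  // ...the resize-and-upload logic from earlier continues here
};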

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.

The post Building Your First Serverless Service With AWS Lambda Functions appeared first on CSS-Tricks.
