Tag: Serverless

Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces, it can be daunting and frustratingly hard to see how they fit into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and, most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post piques your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from the terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.
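For reference, the generated handler looks roughly like this (the exact boilerplate message varies between Serverless versions):

'use strict';

module.exports.hello = async event => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'Go Serverless v1.0! Your function executed successfully!',
        input: event,
      },
      null,
      2
    ),
  };
};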

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With Serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Below the configuration above, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s it! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the API Gateway that Serverless created for us, then find the Invoke URL. It would look something like this:

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
  events:
    - http:
        path: upload
        method: post
        cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names for the resized images before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the uploaded files, resizes them with Jimp (if needed), and uploads the result to S3. The final result is below.

'use strict';
const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      // Only resize when the image is wider than our maximum width.
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
      }
      image.getBuffer(image.getMIME(), (err, body) => {
        if (err) {
          return res(CorsResponse({ error: true, message: err }));
        }
        return res(uploadToS3(newName, body));
      });
    });
  });
};

I’m sorry to dump so much code on you but — this being a post about AWS Lambda and Serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
    .then(resp => resp.json())
    .then(res => {
      if (res.error) {
        // handle errors
      } else {
        // success - woo hoo - update state as needed
      }
    });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire service folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time and data usage, and improve cold start performance. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path"); module.exports = {   entry: "./handler.js",   output: {     libraryTarget: 'commonjs2',     path: path.join(__dirname, '.webpack'),     filename: 'handler.js',   },   target: "node",   mode: "production",   externals: ["aws-sdk"],   resolve: {     mainFields: ["main"]   } };

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it leaves the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, mainFields: ["main"] tells webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d append whatever you want to the FormData request object from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token or other file info.
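For instance, here’s a minimal sketch of reading that extra field inside the Lambda (xyz is just the illustrative field name we appended above):

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  // Fields appended to the FormData (other than files) show up as plain properties.
  const token = formPayload.xyz; // "Hi there"
  // ... continue with the file handling shown earlier
};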

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the Serverless team, as well as the Serverless Handbook by Swizec Teller.


Rethinking Twitter as a Serverless App

In a previous article, we showed how to build a GraphQL API with FaunaDB. We’ve also written a series of articles [1, 2, 3, 4] explaining how traditional databases built for global scalability have to adopt eventual (vs. strong) consistency, and/or make compromises on relations and indexing possibilities. FaunaDB is different since it does not make these compromises. It’s built to scale so it can safely serve your future startup no matter how big it gets, without sacrificing relations and consistent data.

In this article, we’re very excited to start bringing all of this together in a real-world app with highly dynamic data in a serverless fashion using React hooks, FaunaDB, and Cloudinary. We will use the Fauna Query Language (FQL) instead of GraphQL and start with a frontend-only approach that directly accesses the serverless database FaunaDB for data storage, authentication, and authorization.

The gold standard for example applications that feature a specific technology is a todo app, mainly because todo apps are simple. Any database out there can serve a very simple application and shine.

And that is exactly why this app will be different! If we truly want to show how FaunaDB excels for real world applications, then we need to build something more advanced. 

Introducing Fwitter

When we started at Twitter, databases were bad. When we left, they were still bad

Evan Weaver

Since FaunaDB was developed by ex-Twitter engineers who experienced these limitations first-hand, a Twitter-like application felt like an appropriately sentimental choice. And, since we are building it with FaunaDB, let’s call this serverless baby ‘Fwitter’.

Below is a short video that shows how it looks, and the full source code is available on GitHub.

When you clone the repo and start digging around, you might notice a plethora of well-commented example queries not covered in this article. That’s because we’ll be using Fwitter as our go-to example application in future articles, and building additional features into it with time. 

But, for now, here’s a basic rundown of what we’ll cover here:

  • Modeling the data
  • Setting up the project
  • Creating the front end
  • The FaunaDB JavaScript driver
  • Creating and retrieving data with FQL
  • Securing data with UDFs and ABAC roles
  • Implementing authentication
  • Managing the session in React
  • Adding Cloudinary for media

We’ll build these features without having to configure operations or set up servers for our database. Since both Cloudinary and FaunaDB are scalable and distributed out-of-the-box, we will never have to worry about setting up servers in multiple regions to achieve low latencies for users in other countries.

Let’s dive in!

Modeling the data 

Before we can show how FaunaDB excels at relations, we need to cover the types of relations in our application’s data model. FaunaDB’s data entities are stored in documents, which are then stored in collections–like rows in tables. For example, each user’s details will be represented by a User document stored in a Users collection. And we eventually plan to support both single sign-on and password-based login methods for a single user, each of which will be represented as an Account document in an Accounts collection.

At this point, one user has one account, so it doesn’t matter which entity stores the reference (i.e., the user ID). We could have stored the user ID in either the Account or the User document in a one-to-one relation:

One-to-one

However, since one User will eventually have multiple Accounts (or authentication methods), we’ll have a one-to-many model.

One-to-many

In a one-to-many relation between Users and Accounts, each Account points to only one user, so it makes sense to store the User reference on the Account:

We also have many-to-many relations, like the relations between Fweets and Users, because of the complex ways users interact with each other via likes, comments, and refweets. 

Many-to-many

Further, we will use a third collection, Fweetstats, to store information about the interaction between a User and a Fweet.

Fweetstats’ data will help us determine, for example, whether or not to color the icons indicating to the user that they have already liked, commented on, or refweeted a Fweet. It also helps us determine what clicking on the heart means: like or unlike.

The final model for the application will look like this: 

The application model of the fwitter application

Fweets are the center of the model because they contain the most important data: the message itself, the number of likes, refweets, and comments, and the Cloudinary media that was attached. FaunaDB stores this data in a JSON-like document that looks roughly like this:
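// An illustrative Fweet document (the values are made up and the refs are shortened):
{
  "message": "Hello world, my first fweet!",
  "author": Ref(Collection("users"), "264413545798697489"),
  "hashtags": [
    Ref(Collection("hashtags"), "264413617839702545")
  ],
  "likes": 3,
  "refweets": 1,
  "comments": 2,
  "asset": {
    "id": "sample_public_id",
    "type": "image"
  },
  "created": Time("2020-03-20T09:00:00Z")
}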

As shown in the model and in this example document, hashtags are stored as a list of references. If we wanted to, we could have stored the complete hashtag JSON in here, and that is the preferred solution in more limited document-based databases that lack relations. However, that would mean that our hashtags would be duplicated everywhere (as they are in more limited databases) and it would be more difficult to search for hashtags and/or retrieve Fweets for a specific hashtag, as shown below.

Note that a Fweet does not contain a link to Comments, but the Comments collection contains a reference to the Fweet. That’s because one Comment belongs to one Fweet, but a Fweet can have many comments–similar to the one-to-many relation between Users and Accounts.

Finally, there is a FollowerStats collection which basically saves information about how much users interact with each other in order to personalize their respective feeds. We won’t cover that much in this article, but you can experiment with the queries in the source code and stay tuned for a future article on advanced indexing.

Hopefully, you’re starting to see why we chose something more complex than a ToDo app. Although Fwitter  is nowhere near the complexity of the real Twitter app on which it’s based, it’s already becoming apparent that implementing such an application without relations would be a serious brainbreaker. 

Now, if you haven’t already done so from the github repo, it’s finally time to get our project running locally!

Set up the project

To set up the project, go to the FaunaDB dashboard and sign up. Once you are in the dashboard, click on New Database, fill in a name, and click Save. You should now be on the “Overview” page of your new database. 

Next, we need a key that we will use in our setup scripts. Click on the Security tab in the left sidebar, then click the New key button. 

In the “New key” form, the current database should already be selected. For “Role”, leave it as “Admin”. Optionally, add a key name. Next, click Save and copy the key secret displayed on the next page. It will not be displayed again.

Now that you have your database secret, clone the git repository and follow the readme. We have prepared a few scripts so that you only have to run the following commands to initialize your app, create all collections, and populate your database. The scripts will give you further instructions:

// install node modules
npm install

// run setup, this will create all the resources in your database
// provide the admin key when the script asks for it.
// !!! the setup script will give you another key, this is a key
// with almost no permissions that you need to place in your .env.local
// as the script suggests
npm run setup
npm run populate

// start the frontend

After the script, your .env.local file should contain the bootstrap key that the script provided you (not the admin key):

REACT_APP_LOCAL___BOOTSTRAP_FAUNADB_KEY=<bootstrap key>

You can optionally create an account with Cloudinary and add your cloudname and a public template (there is a default template called ‘ml_default’ which you can make public) to the environment to include images and videos in the fweets. 

REACT_APP_LOCAL___CLOUDINARY_CLOUDNAME=<cloudinary cloudname>
REACT_APP_LOCAL___CLOUDINARY_TEMPLATE=<cloudinary template>

Without these variables, the include media button will not work, but the rest of the app should run fine.

Creating the front end

For the frontend, we used Create React App to generate an application, then divided the application into pages and components. Pages are top-level components which have their own URLs. The Login and Register pages speak for themselves. Home is the standard feed of Fweets from the authors we follow; this is the page that we see when we log into our account. And the User and Tag pages show the Fweets for a specific user or tag in reverse chronological order. 

We use React Router to direct to these pages depending on the URL, as you can see in the src/app.js file.

<Router>
  <SessionProvider value={{ state, dispatch }}>
    <Layout>
      <Switch>
        <Route exact path="/accounts/login">
          <Login />
        </Route>
        <Route exact path="/accounts/register">
          <Register />
        </Route>
        <Route path="/users/:authorHandle" component={User} />
        <Route path="/tags/:tag" component={Tag} />
        <Route path="/">
          <Home />
        </Route>
      </Switch>
    </Layout>
  </SessionProvider>
</Router>

The only other thing to note in the above snippet is the SessionProvider, which is a React context to store the user’s information upon login. We’ll revisit this in the authentication section. For now, it’s enough to know that this gives us access to the Account (and thus User) information from each component.

Take a quick look at the home page (src/pages/home.js) to see how we use a combination of hooks to manage our data. The bulk of our application’s logic is implemented in FaunaDB queries which live in the src/fauna/queries folder. All calls to the database pass through the query-manager, which in a future article, we’ll refactor into serverless function calls. But for now these calls originate from the frontend and we’ll secure the sensitive parts of it with FaunaDB’s ABAC security rules and User Defined Functions (UDF). Since FaunaDB behaves as a token-secured API, we do not have to worry about a limit on the amount of connections as we would in traditional databases. 

The FaunaDB JavaScript driver

Next, take a look at the src/fauna/query-manager.js file to see how we connect FaunaDB to our application using FaunaDB’s JavaScript driver, which is just a node module we pulled with `npm install`. As with any node module, we import it into our application as so:

import faunadb from 'faunadb'

And create a client by providing a token. 

this.client = new faunadb.Client({
  secret: token || this.bootstrapToken
})

We’ll cover tokens a little more in the Authentication section. For now, let’s create some data! 

Creating data

The logic to create a new Fweet document can be found in the src/fauna/queries/fweets.js file. FaunaDB documents are just like JSON, and each Fweet follows the same basic structure: 

const data = {
  data: {
    message: message,
    likes: 0,
    refweets: 0,
    comments: 0,
    created: Now()
  }
}

The Now() function is used to insert the time of the query so that the Fweets in a user’s feed can be sorted chronologically. Note that FaunaDB automatically places timestamps on every database entity for temporal querying. However, the FaunaDB timestamp represents the time the document was last updated, not the time it was created, and the document gets updated every time a Fweet is liked; for our intended sorting order, we need the created time. 

Next, we send this data to FaunaDB with the Create() function. By providing Create() with the reference to the Fweets collection using Collection(‘fweets’), we specify where the data needs to go. 

const query = Create(Collection('fweets'), data )

We can now wrap this query in a function that takes a message parameter and executes it using client.query() which will send the query to the database. Only when we call client.query() will the query be sent to the database and executed. Before that, we combine as many FQL functions as we want to construct our query. 

function createFweet(message, hashtags) {
  const data = …
  const query = …
  return client.query(query)
}

Note that we have used plain old JavaScript variables to compose this query and in essence just called functions. Writing FQL is all about function composition; you construct queries by combining small functions into larger expressions. This functional approach has very strong advantages. It allows us to use native language features such as JavaScript variables to compose queries, while also writing higher-order FQL functions that are protected from injection.

For example, in the query below, we add hashtags to the document with a CreateHashtags() function that we’ve defined elsewhere using FQL.

const data = {
  data: {
    // ...
    hashtags: CreateHashtags(tags),
    likes: 0,
    // ...
  }
}

The way FQL works from within the driver’s host language (in this case, JavaScript) is what makes FQL an eDSL (embedded domain-specific language). Functions like CreateHashtags() behave just like a native FQL function in that they are both just functions that take input. This means that we can easily extend the language with our own functions, like in this open source FQL library from the Fauna community. 
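To make that concrete, here’s a minimal sketch of how such a helper could be composed, assuming a hashtags collection; it’s illustrative only, not the repo’s actual CreateHashtags() implementation:

const { Map, Lambda, Var, Select, Create, Collection } = require('faunadb').query

// Turns an array of tag strings into an array of references to newly
// created hashtag documents. Because it returns plain FQL, it composes
// into a larger query exactly like a native function would.
const CreateHashtags = tags =>
  Map(
    tags,
    Lambda(
      'tag',
      Select(
        ['ref'],
        Create(Collection('hashtags'), { data: { name: Var('tag') } })
      )
    )
  )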

It’s also important to notice that we create two entities in two different collections, in one transaction. Thus, if/when things go wrong, there is no risk that the Fweet is created yet the Hashtags are not. In more technical terms, FaunaDB is transactional and consistent whether you run queries over multiple collections or not, a property that is rare in scalable distributed databases. 

Next, we need to add the author to the query. First, we can use the Identity() FQL function to return a reference to the currently logged in document. As discussed previously in the data modeling section, that document is of the type Account and is separated from Users to support SSO in a later phase.

Then, we need to wrap Identity() in a Get() to access the full Account document and not just the reference to it.

Get(Identity()) 

Finally, we wrap all of that in a Select() to select the data.user field from the account document and add it to the data JSON. 

const data = {
  data: {
    // ...
    hashtags: CreateHashtags(tags),
    author: Select(['data', 'user'], Get(Identity())),
    likes: 0,
    // ...
  }
}

Now that we’ve constructed the query, let’s pull it all together and call client.query(query) to execute it.

function createFweet(message, hashtags) {
  const data = {
    data: {
      message: message,
      likes: 0,
      refweets: 0,
      comments: 0,
      author: Select(['data', 'user'], Get(Identity())),
      hashtags: CreateHashtags(tags),
      created: Now()
    }
  }

  const query = Create(Collection('fweets'), data)
  return client.query(query)
}

By using functional composition, you can easily combine all your advanced logic in one query that will be executed in one transaction. Check out the file src/fauna/queries/fweets.js to see the final result which takes even more advantage of function composition to add rate-limiting, etc. 

Securing your data with UDFs and ABAC roles

The attentive reader will have some thoughts about security by now. We are essentially creating queries in JavaScript and calling these queries from the frontend. What stops a malicious user from altering these queries? 

FaunaDB provides two features that allow us to secure our data: Attribute-Based Access Control (ABAC) and User Defined Functions (UDF). With ABAC, we can control which collections or entities a specific key or token can access by writing Roles.

With UDFs, we can push FQL statements to the database using CreateFunction().

CreateFunction({
  name: 'create_fweet',
  body: <your FQL statement>,
})

Once the function is in the database as a UDF, where the application can’t alter it anymore, we then call this UDF from the front end.

client.query(
  Call(Function('create_fweet'), message, hashTags)
)

Since the query is now saved on the database (just like a stored procedure), the user can no longer manipulate it. 

One example of how UDFs can be used to secure a call is that we do not pass in the author of the Fweet. The author of the Fweet is derived from the Identity() function instead, which makes it impossible for a user to write a Fweet on someone else’s behalf.

Of course, we still have to define that the user has access to call the UDF. For that, we will use a very simple ABAC role that defines a group of role members and their privileges. This role will be named logged_in_role, its membership will include all of the documents in the Accounts collection, and all of these members will be granted the privilege of calling the create_fweet UDF.

CreateRole({
  name: 'logged_in_role',
  privileges: [
    {
      resource: q.Function('create_fweet'),
      actions: {
        call: true
      }
    }
  ],
  membership: [{ resource: Collection('accounts') }],
})

We now know that these privileges are granted to an account, but how do we ‘become’ an Account? By using the FaunaDB Login() function to authenticate our users as explained in the next section.

How to implement authentication in FaunaDB

We just showed a role that gives Accounts the permission to call the create_fweet function. But how do we “become” an Account?

First, we create a new Account document, storing credentials alongside any other data associated with the Account (in this case, the email address and the reference to the User).

return Create(Collection('accounts'), {
  credentials: { password: password },
  data: {
    email: email,
    user: Select(['ref'], Var('user'))
  }
})

We can then call Login() on the Account reference, which retrieves a token.

Login(
  Match( < Account reference > ),
  { password: password }
)

We use this token in the client to impersonate the Account. Since all Accounts are members of the Accounts collection, this token fulfills the membership requirement of the logged_in_role and is granted access to call the create_fweet UDF.

To bootstrap this whole process, we have two very important roles.

  • bootstrap_role: can only call the login and register UDFs
  • logged_in_role: can call other functions such as create_fweet

The token you received when you ran the setup script is essentially a key created with the bootstrap_role. A client is created with that token in src/fauna/query-manager.js which will only be able to register or login. Once we log in, we use the new token returned from Login() to create a new FaunaDB client which now grants access to other UDF functions such as create_fweet. Logging out means we just revert to the bootstrap token. You can see this process in the src/fauna/query-manager.js, along with more complex role examples in the src/fauna/setup/roles.js file. 
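As a rough sketch of that token swap (illustrative only; the index name, variable names, and error handling are assumptions, not the repo’s exact code):

// Assumes Login, Match, and Index are imported from faunadb.query.
const loginResult = await this.client.query(
  Login(Match(Index('accounts_by_email'), email), { password: password })
)

// Swap the bootstrap client for one that uses the session token.
this.client = new faunadb.Client({ secret: loginResult.secret })

// Logging out simply recreates the client with this.bootstrapToken.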

How to implement the session in React

Previously, in the “Creating the front end” section, we mentioned the SessionProvider component. In React, providers belong to a React Context which is a concept to facilitate data sharing between different components. This is ideal for data such as user information that you need everywhere in your application. By inserting the SessionProvider in the HTML early on, we made sure that each component would have access to it. Now, the only thing a component has to do to access the user details is import the context and use React’s ‘useContext’ hook.

import SessionContext from '../context/session'
import React, { useContext } from 'react'

// In your component
const sessionContext = useContext(SessionContext)
const { user } = sessionContext.state

But how does the user end up in the context? When we included the SessionProvider, we passed in a value consisting of the current state and a dispatch function. 

const [state, dispatch] = React.useReducer(sessionReducer, { user: null })

// ...

<SessionProvider value={{ state, dispatch }}>

The state is simply the current state, and the dispatch function is called to modify the context. This dispatch function is actually the core of the context since creating a context only involves calling React.createContext() which will give you access to a Provider and a Consumer.

const SessionContext = React.createContext({})
export const SessionProvider = SessionContext.Provider
export const SessionConsumer = SessionContext.Consumer
export default SessionContext

We can see that the state and dispatch are extracted from something that React calls a reducer (using React.useReducer), so let’s write a reducer.

export const sessionReducer = (state, action) => {
  switch (action.type) {
    case 'login': {
      return { user: action.data.user }
    }
    case 'register': {
      return { user: action.data.user }
    }
    case 'logout': {
      return { user: null }
    }
    default: {
      throw new Error(`Unhandled action type: ${action.type}`)
    }
  }
}

This is the logic that allows you to change the context. In essence, it receives an action and decides how to modify the context given that action. In my case, the action is simply an object with a type string. We use this context to keep user information, which means that we call it on a successful login with:

sessionContext.dispatch({ type: 'login', data: e })

Adding Cloudinary for media 

When we created a Fweet, we did not take into account assets yet. FaunaDB is meant to store application data, not image blobs or video data. However, we can easily store the media on Cloudinary and just keep a link in FaunaDB. The following inserts the Cloudinary script (in app.js):

loadScript('https://widget.cloudinary.com/v2.0/global/all.js')

We then create a Cloudinary Upload Widget (in src/components/uploader.js):

window.cloudinary.createUploadWidget(
  {
    cloudName: process.env.REACT_APP_LOCAL___CLOUDINARY_CLOUDNAME,
    uploadPreset: process.env.REACT_APP_LOCAL___CLOUDINARY_TEMPLATE,
  },
  (error, result) => {
    // ...
  }
)

As mentioned earlier, you need to provide a Cloudinary cloud name and template in the environment variables (.env.local file) to use this feature. Creating a Cloudinary account is free and once you have an account you can grab the cloud name from the dashboard.

You have the option to use API keys as well to secure uploads. In this case, we upload straight from the front end, so the upload uses a public template. To add a template or modify it to make it public, click on the gear icon in the top menu, go to the Upload tab, and click Add upload preset.

You could also edit the ml_default template and just make it public.

Now, we just call widget.open() when our media button is clicked.

const handleUploadClick = () => {
  widget.open()
}

return (
  <div>
    <FontAwesomeIcon icon={faImage} onClick={handleUploadClick}></FontAwesomeIcon>
  </div>
)

This provides us with a small media button that will open the Cloudinary Upload Widget when it’s clicked. 

When we create the widget, we can also provide styles and fonts to give it the look and feel of our own application as we did above (in src/components/uploader.js): 

const widget = window.cloudinary.createUploadWidget(
  {
    cloudName: process.env.REACT_APP_LOCAL___CLOUDINARY_CLOUDNAME,
    uploadPreset: process.env.REACT_APP_LOCAL___CLOUDINARY_TEMPLATE,
    styles: {
      palette: {
        window: '#E5E8EB',
        windowBorder: '#4A4A4A',
        tabIcon: '#000000',
        // ...
      },
      fonts: {

Once we have uploaded media to Cloudinary, we receive a bunch of information about the uploaded media, which we then add to the data when we create a Fweet.
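For reference, a rough sketch of that widget callback (this assumes the upload widget’s standard result shape, where a successful upload fires a 'success' event with details under info; saveAsset is a hypothetical helper, not part of the repo):

(error, result) => {
  if (!error && result && result.event === 'success') {
    // Keep only what we need to render the media later.
    saveAsset({
      id: result.info.public_id,
      type: result.info.resource_type
    })
  }
}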

We can then simply use the stored id (which Cloudinary refers to as the publicId) with the Cloudinary React library (in src/components/asset.js):

import { Image, Video, Transformation } from 'cloudinary-react'

…to show the image in our feed:

<div className="fweet-asset">   <Image publicId={asset.id}      cloudName={cloudName} fetchFormat="auto" quality="auto" secure="true" /> </div>

When you use the id instead of the direct URL, Cloudinary does a whole range of optimizations to deliver the media in the most optimal format possible. For example, when you add a video as follows:

<div className="fweet-asset">   <Video playsInline autoPlay loop={true} controls={true} cloudName={cloudName} publicId={publicId}>     <Transformation width="600" fetchFormat="auto" crop="scale" />   </Video> </div>

Cloudinary will automatically scale down the video to a width of 600 pixels and deliver it as a WebM (VP9) to Chrome browsers (482 KB), an MP4 (HEVC) to Safari browsers (520 KB), or an MP4 (H.264) to browsers that support neither format (821 KB). Cloudinary does these optimizations server-side, significantly improving page load time and the overall user experience. 

Retrieving data

We have shown how to add data. Now we still need to retrieve data. Getting the data of our Fwitter feed has many challenges. We need to: 

  • Get fweets from people you follow in a specific order (taking time and popularity into account)
  • Get the author of the fweet to show their profile image and handle
  • Get the statistics to show how many likes, refweets and comments it has
  • Get the comments to list those beneath the fweet. 
  • Get info about whether you already liked, refweeted, or commented on this specific fweet. 
  • If it’s a refweet, get the original fweet. 

This kind of query fetches data from many different collections and requires advanced indexing/sorting, but let’s start off simple. How do we get the Fweets? We start off by getting a reference to the Fweets collection using the Collection() function.

Collection('fweets')

And we wrap that in the Documents() function to get all of the collection’s document references.

Documents(Collection('fweets'))

We then Paginate over these references.

Paginate(Documents(Collection('fweets')))

Paginate() requires some explanation. Before calling Paginate(), we had a query that returned a hypothetical set of data. Paginate() actually materializes that data into pages of entities that we can read. FaunaDB requires that we use this Paginate() function to protect us from writing inefficient queries that retrieve every document from a collection, because in a database built for massive scale, that collection could contain millions of documents. Without the safeguard of Paginate(), that could get very expensive!

Let’s save this partial query in a plain JavaScript variable, references, that we can continue to build on.

const references = Paginate(Documents(Collection('fweets')))

So far, our query only returns a list of references to our Fweets. To get the actual documents, we do exactly what we would do in JavaScript: map over the list with an anonymous function. In FQL, a Lambda is just an anonymous function.

const fweets = Map(
  references,
  Lambda(['ref'], Get(Var('ref')))
)

This might seem verbose if you’re used to declarative query languages like SQL that declare what you want and let the database figure out how to get it. In contrast, FQL declares both what you want and how you want it which makes it more procedural. Since you’re the one defining how you want your data, and not the query engine, the price and performance impact of your query is predictable. You can exactly determine how many reads this query costs without executing it, which is a significant advantage if your database contains a huge amount of data and is pay-as-you-go. So there might be a learning curve, but it’s well worth it in the money and hassle it will save you. And once you learn how FQL works, you will find that queries read just like regular code. 

Let’s prepare our query to be extended easily by introducing Let. Let will allow us to bind variables and reuse them immediately in the next variable binding, which allows you to structure your query more elegantly.

const fweets = Map(
  references,
  Lambda(
    ['ref'],
    Let(
      {
        fweet: Get(Var('ref'))
      },
      // Just return the fweet for now
      Var('fweet')
    )
  )
)

Now that we have this structure, getting extra data is easy. So let’s get the author.

const fweets = Map(
  references,
  Lambda(
    ['ref'],
    Let(
      {
        fweet: Get(Var('ref')),
        author: Get(Select(['data', 'author'], Var('fweet')))
      },
      { fweet: Var('fweet'), author: Var('author') }
    )
  )
)

Although we did not write a join, we have just joined Users (the author) with the Fweets.
We’ll expand on these building blocks even further in a follow up article. Meanwhile, browse src/fauna/queries/fweets.js to view the final query and several more examples. 

More in the code base

If you haven’t already, please open the code base for this Fwitter example app. You will find a plethora of well-commented examples we haven’t explored here, but will in future articles. This section touches on a few files we think you should check out.

First, check out the src/fauna/queries/fweets.js file for examples of how to do complex matching and sorting with FaunaDB’s indexes (the indexes are created in src/fauna/setup/fweets.js). We implemented three different access patterns to get Fweets by popularity and time, by handle, and by tag.

Getting Fweets by popularity and time is a particularly interesting access pattern because it actually sorts the Fweets by a sort of decaying popularity based on users’ interactions with each other. 

Also, check out src/fauna/queries/search.js, where we’ve implemented autocomplete based on FaunaDB indexes and index bindings to search for authors and tags. Since FaunaDB can index over multiple collections, we can write one index that supports an autocomplete type of search on both Users and Tags.

We’ve implemented these examples because the combination of flexible and powerful indexes with relations is rare for scalable distributed databases. Databases that lack relations and flexible indexes require you to know in advance how your data will be accessed and you will run into problems when your business logic needs to change to accommodate your clients’ evolving use cases.

In FaunaDB, if you did not foresee a specific way that you’d like to access your data, no worries — just add an Index! We have range indexes, term indexes, and composite indexes that can be specified whenever you want without having to code around eventual consistency. 
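To give a flavor of what that looks like, here’s a hypothetical index (not one of the repo’s actual indexes) that would serve “Fweets for a given hashtag, newest first” using one term and two values:

CreateIndex({
  name: 'fweets_by_hashtag',
  source: Collection('fweets'),
  // The term we match on: each hashtag reference stored on a fweet.
  terms: [{ field: ['data', 'hashtags'] }],
  // The values returned, sorted newest first.
  values: [
    { field: ['data', 'created'], reverse: true },
    { field: ['ref'] }
  ]
})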

A preview of what’s to come 

As mentioned in the introduction, we’re introducing this Fwitter app to demonstrate complex, real-world use cases. That said, a few features are still missing and will be covered in future articles, including streaming, pagination, benchmarks, and a more advanced security model with short-lived tokens, JWT tokens, single sign-on (possibly using a service like Auth0), IP-based rate limiting (with Cloudflare workers), e-mail verification (with a service like SendGrid), and HttpOnly cookies.

The end result will be a stack that relies on services and serverless functions which is very similar to a dynamic JAMstack app, minus the static site generator. Stay tuned for the follow-up articles and make sure to subscribe to the Fauna blog and monitor CSS-Tricks for more FaunaDB-related articles. 


The 3 Laws of Serverless

Burke Holland thinks that to “build applications without thinking about servers” is a pretty good way to describe serverless, but…

Nobody really thinks about servers when they are writing their code. I mean, I doubt any developer has ever thrown up their hands and said “Whoa, whoa, whoa. Wait just a minute. We’re not declaring any variables in this joint until I know what server we’re going to be running this on.”

Instead of just one idea wrapping it all up, Burke thinks there are three laws:

  1. Law of Furthest Abstraction (I don’t really care where my code runs, just that it runs.)
  2. The Law of Inherent Scale (I can hammer this code if I need to, or barely use it at all.)
  3. Law of Least Consumption (I only pay for what I use.)



In Defence of “Serverless” — the term

Ben Ellerby:

For now Serverless, to me at least, manages to do a hard job, defining the borders of a very fluid and complex space of possible solutions in which we can build next-generation architectures. It would help if there was not a framework of the same name, it would help if people didn’t first hear it synonymous with Lambda and it would help if people stopped saying “but you know there are servers…”. That being said, I’ve not heard a better proposal yet!

I like the term (we got the whole site and all) but rather than explain why, I’ll let my most popular tweet of all time take it from here:

Rather than alt text, here’s the whole conversation in the format of the American Chopper Argument meme.

Why would you call it “serverless” when the architecture is anything but?

They aren’t yours. You don’t manage them. You barely think about them.

I DON’T THINK ABOUT AIR EITHER BUT WE DON’T LIVE IN AN AIRLESS WORLD.

It’s just an effective buzzword. It evokes a whole ecosystem in a single word.

I BET YOU SAY “THE CLOUD” UNIRONICALLY TOO U PLEEB.



Build a 100% Serverless REST API with Firebase Functions & FaunaDB

Indie and enterprise web developers alike are pushing toward a serverless architecture for modern applications. Serverless architectures typically scale well, avoid the need for server provisioning, and, most importantly, are easy and cheap to set up! That’s why I believe the next evolution for the cloud is serverless: it enables developers to focus on writing applications.

With that in mind, let’s build a REST API (because will we ever stop making these?) using 100% serverless technology.

We’re going to do that with Firebase Cloud Functions and FaunaDB, a globally distributed serverless database with native GraphQL.

Those familiar with Firebase know that Google’s serverless app-building tools also provide multiple data storage options: Firebase Realtime Database and Cloud Firestore. Both are valid alternatives to FaunaDB and are effectively serverless.

But why choose FaunaDB when Firestore offers a similar promise and is available with Google’s toolkit? Since our application is quite simple, it does not matter that much. The main difference is that once my application grows and I add multiple collections, then FaunaDB still offers consistency over multiple collections whereas Firestore does not. In this case, I made my choice based on a few other nifty benefits of FaunaDB, which you will discover as you read along — and FaunaDB’s generous free tier doesn’t hurt, either. 😉

In this post, we’ll cover:

  • Installing Firebase CLI tools
  • Creating a Firebase project with Hosting and Cloud Function capabilities
  • Routing URLs to Cloud Functions
  • Building three REST API calls with Express
  • Establishing a FaunaDB Collection to track your (my) favorite video games
  • Creating FaunaDB Documents, accessing them with FaunaDB’s JavaScript client API, and performing basic and intermediate-level queries
  • And more, of course!

Set Up A Local Firebase Functions Project

For this step, you’ll need Node v8 or higher. Install firebase-tools globally on your machine:

$ npm i -g firebase-tools

Then log into Firebase with this command:

$ firebase login

Make a new directory for your project, e.g. mkdir serverless-rest-api and navigate inside.

Create a Firebase project in your new directory by executing firebase init.

Select Functions and Hosting when prompted.

Choose “functions” and “hosting” when the bubbles appear, create a brand new firebase project, select JavaScript as your language, and choose yes (y) for the remaining options.

Create a new project, then choose JavaScript as your Cloud Function language.

Once complete, enter the functions directory; this is where your code lives and where you’ll add a few npm packages.

Your API requires Express, CORS, and FaunaDB. Install them all with the following:

$ npm i cors express faunadb

Set Up FaunaDB with NodeJS and Firebase Cloud Functions

Before you can use FaunaDB, you need to sign up for an account.

When you’re signed in, go to your FaunaDB console and create your first database, name it “Games.”

You’ll notice that you can create databases inside other databases. So you could make a database for development, one for production, or even one small database per unit test suite. For now, we only need ‘Games’ though, so let’s continue.

Create a new database and name it “Games.”

Then tab over to Collections and create your first Collection named ‘games’. Collections will contain your documents (games in this case) and are the equivalent of a table in other databases. Don’t worry about payment details: Fauna has a generous free tier, and the reads and writes you perform in this tutorial will definitely not go over it. You can monitor your usage in the FaunaDB console at any time.

For the purpose of this API, make sure to name your collection ‘games’ because we’re going to be tracking your (my) favorite video games with this nerdy little API.

Create a Collection in your Games database and name it “games.”

Tab over to Security and create a new Key named “Personal Key.” There are three different types of keys: Admin, Server, and Client. An Admin key is meant to manage multiple databases, a Server key is typically what you use in a backend and allows you to manage one database, and a Client key is meant for untrusted clients such as your browser. Since we’ll be using this key to access one FaunaDB database in a serverless backend environment, choose “Server key.”

Under the Security tab, create a new Key. Name it Personal Key.

Save the key somewhere; you’ll need it shortly.

Build an Express REST API with Firebase Functions

Firebase Functions can respond directly to external HTTPS requests, and the functions pass standard Node Request and Response objects to your code — sweet. This makes Google’s Cloud Function requests accessible to middleware such as Express.

Open index.js inside your functions directory, clear out the pre-filled code, and add the following to enable Firebase Functions:

const functions = require('firebase-functions')
const admin = require('firebase-admin')
admin.initializeApp(functions.config().firebase)

Import the FaunaDB library and set it up with the secret you generated in the previous step:

admin.initializeApp(...)

const faunadb = require('faunadb')
const q = faunadb.query
const client = new faunadb.Client({
  secret: 'secrety-secret...that’s secret :)'
})

Then create a basic Express app and enable CORS to support cross-origin requests:

const client = new faunadb.Client({...})

const express = require('express')
const cors = require('cors')
const api = express()

// Automatically allow cross-origin requests
api.use(cors({ origin: true }))

You’re ready to create your first Firebase Cloud Function, and it’s as simple as adding this export:

api.use(cors({...}))

exports.api = functions.https.onRequest(api)

This creates a cloud function named “api” and passes all requests directly to your api Express server.

Routing an API URL to a Firebase HTTPS Cloud Function

If you deployed right now, your function’s public URL would be something like this: https://project-name.firebaseapp.com/api. That’s a clunky name for an access point if I do say so myself (and I did because I wrote this… who came up with this useless phrase?)

To remedy this predicament, you will use Firebase’s Hosting options to re-route URL globs to your new function.

Open firebase.json and add the following section immediately below the “ignore” array:

"ignore": [...], "rewrites": [   {     "source": "/api/v1**/**",     "function": "api"   } ]

This setting assigns all /api/v1/... requests to your brand new function, making it reachable from a domain that humans won’t mind typing into their text editors.

With that, you’re ready to test your API. Your API that does… nothing!

Respond to API Requests with Express and Firebase Functions

Before you run your function locally, let’s give your API something to do.

Add this simple route to your index.js file right above your export statement:

api.get(['/api/v1', '/api/v1/'], (req, res) => {
  res
    .status(200)
    .send(`<img src="https://media.giphy.com/media/hhkflHMiOKqI/source.gif">`)
})

exports.api = ...

Save your index.js file, open up your command line, and change into the functions directory.

If you installed Firebase globally, you can run your project by entering the following: firebase serve.

This command runs both the hosting and function environments from your machine.

If Firebase is installed locally in your project directory instead, open package.json and remove the --only functions parameter from your serve command, then run npm run serve from your command line.

Visit localhost:5000/api/v1/ in your browser. If everything was set up just right, you will be greeted by a gif from one of my favorite movies.

And if it’s not one of your favorite movies too, I won’t take it personally but I will say there are other tutorials you could be reading, Bethany.

Now you can leave the hosting and functions emulator running. They will automatically update as you edit your index.js file. Neat, huh?

FaunaDB Indexing

To query data in your games collection, FaunaDB requires an Index.

Indexes generally optimize query performance across all kinds of databases, but in FaunaDB, they are mandatory and you must create them ahead of time.

As a developer just starting out with FaunaDB, this requirement felt like a digital roadblock.

“Why can’t I just query data?” I grimaced as the right side of my mouth tried to meet my eyebrow.

I had to read the documentation and become familiar with how Indexes and the Fauna Query Language (FQL) actually work; whereas Cloud Firestore creates Indexes automatically and gives me stupid-simple ways to access my data. What gives?

Typical databases just let you do what you want, and if you do not stop and think, “Is this performant?” or “How many reads will this cost me?”, you might have a problem in the long run. Fauna prevents this by requiring an index whenever you query.
As I created complex queries with FQL, I began to appreciate how well I understood what each one was doing when I executed it. Firestore, by contrast, just gives you free candy and hopes you never ask where it came from, abstracting away all concerns (such as performance and, more importantly, cost).

Basically, FaunaDB has the flexibility of a NoSQL database coupled with the predictable performance one expects from a relational SQL database.

We’ll see more examples of how and why in a moment.

Adding Documents to a FaunaDB Collection

Open your FaunaDB dashboard and navigate to your games collection.

In here, click NEW DOCUMENT and add the following BioShock titles to your collection:

{   "title": "BioShock",   "consoles": [     "windows",     "xbox_360",     "playstation_3",     "os_x",     "ios",     "playstation_4",     "xbox_one"   ],   "release_date": Date("2007-08-21"),   "metacritic_score": 96 }  {   "title": "BioShock 2",   "consoles": [     "windows",     "playstation_3",     "xbox_360",     "os_x"   ],   "release_date": Date("2010-02-09"),   "metacritic_score": 88 }{   "title": "BioShock Infinite",   "consoles": [     "windows",     "playstation_3",     "xbox_360",     "os_x",     "linux"   ],   "release_date": Date("2013-03-26"),   "metacritic_score": 94 }

As with other NoSQL databases, the documents are JSON-style text blocks with the exception of a few Fauna-specific objects (such as Date used in the “release_date” field).

Now switch to the Shell area and clear your query. Paste the following:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Var("ref")))

And click the “Run Query” button. You should see a list of three items: references to the documents you created a moment ago.

In the Shell, clear out the query field, paste the query provided, and click “Run Query.”

It’s a bit of a mouthful, but here’s what the query is doing.

Index("all_games") creates a reference to the all_games index which Fauna generated automatically for you when you established your collection.These default indexes are organized by reference and return references as values. So in this case we use the Match function on the index to return a Set of references. Since we do not filter anywhere, we will receive every document in the ‘games’ collection.

The Set returned by Match is then passed to Paginate. This function, as you would expect, adds pagination functionality (forward, backward, skip ahead). Lastly, you pass the result of Paginate to Map which, much like its programming counterpart, lets you perform an operation on each element in a Set and return an array; in this case, it simply returns ref (the reference ID).

As we mentioned before, the default index only returns references. The Lambda operation that we fed to Map, pulls this ref field from each entry in the paginated set. The result is an array of references.
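In the Shell, the result will look something like this (the reference IDs here are made up):

{
  data: [
    Ref(Collection("games"), "238074476237684236"),
    Ref(Collection("games"), "238074478220034573"),
    Ref(Collection("games"), "238074480071362062")
  ]
}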

Now that you have a list of references, you can retrieve the data behind the reference by using another function: Get.

Wrap Var("ref") with a Get call and re-run your query, which should look like this:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Get(Var("ref"))))

Instead of a reference array, you now see the contents of each video game document.

Wrap Var("ref") with a Get function, and re-run the query.

Now that you have an idea of what your game documents look like, you can start creating REST calls, beginning with a POST.

Create a Serverless POST API Request

Your first API call is straightforward and shows off how Express, combined with Cloud Functions, allows you to serve all routes through one method.

Add this below the previous (and impeccable) API call:

api.get(['/api/v1', '/api/v1/'], (req, res) => {...})

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {
  let addGame = client.query(
    q.Create(q.Collection('games'), {
      data: {
        title: req.body.title,
        consoles: req.body.consoles,
        metacritic_score: req.body.metacritic_score,
        release_date: q.Date(req.body.release_date)
      }
    })
  )
  addGame
    .then(response => {
      res.status(200).send(`Saved! ${response.ref}`)
      return
    })
    .catch(reason => {
      // report the failure back to the caller
      res.status(500).send(reason)
    })
})

Please look past the lack of input sanitization for the sake of this example (all employees must sanitize inputs before leaving the work-room).
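If you wanted at least a bare-minimum guard, a couple of lines like these (purely illustrative, using the same field names as the request body above) could sit at the top of the handler before client.query is called:

// Hypothetical minimal validation, added before building the Create query
if (!req.body.title || !Array.isArray(req.body.consoles)) {
  return res.status(400).send('Please provide a title and an array of consoles.')
}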

But as you can see, creating new documents in FaunaDB is easy-peasy.

The q object acts as a query builder interface that maps one-to-one with FQL functions (find the full list of FQL functions here).

You perform a Create, pass in your collection, and include data fields that come straight from the body of the request.

client.query returns a Promise, the success-state of which provides a reference to the newly-created document.

And to make sure it’s working, you return the reference to the caller. Let’s see it in action.

Test Firebase Functions Locally with Postman and cURL

Use Postman or cURL to make the following request against localhost:5000/api/v1/ to add Halo: Combat Evolved to your list of games (or whichever Halo is your favorite but absolutely not 4, 5, Reach, Wars, Wars 2, Spartan…)

$  curl http://localhost:5000/api/v1/games -X POST -H "Content-Type: application/json" -d '{"title":"Halo: Combat Evolved","consoles":["xbox","windows","os_x"],"metacritic_score":97,"release_date":"2001-11-15"}'

If everything went right, you should see a reference coming back with your request and a new document show up in your FaunaDB console.

Now that you have some data in your games collection, let’s learn how to retrieve it.

Retrieve FaunaDB Records Using a REST API Request

Earlier, I mentioned that every FaunaDB query requires an Index and that Fauna prevents you from doing inefficient queries. Since our next query will return games filtered by a game console, we can’t simply use a traditional `where` clause; without an index, that could be inefficient. In Fauna, we first need to define an index that allows us to filter.

To filter, we need to specify which terms we want to filter on. And by terms, I mean the fields of the document you expect to search on.

Navigate to Indexes in your FaunaDB Console and create a new one.

Name it games_by_console and set data.consoles as the only term, since we will filter on consoles. Then set data.title and ref as values. Values are indexed by range, but they are also simply the values that the query will return. Indexes are, in that sense, a bit like views: you can create an index that returns a different combination of fields, and each index can have its own security settings.

To minimize request overhead, we’ve limited the response data (i.e., the values) to titles and the reference.

Your screen should resemble this one:

Under indexes, create a new index named games_by_console using the parameters above.

Click “Save” when you’re ready.
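As an aside, if you prefer the Shell to the console UI, an index with the same shape can be created with FQL. This is a sketch, so double-check the field paths against your own documents:

CreateIndex({
  name: "games_by_console",
  source: Collection("games"),
  terms: [{ field: ["data", "consoles"] }],
  values: [{ field: ["data", "title"] }, { field: ["ref"] }]
})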

With your Index prepared, you can draft up your next API call.

I chose to represent consoles as a directory path where the console identifier is the sole parameter, e.g. /api/v1/console/playstation_3, not necessarily best practice, but not the worst either — come on now.

Add this API request to your index.js file:

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {...})

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {
  let findGamesForConsole = client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('games_by_console'), req.params.name.toLowerCase())),
      q.Lambda(['title', 'ref'], q.Var('title'))
    )
  )
  findGamesForConsole
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      // report the failure back to the caller
      res.status(500).send(error)
    })
})

This query looks similar to the one you used in the Shell to retrieve all games, but with a slight modification. Note how your Match function now has a second parameter (req.params.name.toLowerCase()), which is the console identifier that was passed in through the URL.

The Index you made a moment ago, games_by_console, has one Term in it (the consoles array), and that term corresponds to the second argument we pass to Match. Basically, the Match function searches the index for the string you pass as its second argument. The next interesting bit is the Lambda function. Your first encounter with Lambda featured a single string as Lambda’s first argument, “ref.”

However, the games_by_console Index returns two fields per result: the two values you specified earlier when you created the Index (data.title and ref). So we receive a paginated Set containing tuples of titles and references, but we only need the titles. When your Set contains multiple values, the parameter of your Lambda will be an array. The array parameter above (['title', 'ref']) says that the first value is bound to the variable title and the second is bound to the variable ref. These variables can then be used later in the query with Var('title'). In this case, both title and ref were returned by the index, and your Map with its Lambda maps over the list of results and simply returns the title of each game.

In Fauna, the composition of queries happens before they are executed. When you write const matchQuery = q.Match(q.Index('games_by_console')), the variable just contains a query; nothing has been executed yet. Only when you pass the query to client.query() does it actually run. You can even pass such JavaScript variables into other FQL functions to keep composing queries. This is a big benefit of querying in Fauna versus the chained asynchronous queries required by Firestore. If you have ever tried to dynamically generate very complex queries in SQL, you will also appreciate the composability and less declarative nature of FQL.
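To make that composition concrete, here is a small sketch (the helper names are mine, not part of the API above):

// Nothing is executed while we compose these fragments
const consoleMatch = name => q.Match(q.Index('games_by_console'), name)

const titlesForConsole = name =>
  q.Map(
    q.Paginate(consoleMatch(name)),
    q.Lambda(['title', 'ref'], q.Var('title'))
  )

// Only this call actually sends the composed query to Fauna
client.query(titlesForConsole('xbox')).then(result => console.log(result))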

Save index.js and test out your API with this:

$ curl http://localhost:5000/api/v1/console/xbox
{"data":["Halo: Combat Evolved"]}

Neat, huh? But Match only returns documents whose fields are exact matches, which doesn’t help the user looking for a game whose title they can barely recall.

Although Fauna does not offer fuzzy searching via indexes (yet), we can provide similar functionality by making an index on all words in the string. Or, if we want really flexible fuzzy searching, we can use the Filter syntax. Note that it is not necessarily a good idea from a performance or cost point of view… but hey, we’ll do it because we can and because it is a great example of how flexible FQL is!

Filtering FaunaDB Documents by Search String

The last API call we are going to construct will let users find titles by name. Head back into your FaunaDB Console, select INDEXES and click NEW INDEX. Name the new Index games_by_title and leave the Terms empty; you won’t be needing them.

Rather than rely on Match to compare the title to the search string, you will iterate over every game in your collection to find titles that contain the search query.

Remember how we mentioned that indexes are a bit like views? In order to filter on title, we need to include `data.title` as a value returned by the Index. Since we are using Filter on the results of Match, we have to make sure that Match returns the title so we can work with it.

Add data.title and ref as Values, compare your screen to mine:

Create another index called games_by_title using the parameters above.

Click “Save” when you’re ready.

Back in index.js, add your fourth and final API call:

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {...})

api.get(['/api/v1/games/', '/api/v1/games'], (req, res) => {
  let findGamesByName = client.query(
    q.Map(
      q.Paginate(
        q.Filter(
          q.Match(q.Index('games_by_title')),
          q.Lambda(
            ['title', 'ref'],
            q.GT(
              q.FindStr(
                q.LowerCase(q.Var('title')),
                req.query.title.toLowerCase()
              ),
              -1
            )
          )
        )
      ),
      q.Lambda(['title', 'ref'], q.Get(q.Var('ref')))
    )
  )
  findGamesByName
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      // report the failure back to the caller
      res.status(500).send(error)
    })
})

Take a big breath, because I know there are many brackets (Lisp programmers will love this), but once you understand the components, the full query is quite easy to follow since it reads much like ordinary code.

Begin with the first new function you spot: Filter. It is very similar to the filter you encounter in programming languages: it reduces an Array or Set to a subset based on the result of a Lambda function.

In this Filter, you exclude any game titles that do not contain the user’s search query.

You do that by comparing the result of FindStr (a string-finding function similar to JavaScript’s indexOf) to -1; a non-negative value here means FindStr discovered the user’s query in a lowercase version of the game’s title.

And the result of this Filter is passed to Map, where each document is retrieved and placed in the final result output.

Now you may have thought the obvious: performing a string comparison across four entries is cheap. Across two million…? Not so much.

This is an inefficient way to perform a text search, but it gets the job done for the purpose of this example. (Maybe we should have used ElasticSearch or Solr for this?) Even in that case, FaunaDB works quite well as the central system that keeps your data safe and feeds it into a search engine, thanks to its temporal features, which let you ask Fauna, “Hey, give me all the changes since timestamp X.” So you could set up ElasticSearch next to it and use FaunaDB (they will soon have push messages) to update it whenever there are changes. Anyone who has done this before knows how hard it is to keep such an external search index up to date and correct; FaunaDB makes it quite easy.

Test the API by searching for “Halo”:

$ curl "http://localhost:5000/api/v1/games?title=halo"

Don’t You Dare Forget This One Firebase Optimization

A lot of Firebase Cloud Functions code snippets make one terribly wrong assumption: that each function invocation is independent of another.

In reality, Firebase Function instances can remain “hot” for a short period of time, prepared to execute subsequent requests.

This means you should lazy-load your variables and cache the results to help reduce computation time (and money!) during peak activity. Here’s how:

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  ... // dump the existing code here
}

exports.api = functions.https.onRequest(api)
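Fleshed out, that skeleton might look something like this (a sketch of the idea, with the Fauna secret left as a placeholder):

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  // Only runs on a cold start; warm instances reuse the cached objects
  functions = require('firebase-functions')
  admin = require('firebase-admin')
  admin.initializeApp(functions.config().firebase)

  faunadb = require('faunadb')
  q = faunadb.query
  client = new faunadb.Client({ secret: 'your-fauna-secret' }) // placeholder

  express = require('express')
  cors = require('cors')
  api = express()
  api.use(cors({ origin: true }))

  // ...register the /api/v1 routes from earlier here...
}

exports.api = functions.https.onRequest(api)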

Deploy Your REST API with Firebase Functions

Finally, deploy both your functions and hosting configuration to Firebase by running firebase deploy from your shell.

Without a custom domain name, refer to your Firebase subdomain when making API requests, e.g. https://{project-name}.firebaseapp.com/api/v1/.

What Next?

FaunaDB has made me a conscientious developer.

When using other schemaless databases, I start off with great intentions by treating documents as if I instantiated them with a DDL (strict types, version numbers, the whole shebang).

While that keeps me organized for a short while, standards soon fall by the wayside in favor of speed, and my documents splinter, leaving outdated formatting and zombie data behind.

By forcing me to think about how I query my data, which Indexes I need, and how to best manipulate that data before it returns to my server, I remain conscious of my documents.

To aid me in remaining forever organized, my catalog (in FaunaDB Console) of Indexes helps me keep track of everything my documents offer.

And by incorporating this wide range of arithmetic and linguistic functions right into the query language, FaunaDB encourages me to maximize efficiency and keep a close eye on my data-storage policies. Considering the affordable pricing model, I’d sooner run 10k+ data manipulations on FaunaDB’s servers than on a single Cloud Function.

For those reasons and more, I encourage you to take a peek at those functions and consider FaunaDB’s other powerful features.

The post Build a 100% Serverless REST API with Firebase Functions & FaunaDB appeared first on CSS-Tricks.


Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback

You might be seeing the term JAMstack popping up more and more frequently. I’ve been a fan of it as an approach for some time.

One of the principles of JAMstack is that of pre-rendering. In other words, it generates your site into a collection of static assets in advance, so that it can be served to your visitors with maximum speed and minimum overhead from a CDN or other optimized static hosting environment.

But if we are going to pre-generate our sites ahead of time, how do we make them feel dynamic? How do we build sites that need to change often? How do we work with things like user generated content?

As it happens, this can be a great use case for serverless functions. JAMstack and serverless are the best of friends. They complement each other wonderfully.

In this article, we’ll look at a pattern of using serverless functions as a fallback for pre-generated pages in a site that is comprised almost entirely of user generated content. We’ll use a technique of optimistic URL routing where the 404 page is a serverless function to add serverless rendering on the fly.

Buzzwordy? Perhaps. Effective? Most certainly!

You can go and have a play with the demo site to help you imagine this use case. But only if you promise to come back.

https://vlolly.net

Is that you? You came back? Great. Let’s dig in.

The idea behind this little example site is that it lets you create a nice, happy message and virtual pick-me-up to send to a friend. You can write a message, customize a lollipop (or a popsicle, for my American friends) and get a URL to share with your intended recipient. And just like that, you’ve brightened up their day. What’s not to love?

Traditionally, we’d build this site using some server-side scripting to handle the form submissions, add new lollies (our user generated content) to a database and generate a unique URL. Then we’d use some more server-side logic to parse requests for these pages, query the database to get the data needed to populate a page view, render it with a suitable template, and return it to the user.

That all seems logical.

But how much will it cost to scale?

Technical architects and tech leads often get this question when scoping a project. They need to plan, pay for, and provision enough horsepower in case of success.

This virtual lollipop site is no mere trinket. This thing is going to make me a gazillionaire due to all the positive messages we all want to send each other! Traffic levels are going to spike as the word gets out. I had better have a good strategy of ensuring that the servers can handle the hefty load. I might add some caching layers, some load balancers, and I’ll design my database and database servers to be able to share the load without groaning from the demand to make and serve all these lollies.

Except… I don’t know how to do that stuff.

And I don’t know how much it would cost to add that infrastructure and keep it all humming. It’s complicated.

This is why I love to simplify my hosting by pre-rendering as much as I can.

Serving static pages is significantly simpler and cheaper than serving pages dynamically from a web server which needs to perform some logic to generate views on demand for every visitor.

Since we are working with lots of user generated content, it still makes sense to use a database, but I’m not going to manage that myself. Instead, I’ll choose one of the many database options available as a service. And I’ll talk to it via its APIs.

I might choose Firebase, or MongoDB, or any number of others. Chris compiled a few of these on an excellent site about serverless resources which is well worth exploring.

In this case, I selected Fauna to use as my data store. Fauna has a nice API for stashing and querying data. It is a NoSQL-flavored data store and gives me just what I need.

https://fauna.com

Critically, Fauna have made an entire business out of providing database services. They have the deep domain knowledge that I’ll never have. By using a database-as-a-service provider, I just inherited an expert data service team for my project, complete with high availability infrastructure, capacity and compliance peace of mind, skilled support engineers, and rich documentation.

Such are the advantages of using a third-party service like this rather than rolling your own.

Architecture TL;DR

I often find myself doodling the logical flow of things when I’m working on a proof of concept. Here’s my doodle for this site:

And a little explanation:

  1. A user creates a new lollipop by completing a regular old HTML form.
  2. The new content is saved in a database, and its submission triggers a new site generation and deployment.
  3. Once the site deployment is complete, the new lollipop will be available on a unique URL. It will be a static page served very rapidly from the CDN with no dependency on a database query or a server.
  4. Until the site generation is complete, any new lollipops will not be available as static pages. Unsuccessful requests for lollipop pages fall back to a page which dynamically generates the lollipop page by querying the database API on the fly.

This kind of approach, which first assumes static/pre-generated assets and only then falls back to a dynamic render when a static view is not available, was usefully described by Markus Schork of Unilever as “Static First,” which I rather like.

In a little more detail

You could just dive into the code for this site, which is open source and available for you to explore, or we could talk some more.

You want to dig in a little further and explore the implementation of this example? OK, I’ll explain in some more detail:

  • Getting data from the database to generate each page
  • Posting data to a database API with a serverless function
  • Triggering a full site re-generation
  • Rendering on demand when pages are yet to be generated

Generating pages from a database

In a moment, we’ll talk about how we post data into the database, but first, let’s assume that there are some entries in the database already. We are going to want to generate a site which includes a page for each and every one of those.

Static site generators are great at this. They chomp through data, apply it to templates, and output HTML files ready to be served. We could use any generator for this example. I chose Eleventy due to its relative simplicity and the speed of its site generation.

To feed Eleventy some data, we have a number of options. One is to give it some JavaScript which returns structured data. This is perfect for querying a database API.

Our Eleventy data file will look something like this:

// Set up a connection with the Fauna database.
// Use an environment variable to authenticate
// and get access to the database.
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

module.exports = () => {
  return new Promise((resolve, reject) => {
    // get the most recent 100,000 entries (for the sake of our example)
    client.query(
      q.Paginate(q.Match(q.Ref("indexes/all_lollies")), { size: 100000 })
    ).then((response) => {
      // get all data for each entry
      const lollies = response.data;
      const getAllDataQuery = lollies.map((ref) => {
        return q.Get(ref);
      });
      return client.query(getAllDataQuery).then((ret) => {
        // send the data back to Eleventy for use in the site build
        resolve(ret);
      });
    }).catch((error) => {
      console.log("error", error);
      reject(error);
    });
  })
}

I named this file lollies.js which will make all the data it returns available to Eleventy in a collection called lollies.

We can now use that data in our templates. If you’d like to see the code which takes that and generates a page for each item, you can see it in the code repository.
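For the curious, an Eleventy template that paginates over that lollies collection might look something along these lines. This is only a sketch; it assumes each entry carries the lollyPath and message fields that the form handler (shown later) stores:

---
pagination:
  data: lollies
  size: 1
  alias: lolly
permalink: "/lolly/{{ lolly.data.lollyPath }}/"
---
<h1>A lolly for {{ lolly.data.recipientName }}</h1>
<p>{{ lolly.data.message }}</p>
<p>From {{ lolly.data.sendersName }}</p>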

Submitting and storing data without a server

When we create a new lolly page we need to capture user content in the database so that it can be used to populate a page at a given URL in the future. For this, we are using a traditional HTML form which posts data to a suitable form handler.

The form looks something like this (or see the full code in the repo):

<form name="new-lolly" action="/new" method="POST">    <!-- Default "flavors": 3 bands of colors with color pickers -->   <input type="color" id="flavourTop" name="flavourTop" value="#d52358" />   <input type="color" id="flavourMiddle" name="flavourMiddle" value="#e95946" />   <input type="color" id="flavourBottom" name="flavourBottom" value="#deaa43" />    <!-- Message fields -->   <label for="recipientName">To</label>   <input type="text" id="recipientName" name="recipientName" />    <label for="message">Say something nice</label>   <textarea name="message" id="message" cols="30" rows="10"></textarea>    <label for="sendersName">From</label>   <input type="text" id="sendersName" name="sendersName" />    <!-- A descriptive submit button -->   <input type="submit" value="Freeze this lolly and get a link">  </form>

We have no web servers in our hosting scenario, so we will need to devise somewhere to handle the HTTP posts being submitted from this form. This is a perfect use case for a serverless function. I’m using Netlify Functions for this. You could use AWS Lambda, Google Cloud, or Azure Functions if you prefer, but I like the simplicity of the workflow with Netlify Functions, and the fact that this will keep my serverless API and my UI all together in one code repository.

It is good practice to avoid leaking back-end implementation details into your front-end. A clear separation helps to keep things more portable and tidy. Take a look at the action attribute of the form element above. It posts data to a path on my site called /new which doesn’t really hint at what service this will be talking to.

We can use redirects to route that to any service we like. I’ll send it to a serverless function which I’ll be provisioning as part of this project, but it could easily be customized to send the data elsewhere if we wished. Netlify gives us a simple and highly optimized redirects engine which directs our traffic out at the CDN level, so users are very quickly routed to the correct place.

The redirect rule below (which lives in my project’s netlify.toml file) will proxy requests to /new through to a serverless function hosted by Netlify Functions called newLolly.js.

# resolve the "new" URL to a function [[redirects]]   from = "/new"   to = "/.netlify/functions/newLolly"   status = 200

Let’s look at that serverless function which:

  • stores the new data in the database,
  • creates a new URL for the new page and
  • redirects the user to the newly created page so that they can see the result.

First, we’ll require the various utilities we’ll need to parse the form data, connect to the Fauna database and create readably short unique IDs for new lollies.

const faunadb = require('faunadb');          // For accessing FaunaDB
const shortid = require('shortid');          // Generate short unique URLs
const querystring = require('querystring');  // Help us parse the form data

// First we set up a new connection with our database.
// An environment variable helps us connect securely
// to the correct database.
const q = faunadb.query
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
})

Now we’ll add some code to handle requests to the serverless function. The handler function will parse the request to get the data we need from the form submission, then generate a unique ID for the new lolly, and then create it as a new record in the database.

// Handle requests to our serverless function
exports.handler = (event, context, callback) => {

  // get the form data
  const data = querystring.parse(event.body);
  // add a unique path id. And make a note of it - we'll send the user to it later
  const uniquePath = shortid.generate();
  data.lollyPath = uniquePath;

  // assemble the data ready to send to our database
  const lolly = {
    data: data
  };

  // Create the lolly entry in the fauna db
  client.query(q.Create(q.Ref('classes/lollies'), lolly))
    .then((response) => {
      // Success! Redirect the user to the unique URL for this new lolly page
      return callback(null, {
        statusCode: 302,
        headers: {
          Location: `/lolly/${uniquePath}`,
        }
      });
    }).catch((error) => {
      console.log('error', error);
      // Error! Return the error with statusCode 400
      return callback(null, {
        statusCode: 400,
        body: JSON.stringify(error)
      });
    });

}

Let’s check our progress. We have a way to create new lolly pages in the database. And we’ve got an automated build which generates a page for every one of our lollies.

To ensure that there is a complete set of pre-generated pages for every lolly, we should trigger a rebuild whenever a new one is successfully added to the database. That is delightfully simple to do. Our build is already automated thanks to our static site generator. We just need a way to trigger it. With Netlify, we can define as many build hooks as we like. They are webhooks which will rebuild and deploy our site if they receive an HTTP POST request. Here’s the one I created in the site’s admin console in Netlify:

Netlify build hook

To regenerate the site, including a page for each lolly recorded in the database, we can make an HTTP POST request to this build hook as soon as we have saved our new data to the database.

This is the code to do that:

const axios = require('axios'); // Simplify making HTTP POST requests

// Trigger a new build to freeze this lolly forever
axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX')
.then(function (response) {
  // Report back in the serverless function's logs
  console.log(response);
})
.catch(function (error) {
  // Describe any errors in the serverless function's logs
  console.log(error);
});

You can see it in context, added to the success handler for the database insertion in the full code.

This is all great if we are happy to wait for the build and deployment to complete before we share the URL of our new lolly with its intended recipient. But we are not a patient lot, and when we get that nice new URL for the lolly we just created, we’ll want to share it right away.

Sadly, if we hit that URL before the site has finished regenerating to include the new page, we’ll get a 404. But happily, we can use that 404 to our advantage.

Optimistic URL routing and serverless fallbacks

With custom 404 routing, we can choose to send every failed request for a lolly page to a page which will look for the lolly data directly in the database. We could do that with client-side JavaScript if we wanted, but even better would be to generate a ready-to-view page dynamically from a serverless function.

Here’s how:

Firstly, we need to tell all those hopeful requests for a lolly page that come back empty to go instead to our serverless function. We do that with another rule in our Netlify redirects configuration:

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

This rule will only be applied if the request for a lolly page did not find a static page ready to be served. It creates a temporary redirect (HTTP 302) to our serverless function, which looks something like this:

const faunadb = require('faunadb');                  // For accessing FaunaDB
const pageTemplate = require('./lollyTemplate.js');  // A JS template literal

// setup and auth the Fauna DB client
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

exports.handler = (event, context, callback) => {

  // get the lolly ID from the request
  const path = event.queryStringParameters.id.replace("/", "");

  // find the lolly data in the DB
  client.query(
    q.Get(q.Match(q.Index("lolly_by_path"), path))
  ).then((response) => {
    // if found return a view
    return callback(null, {
      statusCode: 200,
      body: pageTemplate(response.data)
    });

  }).catch((error) => {
    // not found or an error, send the sad user to the generic error page
    console.log('Error:', error);
    return callback(null, {
      body: JSON.stringify(error),
      statusCode: 301,
      headers: {
        Location: `/melted/index.html`,
      }
    });
  });
}
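The lollyTemplate.js module required above is simply a function that turns the lolly data into an HTML string. A minimal sketch, assuming the field names from the form shown earlier, could look like this (the real template would also render the lolly colors):

// lollyTemplate.js — a sketch of a template-literal view
module.exports = data => `<!DOCTYPE html>
<html>
  <body>
    <h1>A lolly for ${data.recipientName}</h1>
    <p>${data.message}</p>
    <p>From ${data.sendersName}</p>
  </body>
</html>`;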

If a request for any other page (not within the /lolly/ path of the site) should 404, we won’t send that request to our serverless function to check for a lolly. We can just send the user directly to a 404 page. Our netlify.toml config lets us define as many levels of 404 routing as we’d like, by adding fallback rules further down in the file. The first successful match in the file will be honored.

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

# Real 404s can just go directly here:
[[redirects]]
  from = "/*"
  to = "/melted/index.html"
  status = 404

And we’re done! We’ve now got a site which is static first, and which will try to render content on the fly with a serverless function if a URL has not yet been generated as a static file.

Pretty snappy!

Supporting larger scale

Our technique of triggering a build to regenerate the lollipop pages every single time a new entry is created might not be optimal forever. While it’s true that the automation of the build means it is trivial to redeploy the site, we might want to start throttling and optimizing things when we start to get very popular. (Which can only be a matter of time, right?)

That’s fine. Here are a couple of things to consider when we have very many pages to create, and more frequent additions to the database:

  • Instead of triggering a rebuild for each new entry, we could rebuild the site as a scheduled job. Perhaps this could happen once an hour or once a day.
  • If building once per day, we might decide to only generate the pages for new lollies submitted in the last day, and cache the pages generated each day for future use. This kind of logic in the build would help us support massive numbers of lolly pages without the build getting prohibitively long. But I’ll not go into intra-build caching here. If you are curious, you could ask about it over in the Netlify Community forum.

By combining both static, pre-generated assets, with serverless fallbacks which give dynamic rendering, we can satisfy a surprisingly broad set of use cases — all while avoiding the need to provision and maintain lots of dynamic infrastructure.

What other use cases might you be able to satisfy with this “static first” approach?

The post Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback appeared first on CSS-Tricks.


The Power of Serverless v2.0! (Now an Open-Source Gatsby Site Hosted on Netlify)

I created a website called The Power of Serverless for Front-End Developers over at thepowerofserverless.info a little while back while I was learning about that whole idea. I know a little more now but still have an endless amount to learn. Still, I felt like it was time to revamp that site a bit.

For one thing, just like our little conferences website, the new site is a subdomain of this very site:

https://serverless.css-tricks.com/

Why? What’s this site all about?

The whole idea behind the serverless buzzword is a pretty big deal. Rather than maintaining your own servers, which you already buy from some other company, you architect your app such that everything is run on commoditized servers you access on-demand instead.

Hosting becomes static, which is rife with advantages. Just look at Netlify who offer blazing-fast static hosting and innovate around the developer experience. The bits you still need back-end services for run in cloud functions that are cheap and efficient.

This is a big deal for front-end developers. We’ve already seen a massive growth in what we are capable of doing on the front end, thanks to the expanding power of JavaScript. Now a JavaScript developer can be building entire websites from end-to-end with the JAMstack concept.

But you still need to know how to string it all together. Who do you use to process forms? Where do you store the data? What can I use for user authentication? What content management systems are available in this world? That’s what this site is all about! I’d like the site to be able to explain the concept and offer resources, but more importantly, be a directory to the slew of services out there that make up this new serverless world.

The site also features a section containing ideas that might help you figure out how you might use serverless technology. Perhaps you’ll even take a spin making a serverless site.

Design by Kylie Timpani and illustration by Geri Coady

Kylie Timpani (yes, the same Kylie who worked on the v17 design of this site!) did all the visual design for this project.

Geri Coady did all the illustration work.

If anything looks off or weird, blame my poor implementation of their work. I’m still making my way through checklists of improvements as we speak. Sometimes you just gotta launch things and improve as you go.

Everything is on GitHub and contributions are welcome

It’s all right here.

I’d appreciate any help cleaning up copy, adding services, making it more accessible… really anything you think would improve the site. Feel free to link up your own work, although I tend to find that contributions are stronger when you are propping up someone else rather than yourself. It’s ultimately my call whether your pull request is accepted. That might be subjective sometimes.

Before doing anything dramatic, probably best to talk it out by emailing me or opening an issue. There’s already a handful of issues in there.

I suspect companies that exist in this space will be interested in being represented in here somewhere, and I’m cool with that. Go for it. Perhaps we can open up some kind of sponsorship opportunities as well.

Creating with components: A good idea

I went with Gatsby for this project. A little site like this (a couple of pages of static content) deserves to be rendered entirely server-side. Gatsby does that, even though you work entirely in React, which is generally thought of as a client-side technology. Next.js and react-static are similar in spirit.

I purposely wanted to work in JavaScript because I feel like JavaScript has been doing the best job around the idea of architecting sites in components. Sure, you could sling some partials and pass local variables in Rails partials or Nunjucks includes, but it’s a far cry from the versatility you get in a framework like React, Vue or Angular, which are designed entirely to help you build components for the front end.

The fact that these JavaScript frameworks are getting first-class server-side rendering stories is big. Plus, after the site’s initial render, the site “hydrates” and you end up getting that SPA feel anyway… fantastic. Yet another thing that shows how a JavaScript-focused front-end developer is getting more and more powerful.

As an aside: I don’t have much experience with more complicated content data structures and JAMstack sites. I suspect once you’ve gone past this “little simple cards of data” structure, you might be beyond what front-matter Markdown files are best suited toward and need to get into a more full-fledged CMS situation, hopefully with a GraphQL endpoint to get whatever you need. Ripe space, for sure.

The post The Power of Serverless v2.0! (Now an Open-Source Gatsby Site Hosted on Netlify) appeared first on CSS-Tricks.


7 things you should know when getting started with Serverless APIs

I want you to take a second and think about Twitter, and think about it in terms of scale. Twitter has 326 million users. Collectively, we create ~6,000 tweets every second. Every minute, that’s 360,000 tweets created. That sums up to nearly 200 billion tweets a year. Now, what if the creators of Twitter had been paralyzed by how to scale and they didn’t even begin?

That’s me on every single startup idea I’ve ever had, which is why I love serverless so much: it handles the issues of scaling, leaving me free to build the next Twitter!

Live metrics with Application Insights

As you can see above, we scaled from one to seven servers in a matter of seconds as more user requests came in. You can scale that easily, too.

So let’s build an API that will scale instantly as more and more users come in and our workload increases. We’re going to do that by answering the following questions:


How do I create a new serverless project?

With every new technology, we need to figure out what tools are available for us and how we can integrate them into our existing tool set. When getting started with serverless, we have a few options to consider.

First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.

Serverless in the browser

Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things:

  • Writing code in your editor of choice
  • Tools that do the heavy lifting and generate the boilerplate code for you
  • Run and debug code locally
  • Support for quickly deploying your code

Microsoft is my employer and I’ve mostly built serverless applications using Azure Functions so for the rest of this article I’ll continue using them as an example. With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools which you can install from npm.

npm install -g azure-functions-core-tools

Next, we can initialize a new project and create new functions using the interactive CLI:

func CLI
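In practice, that flow looks roughly like this (the project name is just an example):

func init recipes-api   # scaffold a new function app
cd recipes-api
func new                # pick a template and a name for your first function
func start              # run the functions host locally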

If your editor of choice happens to be VS Code, then you can use it to write serverless code too. There’s actually a great extension for it.

Once installed, a new icon will be added to the left-hand sidebar — this is where we can access all our Azure-related extensions! All related functions need to be grouped under the same project (also known as a function app). This is like a folder for grouping functions that should scale together and that we want to manage and monitor at the same time. To initialize a new project using VS Code, click on the Azure icon and then the folder icon.

Create new Azure Functions project

This will generate a few files that help us with global settings. Let’s go over those now.

host.json

We can configure global options for all functions in the project directly in the host.json file.

In it, our function app is configured to use the latest version of the serverless runtime (currently 2.0). We also configure functions to timeout after ten minutes by setting the functionTimeout property to 00:10:00 — the default value for that is currently five minutes (00:05:00).

In some cases, we might want to control the route prefix for our URLs or even tweak settings, like the number of concurrent requests. Azure Functions even allows us to customize other features like logging, healthMonitor and different types of extensions.

Here’s an example of how I’ve configured the file:

// host.json
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "http": {
      "routePrefix": "tacos",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}

Application settings

Application settings are global settings for managing runtime, language and version, connection strings, read/write access, and ZIP deployment, among others. Some are settings that are required by the platform, like FUNCTIONS_WORKER_RUNTIME, but we can also define custom settings that we’ll use in our application code, like DB_CONN which we can use to connect to a database instance.

While developing locally, we define these settings in a file named local.settings.json and we access them like any other environment variable.

Again, here’s an example snippet that connects these points:

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "your_key_here",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WEBSITE_NODE_DEFAULT_VERSION": "8.11.1",
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "your_key_here",
    "DB_CONN": "your_key_here"
  }
}
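In function code, those values are then read like any other environment variable, for example:

const connectionString = process.env.DB_CONN;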

Azure Functions Proxies

Azure Functions Proxies are implemented in the proxies.json file, and they enable us to expose multiple function apps under the same API, as well as modify requests and responses. In the code below we’re publishing two different endpoints under the same URL.

// proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "read-recipes": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/recipes"
      },
      "backendUri": "https://tacofancy.azurewebsites.net/api/recipes"
    },
    "subscribe": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/subscribe"
      },
      "backendUri": "https://tacofancy-users.azurewebsites.net/api/subscriptions"
    }
  }
}

Create a new function by clicking the thunder icon in the extension.

Create a new Azure Function

The extension will use predefined templates to generate code, based on the selections we made — language, function type, and authorization level.

We use function.json to configure what type of events our function listens to and, optionally, to bind to specific data sources. Our code runs in response to specific triggers, which can be of type HTTP when we react to HTTP requests, or of type blob when we run code in response to a file being uploaded to a storage account. Other commonly used triggers are queue triggers, to process a message uploaded to a queue, and timer triggers, to run code at specified time intervals. Function bindings are used to read and write data to data sources or services like databases, or to send emails.

Here, we can see that our function is listening to HTTP requests and we get access to the actual request through the object named req.

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

index.js is where we implement the code for our function. We have access to the context object, which we use to communicate to the serverless runtime. We can do things like log information, set the response for our function as well as read and write data from the bindings object. Sometimes, our function app will have multiple functions that depend on the same code (i.e. database connections) and it’s good practice to extract that code into a separate file to reduce code duplication.

//Index.js
module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (req.query.name || (req.body && req.body.name)) {
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  }
  else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
};

Who’s excited to give this a run?

How do I run and debug Serverless functions locally?

When using VS Code, the Azure Functions extension gives us a lot of the setup that we need to run and debug serverless functions locally. When we created a new project using it, a .vscode folder was automatically created for us, and this is where all the debugging configuration is contained. To debug our new function, we can use the Command Palette (Ctrl+Shift+P) by filtering on Debug: Select and Start Debugging, or typing debug.

Debugging Serverless Functions

One of the reasons why this is possible is because the Azure Functions runtime is open-source and installed locally on our machine when installing the azure-functions-core-tools package.

How do I install dependencies?

Chances are you already know the answer to this, if you’ve worked with Node.js. Like in any other Node.js project, we first need to create a package.json file in the root folder of the project. That can be done by running npm init -y; the -y flag initializes the file with a default configuration.

Then we install dependencies using npm as we would normally do in any other project. For this project, let’s go ahead and install the MongoDB package from npm by running:

npm i mongodb

The package will now be available to import in all the functions in the function app.

How do I connect to third-party services?

Serverless functions are quite powerful, enabling us to write custom code that reacts to events. But code on its own doesn’t help much when building complex applications. The real power comes from easy integration with third-party services and tools.

So, how do we connect and read data from a database? Using the MongoDB client, we’ll read data from an Azure Cosmos DB instance I have created in Azure, but you can do this with any other MongoDB database.

//Index.js
const MongoClient = require('mongodb').MongoClient;

// Initialize authentication details required for database connection
const auth = {
  user: process.env.user,
  password: process.env.password
};

// Initialize global variable to store database connection for reuse in future calls
let db = null;
const loadDB = async () => {
  // If database client exists, reuse it
  if (db) {
    return db;
  }
  // Otherwise, create new connection
  const client = await MongoClient.connect(
    process.env.url,
    {
      auth: auth
    }
  );
  // Select tacos database
  db = client.db('tacos');
  return db;
};

module.exports = async function(context, req) {
  try {
    // Get database connection
    const database = await loadDB();
    // Retrieve all items in the Recipes collection
    let recipes = await database
      .collection('Recipes')
      .find()
      .toArray();
    // Return a JSON object with the array of recipes
    context.res = {
      body: { items: recipes }
    };
  } catch (error) {
    context.log(`Error code: ${error.code} message: ${error.message}`);
    // Return an error message and Internal Server Error status code
    context.res = {
      status: 500,
      body: { message: 'An error has occurred, please try again later.' }
    };
  }
};

One thing to note here is that we’re reusing our database connection rather than creating a new one for each subsequent call to our function. This shaves off ~300ms of every subsequent function call. I call that a win!
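Following the earlier advice about extracting shared code, the loadDB helper could also live in its own file and be required from every function that needs it. Here is a sketch; the file name and location are illustrative:

// shared/db.js — a reusable, lazily-created MongoDB connection
const MongoClient = require('mongodb').MongoClient;

let db = null;

module.exports = async function loadDB() {
  if (db) {
    return db; // reuse the connection on warm invocations
  }
  const client = await MongoClient.connect(process.env.url, {
    auth: { user: process.env.user, password: process.env.password }
  });
  db = client.db('tacos');
  return db;
};

Each function that needs the database can then simply require that module at the top of its index.js.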

Where can I save connection strings?

When developing locally, we can store our environment variables, connection strings, and really anything that’s secret into the local.settings.json file, then access it all in the usual manner, using process.env.yourVariableName.

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "user": "your-db-user",
    "password": "your-db-password",
    "url": "mongodb://your-db-user.documents.azure.com:10255/?ssl=true"
  }
}

In production, we can configure the application settings on the function’s page in the Azure portal.

However, another neat way to do this is through the VS Code extension. Without leaving your IDE, we can add new settings, delete existing ones or upload/download them to the cloud.

Debugging Serverless Functions

How do I customize the URL path?

With the REST API, there are a couple of best practices around the format of the URL itself. The one I settled on for our Recipes API is:

  • GET /recipes: Retrieves a list of recipes
  • GET /recipes/1: Retrieves a specific recipe
  • POST /recipes: Creates a new recipe
  • PUT /recipes/1: Updates recipe with ID 1
  • DELETE /recipes/1: Deletes recipe with ID 1

The URL that is made available by default when creating a new function is of the form http://host:port/api/function-name. To customize the URL path and the method that we listen to, we need to configure them in our function.json file:

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Moreover, we can add parameters to our function’s route by using curly braces: route: recipes/{id}. We can then read the ID parameter in our code from the req object:

const recipeId = req.params.id;
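For completeness, the httpTrigger binding for that parameterized route would look something like this; only the route value changes from the function.json shown above:

{
  "authLevel": "anonymous",
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": ["get"],
  "route": "recipes/{id}"
}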

How can I deploy to the cloud?

Congratulations, you’ve made it to the last step! 🎉 Time to push this goodness to the cloud. As always, the VS Code extension has your back: all it really takes is a single right-click and we’re pretty much done.

Deployment using VS Code

The extension will ZIP up the code with the Node modules and push them all to the cloud.

While this option is great when testing our own code or maybe when working on a small project, it’s easy to overwrite someone else’s changes by accident — or even worse, your own.

Don’t let friends right-click deploy!
— every DevOps engineer out there

A much healthier option is setting up GitHub deployment, which can be done in a couple of steps in the Azure portal, via the Deployment Center tab.

Github deployment

Are you ready to make Serverless APIs?

This has been a thorough introduction to the world of Serverless APIs. However, there’s much, much more than what we’ve covered here. Serverless enables us to solve problems creatively and at a fraction of the cost we usually pay for using traditional platforms.

Chris has mentioned it in other posts here on CSS-Tricks, but he created this excellent website where you can learn more about serverless and find both ideas and resources for things you can build with it. Definitely check it out and let me know if you have other tips or advice scaling with serverless.

The post 7 things you should know when getting started with Serverless APIs appeared first on CSS-Tricks.
