Tag: Fauna

How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna

GraphQL is a query language and server-side runtime environment for building APIs. It can also be considered the syntax you write in order to describe the kind of data you want from APIs. What this means for you as a backend developer is that with GraphQL, you are able to expose a single endpoint on your server to handle GraphQL queries from client applications, as opposed to the many endpoints you’d need to create to handle specific kinds of requests with REST and in turn serve data from those endpoints.

If a client needs new data that is not already provisioned, you’d need to create new endpoints for it and update the API docs. GraphQL makes it possible to send queries to a single endpoint; these queries are then handled by the server’s predefined resolver functions, and the requested information is returned over the network.

Running Flask server

Flask is a minimalist framework for building Python servers. I always use Flask to expose my GraphQL API to serve my machine learning models. Requests from client applications are then forwarded by the GraphQL gateway. Overall, a microservice architecture allows us to use the best technology for each job, and it allows us to use advanced patterns like schema federation.

In this article, we will start small with an implementation of the so-called Levenshtein distance. We will use the well-known NLTK library and expose the Levenshtein distance functionality through a GraphQL API. I assume that you are familiar with basic GraphQL concepts like building GraphQL mutations.

Note: We will be working with a free and open-source example repository.

In the project, Pipenv is used for managing the Python dependencies. From the project folder, we can create our virtual environment and install the dependencies from the Pipfile by running `pipenv install`.

We usually define a couple of script aliases in our Pipfile to ease our development workflow. This allows us to run our dev environment easily with a command alias as follows:
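The aliases themselves are not shown in this excerpt; a hypothetical [scripts] section of a Pipfile might look like the following (the alias names and module paths are assumptions, not the article’s original configuration):

```toml
[scripts]
# Run the development Flask server
dev = "python app.py"
# Run the production WSGI server via gunicorn
prod = "gunicorn app:app"
```

With aliases like these in place, `pipenv run dev` starts the development server.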

The Flask server should then be exposed by default on port 5000. You can immediately move on to the GraphQL Playground, which serves as an IDE for live documentation and query execution for GraphQL servers. GraphQL Playground uses so-called GraphQL introspection to fetch information about our GraphQL types. The following code initializes our Flask server:

It is a good practice to use a WSGI server when running in a production environment. Therefore, we have to set up a script alias for gunicorn with:

Levenshtein distance (edit distance)

The Levenshtein distance, also known as edit distance, is a string metric. It is defined as the minimum number of single-character edits needed to change one character sequence a into another one b. If we denote the lengths of the sequences by |a| and |b| respectively, we get the following:

lev(a,b)(i, j) = max(i, j)                                  if min(i, j) = 0,
                 min( lev(i-1, j) + 1,
                      lev(i, j-1) + 1,
                      lev(i-1, j-1) + 1(ai ≠ bj) )          otherwise

where lev(a,b)(i, j) is the distance between the first i characters of a and the first j characters of b, and 1(ai ≠ bj) is the indicator function, equal to 0 when ai = bj and 1 otherwise. For more on the theoretical background, feel free to check out the Wiki.

In practice, let’s say that someone misspelt ‘machine learning’ and wrote ‘machinlt lerning’. We would need to make the following edits:

Edit | Edit type    | Word state
0    |              | Machinlt lerning
1    | Substitution | Machinet lerning
2    | Deletion     | Machine lerning
3    | Insertion    | Machine learning

For these two strings, we get a Levenshtein distance equal to 3. The Levenshtein distance has many applications, such as spell checkers, correction systems for optical character recognition, and similarity calculations.

Building a GraphQL server with Graphene in Python

We will build the following schema in our article:

Each GraphQL schema is required to have at least one query. We usually define our first query in order to health check our microservice. The query can be called like this:

query {
  healthcheck
}

However, the main function of our schema is to enable us to calculate the Levenshtein distance. We will use variables to pass dynamic parameters in the following GraphQL document:
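The GraphQL document itself was stripped from this excerpt; given the variables shown further below, it presumably looks something like this (the operation and field names are assumptions):

```graphql
mutation CalculateLevenshtein($input: LevenshteinInput!) {
  calculateLevenshtein(input: $input) {
    levenshtein
  }
}
```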

So far, we have defined our schema in SDL format. In the Python ecosystem, however, we do not have libraries like graphql-tools, so we need to define our schema with a code-first approach. The schema is defined as follows using the Graphene library:

We have followed the best practices for the overall schema and mutations. Our input object type is written in Graphene as follows:

Each time we execute our mutation in the GraphQL Playground:

With the following variables:

{
  "input": {
    "s1": "test1",
    "s2": "test2"
  }
}

We obtain the Levenshtein distance between our two input strings. For our simple example of the strings test1 and test2, we get 1. We can leverage the well-known NLTK library for natural language processing (NLP). The following code is executed from the resolver:
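The resolver body is not reproduced in this excerpt; assuming NLTK is installed (`pipenv install nltk`), it might be as simple as the following (the function name is an assumption):

```python
import nltk


def resolve_levenshtein(s1: str, s2: str) -> int:
    # nltk.edit_distance computes the Levenshtein distance between two strings
    return nltk.edit_distance(s1, s2)


print(resolve_levenshtein("test1", "test2"))  # -> 1
```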

It is also straightforward to implement the Levenshtein distance ourselves using, for example, an iterative matrix, but I would suggest not reinventing the wheel and using the default NLTK functions.
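For completeness, here is what such an iterative-matrix implementation might look like (an illustration, not the article’s code):

```python
def levenshtein(a: str, b: str) -> int:
    """Iterative dynamic-programming (matrix) Levenshtein distance."""
    # dist[i][j] = distance between the first i chars of a and the first j chars of b
    dist = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dist[i][0] = i  # i deletions to turn the first i chars of a into ""
    for j in range(len(b) + 1):
        dist[0][j] = j  # j insertions to turn "" into the first j chars of b
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            substitution_cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,                      # deletion
                dist[i][j - 1] + 1,                      # insertion
                dist[i - 1][j - 1] + substitution_cost,  # substitution
            )
    return dist[len(a)][len(b)]


print(levenshtein("machinlt lerning", "machine learning"))  # -> 3
```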

Serverless GraphQL APIs with Fauna

First off, some introductions before we jump right in. It’s only fair that I give Fauna a proper introduction, as it is about to make our lives a whole lot easier.

Fauna is a serverless database service that handles all optimization and maintenance tasks so that developers don’t have to bother with them and can focus on developing their apps and shipping to market faster.

Again, serverless doesn’t actually mean “no servers”. To put it simply, serverless means that you can get things working without necessarily having to set everything up from scratch. Some apps that use serverless concepts don’t have a backend service written from scratch; they employ cloud functions, which are technically scripts written on cloud platforms to handle necessary tasks like login, registration, serving data, etc.

Where does Fauna fit into all of this? When we build servers, we need to provision them with a database, and when we do that, it’s usually a running instance of our database. With serverless technology like Fauna, we can shift that workload to the cloud and focus on actually writing our auth systems and implementing business logic for our app. Fauna also manages things like maintenance and scaling, which are usual causes for concern with systems that use conventional databases.

If you are interested in getting more info about Fauna and its features, check the Fauna docs. Let’s get started with building our GraphQL API the serverless way with Fauna.

Requirements

  1. Fauna account: that’s all you need for this session, so click here to go to the sign-up page.

Creating a Fauna Database

Log in to your Fauna account once you have created one. Once on the dashboard, you should see a button to create a new database; click on that and you should see a little form to fill in the name of the database, resembling the one below:

I call mine “graphqlbyexample”, but you can call yours anything you wish. Please ignore the pre-populate with demo data option; we don’t need that for this demo. Click “Save” and you should be brought to a new screen, as shown below:

Adding a GraphQL Schema to Fauna

  1. In order to get our GraphQL server up and running, Fauna allows us to upload our own GraphQL schema. On the page we are currently on, you should see a GraphQL option; select it and it will prompt you to upload a schema file. This file contains raw GraphQL schema and is saved with either the .gql or .graphql file extension. Let’s create our schema and upload it to Fauna to spin up our server.
  2. Create a new file anywhere you like. I’m creating it in the same directory as our previous app, because it has no impact on it. I’m calling it schema.gql, and we will add the following to it:
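The schema contents are not included in this excerpt; based on the two collections described in this article (a User with name, email, and password, and a Note with an author), a hypothetical schema.gql could look like this (the Note fields are assumptions):

```graphql
type User {
  name: String!
  email: String!
  password: String!
  notes: [Note] @relation
}

type Note {
  title: String!
  content: String!
  author: User!
}
```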

Here, we simply define our data types corresponding to our two collections (Note and User). Save this and go back to that page to upload the schema.gql that we just created. Once that is done, Fauna processes it and takes us to a new page: our GraphQL API playground.

We have literally created a GraphQL server by simply uploading that really simple schema to Fauna. To highlight some of the really cool feats that Fauna pulls off, observe:

  1. Fauna automatically generates collections for us. If you notice, we did not create any collection (which translates to tables, if you are only familiar with relational databases). Fauna is a NoSQL database; collections are technically the same as tables, and documents correspond to rows in tables. If we go to the collections option and click on it, we will see the collections that were auto-generated on our behalf, courtesy of the schema file that we uploaded.
  2. Fauna automatically creates indexes on our behalf: head over to the indexes option and see what indexes have been created for the API. Fauna is a document-oriented database and does not have primary keys or foreign keys as you have in relational databases for search and index purposes; instead, we create indexes in Fauna to help with data retrieval.
  3. Fauna automatically generates GraphQL queries and mutations, as well as API docs, on our behalf: this is one of my personal favorites, and I can’t seem to get over just how efficiently Fauna does this. Fauna is able to intelligently generate some queries that it thinks you might want in your newly created API. Head back over to the GraphQL option and click on the “Docs” tab to open up the docs on the playground.

As you can see, two queries and a handful of mutations are already auto-generated (even though we did not add them to our schema file); you can click on each one in the docs to see the details.

Testing our server

Let’s test out some of these queries and mutations from the playground; we will also use our server outside of the playground (by the way, it is a fully functional GraphQL server).

Testing from the Playground

  1. We will start off by creating a new user, with the predefined createUser mutation, as follows:
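The mutation itself isn’t shown in this excerpt; with the User fields used in this article (name, email, password), Fauna’s auto-generated createUser mutation could be exercised like this (the field values are placeholders):

```graphql
mutation {
  createUser(data: {
    name: "John Doe"
    email: "john@example.com"
    password: "supersecret"
  }) {
    _id
    name
  }
}
```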

If we go to the collections option and choose User, we should have our newly created entry (document, aka row) in our User collection.

  2. Let’s create a new note and associate it with a user as the author via its document ref ID, which is a special ID generated by Fauna for every document for the sake of references like this, much like a key in relational tables. To find the ID for the user we just created, simply navigate to the collection, and from the list of documents you should see the option (a copy icon) to copy the Ref ID:

Once you have this, you can create a new note and associate it as follows:
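The listing isn’t reproduced here; using Fauna’s relation connect syntax, such a mutation might look like the following (the Note fields and the ref ID are placeholders):

```graphql
mutation {
  createNote(data: {
    title: "My first note"
    content: "Hello, Fauna!"
    author: { connect: "280443121371054601" }
  }) {
    _id
    title
    author {
      name
    }
  }
}
```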

  3. Let’s make a query this time, to get data from the database. Currently, we can fetch a user by ID or fetch a note by its ID. Let’s see that in action:
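The queries themselves were not included in this excerpt; Fauna’s auto-generated findUserByID query can be called like this (the ID is a placeholder):

```graphql
query {
  findUserByID(id: "280443121371054601") {
    name
    email
  }
}
```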

You must have been thinking it: what if we wanted to fetch the info of all users? Currently, we can’t do that, because Fauna did not generate that automatically for us, but we can update our schema. So let’s add our custom query to our schema.gql file, as follows. Note that this is an update to the file, so don’t clear everything in the file out; just add this to it:
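The schema addition isn’t shown in this excerpt; a minimal version, following Fauna’s convention of declaring an extra query field and letting Fauna generate the backing index, might be (the query name is an assumption):

```graphql
type Query {
  allUsers: [User!]
}
```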

Once you have added this, save the file and click on the update schema option on the playground to upload the file again. It should take a few seconds to update; once it’s done, we will be able to use our newly created query, as follows:
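The query isn’t reproduced here; note that Fauna wraps list results in a paginated data field, so calling such a query might look like this:

```graphql
query {
  allUsers {
    data {
      name
      email
    }
  }
}
```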

Don’t forget that, as opposed to having all the info about users served (namely: name, email, password), we can choose which fields we want, because this is GraphQL. And not just that, it’s Fauna’s GraphQL, so feel free to specify more fields if you want.

Testing from outside the playground – using Python (requests library)

Now that we’ve seen that our API works from the playground, let’s see how we can actually use it from an application outside the playground environment, using Python’s requests library. If you don’t have it installed, kindly install it using pip as follows:

pip install requests
  • Before we write any code, we need to get our API key from Fauna, which is what will help us communicate with our API from outside the playground. Head over to security on your dashboard, and on the keys tab, select the option to create a new key; it should bring up a form like this:

Leave the database option as the current one, change the role of the key from Admin to Server, and then save. It’ll generate a new key secret that you must copy and save somewhere safe, most likely as an environment variable.

  • For this, I’m going to create a simple script to demonstrate. Add a new file and call it whatever you wish (I’m calling mine test.py) in your current working directory or anywhere you like. In this file, we’ll add the following:

Here we add a couple of imports, including the requests library, which we use to send the requests, as well as the os module, used here to load the environment variable where I stored the Fauna secret key we got from the previous step.

Note the URL where the request is to be sent; you can get this from the Fauna GraphQL playground, here:

Next, we create the query to be sent. This example shows a simple fetch query to find a user by ID (one of the automatically generated queries from Fauna). We then retrieve the key from the environment variable, store it in a variable called token, and create a dictionary to represent our headers. This is, after all, an HTTP request, so we can set headers here, and in fact we have to, because Fauna will look for our secret key in the headers of our request.

  • The concluding part of the code shows how we use the requests library to create the request, as follows:

We create a request object and check whether the request went through via its status_code, printing the response from the server if it went well and an error message otherwise. Let’s run test.py and see what it returns.
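The script itself was stripped from this excerpt; assembled from the description above, a sketch of test.py might look like the following. The environment-variable name and the user ID are assumptions; the endpoint is Fauna’s GraphQL URL as shown in the playground.

```python
import os

import requests

# Fauna's GraphQL endpoint, as shown in the GraphQL Playground
FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql"

# One of the automatically generated queries: fetch a user by ID
query = """
query FindUser($id: ID!) {
  findUserByID(id: $id) {
    name
    email
  }
}
"""

# The secret key we created on the Security page, stored as an env variable
token = os.environ.get("FAUNA_SECRET")

# Fauna looks for the secret in the Authorization header
headers = {"Authorization": f"Bearer {token}"}


def run_query(user_id):
    response = requests.post(
        FAUNA_GRAPHQL_URL,
        json={"query": query, "variables": {"id": user_id}},
        headers=headers,
    )
    if response.status_code == 200:
        print(response.json())
    else:
        print(f"Request failed with status {response.status_code}")


if __name__ == "__main__":
    run_query("280443121371054601")  # replace with a real document ref ID
```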

Conclusion

In this article, we covered creating GraphQL servers from scratch and looked at creating servers right from Fauna without having to do much work. We also saw some of the awesome perks that come with using the serverless system that Fauna provides, and we went on to see how we could test our servers and validate that they work.

Hopefully, this was worth your time and taught you a thing or two about GraphQL, serverless, Fauna, Flask, and text analytics. To learn more about Fauna, you can also sign up for a free account and try it out yourself!


By: Adesina Abdrulrahman


The post How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna appeared first on CSS-Tricks.


Building an Ethereum app using Redwood.js and Fauna

With the recent climb of Bitcoin’s price over $20k USD, and with it recently breaking $30k, I thought it was worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realized that there will be some new people stumbling onto this post with no prior experience with Ethereum, and it would behoove me to point these readers in some direction. Thankfully, as of the time of this rewriting, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend that newcomers go through it!

That said, I will be providing relevant specific details as we go along, so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in the blog post he made for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for its database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides us with a high-availability solution: globally located servers each contain a partition of our database, and our data is replicated asynchronously with each transaction.

Some of the benefits to using Fauna can be summarized as: 

  • Transactional 
  • Multi-document 
  • Geo-distributed 

In short, Fauna frees the developer from worrying about single- or multi-document solutions. It guarantees consistent data without burdening the developer with how to model their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as: 

  • Firebase 
  • Cassandra 
  • MongoDB 

But these options don’t give us the ACID guarantees that Fauna does, compromising consistency for scaling. ACID stands for:

  • Atomic: all operations in a transaction are treated as a single unit; either they all succeed or none do. If we have multiple operations in the same transaction, either all are applied or none are; one cannot fail while the other succeeds.
  • Consistent: a transaction can only bring the database from one valid state to another. That is, any data written to the database must follow the rules set out by the database, ensuring that all transactions are legal.
  • Isolated: concurrent transactions leave the database in the same state as it would be in if the transactions were executed sequentially.
  • Durable: any transaction that is committed to the database is persisted, regardless of system downtime or failure.

Redwood.js

Since I’ve used Fauna several times, I can vouch for Fauna’s database first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript serverless framework (not yet production-ready) which comes prepackaged with Apollo Client/Server!

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folks may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows, or txn streams, to enable real-time financial applications; that is, apps where the user no longer needs to wait for txn confirmations and can immediately follow up with the next set of computational actions.

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing for a user to “mint a continuous supply of NFTs”. This stream of NFTs can then be sold via auctions. Another interesting part of the emanator app is that these NFTs are for creators, artists 👩‍🎨 , or musicians 🎼 . 

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator `monorepo`, if you want to learn more.  

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo on your terminal, favorite text editor, or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and setup environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually your PROJECT ID of your Infura app. 

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it in your .env file as INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug in the code where this field is actually missing from the monorepo. So, we must add it to get our app working!

We are adding line 33, a contentHash field with the type `String` so that our Auctions can be added to our database and then shown to the user.

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create, and then mint your first NFT! 🎉 🎉 

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details, like the one shown below:

You can also notice on your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing CTRL+C. Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down for now until we’ve finished adding Fauna.

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I will not walk through how to do that, but this helpful article does a good job of covering it!). Once you have copied your key secret, paste it into your .env file by replacing <FAUNA_SECRET_KEY>:

Make sure to leave the quotation marks in place! 

Importing GraphQL Schema to Fauna

To import the GraphQL schema of our project to Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Comment out to bypass Fauna 'Import your GraphQL schema' error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll omit this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our three separate schema files api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js to correspond to our new Fauna GraphQL schema! This is important to maintain consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!  

Otherwise, if you want to just get up and running quickly, this section is for you. 

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver faunadb and the graphql-request. We will use both of these for our main modifications to our database scripts folder to add Fauna. 

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit  api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:

Then, we must make a small update to our api/src/functions/graphql.js file like so:

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js we just created. This part doesn’t seem straightforward initially, but with some thought, all the necessary changes come down to understanding how Fauna’s FQL commands work.

FQL (Fauna Query Language) is Fauna’s native API for querying Fauna. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/services/auctions/auctions.js, we’ll do the following:

To break this down a bit: first, we import the client variables and `db` instance from the proper project file paths. Then, we remove line 11 and replace it with lines 13–28 (you can ignore the comments for now, but if you really want to see the rest of these, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions Index of our Fauna Indexes to get all the auctions data from our Fauna database. You can test this out by running console.log(auctionsRaw).

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object, but we want an array, we’ll add the following on the next line after finishing the declaration of const auctionsRaw:

Now we can see that we’re getting the right data format.

Next, let’s update the call instance of `auctionsRaw` to our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statement of both the auction and createAuction functions. The changes we make are actually quite similar, so let’s update our auction function like so:

Again, you can ignore the comments, as the comment here just notes the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”

This next step, completing the createAuction function, is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, this resulted in having to make many minor changes to get the app to properly run the creation of an auction. So, instead of walking through this section in detail, I will first show you the diff, then briefly summarize the purpose of the changes.

Looking at the green lines of 100 and 101, we can see that the functional commands we’re using here are not that much different; here, we’re just creating a new document in our Auction collection, instead of reading from the Indexes. 

Turning back to the data fields of this createAuction function, we can see that we are given an input as an argument, which actually refers to the UI input fields of the new NFT auction form on the home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the other variables I created, id, dateTime, status, and highBid, are variables I more or less hardcoded so that this function could complete successfully.

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊 , phew!

Completing GraphQL services

By now, you may be feeling a bit tired from updating the GraphQL services (I know I was while learning the necessary changes to make!). So, to save you time getting this app to work, instead of walking through them entirely, I will share the git diffs again from the integrating-fauna branch that I already have working in the repo, then summarize the changes that were made.

The first file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one final change in web/src/components/AuctionCell/AuctionCell.js:

So, back to Fauna not supporting custom scalars: because of that limitation, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out of our GraphQL schemas).

The last change, made in web/src/components/AuctionCell/AuctionCell.js, is another hacky one: it makes the newly created NFT address pages (you can navigate to these by clicking the hyperlink of an NFT name, located on the right of the home page after you create a new NFT) clickable without throwing an error. 😄

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page, so that’s what we’ll do, just like with the original Emanator monorepo version.

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏 

 And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT is still not renderable, due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! The repo, redwood-eth-with-fauna, is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (well, mostly working 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage it further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help! 😊

That’s all for this tutorial, and I hope you enjoyed it! Feel free to reach out with any questions on GitHub!


The post Building an Ethereum app using Redwood.js and Fauna appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel

This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.

In other words, lots to learn! By the end, you’ll not only get to dive into Jamstack and serverless concepts, but also gain hands-on experience with a really neat combination of tech that I think you’ll really like.

Creating a Redwood app

Redwood is a framework for serverless applications that pulls together React (for front-end components), GraphQL (for data) and Prisma (for database queries).

There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.

There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.

We will need to install Yarn to use the Redwood CLI to get going. Once that’s good to go, here’s what to run in a terminal:

yarn create redwood-app ./csstricks

We’ll now cd into our new project directory and start our development server.

cd csstricks
yarn rw dev

Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.

Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.

The Redwood starting page indicates that the front end of our app is ready to go. It also provides a nice instruction for how to start creating custom routes for the app.

Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.

Directory structure

Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.

Here’s how the project is structured:

├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Don’t worry too much about what all this means yet; the first thing to notice is that things are split into two main directories: web and api. Yarn workspaces allow each side to have its own path in the codebase.

web contains our front-end code for:

  • Pages
  • Layouts
  • Components

api contains our back-end code for:

  • Function handlers
  • Schema definition language
  • Services for back-end business logic
  • Database client

Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Well, it’s just a personal preference. After Google purchased Firebase, it launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than what Fauna offers. At the same time, there are a handful of community projects that have experimented with Firestore and GraphQL, but there isn’t first-class GraphQL support from Google.

Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.

index.html

We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.

But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/png" href="/favicon.png" />
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>

  <body>
    <div id="redwood-app"></div> <!-- the root DOM node -->
  </body>
</html>

While Redwood uses Jamstack in the documentation and marketing of itself, Redwood doesn’t do pre-rendering yet (like Next or Gatsby can), but it’s still Jamstack in that it ships static files and hits APIs with JavaScript for data.

index.js

index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects the element with the id redwood-app, and ReactDOM.render() renders our application into that root DOM element.

RedwoodProvider

The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Redwood’s Flash module uses React’s Context API for passing message objects between deeply nested components. It provides a typical message display unit for rendering the messages provided to FlashContext.

FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.

FatalErrorBoundary

The provider itself is then contained within the <FatalErrorBoundary> component which is taking in <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.

import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'
import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)

Routes.js

Router contains all of our routes and each route is specified with a Route. The Redwood Router attempts to match the current URL to each route, stopping when it finds a match and then renders only that route. The only exception is the notfound route which renders a single Route with a notfound prop when no other route matches.
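The first-match behavior can be sketched in a few lines of plain JavaScript; this toy resolver is only an illustration, not Redwood’s actual router code:

```javascript
// Toy first-match route resolution with a notfound fallback
const routes = [
  { path: '/about', page: 'AboutPage' },
  { path: '/', page: 'HomePage' },
]

function resolve(url) {
  const match = routes.find((route) => route.path === url)
  return match ? match.page : 'NotFoundPage' // the notfound route catches the rest
}
```

The real router also handles typed route parameters and named route functions, but the stop-at-first-match idea is the same.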

import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes

Pages

Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.

We can also use rw instead of redwood and g instead of generate to save some typing.

yarn rw g page home /

This command performs four separate actions:

  • It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
  • It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
  • It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
  • It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.

HomePage

If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:

// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
The HomePage.js file has been set as the main route, /.

We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>
      <p>Taking Fullstack to the Jamstack</p>
    </>
  )
}

export default HomePage

AboutPage

To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.

yarn rw g page about
AboutPage.js is now available at /about.
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage

We’ll make a few edits to the About page like we did with our Home page. That includes taking out the <Link> and routes imports and deleting <Link to={routes.about()}>About</Link>.

Here’s the end result:

// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage

If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!

const Routes = () => {
  return (
    <Router>
      <Route path="/about" page={AboutPage} name="about" />
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

Layouts

Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.

BlogLayout

You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical to what we’ve been doing so far, except it’s rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.

yarn rw g layout blog
// web/src/layouts/BlogLayout/BlogLayout.js

const BlogLayout = ({ children }) => {
  return <>{children}</>
}

export default BlogLayout

To create links between different pages we’ll need to:

  • Import Link and routes from @redwoodjs/router into BlogLayout.js
  • Create a <Link to={}></Link> component for each link
  • Pass a named route function, such as routes.home(), into the to={} prop for each route
// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>

        <nav>
          <ul>
            <li>
              <Link to={routes.home()}>Home</Link>
            </li>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <p>{children}</p>
      </main>
    </>
  )
}

export default BlogLayout

We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
Hey look, the navigation is taking shape!

If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:

// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage

Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll create our GraphQL schema so we can start working with data.

Fauna schema definition language

To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.

# sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}

Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.

The GraphQL Playground is located in the selected database.
The Fauna shell allows us to write, run and test queries.

It’s very important that Redwood and Fauna agree on the SDL, so we cannot use the original SDL that was entered into Fauna because that is no longer an accurate representation of the types as they exist on our Fauna database.

The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.

Redwood schema definition language

This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.

// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`

Services

The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.

// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
  {
    posts {
      data {
        title
        body
      }
    }
  }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}

At this point, we can install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:

cd api
yarn add graphql-request graphql

Attach the Fauna authorization token to the request header

So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and using it to query an endpoint that is set to https://graphql.fauna.com/graphql.

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer ' + process.env.FAUNADB_SECRET
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}

A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.

Create

We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.

Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript.",
      body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083736060690956"),
  ts: 1605274864200000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript.",
    body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
  }
}

Let’s create another one.

Create(
  Collection("Post"),
  {
    data: {
      title: "NextJS is a React framework for building production grade applications that scale.",
      body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083760102441484"),
  ts: 1605274887090000,
  data: {
    title: "NextJS is a React framework for building production grade applications that scale.",
    body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
  }
}

And maybe one more just to fill things up.

Create(
  Collection("Post"),
  {
    data: {
      title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
      body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083792286384652"),
  ts: 1605274917780000,
  data: {
    title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
    body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
  }
}

Cells

Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.
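Conceptually, a cell’s rendering decision looks something like this simplified sketch (Redwood’s real implementation lives in @redwoodjs/web and handles more cases):

```javascript
// Simplified sketch of how a cell picks which exported component to render
function chooseState({ loading, error, data }) {
  if (error) return 'Failure'
  if (loading) return 'Loading'
  if (!data || data.length === 0) return 'Empty'
  return 'Success'
}
```

The cell wires its GraphQL query into this decision automatically, so all we write are the four state components themselves.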

BlogPostsCell

yarn rw generate cell BlogPosts

export const QUERY = gql`
  query BlogPostsQuery {
    blogPosts {
      id
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ blogPosts }) => {
  return JSON.stringify(blogPosts)
}

By default, the cell renders the data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes so it queries and renders the data we need. So, let’s:

  • Change blogPosts to posts.
  • Change BlogPostsQuery to POSTS.
  • Change the query itself to return the title and body of each post.
  • Map over the data object in the success component.
  • Create a component with the title and body of the posts returned through the data object.

Here’s how that looks:

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return data.map((post) => (
    <>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </>
  ))
}

The POSTS query is sending a query for posts, and when it’s queried, we get back a data object containing an array of posts. We need to pull out the data object so we can loop over it and get the actual posts. We do this with object destructuring to get the data object and then we use the map() function to map over the data object and pull out each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.

Import BlogPostsCell to HomePage

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
Check that out! Posts are returned to the app and rendered on the front end.

Vercel

We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this work.

Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.

We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open netlify.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.

I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.

Redwood has a preset build command that works out of the box in Vercel:

Simply select “Redwood” from the preset options and we’re good to go.

We’re pretty far along, but even though the site is now “live” the database isn’t connected:

To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:

Now our application is complete!

We did it! I hope this not only gets you super excited about working with Jamstack and serverless, but also gives you a taste of some new technologies in the process.


The post Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel appeared first on CSS-Tricks.


How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • There’s automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted to a specific use case. We will see such examples of custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of already existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • We will build the client side of the app using a Static Site Generator called Gatsbyjs.
  • Finally we will deploy the app on a CDN configured and managed by Netlify CDN.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with any traditional RDBMS, a major difference with Fauna is that it is a relational NoSQL system that offers all the capabilities of a legacy RDBMS. It is very flexible without compromising scalability or performance.

Fauna supports multiple APIs for data access:

  • GraphQL: An open source data query and manipulation language. If you are new to GraphQL, you can find more details at https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language-specific drivers, which make it flexible to use with languages like JavaScript, Java, Go, etc. Find more details of FQL here.

In this article we will explain the usage of GraphQL for the ShopNote application.

First things first: sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our usage.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will define the GraphQL schema and import it into the database. A GraphQL schema defines the structure of the data: it defines the data types and the relationships between them. With the schema we can also specify what kinds of queries are allowed.

At this stage, let us create our project folder. Create a folder named shopnote somewhere on your hard drive. Inside it, create a file named shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items, where each ShopNote contains a name, description, update time, and a list of Items. Each Item type has properties like name, urgent, checked, and the shopnote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.
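For example, once the schema is imported, Fauna’s generated createShopNote mutation accepts nested writes for @relation fields. The field values in this sketch are made up:

```
mutation {
  createShopNote(data: {
    name: "Weekend shopping"
    description: "Saturday market run"
    items: {
      create: [
        { name: "Tomatoes", urgent: false, checked: false }
      ]
    }
  }) {
    _id
    name
  }
}
```

Here the nested create inserts the Item document and wires up its note side of the relationship in one request.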

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.
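To make the direction of the relation concrete, here is a small JavaScript sketch of the nested shape a one-to-many relation produces when queried. This is an illustration only; the ids and field values below are invented for the example, not real Fauna refs.

```javascript
// Illustration only: the nested shape a one-to-many relation produces.
// The ids and values below are invented for this example.
const shopnote = {
  _id: "note-1",
  name: "My Shopping List",
  items: {
    data: [
      { _id: "item-1", name: "Milk - 2 ltrs", noteId: "note-1" },
      { _id: "item-2", name: "Meat - 1lb", noteId: "note-1" },
    ],
  },
};

// One ShopNote holds many Items; each Item points back to exactly one note.
const everyItemBelongsToNote = shopnote.items.data.every(
  (item) => item.noteId === shopnote._id
);
```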

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL Schema, FaunaDB will automatically create, maintain, and update, the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD Queries/Mutations for each Collection created by the Schema, e.g. createShopNote, allShopNotes; each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also create documents automatically as we insert records. We will see that in a while.

Fauna supports a schema-free object relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each data record is inserted into a document. This forms a hierarchy which can be visualized as:

Here the data record can be an array, an object, or of any other supported type. With the Fauna data model, we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the Documents also getting created. We will be able to view and query the records and utilize the power of indexes. You may see the data model structure appearing in your Fauna dashboard like this in a while,

Point to note here: each document is identified by a unique ref attribute. There is also a ts field which records the timestamp of the most recent modification to the document. The actual record lives in the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
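As a rough sketch, a single document can be pictured as a plain object with these three fields. The values below are invented for the example, and the ref is shown as a string for simplicity:

```javascript
// Rough sketch of a Fauna document, based on the ref / ts / data fields
// described above. The values are invented for this example.
const document = {
  ref: 'Ref(Collection("ShopNote"), "101")', // unique reference, shown here as a string
  ts: 1614616860000000,                      // timestamp of the most recent modification
  data: {                                    // the actual record
    name: "My Shopping List",
    description: "This is my today's list to buy from Tom's shop",
  },
};
```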

With all this understanding, let us start using our shopnotes database, which is now created and ready for use.

Let us try some queries

Even though we have imported the schema and the underlying things are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation into the left panel of the GraphQL playground screen and execute it.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name
        urgent
      }
    }
  }
}

Note, as Fauna already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once the mutation is successfully executed, you can see the response of the ShopNote creation at the right side of the editor.

The newly created ShopNote document has all the details we passed while creating it. We have seen that ShopNote has a one-to-many relation with Item, and you can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful: once the schema and relation are defined, documents are created automatically with that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items from a shopnote, as well as updating and deleting a shopnote itself. These queries will be used at a later point in time when we create the serverless functions.

If you are interested in running other queries in the GraphQL editor, you can find them here,

Create a Server Secret Key

Next, we need to create a secure server key to make sure access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not accidentally commit the .env file to the source code repo. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with everything related to Fauna's setup. Let us move to the next phase to create serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this,

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it, and these simple functions automatically become APIs.

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies,

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js
const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};
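To make the shape of that request explicit, here is a sketch of the config the helper hands to axios. buildRequest is a name made up for this illustration, and "demo-secret" is a placeholder; in query.js the secret comes from process.env.FAUNA_SERVER_SECRET.

```javascript
// Sketch of the request config query.js builds. buildRequest is an invented
// helper name for illustration; the secret is a placeholder, not a real key.
const buildRequest = (query, variables, secret) => ({
  url: "https://graphql.fauna.com/graphql",
  method: "POST",
  headers: { Authorization: `Bearer ${secret}` },
  data: { query, variables },
});

const req = buildRequest(
  "query { allShopNotes { data { name } } }",
  {},
  "demo-secret" // placeholder secret
);
```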

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js
const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id
            name
            checked
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one-time setup first. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This is to tell Netlify about the location of the functions we have written so that it is known at the build time.

Netlify automatically provides the APIs for the functions. The URL to access the API is in this form, /.netlify/functions/get-shopnotes which may not be very user-friendly. We have written a redirect to make it like, /api/get-shopnotes.
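The rewrite itself is done by Netlify at request time, but as a toy illustration of what the rule does (toFunctionPath is a helper invented for this example), it maps URLs like this:

```javascript
// Toy illustration of the redirect rule: /api/* is rewritten to
// /.netlify/functions/:splat. Netlify performs this rewrite itself;
// toFunctionPath is a made-up helper for illustration only.
const toFunctionPath = (url) => url.replace(/^\/api\//, "/.netlify/functions/");

toFunctionPath("/api/get-shopnotes"); // "/.netlify/functions/get-shopnotes"
```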

Ok, we are done. Now in command prompt type,

netlify dev

By default the app will run on localhost:8888 to access the serverless function as an API.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

//create-shopnote.js
const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please give your attention to the parameter type, ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the mutation as well.

We have destructured the payload to get the required information from it. Once we have that, we just call the query method to create a ShopNote.
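That destructuring step can be sketched as a small standalone function. parseShopNotePayload is a name invented for this example (the real handler does this inline), and the early check for the required name field is an addition for illustration:

```javascript
// Sketch of the destructuring step in create-shopnote.js, with a simple
// validation added for illustration. parseShopNotePayload is an invented name.
const parseShopNotePayload = (body) => {
  const { name, description, updatedAt, items } = JSON.parse(body);
  if (!name) {
    return { error: { statusCode: 400, body: "name is required" } };
  }
  return { variables: { name, description, updatedAt, items } };
};
```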

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is the screenshot from Postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember how ShopNote and Item are related? To create an item, we must specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

//add-item.js
const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties: the name, whether it is urgent, the checked value, and the note the item should be part of. Let’s see how this API can be called using Postman,

As you see, we are passing the id of the note while creating an item for it.
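For reference, here is a sketch of what that request body can look like. With Fauna's GraphQL relations, linking a new document to an existing one is expressed with connect and the target document's _id; "<SHOPNOTE_ID>" below is a placeholder, not a real ref:

```javascript
// Example request body for add-item.js. The note field uses Fauna's
// `connect` form to link the new item to an existing ShopNote.
// "<SHOPNOTE_ID>" is a placeholder for a real document _id.
const addItemBody = JSON.stringify({
  name: "Eggs - 1 dozen",
  urgent: false,
  checked: false,
  note: { connect: "<SHOPNOTE_ID>" },
});
```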

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. In case you are interested, you can look into those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it. Next, let us build some user interfaces to show the shop notes and add items to it. To do that, we will use Gatsby.js (aka, Gatsby) which is a super cool, React-based static site generator.

The following section requires you to have basic knowledge of ReactJS. If you are new to it, you can learn it from here. If you are familiar with any other user interface technologies like Angular, Vue, etc., feel free to skip the next section and build your own using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called, gatsby-config.js. Please create a file named, gatsby-config.js at the root of the project folder with the following content,

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
}

Let’s run it. We would generally use the command gatsby develop to run the app locally. As we have to run the client-side application alongside the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, heading, and the create button for creating a shopnote.
  • Shopnotes: This component contains the list of shop notes (Note components).
  • Note: An individual note. Each note contains one or more items.
  • Item: An individual item. It consists of the item name and actions to add, remove, or edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required for the user interfaces to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies,

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

useEffect(() => {   axios("/api/get-shopnotes").then(result => {     if (result.status !== 200) {       console.error("Error loading shopnotes");       console.error(result);       return;     }     setShopnotes(result.data.shopnotes);     setLoading(true);   }); }, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we are checking if the data is loaded. If so, render the Shopnotes component by passing the data we have received using the API.

return (   <div className="main">     <Header />     {       loading ? <Shopnotes data = { shopnotes } /> : <h1>Loading...</h1>     }   </div> );  

You can find the entire index.js file code here. The index.js file creates the initial route (/) for the user interface. It uses other components like Shopnotes, Note, and Item to make the UI fully operational. We will not go to great lengths to explain each of these UI components. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder. Copy the content from this CSS file into the index.css file. Then import both stylesheets at the top of src/pages/index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. We are not explaining each of the actions on items and notes here, to keep the article from getting too lengthy. You can find all the code in the GitHub repo. At this stage, the directory structure may look like this,

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!

Let’s Deploy

All good so far. But there is one issue: we are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already. Please log in and click on the button, New site from Git.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details like the build command and publish directory as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields. Click on Deploy.

You should see the build successful in a couple of minutes and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70% – 80% of the features that once required a custom back end can now be done either on the front end or with APIs and services we can take advantage of.
  • Fauna provides the data API for the client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack to give a fast end user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.


The post How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna appeared first on CSS-Tricks.
