Tag: Fauna

Learn How to Build True Edge Apps With Cloudflare Workers and Fauna

(This is a sponsored post.)

There is a lot of buzz around apps running on the edge instead of on a centralized server in web development. Running your app on the edge allows your code to be closer to your users, which makes it faster. However, there is a spectrum of edge apps. Many apps only have some parts, usually static content, on the edge. But you can move even more to the edge, like computing and databases. This article describes how to do that.

Intro to the edge

First, let’s look at what the edge really is.

The “edge” refers to locations designed to be close to users instead of one centralized place. Edge servers are smaller servers placed on the edge. Traditionally, servers have been centralized, so only one server was available. This made websites slower and less reliable, because the server was often far away from the user. Say you have two users, one in Singapore and one in the U.S., and your server is in the U.S. For the customer in the U.S., the server would be close, but for the person in Singapore, the signal would have to travel across the entire Pacific. This adds latency, which makes your app slower and less responsive for the user. Placing your servers on the edge mitigates this latency problem.
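To get a feel for how much distance alone costs, here is a hedged back-of-the-envelope sketch. It assumes signals travel at roughly two-thirds the speed of light in fiber (about 200,000 km/s), and the Singapore-to-U.S. distance is an illustrative round figure, not a measured cable length:

```javascript
// Rough one-way propagation delay over a given distance, assuming
// ~200,000 km/s signal speed in fiber (about 200 km per millisecond).
function propagationDelayMs(distanceKm) {
  const kmPerMs = 200;
  return distanceKm / kmPerMs;
}

// Illustrative figure: Singapore to the U.S. West Coast, roughly 13,500 km.
// A round trip costs over 100 ms before the server does any work at all.
const oneWay = propagationDelayMs(13500);
const roundTripMs = oneWay * 2;
```

An edge server a few hundred kilometers away shrinks that floor to a millisecond or two, which is the whole argument for moving compute closer to users.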

Normal server architecture

With an edge server design, your servers have lighter-weight versions in multiple different areas, so a user in Singapore would be able to access a server in Singapore, and a user in the U.S. would also be able to access a close server. Multiple servers on the edge also make an app more reliable because if the server in Singapore went offline, the user in Singapore would still be able to access the U.S. server.

Edge architecture

Many apps have more than 100 different server locations on the edge. However, multiple server locations can add significant cost. To make it cheaper and easier for developers to harness the power of the edge, many services offer the ability to easily deploy to the edge without having to spend a lot of money or time managing multiple servers. There are many different types of these. The most basic and widely used is an edge Content Delivery Network (CDN), which allows static content to be served from the edge. However, CDNs cannot do anything more complicated than serving content. If you need databases or custom code on the edge, you will need a more advanced service.

Introducing edge functions and edge databases

Luckily, there are solutions to this. The first, for running code on the edge, is edge functions. These are small pieces of code, automatically provisioned when needed, that are designed to respond to HTTP requests. They are also commonly called serverless functions, although not all serverless functions run on the edge. Some edge function providers are Lambda@Edge, Cloudflare Workers, and Deno Deploy. In this article, we will focus on Cloudflare Workers.

We can also take databases to the edge to ensure that our serverless functions run fast even when querying a database. The easiest solution for this is Fauna. With traditional databases, it is very hard, or almost impossible, to scale to multiple regions: you have to manage separate servers and how database updates are replicated between them. Fauna abstracts all of that away, allowing you to use a cross-region database with the click of a button. It also provides an easy-to-use GraphQL interface and its own query language if you need more. By using Cloudflare Workers and Fauna, we can build a true edge app where everything runs on the edge.
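At its core, an edge function is just a small request handler. The sketch below is illustrative: it uses a plain function rather than the real Workers runtime API (a real Worker would wire this up as its fetch handler), so the routing logic stands alone:

```javascript
// Illustrative sketch of an edge-function-style request handler.
// In a real Cloudflare Worker this logic would live in the fetch
// handler; here it is a plain function for clarity.
function handleRequest(requestUrl) {
  const { pathname } = new URL(requestUrl);
  if (pathname === "/hello") {
    return { status: 200, body: "Hello from the edge!" };
  }
  return { status: 404, body: "Not found" };
}
```

The provider runs a copy of this handler in every edge location, so each user's request is served by the nearest one.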

Using Cloudflare Workers and Fauna to build a URL shortener

Setting up Cloudflare Workers and the code

URL shorteners need to be fast, which makes Cloudflare Workers and Fauna perfect for this. To get started, clone the repository at github.com/AsyncBanana/url-shortener and change into the generated folder.

git clone https://github.com/AsyncBanana/url-shortener.git
cd url-shortener

Then, install wrangler, the CLI needed for Cloudflare Workers. After that, install all npm dependencies.

npm install -g @cloudflare/wrangler
npm install

Then, sign up for Cloudflare Workers at https://dash.cloudflare.com/sign-up/workers and run wrangler login. Finally, to finish the Cloudflare Workers setup, run wrangler whoami, take the account ID from the output, and put it inside wrangler.toml, which is in the URL shortener's directory.

Setting up Fauna

Good job! Now we need to set up Fauna, which will provide the edge database for our URL shortener.

First, register for a Fauna account. Once you have finished that, create a new database by clicking “create database” on the dashboard. Enter URL-Shortener for the name, click classic for the region, and uncheck use demo data.

This is what it should look like

Once you create the database, click Collections on the dashboard sidebar and click “create new collection.” Name the collection urls (the code queries it by that exact name) and click save.

Next, click the Security tab on the sidebar and click “New key.” Next, click Save on the modal and copy the resulting API key. You can also name the key, but it is not required. Finally, copy the key into the file named .env in the code under FAUNA_KEY.

This is what the .env file should look like, except with API_KEY_HERE replaced with your key

Good job! Now we can start coding.

Create the URL shortener

There are two main folders in the code, public and src. The public folder is where all of the user-facing files are stored. src is the folder where the server code is. You can look through and edit the HTML, CSS, and client-side JavaScript if you want, but we will be focusing on the server-side code right now. If you look in src, you should see a file called urlManager.js. This is where our URL Shortening code will go.

This is the URL manager

First, we need to write the code that creates shortened URLs. In the function createUrl, create a database query by running FaunaClient.query(). We will use Fauna Query Language (FQL) to structure the query. FQL is structured using functions, which are all available under q in this case. When you execute a query, you pass the composed functions as arguments to FaunaClient.query(). Inside FaunaClient.query(), add:

q.Create(q.Collection("urls"), {
  data: {
    url: url
  }
})

This creates a new document in the collection urls containing an object with the URL to redirect to. Now, we need to get the id of the document so we can return it as a redirection point. To get the document id in the Fauna query, pass q.Create as the second argument of q.Select, with the first argument being ["ref", "id"]. This will get the id of the new document. Then, return the value returned by awaiting FaunaClient.query(). The function should now look like this:

  return await FaunaClient.query(
    q.Select(
      ["ref", "id"],
      q.Create(q.Collection("urls"), {
        data: {
          url: url,
        },
      })
    )
  );
}

Now, if you run wrangler dev and go to localhost:8787, you should see the URL shortener page. You can enter a URL and click submit, and you should see another URL generated. However, if you go to that URL, it will not do anything yet. Now we need to add the second part of this: the URL redirect.

Look back in urlManager.js. You should see a function called processUrl. In that function, put:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));

This executes a Fauna query that gets the document in the collection urls with the specified id, which lets us look up the destination URL for the id in the shortened URL. Next, return res.data.url.url.

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));
return res.data.url.url;

Now you should be all set! Just run wrangler publish, go to your workers.dev domain, and try it out!

Conclusion

You now have a URL shortener that runs entirely on the edge! If you want to add more features or learn more about Fauna or Cloudflare Workers, look at the next steps below. I hope you have learned something from this, and thank you for reading.

Next steps

  • Further improve the speed of your URL shortener by adding caching
  • Add analytics
  • Read more about Fauna
  • Read more about Cloudflare Workers
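The first next step, caching, can be sketched in a few lines. This is a hedged illustration: resolveUrl stands in for a function like processUrl from this article, and real Workers also offer a built-in Cache API that would be preferable in production:

```javascript
// Hedged sketch: memoize resolved short URLs in memory so repeat
// lookups skip the database. resolveUrl is a stand-in for a function
// like processUrl from the article.
const urlCache = new Map();

async function resolveCached(id, resolveUrl) {
  if (urlCache.has(id)) return urlCache.get(id);
  const url = await resolveUrl(id);
  urlCache.set(id, url);
  return url;
}
```

Since shortened URLs rarely change once created, even a simple cache like this can eliminate most database round trips.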


The post Learn How to Build True Edge Apps With Cloudflare Workers and Fauna appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.



How to Build a Full-Stack Mobile Application With Flutter, Fauna, and GraphQL

(This is a sponsored post.)

Flutter is Google’s UI framework used to create flexible, expressive cross-platform mobile applications. It is one of the fastest-growing frameworks for mobile app development. On the other hand, Fauna is a transactional, developer-friendly serverless database that supports native GraphQL. Flutter + Fauna is a match made in Heaven. If you are looking to build and ship a feature-rich full-stack application in record time, Flutter and Fauna are the right tools for the job. In this article, we will walk you through building your very first Flutter application with a Fauna and GraphQL back-end.

You can find the complete code for this article on GitHub.

Learning objective

By the end of this article, you should know how to:

  1. set up a Fauna instance,
  2. compose GraphQL schema for Fauna,
  3. set up GraphQL client in a Flutter app, and
  4. perform queries and mutations against Fauna GraphQL back-end.

Fauna vs. AWS Amplify vs. Firebase: What problems does Fauna solve? How is it different from other serverless solutions? If you are new to Fauna and would like to learn more about how Fauna compares to other solutions, I recommend reading this article.

What are we building?

We will be building a simple mobile application that will allow users to add, delete and update their favorite characters from movies and shows.

Setting up Fauna

Head over to fauna.com and create a new account. Once logged in, you should be able to create a new database.

Give a name to your database. I am going to name mine flutter_demo. Next, we can select a region group. For this demo, we will choose classic. Fauna is a globally distributed serverless database. It is the only database that supports low-latency read and write access from anywhere. Think of it as a CDN (Content Delivery Network), but for your database. To learn more about region groups, follow this guide.

Generating an admin key

Once the database is created, head over to the security tab. Click on the new key button and create a new key for your database. Keep this key secure, as we need it for our GraphQL operations.

We will be creating an admin key for our database. Keys with an admin role are used for managing their associated database, including the database access providers, child databases, documents, functions, indexes, keys, tokens, and user-defined roles. You can learn more about Fauna’s various security keys and access roles at the following link.

Compose a GraphQL schema

We will be building a simple app that will allow the users to add, update, and delete their favorite TV characters.

Creating a new Flutter project

Let’s create a new Flutter project by running the following command.

flutter create my_app

Inside the project directory, we will create a new file called graphql/schema.graphql.

In the schema file, we will define the structure of our collection. Collections in Fauna are similar to tables in SQL. We only need one collection for now. We will call it Character.

### schema.graphql
type Character {
    name: String!
    description: String!
    picture: String
}
type Query {
    listAllCharacters: [Character]
}

As you can see above, we defined a type called Character with several properties (i.e., name, description, picture). Think of properties as columns of a SQL database or key-value pairs of a NoSQL database. We have also defined a Query, which will return a list of the characters.

Now let’s go back to Fauna dashboard. Click on GraphQL and click on import schema to upload our schema to Fauna.

Once the importing is done, we will see that Fauna has generated the GraphQL queries and mutations.
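These generated queries are plain GraphQL over HTTP, so any client can call them, not just Flutter. As a hedged illustration in JavaScript (the secret and query below are placeholders, not values from this article), the request Fauna's GraphQL endpoint expects looks like this:

```javascript
// Build the options object for a POST to https://graphql.fauna.com/graphql.
// The secret is your Fauna key; query/variables are standard GraphQL.
function buildGraphQLRequest(secret, query, variables = {}) {
  return {
    method: "POST",
    headers: {
      Authorization: `Bearer ${secret}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables }),
  };
}

// Example usage with a placeholder secret; pass the result to fetch().
const req = buildGraphQLRequest(
  "YOUR_FAUNA_SECRET",
  "{ listAllCharacters { data { _id name } } }"
);
```

This is the same request shape the Flutter GraphQL client will assemble for us under the hood.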

Don’t like auto-generated GraphQL? Want more control over your business logic? In that case, Fauna allows you to define your custom GraphQL resolvers. To learn more, follow this link.

Setup GraphQL client in Flutter app

Let’s open up our pubspec.yaml file and add the required dependencies.

...
dependencies:
  graphql_flutter: ^4.0.0-beta
  hive: ^1.3.0
  flutter:
    sdk: flutter
...

We added two dependencies here. graphql_flutter is a GraphQL client library for Flutter. It brings all the modern features of GraphQL clients into one easy-to-use package. We also added the hive package as a dependency. Hive is a lightweight key-value database written in pure Dart for local storage. We are using hive to cache our GraphQL queries.
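To picture what "caching queries in a key-value store" means here, consider this illustrative sketch in JavaScript. The class and key scheme are assumptions for illustration, not graphql_flutter or Hive internals:

```javascript
// Illustrative sketch of a key-value query cache, the idea behind
// backing a GraphQL client with a store like Hive. Not the actual
// implementation used by graphql_flutter.
class QueryCache {
  constructor() {
    this.store = new Map();
  }
  // Derive a cache key from the query text plus its variables.
  key(query, variables = {}) {
    return `${query}::${JSON.stringify(variables)}`;
  }
  get(query, variables) {
    return this.store.get(this.key(query, variables));
  }
  set(query, variables, result) {
    this.store.set(this.key(query, variables), result);
  }
}
```

The client checks this store before hitting the network, which is what lets the app show data instantly on a revisit.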

Next, we will create a new file lib/client_provider.dart. We will create a provider class in this file that will contain our Fauna configuration.

To connect to Fauna’s GraphQL API, we first need to create a GraphQLClient. A GraphQLClient requires a cache and a link to be initialized. Let’s take a look at the code below.

// lib/client_provider.dart
import 'package:graphql_flutter/graphql_flutter.dart';
import 'package:flutter/material.dart';

ValueNotifier<GraphQLClient> clientFor({
  @required String uri,
  String subscriptionUri,
}) {
  final HttpLink httpLink = HttpLink(
    uri,
  );
  final AuthLink authLink = AuthLink(
    getToken: () async => 'Bearer fnAEPAjy8QACRJssawcwuywad2DbB6ssrsgZ2-2',
  );
  Link link = authLink.concat(httpLink);
  return ValueNotifier<GraphQLClient>(
    GraphQLClient(
      cache: GraphQLCache(store: HiveStore()),
      link: link,
    ),
  );
}

In the code above, we created a ValueNotifier to wrap the GraphQLClient. Notice that we configured the AuthLink, adding the admin key from Fauna as part of the bearer token. Here I have hardcoded the admin key. However, in a production application, we must avoid hard-coding any security keys from Fauna.

There are several ways to store secrets in Flutter application. Please take a look at this blog post for reference.

We want to be able to call Query and Mutation from any widget of our application. To do so we need to wrap our widgets with GraphQLProvider widget.

// lib/client_provider.dart

....

/// Wraps the root application with the `graphql_flutter` client.
/// We use the cache for all state management.
class ClientProvider extends StatelessWidget {
  ClientProvider({
    @required this.child,
    @required String uri,
  }) : client = clientFor(
          uri: uri,
        );
  final Widget child;
  final ValueNotifier<GraphQLClient> client;
  @override
  Widget build(BuildContext context) {
    return GraphQLProvider(
      client: client,
      child: child,
    );
  }
}

Next, we go to our main.dart file and wrap our main widget with the ClientProvider widget. Let’s take a look at the code below.

// lib/main.dart
...

void main() async {
  await initHiveForFlutter();
  runApp(MyApp());
}

final graphqlEndpoint = 'https://graphql.fauna.com/graphql';

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return ClientProvider(
      uri: graphqlEndpoint,
      child: MaterialApp(
        title: 'My Character App',
        debugShowCheckedModeBanner: false,
        initialRoute: '/',
        routes: {
          '/': (_) => AllCharacters(),
          '/new': (_) => NewCharacter(),
        },
      ),
    );
  }
}

At this point, all our downstream widgets have access to run queries and mutations and can interact with the GraphQL API.

Application pages

Demo applications should be simple and easy to follow. Let’s go ahead and create a simple list widget that will show the list of all characters. Let’s create a new file lib/screens/character-list.dart. In this file, we will write a new widget called AllCharacters.

// lib/screens/character-list.dart

class AllCharacters extends StatelessWidget {
  const AllCharacters({Key key}) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: CustomScrollView(
        slivers: [
          SliverAppBar(
            pinned: true,
            snap: false,
            floating: true,
            expandedHeight: 160.0,
            title: Text(
              'Characters',
              style: TextStyle(
                fontWeight: FontWeight.w400,
                fontSize: 36,
              ),
            ),
            actions: <Widget>[
              IconButton(
                padding: EdgeInsets.all(5),
                icon: const Icon(Icons.add_circle),
                tooltip: 'Add new entry',
                onPressed: () {
                  Navigator.pushNamed(context, '/new');
                },
              ),
            ],
          ),
          SliverList(
            delegate: SliverChildListDelegate([
              Column(
                children: [
                  for (var i = 0; i < 10; i++)
                    CharacterTile()
                ],
              )
            ])
          )
        ],
      ),
    );
  }
}

// character-tile.dart
class CharacterTile extends StatefulWidget {
  CharacterTile({Key key}) : super(key: key);
  @override
  _CharacterTileState createState() => _CharacterTileState();
}

class _CharacterTileState extends State<CharacterTile> {
  @override
  Widget build(BuildContext context) {
    return Container(
      child: Text("Character Tile"),
    );
  }
}

As you can see in the code above, the SliverList contains a for loop that populates the list with some fake data. Eventually, we will make a GraphQL query to our Fauna backend and fetch all the characters from the database. Before we do that, let’s try running our application as it is. We can run our application with the following command:

flutter run 

At this point we should be able to see the following screen.

Performing queries and mutations

Now that we have some basic widgets, we can go ahead and hook up GraphQL queries. Instead of hardcoded placeholders, we would like to get all the characters from our database and view them in the AllCharacters widget.

Let’s go back to the Fauna’s GraphQL playground. Notice we can run the following query to list all the characters.

query ListAllCharacters {
  listAllCharacters(_size: 100) {
    data {
      _id
      name
      description
      picture
    }
    after
  }
}

To perform this query from our widget we will need to make some changes to it.

import 'package:flutter/material.dart';
import 'package:graphql_flutter/graphql_flutter.dart';
import 'package:todo_app/screens/character-tile.dart';

String readCharacters = """
query ListAllCharacters {
  listAllCharacters(_size: 100) {
    data {
      _id
      name
      description
      picture
    }
    after
  }
}
""";

class AllCharacters extends StatelessWidget {
  const AllCharacters({Key key}) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: CustomScrollView(
        slivers: [
          SliverAppBar(
            pinned: true,
            snap: false,
            floating: true,
            expandedHeight: 160.0,
            title: Text(
              'Characters',
              style: TextStyle(
                fontWeight: FontWeight.w400,
                fontSize: 36,
              ),
            ),
            actions: <Widget>[
              IconButton(
                padding: EdgeInsets.all(5),
                icon: const Icon(Icons.add_circle),
                tooltip: 'Add new entry',
                onPressed: () {
                  Navigator.pushNamed(context, '/new');
                },
              ),
            ],
          ),
          SliverList(
            delegate: SliverChildListDelegate([
              Query(
                options: QueryOptions(
                  document: gql(readCharacters), // GraphQL query we want to perform
                  pollInterval: Duration(seconds: 120), // refetch interval
                ),
                builder: (QueryResult result, { VoidCallback refetch, FetchMore fetchMore }) {
                  if (result.isLoading) {
                    return Text('Loading');
                  }
                  return Column(
                    children: [
                      for (var item in result.data['listAllCharacters']['data'])
                        CharacterTile(Character: item, refetch: refetch),
                    ],
                  );
                },
              )
            ])
          )
        ],
      ),
    );
  }
}

First, we defined the query string for getting all characters from the database. We then wrapped our list widget with a Query widget from graphql_flutter.

Feel free to take a look at the official documentation for the graphql_flutter library.

In the query options argument, we provide the GraphQL query string itself. We can pass any Duration to the pollInterval argument; the poll interval defines how often we would like to refetch data from our backend. The widget also has a standard builder function, which we can use to pass the query result, a refetch callback, and a fetchMore callback down the widget tree.
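The polling behavior itself is simple to picture. In plain JavaScript terms (an illustrative sketch, not graphql_flutter's implementation), pollInterval boils down to something like this:

```javascript
// Illustrative sketch of polling: re-run a fetch function on a fixed
// interval, and return a function that stops the polling. This is the
// idea behind pollInterval, not graphql_flutter's actual code.
function startPolling(fetchFn, intervalMs) {
  const timer = setInterval(fetchFn, intervalMs);
  return function stopPolling() {
    clearInterval(timer);
  };
}
```

The client layers caching on top, so a poll that returns unchanged data does not cause a visible re-render.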

Next, I am going to update the CharacterTile widget to display the character data on screen.

// lib/screens/character-tile.dart
...
class CharacterTile extends StatelessWidget {
  final Character;
  final VoidCallback refetch;
  final VoidCallback updateParent;
  const CharacterTile({
    Key key,
    @required this.Character,
    @required this.refetch,
    this.updateParent,
  }) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return InkWell(
      onTap: () {
      },
      child: Padding(
        padding: const EdgeInsets.all(10),
        child: Row(
          children: [
            Container(
              height: 90,
              width: 90,
              decoration: BoxDecoration(
                color: Colors.amber,
                borderRadius: BorderRadius.circular(15),
                image: DecorationImage(
                  fit: BoxFit.cover,
                  image: NetworkImage(Character['picture']),
                ),
              ),
            ),
            SizedBox(width: 10),
            Expanded(
              child: Column(
                mainAxisAlignment: MainAxisAlignment.center,
                crossAxisAlignment: CrossAxisAlignment.start,
                children: [
                  Text(
                    Character['name'],
                    style: TextStyle(
                      color: Colors.black87,
                      fontWeight: FontWeight.bold,
                    ),
                  ),
                  SizedBox(height: 5),
                  Text(
                    Character['description'],
                    style: TextStyle(
                      color: Colors.black87,
                    ),
                    maxLines: 2,
                  ),
                ],
              ),
            )
          ],
        ),
      ),
    );
  }
}

Adding new data

We can add new characters to our database by running the mutation below.

mutation CreateNewCharacter($data: CharacterInput!) {
  createCharacter(data: $data) {
    _id
    name
    description
    picture
  }
}

To run this mutation from our widget, we can use the Mutation widget from the graphql_flutter library. Let’s create a new widget with a simple form for the users to interact with and input data. Once the form is submitted, the createCharacter mutation will be called.

// lib/screens/new.dart
...
String addCharacter = """
  mutation CreateNewCharacter(\$data: CharacterInput!) {
    createCharacter(data: \$data) {
      _id
      name
      description
      picture
    }
  }
""";

class NewCharacter extends StatelessWidget {
  const NewCharacter({Key key}) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Add New Character'),
      ),
      body: AddCharacterForm(),
    );
  }
}

class AddCharacterForm extends StatefulWidget {
  AddCharacterForm({Key key}) : super(key: key);
  @override
  _AddCharacterFormState createState() => _AddCharacterFormState();
}

class _AddCharacterFormState extends State<AddCharacterForm> {
  String name;
  String description;
  String imgUrl;
  @override
  Widget build(BuildContext context) {
    return Form(
      child: Padding(
        padding: EdgeInsets.all(20),
        child: Column(
          crossAxisAlignment: CrossAxisAlignment.start,
          children: [
            TextField(
              decoration: const InputDecoration(
                icon: Icon(Icons.person),
                labelText: 'Name *',
              ),
              onChanged: (text) {
                name = text;
              },
            ),
            TextField(
              decoration: const InputDecoration(
                icon: Icon(Icons.post_add),
                labelText: 'Description',
              ),
              minLines: 4,
              maxLines: 4,
              onChanged: (text) {
                description = text;
              },
            ),
            TextField(
              decoration: const InputDecoration(
                icon: Icon(Icons.image),
                labelText: 'Image Url',
              ),
              onChanged: (text) {
                imgUrl = text;
              },
            ),
            SizedBox(height: 20),
            Mutation(
              options: MutationOptions(
                document: gql(addCharacter),
                onCompleted: (dynamic resultData) {
                  print(resultData);
                  name = '';
                  description = '';
                  imgUrl = '';
                  Navigator.of(context).push(
                    MaterialPageRoute(builder: (context) => AllCharacters()),
                  );
                },
              ),
              builder: (
                RunMutation runMutation,
                QueryResult result,
              ) {
                return Center(
                  child: ElevatedButton(
                    child: const Text('Submit'),
                    onPressed: () {
                      runMutation({
                        'data': {
                          'picture': imgUrl,
                          'name': name,
                          'description': description,
                        }
                      });
                    },
                  ),
                );
              },
            )
          ],
        ),
      ),
    );
  }
}

As you can see from the code above, the Mutation widget works very similarly to the Query widget. Additionally, the Mutation widget provides us with an onCompleted callback, which returns the updated result from the database after the mutation is completed.

Removing data

To remove a character from our database we can run the deleteCharacter mutation. We can add this mutation function to our CharacterTile and fire it when a button is pressed.

// lib/screens/character-tile.dart
...

String deleteCharacter = """
  mutation DeleteCharacter(\$id: ID!) {
    deleteCharacter(id: \$id) {
      _id
      name
    }
  }
""";

class CharacterTile extends StatelessWidget {
  final Character;
  final VoidCallback refetch;
  final VoidCallback updateParent;
  const CharacterTile({
    Key key,
    @required this.Character,
    @required this.refetch,
    this.updateParent,
  }) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return InkWell(
      onTap: () {
        showModalBottomSheet(
          context: context,
          builder: (BuildContext context) {
            print(Character['picture']);
            return Mutation(
              options: MutationOptions(
                document: gql(deleteCharacter),
                onCompleted: (dynamic resultData) {
                  print(resultData);
                  this.refetch();
                },
              ),
              builder: (
                RunMutation runMutation,
                QueryResult result,
              ) {
                return Container(
                  height: 400,
                  padding: EdgeInsets.all(30),
                  child: Center(
                    child: Column(
                      mainAxisAlignment: MainAxisAlignment.center,
                      mainAxisSize: MainAxisSize.min,
                      children: <Widget>[
                        Text(Character['description']),
                        ElevatedButton(
                          child: Text('Delete Character'),
                          onPressed: () {
                            runMutation({
                              'id': Character['_id'],
                            });
                            Navigator.pop(context);
                          },
                        ),
                      ],
                    ),
                  ),
                );
              },
            );
          },
        );
      },
      child: Padding(
        padding: const EdgeInsets.all(10),
        child: Row(
          children: [
            Container(
              height: 90,
              width: 90,
              decoration: BoxDecoration(
                color: Colors.amber,
                borderRadius: BorderRadius.circular(15),
                image: DecorationImage(
                  fit: BoxFit.cover,
                  image: NetworkImage(Character['picture']),
                ),
              ),
            ),
            SizedBox(width: 10),
            Expanded(
              child: Column(
                mainAxisAlignment: MainAxisAlignment.center,
                crossAxisAlignment: CrossAxisAlignment.start,
                children: [
                  Text(
                    Character['name'],
                    style: TextStyle(
                      color: Colors.black87,
                      fontWeight: FontWeight.bold,
                    ),
                  ),
                  SizedBox(height: 5),
                  Text(
                    Character['description'],
                    style: TextStyle(
                      color: Colors.black87,
                    ),
                    maxLines: 2,
                  ),
                ],
              ),
            )
          ],
        ),
      ),
    );
  }
}

Editing data

Editing data works the same as adding and deleting: it is just another mutation in the GraphQL API. We can create an edit form widget similar to the new-character form widget. The only difference is that the edit form runs the updateCharacter mutation. For editing, I created a new widget, lib/screens/edit.dart. Here’s the code for this widget.

// lib/screens/edit.dart

String editCharacter = """
mutation EditCharacter(\$name: String!, \$id: ID!, \$description: String!, \$picture: String!) {
  updateCharacter(data: {
    name: \$name
    description: \$description
    picture: \$picture
  }, id: \$id) {
    _id
    name
    description
    picture
  }
}
""";

class EditCharacter extends StatelessWidget {
  final Character;
  const EditCharacter({Key key, this.Character}) : super(key: key);
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Edit Character'),
      ),
      body: EditFormBody(Character: this.Character),
    );
  }
}

class EditFormBody extends StatefulWidget {
  final Character;
  EditFormBody({Key key, this.Character}) : super(key: key);
  @override
  _EditFormBodyState createState() => _EditFormBodyState();
}

class _EditFormBodyState extends State<EditFormBody> {
  String name;
  String description;
  String picture;
  @override
  Widget build(BuildContext context) {
    return Container(
      child: Padding(
        padding: const EdgeInsets.all(8.0),
        child: Column(
          crossAxisAlignment: CrossAxisAlignment.start,
          children: [
            TextFormField(
              initialValue: widget.Character['name'],
              decoration: const InputDecoration(
                icon: Icon(Icons.person),
                labelText: 'Name *',
              ),
              onChanged: (text) {
                name = text;
              },
            ),
            TextFormField(
              initialValue: widget.Character['description'],
              decoration: const InputDecoration(
                icon: Icon(Icons.person),
                labelText: 'Description',
              ),
              minLines: 4,
              maxLines: 4,
              onChanged: (text) {
                description = text;
              },
            ),
            TextFormField(
              initialValue: widget.Character['picture'],
              decoration: const InputDecoration(
                icon: Icon(Icons.image),
                labelText: 'Image Url',
              ),
              onChanged: (text) {
                picture = text;
              },
            ),
            SizedBox(height: 20),
            Mutation(
              options: MutationOptions(
                document: gql(editCharacter),
                onCompleted: (dynamic resultData) {
                  print(resultData);
                  Navigator.of(context).push(
                    MaterialPageRoute(builder: (context) => AllCharacters()),
                  );
                },
              ),
              builder: (
                RunMutation runMutation,
                QueryResult result,
              ) {
                print(result);
                return Center(
                  child: ElevatedButton(
                    child: const Text('Submit'),
                    onPressed: () {
                      runMutation({
                        'id': widget.Character['_id'],
                        'name': name != null ? name : widget.Character['name'],
                        'description': description != null ? description : widget.Character['description'],
                        'picture': picture != null ? picture : widget.Character['picture'],
                      });
                    },
                  ),
                );
              },
            ),
          ],
        ),
      ),
    );
  }
}

You can take a look at the complete code for this article below.

Where to go from here

The main intention of this article is to get you up and running with Flutter and Fauna. We have only scratched the surface here. The Fauna ecosystem provides a complete, auto-scaling, developer-friendly backend as a service for your mobile applications. If your goal is to ship a production-ready cross-platform mobile application in record time, Fauna and Flutter are the way to go.

I highly recommend checking out Fauna's official documentation site. If you are interested in learning more about GraphQL clients for Dart/Flutter, check out the official GitHub repo for graphql_flutter.

Happy hacking and see you next time.


The post How to Build a Full-Stack Mobile Application With Flutter, Fauna, and GraphQL appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Building a Command Line Tool with Nodejs and Fauna

Command line tools are among the most popular applications we have today. We use command line tools every day, whether it's git, npm, or yarn. Command line tools are fast and useful for automating applications and workflows.

In this post, we will build a command line tool with Node.js, using Fauna for our database. Specifically, we will create a random quotes application with Node.js, then give it execute permissions so it can be run with a single command.

Prerequisites

To take full advantage of this tutorial, make sure you have the following installed on your local development environment:

  • Node.js version >= 16.x.x installed.
  • Have access to one package manager such as npm or yarn.
  • Access to Fauna dashboard.

Getting Started with Fauna

Register a new account using email credentials or a GitHub account. You can register a new account here. Once you have created a new account or signed in, you are going to be welcomed by the dashboard screen:

Creating a New Fauna Instance

To create a new database instance using Fauna services, you have to follow some simple steps. On the dashboard screen, press the button New Database:

Next, enter the name of the database and save. Once the database instance is set up, you are ready to create an access key. Access keys provide authorization and a connection to the database from an application. To create your access key, navigate to the side menu, go to the Security tab, and click on the New Key button.

Creating a Collection

Navigate to your dashboard, click on the Collections tab from the side menu, press the New Collection, button, input your desired name for the new collection, and save.

Creating Indexes

To complete the setup, create indexes for our application. Indexes are essential because searching documents in Fauna is done using indexes, by matching user input against the index's terms field. Create an index by navigating to the Indexes tab of the Fauna dashboard.
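As an aside, the same index can also be created from the Fauna shell with a single FQL call. This is only a sketch: the index name, source collection, and the field used in terms below are illustrative and should match whatever you set up in the dashboard.

```js
// FQL, run inside `fauna shell <your-database>`.
// "quotes" and the ["data", "keyword"] field path are assumed names;
// `terms` declares which field search input is matched against.
CreateIndex({
  name: "quotes_by_keyword",
  source: Collection("quotes"),
  terms: [{ field: ["data", "keyword"] }]
})
```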

Now, we are ready to build our quotes command-line application using Node.js and our database.

Initializing a Node.js App and Installing Dependencies

This section will initialize a Node.js application and install the dependencies we need using npm. We are also going to build a simple quotes application from this link.

Getting Started

To get started, let’s create a folder for our application inside the project folder using the code block below on our terminal:

mkdir quotes_cli
cd quotes_cli
touch quotes_app
npm init -y

In the code block above, we created a new directory, navigated into it, created a new file called quotes_app, and finished by initializing the npm package. Next, add the axios package so we can make requests to the quotes server:

npm i axios

Next, add a package for coloring our text: chalk is an npm package that helps us print colored output to the terminal. To add chalk, use the command below:

npm i chalk

Let's also add the dotenv package, which loads environment variables from a .env file:

npm i dotenv

Building the Quotes App

In our quotes_app file, let’s add the code block below

#!/usr/bin/env node
// The shebang above lets the shell run this file directly with Node.js.
const axios = require('axios');
const chalk = require('chalk');
const dotenv = require('dotenv');

dotenv.config();

const url = process.env.APP_URL;

axios({
  method: 'get',
  url: url,
  headers: { 'Accept': 'application/json' },
}).then(res => {
  const quote = res.data.contents.quotes[0].quote;
  const author = res.data.contents.quotes[0].author;
  const log = chalk.red(`${quote} - ${author}`);
  console.log(log);
}).catch(err => {
  const log = chalk.red(err);
  console.log(log);
});

In the code block above, we imported axios, chalk, and dotenv. We read the URL of our quotes API from the environment, and using axios we made a GET request to that URL, with an Accept header set so the response comes back as JSON.

To log a quote, we use JavaScript promises: the then method logs the quote and its author on our console, and we chain a catch method to handle any errors.
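The then/catch flow can be sketched without the network at all; here's a standalone example using a locally resolved Promise shaped like the quotes response (the quote text is made up):

```javascript
// A Promise that resolves with the same shape the quotes API returns,
// so we can exercise the exact then/catch chain used in quotes_app.
const fakeRequest = Promise.resolve({
  data: { contents: { quotes: [{ quote: 'Stay curious.', author: 'Anon' }] } },
});

fakeRequest
  .then((res) => {
    const { quote, author } = res.data.contents.quotes[0];
    console.log(`${quote} - ${author}`); // → Stay curious. - Anon
  })
  .catch((err) => console.error(err));
```

If the Promise rejected instead (a network failure in the real app), the catch callback would run and log the error.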

Before we run, let’s change the permissions on our file using the code below:

chmod +x quotes_app

Next, run the application with the command below:

./quotes_app

We should get a result similar to the image below
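The chmod and ./ steps above are plain Unix mechanics, and they work for any script whose first line is a shebang. A minimal standalone demonstration (the demo_app file name is illustrative):

```shell
# Create a tiny script with a shebang, make it executable, and run it.
printf '#!/usr/bin/env sh\necho hello from the CLI\n' > demo_app
chmod +x demo_app        # add the execute permission, as we did for quotes_app
./demo_app               # prints: hello from the CLI
```

Without the chmod step, the shell would refuse to execute the file with a "Permission denied" error.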

Conclusion

In this article, we learned more about Fauna and building command-line tools with Node.js. You can extend the application further, for example by adding real-time date reminders.

Here is a list of some resources that you might like after reading this post:



Building a Headless CMS with Fauna and Vercel Functions

This article introduces the concept of the headless CMS, a backend-only content management system that allows developers to create, store, manage, and publish content over an API, using Fauna and Vercel functions. This improves the frontend-backend workflow and enables developers to build excellent user experiences quickly.

In this tutorial, we will learn about and use a headless CMS, Fauna, and Vercel functions to build a blogging platform, Blogify🚀. After that, you will be able to build any web application using a headless CMS, Fauna, and Vercel functions.

Introduction

According to MDN, a content management system (CMS) is computer software used to manage the creation and modification of digital content. A CMS typically has two major components: a content management application (CMA), the front-end user interface that allows a user, even one with limited expertise, to add, modify, and remove content from a website without the intervention of a webmaster; and a content delivery application (CDA), which compiles the content and updates the website.

The Pros And Cons Of Traditional vs Headless CMS

Choosing between these two can be quite confusing and complicated. But they both have potential advantages and drawbacks.

Traditional CMS Pros

  • Setting up your content on a traditional CMS is much easier, as everything you need (content management, design, etc.) is made available to you.
  • Many traditional CMSs have drag-and-drop support, making it easy for a person with no programming experience to work with them. They also support easy customization with zero to little coding knowledge.

Traditional CMS Cons

  • The plugins and themes that a traditional CMS relies on may contain malicious code or bugs and slow down the website or blog.
  • The traditional coupling of the front end and back end demands more time and money for maintenance and customization.

Headless CMS Pros

  • There's flexibility in the choice of frontend framework: since the frontend and backend are separated from each other, you can pick whichever front-end technology suits your needs. This gives you the free will to choose the tools needed to build the frontend, which means flexibility during the development stage.
  • Deployment works more easily with a headless CMS. Applications (blogs, websites, etc.) built with a headless CMS can easily be deployed to various displays, such as web browsers, mobile devices, and AR/VR devices.

Headless CMS Cons

  • You are left with the worries of managing your back-end infrastructure and setting up the UI components of your site or app.
  • Implementations of a headless CMS are known to be more costly than a traditional CMS. Building a headless CMS application that embodies analytics is not cost-effective.

Fauna provides a preexisting infrastructure for building web applications without the usual setting up of a custom API server. This saves developers time, and the stress of choosing regions and configuring storage that exists with other databases is nonexistent with Fauna, which is global/multi-region by default. All the maintenance we need is actively taken care of by engineers and automated DevOps at Fauna. We will use Fauna as our backend-only content management system.

Pros Of Using Fauna

  • Ease of use: you can create a Fauna database instance from within the development environment of hosting platforms like Netlify or Vercel.
  • Great support for querying data via GraphQL, or via Fauna's own query language, Fauna Query Language (FQL), for complex functions.
  • Access data in multiple models, including relational, document, graph, and temporal.
  • Capabilities like built-in authentication, transparent scalability, and multi-tenancy are fully available on Fauna.
  • Add-ons such as the Fauna Console and the Fauna Shell make it easy to manage database instances.

Vercel Functions, also known as Serverless Functions, are, according to the docs, pieces of code written with backend languages that take an HTTP request and provide a response.
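In practice, such a function is just a file in an api/ directory that exports a request handler. A minimal sketch (the file path and message are illustrative, and CommonJS is used for brevity):

```javascript
// api/hello.js — deployed by Vercel as a Serverless Function.
// It receives the HTTP request object and responds with JSON.
function handler(req, res) {
  res.status(200).json({ message: 'Hello from Blogify' });
}

module.exports = handler;
```

Hitting /api/hello on the deployed project would return that JSON body with a 200 status.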

Prerequisites

To take full advantage of this tutorial, ensure the following tools are available or installed on your local development environment:

  • Access to Fauna dashboard
  • Basic knowledge of React and React Hooks
  • Have create-react-app installed as a global package or use npx to bootstrap the project.
  • Node.js version >= 12.x.x installed on your local machine.
  • Ensure that npm or yarn is also installed as a package manager

Database Setup With Fauna

Sign in to your Fauna account to get started, or register a new account using either email credentials or an existing GitHub account. You can register for a new account here. Once you have created a new account or signed in, you are welcomed by the dashboard screen. If you prefer a shell environment, you can also use the Fauna Shell, which easily allows you to create and/or modify resources on Fauna through the terminal.

Using the fauna shell, the command is:

npm install --global fauna-shell
fauna cloud-login

But we will use the website throughout this tutorial. Once signed in, the dashboard screen welcomes you:

Now that we are logged in or have our account created, we can go ahead and create our Fauna database. We'll go through the following simple steps to create the new database using Fauna services. We start by naming our database, which we'll use as our content management system. In this tutorial, we will name our database blogify.

With the database created, the next step is to create a new data collection from the Fauna dashboard. Navigate to the Collections tab on the side menu and create a new collection by clicking on the NEW COLLECTION button.

We'll then give our collection a well-suited name. Here we will call it blogify_posts.

The next step in getting our database ready is to create a new index. Navigate to the Indexes tab to create an index. Searching documents in Fauna is done using indexes, specifically by matching inputs against an index's terms field. Click on the NEW INDEX button to create an index. Once on the create index screen, fill out the form: select the collection we created previously, then give a name to our index. In this tutorial, we will name ours all_posts. We can now save our index.

After creating an index, it's time to create our DOCUMENT. This will contain the contents/data we want to use for our CMS website. Click on the NEW DOCUMENT button to get started. In the text editor used to create the document, we'll create an object to serve our needs for the website.
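The original article shows this object in a screenshot. Based on the fields the Posts component reads later in this tutorial (title, date, mainContent, subContent), the document looks roughly like this, with placeholder values:

```json
{
  "post": {
    "title": "What is a Headless CMS?",
    "date": "2nd July 2021",
    "mainContent": "A short teaser paragraph shown for every post...",
    "subContent": "The longer body revealed by the Show more button."
  }
}
```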

The above post object represents the unit of data we need to create a blog post. Your choice of data can be quite different from what we have here, serving whatever purpose you want within your website. You can create as many documents as you need for your CMS website. To keep things simple, we just have three blog posts.

Now that we have our database setup complete to our choice, we can move on to create our React app, the frontend.

Create A New React App And Install Dependencies

For the frontend development, we will need dependencies such as the Fauna SDK, styled-components, and vercel in our React app. We will use styled-components for the UI styling and the vercel CLI within our terminal to host our application. The Fauna SDK will be used to access the content in the database we set up. You can always replace styled-components with whatever library you decide to use for your UI styling, and likewise use any UI framework or library you prefer.

npx create-react-app blogify

# install dependencies once the directory is created
yarn add faunadb styled-components

# install vercel globally
yarn global add vercel

The faunadb package is the JavaScript driver for Fauna. The styled-components library allows you to write actual CSS code to style your components. Once done with all the installation of the project dependencies, check the package.json file to confirm that the installation completed successfully.

Now let’s start an actual building of our blog website UI. We’ll start with the header section. We will create a Navigation component within the components folder inside the src folder, src/components, to contain our blog name, Blogify🚀.

import styled from "styled-components";

function Navigation() {
  return (
    <Wrapper>
      <h1>Blogify🚀</h1>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  background-color: #23001e;
  color: #f3e0ec;
  padding: 1.5rem 5rem;
  & > h1 {
    margin: 0px;
  }
`;

export default Navigation;

After being imported within the App components, the above code coupled with the stylings through the styled-components library, will turn out to look like the below UI:

Now it's time to create the body of the website, which will contain the post data from our database. We'll structure a component called Posts, which will contain our blog posts created on the backend.

import styled from "styled-components";

function Posts() {
  return (
    <Wrapper>
      <h3>My Recent Articles</h3>
      <div className="container"></div>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  margin-top: 3rem;
  padding-left: 5rem;
  color: #23001e;
  & > .container {
    display: flex;
    flex-wrap: wrap;
  }
  & > .container > div {
    width: 50%;
    padding: 1rem;
    border: 2px dotted #ca9ce1;
    margin-bottom: 1rem;
    border-radius: 0.2rem;
  }
  & > .container > div > h4 {
    margin: 0px 0px 5px 0px;
  }
  & > .container > div > button {
    padding: 0.4rem 0.5rem;
    border: 1px solid #f2befc;
    border-radius: 0.35rem;
    background-color: #23001e;
    color: #ffffff;
    font-weight: medium;
    margin-top: 1rem;
    cursor: pointer;
  }
  & > .container > div > article {
    margin-top: 1rem;
  }
`;

export default Posts;

The above code contains styles for JSX that we’ll still create once we start querying for data from the backend to the frontend.

Integrate Fauna SDK Into Our React App

To integrate the Fauna client with the React app, you have to make an initial connection from the app. Create a new file db.js at the directory path src/config/. Then import the faunadb driver and define a new client.
The secret passed as the argument to the faunadb.Client() method is going to hold the access key from the .env file:

import faunadb from 'faunadb';

const client = new faunadb.Client({
  secret: process.env.REACT_APP_DB_KEY,
});
const q = faunadb.query;

export { client, q };

Inside the Posts component, create a state variable called posts using the useState React Hook, with a default value of an empty array. It is going to store the value of the content we get back from our database via the setPosts function. Then define a second state variable, visible, with a default value of false, which we'll use to hide or show more post content through the handleDisplay function, triggered by a button we'll add later in the tutorial.

function Posts() {
  const [posts, setPosts] = useState([]);
  const [visible, setVisibility] = useState(false);
  const handleDisplay = () => setVisibility(!visible);
  // ...
}

Creating A Serverless Function By Writing Queries

Since our blog website is going to perform only one operation, that is, getting the data/contents we created in the database, let's create a new directory called src/api/ and inside it a new file called index.js. Using ES6, we'll use import to bring in the client and the query instance from the config/db.js file:

import { client, q } from '../config/db';

export const getAllPosts = client
  .query(q.Paginate(q.Match(q.Ref('indexes/all_posts'))))
  .then(response => {
    const expenseRef = response.data;
    const getAllDataQuery = expenseRef.map(ref => {
      return q.Get(ref);
    });
    return client.query(getAllDataQuery).then(data => data);
  })
  .catch(error => console.error('Error: ', error.message));

The query above returns a page of refs that we map over to build the actual results needed for the application. We make sure to append the catch, which checks for errors while querying the database so we can log them.

Next is to display all the data returned from our CMS, that is, from the Fauna collection. We'll do so by invoking the query getAllPosts from the ./api/index.js file inside a useEffect Hook in our Posts component. This is because when the Posts component renders for the first time, it needs to fetch the data and check whether there are any posts in the database:

useEffect(() => {
  getAllPosts.then((res) => {
    setPosts(res);
    console.log(res);
  });
}, []);

Open the browser's console to inspect the data returned from the database. If everything is right and you've been following closely, the returned data should look like the below:

With this data successfully returned from the database, we can now complete our Posts component, adding all the necessary JSX elements that we styled with the styled-components library. We use JavaScript's map to loop over the posts state array, but only when the array is not empty:

import { useEffect, useState } from "react";
import styled from "styled-components";
import { getAllPosts } from "../api";

function Posts() {
  const [posts, setPosts] = useState([]);
  const [visible, setVisibility] = useState(false);
  const handleDisplay = () => setVisibility(!visible);

  useEffect(() => {
    getAllPosts.then((res) => {
      setPosts(res);
      console.log(res);
    });
  }, []);

  return (
    <Wrapper>
      <h3>My Recent Articles</h3>
      <div className="container">
        {posts &&
          posts.map((post) => (
            <div key={post.ref.id} id={post.ref.id}>
              <h4>{post.data.post.title}</h4>
              <em>{post.data.post.date}</em>
              <article>
                {post.data.post.mainContent}
                <p style={{ display: visible ? "block" : "none" }}>
                  {post.data.post.subContent}
                </p>
              </article>
              <button onClick={handleDisplay}>
                {visible ? "Show less" : "Show more"}
              </button>
            </div>
          ))}
      </div>
    </Wrapper>
  );
}

const Wrapper = styled.div`
  margin-top: 3rem;
  padding-left: 5rem;
  color: #23001e;
  & > .container {
    display: flex;
    flex-wrap: wrap;
  }
  & > .container > div {
    width: 50%;
    padding: 1rem;
    border: 2px dotted #ca9ce1;
    margin-bottom: 1rem;
    border-radius: 0.2rem;
  }
  & > .container > div > h4 {
    margin: 0px 0px 5px 0px;
  }
  & > .container > div > button {
    padding: 0.4rem 0.5rem;
    border: 1px solid #f2befc;
    border-radius: 0.35rem;
    background-color: #23001e;
    color: #ffffff;
    font-weight: medium;
    margin-top: 1rem;
    cursor: pointer;
  }
  & > .container > div > article {
    margin-top: 1rem;
  }
`;

export default Posts;

With the complete code structure above, our blog website, Blogify🚀, will look like the below UI:

Deploying To Vercel

The Vercel CLI provides a set of commands that allow you to deploy and manage your projects. The following steps will get your project hosted from your terminal on the Vercel platform quickly and easily:

vercel login

Follow the instructions to log in to your Vercel account from the terminal.

vercel

Run the vercel command from the root of the project directory. It will ask a series of questions, which we answer depending on what's asked:

vercel
? Set up and deploy “~/Projects/JavaScript/React JS/blogify”? [Y/n]
? Which scope do you want to deploy to? ikehakinyemi
? Link to existing project? [y/N] n
? What’s your project’s name? (blogify)
  # press enter if you don't want to change the name of the project
? In which directory is your code located? ./
  # press enter if you're running this deployment from the root directory
? Want to override the settings? [y/N] n

This will deploy your project to Vercel. Visit your Vercel account to complete any other setup needed for CI/CD purposes.

Conclusion

I'm glad you followed the tutorial to this point; I hope you've learned how to use Fauna as a headless CMS. By combining Fauna with headless CMS concepts, you can build great web applications, from e-commerce applications to note-keeping applications, or any web application that needs data to be stored and retrieved for use on the frontend. Here's the GitHub link to the code sample we used in this tutorial, and the live demo, which is hosted on Vercel.



How to Build a FullStack Serverless HN Clone With Svelte and Fauna

Svelte is a free and open-source front-end JavaScript framework that enables developers to build highly performant applications with smaller application bundles. Svelte also empowers developers with an awesome developer experience.

Svelte provides a different approach to building web apps than some of the other frameworks such as React and Vue. While frameworks like React and Vue do the bulk of their work in the user’s browser while the app is running, Svelte shifts that work into a compile step that happens only when you build your app, producing highly-optimized vanilla JavaScript.

The outcome of this approach is not only smaller application bundles and better performance, but also a developer experience that is more approachable for people who have limited experience with the modern tooling ecosystem.

Svelte sticks closely to the classic web development model of HTML, CSS, and JS, just adding a few extensions to HTML and JavaScript. It arguably has fewer concepts and tools to learn than some of the other framework options.

Project Setup

The recommended way to initialize a Svelte app is by using degit which sets up everything automatically for you.

You will need to have either yarn or npm installed.

# for Rollup
npx degit "sveltejs/sapper-template#rollup" hn-clone

# for webpack
npx degit "sveltejs/sapper-template#webpack" hn-clone

cd hn-clone
yarn # or just npm install

Project Structure

├── package.json
├── README.md
├── rollup.config.js
├── scripts
│   └── setupTypeScript.js
├── src
│   ├── ambient.d.ts
│   ├── client.js
│   ├── components
│   │   └── Nav.svelte
│   ├── node_modules
│   │   └── images
│   │       └── successkid.jpg
│   ├── routes
│   │   ├── about.svelte
│   │   ├── blog
│   │   │   ├── index.json.js
│   │   │   ├── index.svelte
│   │   │   ├── _posts.js
│   │   │   ├── [slug].json.js
│   │   │   └── [slug].svelte
│   │   ├── _error.svelte
│   │   ├── index.svelte
│   │   └── _layout.svelte
│   ├── server.js
│   ├── service-worker.js
│   └── template.html
├── static
│   ├── favicon.png
│   ├── global.css
│   ├── logo-192.png
│   ├── logo-512.png
│   └── manifest.json
└── yarn.lock

The Application

In this tutorial, we will build a basic HN clone with the ability to create a post and comment on that post.

Setting Up Fauna

First, add the JavaScript driver for Fauna to the project:

yarn add faunadb

Creating Your Own Database on Fauna

To hold all our application’s data, we will first need to create a database. Fortunately, this is just a single command or line of code, as shown below. Don’t forget to create a Fauna account before continuing!

Fauna Shell

Fauna’s API has many interfaces/clients, such as drivers in JS, GO, Java and more, a cloud console, local and cloud shells, and even a VS Code extension! For this article, we’ll start with the local Fauna Shell, which is almost 100% interchangeable with the other interfaces.

npm install -g fauna-shell

After installing the Fauna Shell with npm, log in with your Fauna credentials:

$ fauna cloud-login
Email: email@example.com
Password: **********

Now we are able to create our database.

fauna create-database hn-clone

Create Collections

Now that we have our database created, it’s time to create our collections.

In Fauna, a database is made up of one or more collections. The data you create is represented as documents and saved in a collection. A collection is like an SQL table; or rather, a collection is a collection of documents.

A fair comparison with a traditional SQL database would be as below.

FaunaDB Terminology   SQL Terminology
Database              Database
Collection            Table
Document              Row
Index                 Index

For our two microservices, we will create two collections in our database. Namely:

  1. a posts collection, and
  2. a comments collection.

To start an interactive shell for querying our new database, we need to run:

fauna shell hn-clone

We can now operate our database from this shell.

$ fauna shell hn-clone
Starting shell for database hn-clone
Connected to https://db.fauna.com
Type Ctrl+D or .exit to exit the shell
hn-clone>

To create our posts collection, run the following command in the shell to create the collection with the default configuration settings for collections.

hn-clone> CreateCollection({ name: "posts" })

Next, let’s do the same for the comments collections.

hn-clone> CreateCollection({ name: "comments" })

Creating a Posts/Feed Page

To view all our posts, we will create a page that displays them in a time-ordered feed.

In our src/routes/index.svelte file add the following content. This will create the list of all available posts that are stored in our Fauna database.

<script context="module">
  import faunadb, { query as q } from "faunadb";
  import Comment from "../components/Comment.svelte";

  const client = new faunadb.Client({
    secret: process.env.FAUNA_SECRET,
  });

  export async function preload(page, session) {
    // Paginate returns a page of refs; Map + Get resolves each ref
    // into its full document.
    let posts = await client.query(
      q.Map(
        q.Paginate(q.Documents(q.Collection("posts"))),
        q.Lambda("ref", q.Get(q.Var("ref")))
      )
    );
    console.log(posts);
    return { posts: posts.data };
  }
</script>

<script>
  export let posts;
  console.log(posts);
</script>

<main class="container">
  {#each posts as post}
    <!-- assumes each post document stores a title field in its data -->
    <div class="card">
      <div class="card-body">
        <p>{post.data.title}</p>
        <Comment postId={post.ref.id}/>
      </div>
    </div>
  {/each}
</main>

Creating a Comments Component

To create a comment we will create a component to send our data to Fauna using the query below.

let response = await client.query(
  q.Create(
    q.Collection('comments'),
    { data: { title: comment, post: postId } },
  )
)

Our final component will have the following code.

<script>
  export let postId;
  import faunadb, { query as q } from 'faunadb';

  const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

  let comment;

  let postComment = async () => {
    if (!comment) { return }
    let response = await client.query(
      q.Create(
        q.Collection('comments'),
        { data: { title: comment, post: postId } },
      )
    )
    console.log(response)
  }
</script>

<div class="row">
  <div class="col-sm-12 col-lg-6 col-sm-8">
    <p></p>
    <textarea type="text" class="form-control" bind:value={comment} placeholder="Comment"></textarea>
    <p></p>
    <button class="btn btn-warning" style="float:right;" on:click={postComment}>Post comment</button>
  </div>
</div>

We run the dev server:

yarn dev

When you visit http://localhost:5000 you will be greeted with the feeds with a panel to comment on the same page.

Conclusion

In this tutorial, we saw how fast it can be to develop a full-stack application with Fauna and Svelte.

Svelte provides a highly productive, powerful and fast framework that we can use to develop both backend and frontend components of our full stack app while using the pre-rendered framework Sapper.

Secondly, we saw that Fauna is indeed a powerful database, with a powerful query language, FQL, which supports complex querying and integrates with the serverless and JAMstack ecosystem through its API-first approach. This enables developers to simplify code and ship faster.

I hope you find Fauna to be exciting, like I do, and that you enjoyed this article. Feel free to follow me on Twitter @theAmolo if you enjoyed this!



How I Built my SaaS MVP With Fauna ($150 in revenue so far)

Are you a beginner coder trying to launch your MVP? I've just finished the MVP of ReviewBolt.com, a competitor analysis tool, and it's built using React + Fauna + Next.js. It's my first paid SaaS tool, so earning $150 is a big accomplishment for me.

In this post you'll see why I chose Fauna for ReviewBolt and how you can implement a similar setup. I'll show you why I chose Fauna as my primary database: it easily stores massive amounts of data and gets it to me fast. By the end of this article, you'll be able to decide whether you also want to create your own serverless website with Fauna as your back end.

What is ReviewBolt?

The website allows you to search any website and get a detailed review of a company’s ad strategies, tech stack, and user experiences.

ReviewBolt currently pulls data from seven different sources to give you an analysis of any website in the world. It will estimate Facebook spend, Google spend, yearly revenue, traffic growth metrics, user reviews, and more!

Why did I build it?

I’ve dabbled in entrepreneurship and I’m always scouting for new opportunities. I thought building ReviewBolt would help me (1) determine how big a company is… and (2) determine its primary distribution channel. This is super important because if you can’t get new users then your business is pretty much dead.

Some other cool tidbits about it:

  • You get a large overview of everything that’s going on with a website.
  • What’s more, every search you make on the website creates a page that gets saved and indexed. So ReviewBolt grows a tiny bit bigger with every user search.

So far, it has made $150, gained 50 users, analyzed over 3,000 websites, and helped 5,000+ people with their research. That’s a good start for a solo dev indie hacker like myself.

It was featured on Betalist and it’s quite popular in entrepreneur circles. You can see my real-time statistics here: reviewbolt.com/stats

I’m not a coder… all self-taught

Building it was no easy feat! I originally graduated as an English major from McGill University in Canada with zero tech skills. I actually took one programming class in my last year and got a 50%… the lowest passing grade possible.

But between then and now, a lot has changed. For the last two years I’ve been learning web and app development. This year my goal was to make a profitable SaaS company, but also to make something that I would find useful.

I built ReviewBolt in my little home office in London during the massive lockdown. The project works, and that’s one step for me on my journey. And luckily I chose Fauna, because it was quite easy to get a fast, reliable database that actually works, at very low cost.

Why did I pick Fauna?

Fauna provides a great free tier and as a solo dev project, I wanted to keep my costs lean to see first if this would actually work.

Warning: I’m no Fauna expert. I actually still have a long way to go to master it. However, this was my setup to create the MVP of ReviewBolt.com that you see today. I made some really dumb mistakes like storing my data objects as strings instead of objects… But you live and learn.

I didn’t start off with Fauna…

ReviewBolt first started as just one large Google Sheet. Every time someone made a website search, it pulled the data from the various sources and saved it as a row in the sheet.

Simple enough right? But there was a problem…

After about 1,000 searches Google Sheets started to break down like an old car on a road trip…. It was barely able to start when I loaded the page. So I quickly looked for something more stable.

Then I found Fauna 😇

I discovered that Fauna was really fast and quite reliable. I started out using their GraphQL feature but realized the native FQL language had much better documentation.

There’s a great dashboard that gives you immediate insight into your usage.

I primarily use Fauna in the following ways:

  1. Storage of 110,000 company bios that I scraped.
  2. Storage of Google Ads data
  3. Storage of Facebook Ad data
  4. Storage of Google Trends data
  5. Storage of tech stack
  6. Storage of user reviews

The 110k companies are stored in one collection, and the live data about websites is stored in another. I could probably have created relational databases within Fauna, but that was way beyond me at the time 😅 and it was easier to store everything as one very large object.

For testing, Fauna provides a built-in web shell. This is really useful, because I can follow the tutorials and try them out in real time on the website without loading up Visual Studio.

What frameworks does the website use?

The website works using React and Next.js. To load a review of a website, you just type in the site’s address.

Every search looks like this: reviewbolt.com/r/[website.com]

The first thing that happens on the back end is that it uses a Fauna index to see if this search has already been done. Fauna makes searching your database very efficient: even with a collection of 110k documents, it still works really well because of its use of indexing. So when a page loads — say, reviewbolt.com/r/fauna — it first checks to see if there’s a match. If a match is found, then it loads the saved data and renders it on the page.

If there’s no match then the page brings up a spinner and in the backend it queries all these public APIs about the requested website. As soon as it’s done it loads the data for the user.

And when that new website is analyzed it saves this data into my Fauna Collection. So then the next user won’t have to load everything but rather we can use Fauna to fetch it.

My use case is to index all of ReviewBolt’s website searches and then be able to retrieve those searches easily.

What else can Fauna do?

The next step is to create a charts section. So far I built a very basic version of this just for Shopify’s top 90 stores.

But ideally I’d have one that works per category, using Fauna’s index bindings to create multiple indexes: Top Facebook Spenders, Top Google Spenders, Top Traffic, Top Revenue, and Top CRMs by traffic. That will be really interesting for competitor research, to see who’s at the top. Because in marketing, you always want to take inspiration from the winners.


export async function findByName(name) {
  var data = await client.query(
    Map(
      Paginate(
        Match(Index("rbCompByName"), name)
      ),
      Lambda(
        "person",
        Get(Var("person"))
      )
    )
  )
  return data.data // [0].data
}

This queries Fauna to paginate the results and return the found object.

I run this function when searching for the website name. And then to create a company I use this code:

export async function createCompany(slug, linkinfo, trending, googleData, trustpilotReviews, facebookData, tech, date, trafficGrowth, growthLevels, trafficLevel, faunaData) {
  var Slug = slug
  var Author = linkinfo
  var Trends = trending
  var Google = googleData
  var Reviews = trustpilotReviews
  var Facebook = facebookData
  var TechData = tech
  var myDate = date
  var myTrafficGrowth = trafficGrowth
  var myGrowthLevels = growthLevels
  var myFaunaData = faunaData

  client.query(
    Create(Collection('RBcompanies'), {
      data: {
        "Slug": Slug,
        "Author": Author,
        "Trends": Trends,
        "Google": Google,
        "Reviews": Reviews,
        "Facebook": Facebook,
        "TechData": TechData,
        "Date": myDate,
        "TrafficGrowth": myTrafficGrowth,
        "GrowthLevels": myGrowthLevels,
        "TrafficLevels": trafficLevel,
        "faunaData": JSON.parse(myFaunaData),
      }
    })
  ).then(result => console.log(result)).catch(error => console.error('Error mate: ', error.message));
}

Which is a bit longer because I’m pulling so much information on various aspects of the website and storing it as one large object.

The Fauna FQL language is quite simple once you get your head around it, especially since, for what I’m doing at least, I don’t need too many commands.

I followed this tutorial on building a twitter clone and that really helped.

This will change when I introduce charts and start sorting a variety of indexes, but luckily that’s quite easy to do in Fauna.

What’s the next step to learn more about Fauna?

I highly recommend watching the video above and also going through the tutorial on fireship.io. It’s great for going through the basic concepts, and it really helped me get to grips with the Fauna Query Language.

Conclusion

Fauna was quite easy to implement as a basic CRUD system where I didn’t have to worry about fees. The free tier currently offers 100k reads and 50k writes, and for the traffic level that ReviewBolt is getting, that works. So I’m quite happy with it so far and I’d recommend it for future projects.


The post How I Built my SaaS MVP With Fauna ($150 in revenue so far) appeared first on CSS-Tricks.


How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna

GraphQL is a query language and server-side runtime environment for building APIs. It can also be considered the syntax that you write in order to describe the kind of data you want from APIs. What this means for you as a backend developer is that with GraphQL, you are able to expose a single endpoint on your server to handle GraphQL queries from client applications, as opposed to the many endpoints you’d need to create to handle specific kinds of requests with REST and, in turn, serve data from those endpoints.

If a client needs new data that is not already provisioned, you’d need to create new endpoints for it and update the API docs. GraphQL makes it possible to send queries to a single endpoint; these queries are then passed on to the server to be handled by the server’s predefined resolver functions, and the requested information is served over the network.

Running Flask server

Flask is a minimalist framework for building Python servers. I always use Flask to expose my GraphQL API to serve my machine learning models. Requests from client applications are then forwarded by the GraphQL gateway. Overall, microservice architecture allows us to use the best technology for the right job and it allows us to use advanced patterns like schema federation. 

In this article, we will start small with an implementation of the so-called Levenshtein distance. We will use the well-known NLTK library and expose the Levenshtein distance functionality through a GraphQL API. In this article, I assume that you are familiar with basic GraphQL concepts, like building GraphQL mutations.

Note: We will be working with the free and open source example repository with the following:

In the project, Pipenv is used for managing the Python dependencies. From the project folder, we can create our virtual environment with `pipenv shell` and install the dependencies from the Pipfile with `pipenv install`.

We usually define a couple of script aliases in our Pipfile to ease our development workflow.

It allows us to run our dev environment easily with a command alias as follows:

The Flask server should then be exposed by default at port 5000. You can immediately move on to the GraphQL Playground, which serves as an IDE for live documentation and query execution for GraphQL servers. GraphQL Playground uses so-called GraphQL introspection to fetch information about our GraphQL types. The following code initializes our Flask server:

It is good practice to use a WSGI server when running in a production environment. Therefore, we have to set up a script alias for gunicorn with:

Levenshtein distance (edit distance)

The Levenshtein distance, also known as edit distance, is a string metric. It is defined as the minimum number of single-character edits (insertions, deletions, or substitutions) needed to change one character sequence a into another one b. If we denote the lengths of such sequences |a| and |b| respectively, we get the following:

lev_{a,b}(i, j) = max(i, j)                                   if min(i, j) = 0,
                  min( lev_{a,b}(i-1, j) + 1,
                       lev_{a,b}(i, j-1) + 1,
                       lev_{a,b}(i-1, j-1) + 1_(a_i ≠ b_j) )  otherwise.

Here, lev_{a,b}(i, j) is the distance between the first i characters of a and the first j characters of b, and 1_(a_i ≠ b_j) is the indicator function, equal to 0 when a_i = b_j and 1 otherwise. The distance between a and b is then lev_{a,b}(|a|, |b|). For more on the theoretical background, feel free to check out the Wiki.

In practice, let’s say that someone misspelt ‘machine learning’ and wrote ‘machinlt lerning’. We would need to make the following edits:

Edit  Edit type     Word state
0     (start)       Machinlt lerning
1     Substitution  Machinet lerning
2     Deletion      Machine lerning
3     Insertion     Machine learning
For these two strings, we get a Levenshtein distance equal to 3. The Levenshtein distance has many applications, such as spell checkers, correction systems for optical character recognition, and similarity calculations.
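To make the definition concrete, here is a minimal pure-Python sketch of the iterative dynamic-programming computation (for illustration only; the article’s server later delegates this calculation to NLTK):

```python
def levenshtein(a: str, b: str) -> int:
    """Compute lev(a, b) row by row over the DP table."""
    prev = list(range(len(b) + 1))  # row for i = 0: j insertions
    for i, ca in enumerate(a, 1):
        curr = [i]  # lev(i, 0) = i deletions
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1  # the indicator 1(a_i != b_j)
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost  # substitution or match
                            ))
        prev = curr
    return prev[-1]

print(levenshtein("machinlt lerning", "machine learning"))  # 3, matching the table above
```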

Building a GraphQL server with Graphene in Python

We will build the following schema in our article:

Each GraphQL schema is required to have at least one query. We usually define our first query in order to health check our microservice. The query can be called like this:

query {   healthcheck }

However, the main function of our schema is to enable us to calculate the Levenshtein distance. We will use variables to pass dynamic parameters in the following GraphQL document:
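The original post showed this document as an image. A sketch of what it likely looked like follows; the operation and input names (levenshtein, LevenshteinInput) are assumptions, not confirmed by the article:

```graphql
mutation CalculateLevenshtein($input: LevenshteinInput!) {
  levenshtein(input: $input)
}
```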

We have defined our schema so far in SDL format. In the Python ecosystem, however, we do not have libraries like graphql-tools, so we need to define our schema with the code-first approach. The schema is defined as follows using the Graphene library:

We have followed the best practices for overall schema and mutations. Our input object type is written in Graphene as follows:

Each time we execute our mutation in the GraphQL Playground:

With the following variables:

{   "input": {     "s1": "test1",     "s2": "test2"   } }

We obtain the Levenshtein distance between our two input strings. For our simple example of strings test1 and test2, we get 1. We can leverage the well-known NLTK library for natural language processing (NLP). The following code is executed from the resolver:
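The resolver code appeared as an image in the original post; at its core it calls NLTK’s edit_distance, roughly like this (a sketch, not the article’s exact code):

```python
from nltk.metrics.distance import edit_distance

# NLTK's edit_distance implements the Levenshtein metric described above.
distance = edit_distance("test1", "test2")
print(distance)
```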

It is also straightforward to implement the Levenshtein distance ourselves using, for example, an iterative matrix, but I would suggest not reinventing the wheel and using the default NLTK functions.

Serverless GraphQL APIs with Fauna

First off some introductions, before we jump right in. It’s only fair that I give Fauna a proper introduction as it is about to make our lives a whole lot easier. 

Fauna is a serverless database service that handles all optimization and maintenance tasks so that developers don’t have to worry about them and can focus on developing their apps and shipping to market faster.

Again, serverless doesn’t actually mean “no servers.” Simply put, serverless means that you can get things working without necessarily having to set things up from scratch. Some apps that use serverless concepts don’t have a backend service written from scratch; they employ cloud functions, which are scripts written on cloud platforms to handle necessary tasks like login, registration, serving data, etc.

Where does Fauna fit into all of this? When we build servers, we need to provision them with a database, and that usually means a running database instance. With serverless technology like Fauna, we can shift that workload to the cloud and focus on actually writing our auth systems and implementing the business logic for our app. Fauna also manages things like maintenance and scaling, which are usual causes for concern with systems that use conventional databases.

If you are interested in getting more info about Fauna and its features, check the Fauna docs. Let’s get started with building our GraphQL API the serverless way with Fauna.

Requirements

  • Fauna account: that’s all you need for this session, so click here to go to the sign-up page.

Creating a Fauna database

Log in to your Fauna account once you have created one. Once on the dashboard, you should see a button to create a new database. Click on that and you should see a little form to fill in the name of the database, resembling the one below:

I call mine “graphqlbyexample”, but you can call yours anything you wish. Please ignore the pre-populate with demo data option; we don’t need that for this demo. Click “Save” and you should be brought to a new screen, as shown below:

Adding a GraphQL Schema to Fauna

In order to get our GraphQL server up and running, Fauna allows us to upload our own GraphQL schema. On the page we are currently on, you should see a GraphQL option; select that and it will prompt you to upload a schema file. This file contains raw GraphQL schema and is saved with either the .gql or .graphql file extension. Let’s create our schema and upload it to Fauna to spin up our server.

Create a new file anywhere you like. I’m creating it in the same directory as our previous app, because it has no impact on it. I’m calling it schema.gql, and we will add the following to it:
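The schema itself appeared as an image in the original post. A sketch of what it likely contained is below; the User fields match those mentioned later in the article (name, email, password), while the Note type’s fields are assumptions:

```graphql
type User {
  name: String!
  email: String!
  password: String!
}

type Note {
  title: String!
  content: String!
  author: User!
}
```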

Here, we simply define our data types corresponding to our two tables (Note and User). Save this and go back to that page to upload the schema.gql file we just created. Once that is done, Fauna processes it and takes us to a new page — our GraphQL API playground.

We have literally created a GraphQL server by simply uploading that really simple schema to Fauna. To highlight some of the really cool feats that Fauna pulls off, observe:

  1. Fauna automatically generates collections for us. If you notice, we did not create any collections (a collection translates to a table, if you are only familiar with relational databases). Fauna is a NoSQL database; collections are technically the same as tables, and documents are the same as rows in tables. If we go to the Collections option and click on it, we see the collections that were auto-generated on our behalf, courtesy of the schema file that we uploaded.
  2. Fauna automatically creates indexes on our behalf: head over to the Indexes option and see what indexes have been created for the API. Fauna is a document-oriented database and does not have the primary keys or foreign keys that you have in relational databases for search and index purposes; instead, we create indexes in Fauna to help with data retrieval.
  3. Fauna automatically generates GraphQL queries and mutations, as well as API docs, on our behalf: this is one of my personal favorites, and I can’t seem to get over just how efficiently Fauna does this. Fauna is able to intelligently generate some queries that it thinks you might want in your newly created API. Head back over to the GraphQL option and click on the “Docs” tab to open up the docs on the playground.

As you can see, two queries and a handful of mutations are already auto-generated (even though we did not add them to our schema file); you can click on each one in the docs to see the details.

Testing our server

Let’s test out some of these queries and mutations from the playground. We can also use our server outside of the playground (by the way, it is a fully functional GraphQL server).

Testing from the Playground

  1. First off, we will test by creating a new user with the predefined createUser mutation, as follows:
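The mutation appeared as a screenshot. Assuming the User type from the uploaded schema, Fauna’s generated createUser mutation is called roughly like this (the field values here are made up):

```graphql
mutation {
  createUser(data: {
    name: "John Doe"
    email: "john@example.com"
    password: "supersecret"
  }) {
    _id
    name
  }
}
```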

If we go to the Collections option and choose User, we should have our newly created entry (document, aka row) in our User collection.

  2. Let’s create a new note and associate it with a user as the author via its document ref ID, which is a special ID generated by Fauna for every document for the sake of references like this, much like a key in relational tables. To find the ID for the user we just created, simply navigate to the collection, and from the list of documents you should see the option (a copy icon) to copy the Ref ID:

Once you have this, you can create a new note and associate it as follows:
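A sketch of that mutation, assuming the Note type has an author relation to User; Fauna relations use a connect keyword with the Ref ID (the ID below is a placeholder; paste the one you copied):

```graphql
mutation {
  createNote(data: {
    title: "My first note"
    content: "Hello, Fauna!"
    author: { connect: "269513292484444680" }
  }) {
    _id
    title
  }
}
```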

  3. Let’s make a query this time, to get data from the database. Currently, we can fetch a user by ID or fetch a note by its ID. Let’s see that in action:
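For example, Fauna’s auto-generated findUserByID query can be called like this (with a placeholder ID):

```graphql
query {
  findUserByID(id: "269513292484444680") {
    name
    email
  }
}
```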

You must have been thinking it: what if we wanted to fetch the info of all users? Currently, we can’t do that, because Fauna did not generate that for us automatically, but we can update our schema. So let’s add our custom query to our schema.gql file, as follows. Note that this is an update to the file, so don’t clear everything out; just add this to it:
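The schema addition appeared as an image; a sketch of it follows. The query name allUsers is an assumption; Fauna will generate the backing index for it on the next schema update:

```graphql
type Query {
  allUsers: [User!]
}
```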

Once you have added this, save the file and click on the Update Schema option on the playground to upload the file again. It should take a few seconds to update; once it’s done, we will be able to use our newly created query, as follows:
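Assuming the custom query was named allUsers, calling it looks something like this; note that Fauna wraps list results in a paginated page type with a data field:

```graphql
query {
  allUsers {
    data {
      name
      email
    }
  }
}
```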

Don’t forget that, as opposed to having all the info about users served (namely: name, email, password), we can choose which fields we want, because it’s GraphQL. And not just that, it’s Fauna’s GraphQL, so feel free to specify more fields if you want.

Testing outside the playground – using Python (requests library)

Now that we’ve seen that our API works from the playground, let’s see how we can actually use it from an application outside the playground environment, using Python’s requests library. If you don’t have it installed, kindly install it using pip as follows:

pip install requests
  • Before we write any code, we need to get our API key from Fauna, which is what allows us to communicate with our API from outside the playground. Head over to Security on your dashboard, and on the Keys tab select the option to create a new key; it should bring up a form like this:

Leave the database option as the current one, change the role of the key from Admin to Server, and then save. It’ll generate a new key secret that you must copy and save somewhere safe, most probably as an environment variable.

  • For this, I’m going to create a simple script to demonstrate. Add a new file to your current working directory (or anywhere you wish) and call it whatever you like; I’m calling mine test.py. In this file we’ll add the following:

Here we add a couple of imports, including the requests library, which we use to send the requests, as well as the os module, used here to load the environment variable where I stored the Fauna secret key we got from the previous step.

Note the URL where the request is to be sent; this is obtained from the Fauna GraphQL Playground, here:

Next, we create the query to be sent. This example shows a simple fetch query to find a user by ID (which is one of the automatically generated queries from Fauna). We then retrieve the key from the environment variable and store it in a variable called token, and create a dictionary to represent our headers. This is, after all, an HTTP request, so we can set headers here; in fact, we have to, because Fauna will look for our secret key in the headers of our request.

  • The concluding part of the code shows how we use the requests library to send the request, as follows:

We create a request, check whether it went through via its status_code, and print the response from the server if it went well; otherwise, we print an error message. Let’s run test.py and see what it returns.
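Putting the steps above together, test.py might look something like the sketch below. The endpoint is Fauna’s public GraphQL URL; the environment variable name FAUNA_SECRET and the document ID are placeholders, and findUserByID is one of Fauna’s auto-generated queries:

```python
import os

import requests

# Fauna's GraphQL endpoint, as shown in the GraphQL Playground.
FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql"

# A fetch query using one of Fauna's auto-generated operations.
# The ID below is a placeholder; substitute a real document Ref ID.
query = """
query {
  findUserByID(id: "269513292484444680") {
    name
    email
  }
}
"""

# The key secret created on the Security page, loaded from the environment.
token = os.environ.get("FAUNA_SECRET")

# Fauna looks for the secret in the Authorization header of the request.
headers = {"Authorization": f"Bearer {token}"}


def run_query(q):
    response = requests.post(FAUNA_GRAPHQL_URL, json={"query": q}, headers=headers)
    if response.status_code == 200:
        return response.json()
    raise RuntimeError(f"Query failed with status {response.status_code}")
```

With a valid key in the environment, calling print(run_query(query)) should print the JSON response for the requested user.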

Conclusion

In this article, we covered creating GraphQL servers from scratch and looked at creating servers right from Fauna without having to do much work. We also saw some of the awesome, cool perks that come with the serverless system Fauna provides, and we went on to see how we could test our servers and validate that they work.

Hopefully, this was worth your time and taught you a thing or two about GraphQL, serverless, Fauna, Flask, and text analytics. To learn more about Fauna, you can also sign up for a free account and try it out yourself!


By: Adesina Abdrulrahman


The post How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna appeared first on CSS-Tricks.


Building an Ethereum app using Redwood.js and Fauna

With the recent climb of Bitcoin’s price over $20k USD, and it recently breaking $30k, I thought it worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning, open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realize that there will be some new people stumbling onto this post with no prior experience with Ethereum, and it would behoove me to point these readers in some direction. Thankfully, as of the time of this rewriting, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend that newcomers go through it!

That said, I will be providing relevant details as we go along on how I learned to make my own blockchain application, so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in his blog post he made for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for their database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides a high-availability solution: globally located servers each contain a partition of our database, asynchronously replicating our data with each transaction made.

Some of the benefits to using Fauna can be summarized as:

  • Transactional
  • Multi-document
  • Geo-distributed

In short, Fauna frees the developer from worrying about single- or multi-document solutions, and it guarantees consistent data without burdening the developer with modeling their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as:

  • Firebase
  • Cassandra
  • MongoDB

But these options don’t give us the ACID guarantees that Fauna does while scaling. ACID stands for:

  • Atomic: all operations in a transaction form a single unit; either they all pass or none do. If we have multiple operations in the same request, then either all are applied or none are; one cannot fail while the others succeed.
  • Consistent: a transaction can only bring the database from one valid state to another; that is, any data written to the database must follow the rules set out by the database. This ensures that all transactions are legal.
  • Isolation: when transactions run concurrently, they leave the database in the same state as if each request had been made sequentially.
  • Durability: any transaction that is made and committed to the database is persisted, regardless of downtime or failure of the system.

Redwood.js

Since I’ve used Fauna several times, I can vouch for Fauna’s database first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server!! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript (not production-ready) serverless framework which comes prepackaged with Apollo Client/Server!

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, do be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folk may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows or txn streams to enable real-time financial applications; that is; apps where the user no longer needs to wait for txn confirmations and can immediately follow-up on the next set of computational actions.

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing for a user to “mint a continuous supply of NFTs”. This stream of NFTs can then be sold via auctions. Another interesting part of the emanator app is that these NFTs are for creators, artists 👩‍🎨 , or musicians 🎼 .

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator `monorepo`, if you want to learn more.

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo on your terminal, favorite text editor, or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and setup environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually your PROJECT ID of your Infura app.

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it in your .env file for INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug in the code where this field is actually missing from the monorepo. So, we must add it to get our app working!

We are adding line 33, a contentHash field with the type `String`, so that our auctions can be added to our database and then shown to the user.
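The change appeared as a screenshot in the original post; in api/prisma/schema.prisma it would look something like this sketch (the surrounding fields are elided, and whether the field is optional is an assumption based on the GraphQL schema shown later in the post):

```prisma
model Auction {
  // ...existing Auction fields...
  contentHash String?
}
```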

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create and then mint your first NFT! 🎉 🎉

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details like the one shown below:

You may also notice in your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing Ctrl+C in the terminal (this works on macOS as well). Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down until we’ve finished adding Fauna.

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I won’t walk through how to do that, but this helpful article covers it well!). Once you have copied your key’s secret, paste it into your .env file, replacing <FAUNA_SECRET_KEY>:

Make sure to leave the quotation marks in place!
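With the secret pasted in, the relevant line of .env ends up looking something like this (the value below is made up; use your own key’s secret):

```
FAUNA_SECRET_KEY="fnADExampleSecretNotReal123456"
```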

Importing GraphQL Schema to Fauna

To import our project’s GraphQL schema to Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Comment out to bypass Fauna "Import your GraphQL schema" error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll omit this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our three separate schema files, api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js, so they correspond to our new Fauna GraphQL schema! This is important to maintain consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!

Otherwise, if you want to just get up and running quickly, this section is for you.

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver, faunadb, and the graphql-request package. We will use both of these in our main modifications to add Fauna to our database scripts folder.

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:

Then, we must make a small update to our api/src/functions/graphql.js file like so:

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js we just created. This part may not seem straightforward at first, but all the necessary changes come down to understanding how Fauna’s FQL commands work.

FQL (Fauna Query Language) is Fauna’s native API for querying Fauna. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/services/auctions/auctions.js, we’ll do the following:

To break this down a bit, first, we import the client variables and `db` instance from the proper project file paths. Then, we remove line 11, and replace it with lines 13 – 28 (you can ignore the comments for now, but if you really want to see the rest of these, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions Index of our Fauna Indexes to get all the auctions data from our Fauna database. You can test this out by running console.log(auctionsRaw).
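As a sketch of that chaining (run this in the Fauna shell against your own database to experiment; the index name assumes an auctions Index exists), reading every document an index matches combines three FQL functions:

```
Map(
  Paginate(Match(Index("auctions"))),
  Lambda("ref", Get(Var("ref")))
)
```

Paginate(Match(...)) returns a page of document references, and Map with Lambda("ref", Get(Var("ref"))) turns each reference into the full document.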

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object, but we want an array, we’ll add the following in the next line after finishing the declaration of const auctionsRaw:

Now we can see that we’re getting the right data format.
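To illustrate the shape we’re wrestling with, here is a minimal sketch in plain JavaScript (the field values are illustrative, not the repo’s actual data): a Paginate-style result nests each document’s fields under a data key, so we flatten before mapping.

```javascript
// Fauna's Paginate() responses wrap results in a `data` array, and each
// document keeps its fields under its own `data` key.
const auctionsRaw = {
  data: [
    { data: { name: 'First NFT', highBid: 10 } },
    { data: { name: 'Second NFT', highBid: 25 } },
  ],
}

// Flatten to a plain array of auction objects we can map over
const auctionsDataObjects = auctionsRaw.data.map((doc) => doc.data)

console.log(auctionsDataObjects.map((a) => a.name))
// → [ 'First NFT', 'Second NFT' ]
```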

Next, let’s update the call site of `auctionsRaw` to use our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statement of both the auction and createAuction functions. The changes we make to each are actually quite similar, so let’s update our auction function like so:

Again, you can ignore the comments; they just note the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”
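In shell FQL, that kind of lookup can be sketched as follows (the index name here is hypothetical; use whichever index over auction addresses your database defines):

```
Get(
  Match(Index("auctionByAddress"), "0x1234abcd")
)
```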

This next step to complete the createAuction function is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, this resulted in many minor changes to get the app to properly run the creation of an auction. So, instead of walking through this section in detail, I will first show you the diff, then briefly summarize the purpose of the changes.

Looking at the green lines 100 and 101, we can see that the functional commands we’re using here are not that much different; here, we’re just creating a new document in our Auction collection, instead of reading from the Indexes.

Turning back to the data fields of this createAuction function, we can see that we are given an input argument, which refers to the UI input fields of the new NFT auction form on the Home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the remaining variables I created, id, dateTime, status, and highBid, are more or less hardcoded so that this function can complete successfully.
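The gist of that hardcoding can be sketched in plain JavaScript (the default values below are stand-ins, not necessarily the ones used in the integrating-fauna branch):

```javascript
// `input` mirrors the six fields from the new-NFT auction form
const input = {
  address: '0x1234abcd',
  name: 'My First NFT',
  owner: '0xowner01',
  winLength: 86400,
  description: 'A demo auction',
  contentHash: 'bafy-demo-hash',
}

// Fill in the four schema-required fields the form doesn't provide
const auction = {
  ...input,
  id: 1,                                // stand-in unique id
  createdAt: new Date().toISOString(),  // dateTime equivalent
  status: 'started',                    // assumed initial status
  highBid: 0,                           // no bids yet
}

console.log(Object.keys(auction).length)
// → 10
```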

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊 , phew!

Completing GraphQL services

By now, you may be feeling a bit tired from updating the GraphQL services (I know I was while I was trying to learn the necessary changes to make!). So, to save you time, instead of walking through them entirely, I will share the git diffs from the integrating-fauna branch that I already have working in the repo, then summarize the changes that were made.

The first file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one last change in web/src/components/AuctionCell/AuctionCell.js:

Back to Fauna not supporting custom scalars: because of that limitation, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out in our GraphQL schemas).

The last change that was made in web/src/components/AuctionCell/AuctionCell.js is another hacky change to make the newly created NFT address domains (you can navigate to these when you click on the hyperlink of the NFT name, located on the right of the home page after you create a new NFT) clickable without throwing an error. 😄

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page. So, we will do this just like we did with our original Emanator monorepo version.

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏

And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT is still not renderable due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! This repo, redwood-eth-with-fauna is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (as it currently does 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage this app further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help you! 😊

That’s all for this tut, and I hope you enjoyed! Feel free to reach out with any questions on GitHub!


The post Building an Ethereum app using Redwood.js and Fauna appeared first on CSS-Tricks.




Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel

This article is for anyone interested in the emerging ecosystem of tools and technologies related to Jamstack and serverless. We’re going to use Fauna’s GraphQL API as a serverless back-end for a Jamstack front-end built with the Redwood framework and deployed with a one-click deploy on Vercel.

In other words, lots to learn! By the end, you’ll not only get to dive into Jamstack and serverless concepts, but also hands-on experience with a really neat combination of tech that I think you’ll really like.

Creating a Redwood app

Redwood is a framework for serverless applications that pulls together React (for front-end components), GraphQL (for data) and Prisma (for database queries).

There are other front-end frameworks that we could use here. One example is Bison, created by Chris Ball. It leverages GraphQL in a similar fashion to Redwood, but uses a slightly different lineup of GraphQL libraries, such as Nexus in place of Apollo Client, and GraphQL Codegen in place of the Redwood CLI. But it’s only been around a few months, so the project is still very new compared to Redwood, which has been in development since June 2019.

There are many great Redwood starter templates we could use to bootstrap our application, but I want to start by generating a Redwood boilerplate project and looking at the different pieces that make up a Redwood app. We’ll then build up the project, piece by piece.

We will need to install Yarn to use the Redwood CLI to get going. Once that’s good to go, here’s what to run in a terminal:

yarn create redwood-app ./csstricks

We’ll now cd into our new project directory and start our development server.

cd csstricks
yarn rw dev

Our project’s front-end is now running on localhost:8910. Our back-end is running on localhost:8911 and ready to receive GraphQL queries. By default, Redwood comes with a GraphiQL playground that we’ll use towards the end of the article.

Let’s head over to localhost:8910 in the browser. If all is good, the Redwood landing page should load up.

The Redwood starting page indicates that the front end of our app is ready to go. It also provides a nice instruction for how to start creating custom routes for the app.

Redwood is currently at version 0.21.0, as of this writing. The docs warn against using it in production until it officially reaches 1.0. They also have a community forum where they welcome feedback and input from developers like yourself.

Directory structure

Redwood values convention over configuration and makes a lot of decisions for us, including the choice of technologies, how files are organized, and even naming conventions. This can result in an overwhelming amount of generated boilerplate code that is hard to comprehend, especially if you’re just digging into this for the first time.

Here’s how the project is structured:

├── api
│   ├── prisma
│   │   ├── schema.prisma
│   │   └── seeds.js
│   └── src
│       ├── functions
│       │   └── graphql.js
│       ├── graphql
│       ├── lib
│       │   └── db.js
│       └── services
└── web
    ├── public
    │   ├── favicon.png
    │   ├── README.md
    │   └── robots.txt
    └── src
        ├── components
        ├── layouts
        ├── pages
        │   ├── FatalErrorPage
        │   │   └── FatalErrorPage.js
        │   └── NotFoundPage
        │       └── NotFoundPage.js
        ├── index.css
        ├── index.html
        ├── index.js
        └── Routes.js

Don’t worry too much about what all this means yet; the first thing to notice is things are split into two main directories: web and api. Yarn workspaces allows each side to have its own path in the codebase.
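That split is wired up through the workspaces field of the root package.json, which in a Redwood project looks something like this (a sketch, trimmed to the relevant field):

```json
{
  "private": true,
  "workspaces": {
    "packages": ["api", "web"]
  }
}
```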

web contains our front-end code for:

  • Pages
  • Layouts
  • Components

api contains our back-end code for:

  • Function handlers
  • Schema definition language
  • Services for back-end business logic
  • Database client

Redwood assumes Prisma as a data store, but we’re going to use Fauna instead. Why Fauna when we could just as easily use Firebase? Well, it’s just a personal preference. After Google purchased Firebase, it launched a real-time document database, Cloud Firestore, as the successor to the original Firebase Realtime Database. By integrating with the larger Firebase ecosystem, we could have access to a wider range of features than what Fauna offers. At the same time, there are a handful of community projects that have experimented with Firestore and GraphQL, but there isn’t first-class GraphQL support from Google.

Since we will be querying Fauna directly, we can delete the prisma directory and everything in it. We can also delete all the code in db.js. Just don’t delete the file as we’ll be using it to connect to the Fauna client.

index.html

We’ll start by taking a look at the web side since it should look familiar to developers with experience using React or other single-page application frameworks.

But what actually happens when we build a React app? It takes the entire site and shoves it all into one big ball of JavaScript inside index.js, then shoves that ball of JavaScript into the “root” DOM node, which is on line 11 of index.html.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="icon" type="image/png" href="/favicon.png" />
    <title><%= htmlWebpackPlugin.options.title %></title>
  </head>

  <body>
    <div id="redwood-app"></div>
  </body>
</html>

While Redwood uses the term Jamstack in its documentation and marketing, it doesn’t do pre-rendering yet (like Next or Gatsby can), but it is still Jamstack in that it ships static files and hits APIs with JavaScript for data.

index.js

index.js contains our root component (that big ball of JavaScript) that is rendered to the root DOM node. document.getElementById() selects the element with the id redwood-app, and ReactDOM.render() renders our application into that root DOM element.

RedwoodProvider

The <Routes /> component (and by extension all the application pages) is contained within the <RedwoodProvider> tags. Flash uses the Context API for passing message objects between deeply nested components. It provides a typical message display unit for rendering the messages provided to FlashContext.

FlashContext’s provider component is packaged with the <RedwoodProvider /> component so it’s ready to use out of the box. Components pass message objects by subscribing to it (think, “send and receive”) via the provided useFlash hook.

FatalErrorBoundary

The provider itself is then contained within the <FatalErrorBoundary> component which is taking in <FatalErrorPage> as a prop. This defaults your website to an error page when all else fails.

import ReactDOM from 'react-dom'
import { RedwoodProvider, FatalErrorBoundary } from '@redwoodjs/web'
import FatalErrorPage from 'src/pages/FatalErrorPage'
import Routes from 'src/Routes'
import './index.css'

ReactDOM.render(
  <FatalErrorBoundary page={FatalErrorPage}>
    <RedwoodProvider>
      <Routes />
    </RedwoodProvider>
  </FatalErrorBoundary>,
  document.getElementById('redwood-app')
)

Routes.js

Router contains all of our routes and each route is specified with a Route. The Redwood Router attempts to match the current URL to each route, stopping when it finds a match and then renders only that route. The only exception is the notfound route which renders a single Route with a notfound prop when no other route matches.

import { Router, Route } from '@redwoodjs/router'

const Routes = () => {
  return (
    <Router>
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

export default Routes

Pages

Now that our application is set up, let’s start creating pages! We’ll use the Redwood CLI generate page command to create a named route function called home. This renders the HomePage component when it matches the URL path to /.

We can also use rw instead of redwood and g instead of generate to save some typing.

yarn rw g page home /

This command performs four separate actions:

  • It creates web/src/pages/HomePage/HomePage.js. The name specified in the first argument gets capitalized and “Page” is appended to the end.
  • It creates a test file at web/src/pages/HomePage/HomePage.test.js with a single, passing test so you can pretend you’re doing test-driven development.
  • It creates a Storybook file at web/src/pages/HomePage/HomePage.stories.js.
  • It adds a new <Route> in web/src/Routes.js that maps the / path to the HomePage component.

HomePage

If we go to web/src/pages we’ll see a HomePage directory containing a HomePage.js file. Here’s what’s in it:

// web/src/pages/HomePage/HomePage.js

import { Link, routes } from '@redwoodjs/router'

const HomePage = () => {
  return (
    <>
      <h1>HomePage</h1>
      <p>
        Find me in <code>./web/src/pages/HomePage/HomePage.js</code>
      </p>
      <p>
        My default route is named <code>home</code>, link to me with `
        <Link to={routes.home()}>Home</Link>`
      </p>
    </>
  )
}

export default HomePage
The HomePage.js file has been set as the main route, /.

We’re going to move our page navigation into a re-usable layout component which means we can delete the Link and routes imports as well as <Link to={routes.home()}>Home</Link>. This is what we’re left with:

// web/src/pages/HomePage/HomePage.js

const HomePage = () => {
  return (
    <>
      <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>
      <p>Taking Fullstack to the Jamstack</p>
    </>
  )
}

export default HomePage

AboutPage

To create our AboutPage, we’ll enter almost the exact same command we just did, but with about instead of home. We also don’t need to specify the path since it’s the same as the name of our route. In this case, the name and path will both be set to about.

yarn rw g page about
AboutPage.js is now available at /about.
// web/src/pages/AboutPage/AboutPage.js

import { Link, routes } from '@redwoodjs/router'

const AboutPage = () => {
  return (
    <>
      <h1>AboutPage</h1>
      <p>
        Find me in <code>./web/src/pages/AboutPage/AboutPage.js</code>
      </p>
      <p>
        My default route is named <code>about</code>, link to me with `
        <Link to={routes.about()}>About</Link>`
      </p>
    </>
  )
}

export default AboutPage

We’ll make a few edits to the About page like we did with our Home page. That includes taking out the <Link> and routes imports and deleting <Link to={routes.about()}>About</Link>.

Here’s the end result:

// web/src/pages/AboutPage/AboutPage.js

const AboutPage = () => {
  return (
    <>
      <h1>About 🚀🚀</h1>
      <p>For those who want to stack their Jam, fully</p>
    </>
  )
}

export default AboutPage

If we return to Routes.js we’ll see our new routes for home and about. Pretty nice that Redwood does this for us!

const Routes = () => {
  return (
    <Router>
      <Route path="/about" page={AboutPage} name="about" />
      <Route path="/" page={HomePage} name="home" />
      <Route notfound page={NotFoundPage} />
    </Router>
  )
}

Layouts

Now we want to create a header with navigation links that we can easily import into our different pages. We want to use a layout so we can add navigation to as many pages as we want by importing the component instead of having to write the code for it on every single page.

BlogLayout

You may now be wondering, “is there a generator for layouts?” The answer to that is… of course! The command is almost identical to what we’ve been doing so far, except with rw g layout followed by the name of the layout, instead of rw g page followed by the name and path of the route.

yarn rw g layout blog
// web/src/layouts/BlogLayout/BlogLayout.js

const BlogLayout = ({ children }) => {
  return <>{children}</>
}

export default BlogLayout

To create links between different pages we’ll need to:

  • Import Link and routes from @redwoodjs/router into BlogLayout.js
  • Create a <Link to={}></Link> component for each link
  • Pass a named route function, such as routes.home(), into the to={} prop for each route
// web/src/layouts/BlogLayout/BlogLayout.js

import { Link, routes } from '@redwoodjs/router'

const BlogLayout = ({ children }) => {
  return (
    <>
      <header>
        <h1>RedwoodJS+FaunaDB+Vercel 🚀</h1>

        <nav>
          <ul>
            <li>
              <Link to={routes.home()}>Home</Link>
            </li>
            <li>
              <Link to={routes.about()}>About</Link>
            </li>
          </ul>
        </nav>
      </header>

      <main>
        <p>{children}</p>
      </main>
    </>
  )
}

export default BlogLayout

We won’t see anything different in the browser yet. We created the BlogLayout but have not imported it into any pages. So let’s import BlogLayout into HomePage and wrap the entire return statement with the BlogLayout tags.

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
    </BlogLayout>
  )
}

export default HomePage
Hey look, the navigation is taking shape!

If we click the link to the About page we’ll be taken there but we are unable to get back to the previous page because we haven’t imported BlogLayout into AboutPage yet. Let’s do that now:

// web/src/pages/AboutPage/AboutPage.js

import BlogLayout from 'src/layouts/BlogLayout'

const AboutPage = () => {
  return (
    <BlogLayout>
      <p>For those who want to stack their Jam, fully</p>
    </BlogLayout>
  )
}

export default AboutPage

Now we can navigate back and forth between the pages by clicking the navigation links! Next up, we’ll create our GraphQL schema so we can start working with data.

Fauna schema definition language

To make this work, we need to create a new file called sdl.gql and enter the following schema into the file. Fauna will take this schema and make a few transformations.

// sdl.gql

type Post {
  title: String!
  body: String!
}

type Query {
  posts: [Post]
}

Save the file and upload it to Fauna’s GraphQL Playground. Note that, at this point, you will need a Fauna account to continue. There’s a free tier that works just fine for what we’re doing.

The GraphQL Playground is located in the selected database.
The Fauna shell allows us to write, run and test queries.

It’s very important that Redwood and Fauna agree on the SDL, so we cannot use the original SDL that was entered into Fauna because that is no longer an accurate representation of the types as they exist on our Fauna database.

The Post collection and posts Index will appear unaltered if we run the default queries in the shell, but Fauna creates an intermediary PostPage type which has a data object.
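With illustrative values, a posts query response therefore comes back with that extra layer of nesting:

```json
{
  "posts": {
    "data": [
      { "title": "First post", "body": "Hello from Fauna" },
      { "title": "Second post", "body": "Still hello from Fauna" }
    ]
  }
}
```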

Redwood schema definition language

This data object contains an array with all the Post objects in the database. We will use these types to create another schema definition language that lives inside our graphql directory on the api side of our Redwood project.

// api/src/graphql/posts.sdl.js

import gql from 'graphql-tag'

export const schema = gql`
  type Post {
    title: String!
    body: String!
  }

  type PostPage {
    data: [Post]
  }

  type Query {
    posts: PostPage
  }
`

Services

The posts service sends a query to the Fauna GraphQL API. This query is requesting an array of posts, specifically the title and body for each. These are contained in the data object from PostPage.

// api/src/services/posts/posts.js

import { request } from 'src/lib/db'
import { gql } from 'graphql-request'

export const posts = async () => {
  const query = gql`
  {
    posts {
      data {
        title
        body
      }
    }
  }
  `

  const data = await request(query, 'https://graphql.fauna.com/graphql')

  return data['posts']
}

At this point, we can install graphql-request, a minimal client for GraphQL with a promise-based API that can be used to send GraphQL requests:

cd api
yarn add graphql-request graphql

Attach the Fauna authorization token to the request header

So far, we have GraphQL for data, Fauna for modeling that data, and graphql-request to query it. Now we need to establish a connection between graphql-request and Fauna, which we’ll do by importing graphql-request into db.js and using it to query an endpoint that is set to https://graphql.fauna.com/graphql.

// api/src/lib/db.js

import { GraphQLClient } from 'graphql-request'

export const request = async (query = {}) => {
  const endpoint = 'https://graphql.fauna.com/graphql'

  const graphQLClient = new GraphQLClient(endpoint, {
    headers: {
      authorization: 'Bearer ' + process.env.FAUNADB_SECRET
    },
  })

  try {
    return await graphQLClient.request(query)
  } catch (error) {
    console.log(error)
    return error
  }
}

A GraphQLClient is instantiated to set the header with an authorization token, allowing data to flow to our app.

Create

We’ll use the Fauna Shell and run a couple of Fauna Query Language (FQL) commands to seed the database. First, we’ll create a blog post with a title and body.

Create(
  Collection("Post"),
  {
    data: {
      title: "Deno is a secure runtime for JavaScript and TypeScript.",
      body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083736060690956"),
  ts: 1605274864200000,
  data: {
    title: "Deno is a secure runtime for JavaScript and TypeScript.",
    body: "The original creator of Node, Ryan Dahl, wanted to build a modern, server-side JavaScript framework that incorporates the knowledge he gained building out the initial Node ecosystem."
  }
}

Let’s create another one.

Create(
  Collection("Post"),
  {
    data: {
      title: "NextJS is a React framework for building production grade applications that scale.",
      body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083760102441484"),
  ts: 1605274887090000,
  data: {
    title: "NextJS is a React framework for building production grade applications that scale.",
    body: "To build a complete web application with React from scratch, there are many important details you need to consider such as: bundling, compilation, code splitting, static pre-rendering, server-side rendering, and client-side rendering."
  }
}

And maybe one more just to fill things up.

Create(
  Collection("Post"),
  {
    data: {
      title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
      body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
    }
  }
)

{
  ref: Ref(Collection("Post"), "282083792286384652"),
  ts: 1605274917780000,
  data: {
    title: "Vue.js is an open-source front end JavaScript framework for building user interfaces and single-page applications.",
    body: "Evan You wanted to build a framework that combined many of the things he loved about Angular and Meteor but in a way that would produce something novel. As React rose to prominence, Vue carefully observed and incorporated many lessons from React without ever losing sight of their own unique value prop."
  }
}

Cells

Cells provide a simple and declarative approach to data fetching. They contain the GraphQL query along with loading, empty, error, and success states. Each one renders itself automatically depending on what state the cell is in.

BlogPostsCell

yarn rw generate cell BlogPosts

export const QUERY = gql`
  query BlogPostsQuery {
    blogPosts {
      id
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ blogPosts }) => {
  return JSON.stringify(blogPosts)
}

By default, the query renders its data with JSON.stringify on the page where the cell is imported. We’ll make a handful of changes so the query returns and renders the data we need. So, let’s:

  • Change blogPosts to posts.
  • Change BlogPostsQuery to POSTS.
  • Change the query itself to return the title and body of each post.
  • Map over the data object in the success component.
  • Create a component with the title and body of the posts returned through the data object.

Here’s how that looks:

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      data {
        title
        body
      }
    }
  }
`
export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => <div>Error: {error.message}</div>

export const Success = ({ posts }) => {
  const { data } = posts
  return data.map(post => (
    <>
      <header>
        <h2>{post.title}</h2>
      </header>
      <p>{post.body}</p>
    </>
  ))
}

The POSTS query is sending a query for posts, and when it’s queried, we get back a data object containing an array of posts. We need to pull out the data object so we can loop over it and get the actual posts. We do this with object destructuring to get the data object and then we use the map() function to map over the data object and pull out each post. The title of each post is rendered with an <h2> inside <header> and the body is rendered with a <p> tag.
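To see the destructure-and-map pattern in isolation, here is a plain JavaScript sketch using illustrative data shaped like Fauna’s GraphQL response:

```javascript
// Illustrative shape of the POSTS query result: Fauna wraps the
// array of posts in a `data` field.
const posts = {
  data: [
    { title: "First post", body: "Body of the first post." },
    { title: "Second post", body: "Body of the second post." }
  ]
};

// Destructure the `data` array, then map each post to the value we render.
const { data } = posts;
const rendered = data.map(post => `${post.title}: ${post.body}`);

console.log(rendered[0]); // "First post: Body of the first post."
```

The Success component does the same thing, except each mapped post becomes JSX instead of a string.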

Import BlogPostsCell to HomePage

// web/src/pages/HomePage/HomePage.js

import BlogLayout from 'src/layouts/BlogLayout'
import BlogPostsCell from 'src/components/BlogPostsCell/BlogPostsCell.js'

const HomePage = () => {
  return (
    <BlogLayout>
      <p>Taking Fullstack to the Jamstack</p>
      <BlogPostsCell />
    </BlogLayout>
  )
}

export default HomePage
Check that out! Posts are returned to the app and rendered on the front end.

Vercel

We do mention Vercel in the title of this post, and we’re finally at the point where we need it. Specifically, we’re using it to build the project and deploy it to Vercel’s hosted platform, which offers build previews when code is pushed to the project repository. So, if you don’t already have one, grab a Vercel account. Again, the free pricing tier works just fine for this work.

Why Vercel over, say, Netlify? It’s a good question. Redwood even began with Netlify as its original deploy target. Redwood still has many well-documented Netlify integrations. Despite the tight integration with Netlify, Redwood seeks to be universally portable to as many deploy targets as possible. This now includes official support for Vercel along with community integrations for the Serverless framework, AWS Fargate, and PM2. So, yes, we could use Netlify here, but it’s nice that we have a choice of available services.

We only have to make one change to the project’s configuration to integrate it with Vercel. Let’s open netlify.toml and change the apiProxyPath to "/api". Then, let’s log into Vercel and click the “Import Project” button to connect its service to the project repository. This is where we enter the URL of the repo so Vercel can watch it, then trigger a build and deploy when it notices changes.

I’m using GitHub to host my project, but Vercel is capable of working with GitLab and Bitbucket as well.

Redwood has a preset build command that works out of the box in Vercel:

Simply select “Redwood” from the preset options and we’re good to go.

We’re pretty far along, but even though the site is now “live” the database isn’t connected:

To fix that, we’ll add the FAUNADB_SECRET token from our Fauna account to our environment variables in Vercel:

Now our application is complete!

We did it! I hope this not only gets you super excited about working with the Jamstack and serverless, but also gave you a taste of some new technologies in the process.


The post Deploying a Serverless Jamstack Site with RedwoodJS, Fauna, and Vercel appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks


How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • There’s Automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted at a specific use case. We will see examples of such custom APIs in this article.
  • It’s practically Serverless. To put it more clearly, we do not maintain any servers, rather make use of already existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • A client side built with the static site generator Gatsby.
  • A deployment on a CDN configured and managed by Netlify.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with a traditional RDBMS, the major difference with Fauna is that it is a relational NoSQL system: it offers the capabilities of a legacy RDBMS while remaining very flexible, without compromising scalability or performance.

Fauna supports multiple APIs for data-access,

  • GraphQL: An open-source data query and manipulation language. If you are new to GraphQL, you can find more details here, https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language-specific drivers, which makes it flexible to use with languages like JavaScript, Java, Go, etc. Find more details about FQL here.

In this article we will use GraphQL for the ShopNote application.

First things first, sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our usage.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will be defining the GraphQL schema and importing it into the database. A GraphQL schema defines the structure of the data. It defines the data types and the relationship between them. With schema we can also specify what kind of queries are allowed.

At this stage, let us create our project folder. Create a project folder somewhere on your hard drive with the name, shopnote. Create a file with the name, shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and its items: each ShopNote contains a name, description, update time, and a list of Items. Each Item has a name, urgent and checked flags, and the ShopNote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.
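As a plain JavaScript picture of this relationship (illustrative objects, not Fauna output):

```javascript
// One ShopNote holds many Items; each Item points back to exactly one note.
const shopNote = { name: "My Shopping List", items: [] };

const butter = { name: "Butter - 1 pk", urgent: true, note: shopNote };
const milk = { name: "Milk - 2 ltrs", urgent: false, note: shopNote };

shopNote.items.push(butter, milk);

console.log(shopNote.items.length); // 2
console.log(butter.note === milk.note); // true: both items share the same note
```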

You can read more about the @relation directive from here. More on the GraphQL relations can be found from here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button,

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote, allShopNotes; each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also help create the documents automatically. We will see that in a while.

Fauna supports a schema-free, object-relational data model. A database in Fauna may contain a group of collections, and a collection may contain one or more documents. Each data record is inserted into a document. This forms a hierarchy which can be visualized as:

Here the data records can be arrays, objects, or any other supported types. With the Fauna data model we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us: ShopNote and Item. As we start inserting records, we will see the documents getting created as well. We will be able to view and query the records and utilize the power of indexes. You may see the data model structure appearing in your Fauna dashboard like this in a while,

A point to note here: each document is identified by a unique ref attribute. There is also a ts field which returns the timestamp of the most recent modification to the document. The data record itself is part of the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
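As a sketch, a Fauna document can be pictured as a JavaScript object with these three parts (the values here are purely illustrative):

```javascript
// Hypothetical document, shaped like what the Fauna dashboard shows.
const doc = {
  ref: { collection: "ShopNote", id: "271234567890123456" }, // unique reference
  ts: 1605274887090000, // timestamp of the most recent modification
  data: { name: "My Shopping List", description: "Today's list" } // the record
};

// The record you usually care about lives under `data`:
console.log(doc.data.name); // "My Shopping List"
```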

With all this understanding, let us start using our shopnotes database, which has been created successfully and is ready for use.

Let us try some queries

Even though we have imported the schema and underlying things are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation query to the left panel of the GraphQL playground screen and execute.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name
        urgent
      }
    }
  }
}

Note that since Fauna already created the GraphQL mutations in the background, we can use them directly, like createShopNote. Once the mutation executes successfully, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema, fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items from a shopnote, and for updating and deleting a shopnote itself. These queries will be used later when we create the serverless functions.

If you are interested to run other queries in the GraphQL editor, you can find them from here,

Create a Server Secret Key

Next, we need to create a secret server key to make sure access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with everything we had to do for Fauna’s setup. Let us move to the next phase: creating serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this,

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply designate a folder and drop our functions into it; writing simple functions automatically turns them into APIs.

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder and use the following command to initialize the project with node dependencies,

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder named functions at the root of the project folder. We are going to keep all serverless functions in it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id
            name
            checked
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This is to tell Netlify about the location of the functions we have written so that it is known at the build time.

Netlify automatically provides APIs for the functions. The URL to access an API has the form /.netlify/functions/get-shopnotes, which may not be very user-friendly. We have written a redirect to make it /api/get-shopnotes.
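Expressed as a function, the redirect rule behaves roughly like this (this is only an illustration; Netlify applies the rule internally):

```javascript
// ":splat" captures everything after /api/ and substitutes it into the target.
function applyRedirect(path) {
  const match = path.match(/^\/api\/(.+)$/);
  return match ? `/.netlify/functions/${match[1]}` : path;
}

console.log(applyRedirect("/api/get-shopnotes")); // "/.netlify/functions/get-shopnotes"
```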

Ok, we are done. Now in command prompt type,

netlify dev

By default the app runs on localhost:8888, where the serverless functions are accessible as APIs.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function to create a ShopNote. This is going to be simple. Create a file named, create-shopnote.js under the functions folder. We need to write a mutation by passing the required parameters. 

// create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name
          checked
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  // All four variables are declared as required in the mutation,
  // so we parse and pass all of them from the request payload.
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please pay attention to the parameter type, ShopNoteItemsRelation. Since we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the query as well.

We destructure the payload to get the required information from it. Once we have that, we just call the query method to create a ShopNote.

Alright, let’s test it out. You can use postman or any other tools of your choice to test it like an API. Here is the screenshot from postman.

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember, ShopNote and Item are related? So to create an item, we must specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

// add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We pass the item properties: its name, whether it is urgent, its checked value, and the note it should be part of. Let’s see how this API can be called using Postman,

As you see, we are passing the id of the note while creating an item for it.
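For reference, a request body for add-item can be sketched like this; the connect field links the new item to an existing ShopNote by its document ID (the ID below is a placeholder, not a real document):

```javascript
// Hypothetical payload for POST /api/add-item.
const payload = {
  name: "Eggs - 1 dozen",
  urgent: false,
  checked: false,
  note: { connect: "<SHOPNOTE_DOCUMENT_ID>" } // placeholder for a real document ID
};

const body = JSON.stringify(payload);   // what gets sent over the wire
const parsed = JSON.parse(body);        // what the handler destructures
console.log(parsed.note.connect); // "<SHOPNOTE_DOCUMENT_ID>"
```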

We won’t cover the rest of the API capabilities in this article, like updating and deleting a shop note or updating and deleting items. If you are interested, you can look at those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it. Next, let us build some user interfaces to show the shop notes and add items to it. To do that, we will use Gatsby.js (aka, Gatsby) which is a super cool, React-based static site generator.

The following section requires basic knowledge of React. If you are new to it, you can learn it from here. If you are familiar with any other user interface technology like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {
  "develop": "gatsby develop"
}

Gatsby projects need a special configuration file called, gatsby-config.js. Please create a file named, gatsby-config.js at the root of the project folder with the following content,

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  );
}

Let’s run it. We would generally use the command gatsby develop to run the app locally, but since we have to run the client-side application together with the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: consists of the logo, the heading, and the create button for creating a shopnote.
  • Shopnotes: contains the list of shop notes (Note components).
  • Note: an individual note. Each note contains one or more items.
  • Item: an individual item. It consists of the item name and actions to add, remove, and edit the item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required for the user interfaces to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies,

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the shop notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we are checking if the data is loaded. If so, render the Shopnotes component by passing the data we have received using the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file code here. The index.js file creates the initial route (/) for the user interface. It uses other components like Shopnotes, Note, and Item to make the UI fully operational. We will not go to great lengths to understand each of these UI components; you can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content of this CSS file into it. Then import Bootstrap’s CSS and the new index.css at the top of index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css';

That’s all. We are done. You should have the app up and running with all the shop notes created so far. To keep the article from getting too lengthy, we will not explain each of the actions on items and notes here; you can find all the code in the GitHub repo. At this stage, the directory structure may look like this,

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!
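As a starting point for the exercise, the click handler could call the existing API like this. This is only a sketch: the field names follow the create-shopnote function we wrote earlier, and the createShopnote helper name and how you wire it into your component are assumptions, not code from the repo.

```javascript
// Sketch of a front-end helper for the Create Note exercise.
// It POSTs to the create-shopnote API; `items` uses Fauna's nested
// `create` syntax for the ShopNote-Item relation.
async function createShopnote(note) {
  const response = await fetch("/api/create-shopnote", {
    method: "POST",
    body: JSON.stringify({
      name: note.name,
      description: note.description,
      updatedAt: new Date().toISOString(),
      items: { create: note.items }
    })
  });
  return response.json();
}
```

A header button’s onClick could call createShopnote({ name, description, items: [] }) and then re-fetch the notes so the new one appears in the list.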

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already. Please login and click on the button, New site from Git.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details like the build command and publish directory as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields. Click on deploy.

You should see the build successful in a couple of minutes and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70% – 80% of the features that once required a custom back end can now be handled either on the front end or by existing APIs and services.
  • Fauna provides the data API for the client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack to give a fast end user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.


The post How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna appeared first on CSS-Tricks.

