Tag: GraphQL

How to Build a GraphQL API for Text Analytics with Python, Flask and Fauna

GraphQL is a query language and server-side runtime environment for building APIs. It can also be considered the syntax you write in order to describe the kind of data you want from APIs. What this means for you as a backend developer is that with GraphQL, you are able to expose a single endpoint on your server to handle GraphQL queries from client applications, as opposed to the many endpoints you’d need to create with REST to handle specific kinds of requests and in turn serve data from those endpoints.

With REST, if a client needs new data that is not already provisioned, you’d need to create new endpoints for it and update the API docs. GraphQL makes it possible to send queries to a single endpoint; these queries are then passed on to the server, handled by the server’s predefined resolver functions, and the requested information is served over the network.

Running the Flask server

Flask is a minimalist framework for building Python servers. I always use Flask to expose my GraphQL API to serve my machine learning models. Requests from client applications are then forwarded by the GraphQL gateway. Overall, microservice architecture allows us to use the best technology for the right job and it allows us to use advanced patterns like schema federation. 

In this article, we will start small with an implementation of the so-called Levenshtein distance. We will use the well-known NLTK library and expose the Levenshtein distance functionality with the GraphQL API. In this article, I assume that you are familiar with basic GraphQL concepts like building GraphQL mutations.

Note: We will be working with a free and open source example repository.

In the project, Pipenv is used for managing the Python dependencies. From the project folder, we can create our virtual environment with this:
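That command is not shown in this excerpt; assuming the standard Pipenv workflow, it would be:

pipenv shell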

…and install dependencies from Pipfile.
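Again assuming standard Pipenv usage, that is done with:

pipenv install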

We usually define a couple of script aliases in our Pipfile to ease our development workflow.

This allows us to run our dev environment easily, with command aliases as follows:
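The aliases themselves are not included in this excerpt. A plausible sketch, assuming the Flask entry point lives in app.py, is a [scripts] section in the Pipfile:

[scripts]
dev = "python app.py"

With that in place, the development server starts with:

pipenv run dev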

The Flask server should then be exposed by default at port 5000. You can immediately move on to the GraphQL Playground, which serves as an IDE for live documentation and query execution for GraphQL servers. The GraphQL Playground uses so-called GraphQL introspection for fetching information about our GraphQL types. The following code initializes our Flask server:
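The code is missing from this excerpt, so here is a minimal sketch of what it could look like; the schema module and the /graphql endpoint path are assumptions, and the Graphene schema itself is defined later in the article:

# app.py: a minimal Flask server exposing a Graphene schema
from flask import Flask, jsonify, request

from schema import schema  # hypothetical module holding our Graphene schema

app = Flask(__name__)

@app.route("/graphql", methods=["POST"])
def graphql_endpoint():
    # The GraphQL Playground sends the query and its variables as JSON
    payload = request.get_json()
    result = schema.execute(
        payload.get("query"),
        # variable_values is the Graphene 3 keyword; older versions differ
        variable_values=payload.get("variables"),
    )
    response = {"data": result.data}
    if result.errors:
        response["errors"] = [str(error) for error in result.errors]
    return jsonify(response)

if __name__ == "__main__":
    app.run()  # Flask serves on port 5000 by default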

It is good practice to use a WSGI server when running in a production environment. Therefore, we have to set up a script alias for Gunicorn with:
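For example, again assuming an app.py entry point:

[scripts]
start = "gunicorn app:app"

Running pipenv run start then serves the app through Gunicorn instead of Flask’s built-in development server.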

Levenshtein distance (edit distance)

The Levenshtein distance, also known as edit distance, is a string metric. It is defined as the minimum number of single-character edits needed to change one character sequence $a$ into another one $b$. If we denote the lengths of the sequences by $|a|$ and $|b|$ respectively, we get the following:

$$
\operatorname{lev}_{a,b}(i,j) =
\begin{cases}
\max(i,j) & \text{if } \min(i,j) = 0,\\
\min
\begin{cases}
\operatorname{lev}_{a,b}(i-1,j) + 1\\
\operatorname{lev}_{a,b}(i,j-1) + 1\\
\operatorname{lev}_{a,b}(i-1,j-1) + 1_{(a_i \neq b_j)}
\end{cases} & \text{otherwise,}
\end{cases}
$$

where $1_{(a_i \neq b_j)}$ is the indicator function, equal to $0$ when $a_i = b_j$ and to $1$ otherwise, and $\operatorname{lev}_{a,b}(i,j)$ is the distance between the first $i$ characters of $a$ and the first $j$ characters of $b$. The Levenshtein distance between $a$ and $b$ is then $\operatorname{lev}_{a,b}(|a|,|b|)$. For more on the theoretical background, feel free to check out the Wiki.

In practice, let’s say that someone misspelt ‘machine learning’ and wrote ‘machinlt lerning’. We would need to make the following edits:

Edit  Edit type     Word state
0     -             Machinlt lerning
1     Substitution  Machinet lerning
2     Deletion      Machine lerning
3     Insertion     Machine learning

For these two strings, we get a Levenshtein distance equal to 3. The Levenshtein distance has many applications, such as spell checkers, correction systems for optical character recognition, and similarity calculations.

Building a GraphQL server with Graphene in Python

We will build the following schema in our article:
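The schema itself is not reproduced in this excerpt; in SDL, a sketch consistent with what follows (the exact type and field names are assumptions) would be:

type Query {
  healthcheck: String!
}

input LevenshteinInput {
  s1: String!
  s2: String!
}

type Mutation {
  levenshtein(input: LevenshteinInput!): Int!
}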

Each GraphQL schema is required to have at least one query. We usually define our first query in order to health check our microservice. The query can be called like this:

query {
  healthcheck
}

However, the main function of our schema is to enable us to calculate the Levenshtein distance. We will use variables to pass dynamic parameters in the following GraphQL document:
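A document along these lines, following the sketch above:

mutation Levenshtein($input: LevenshteinInput!) {
  levenshtein(input: $input)
}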

We have defined our schema so far in SDL format. In the Python ecosystem, however, we do not have libraries like graphql-tools, so we need to define our schema with the code-first approach. The schema is defined as follows using the Graphene library:
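The original listing is not included here, so the following is a minimal sketch assuming Graphene 3; LevenshteinInput and calculate_levenshtein are defined in the next snippets:

import graphene

class Query(graphene.ObjectType):
    healthcheck = graphene.String(required=True)

    def resolve_healthcheck(self, info):
        # Lets us verify that the microservice is up
        return "ok"

class Levenshtein(graphene.Mutation):
    class Arguments:
        input = LevenshteinInput(required=True)

    # The mutation returns the distance as a plain Int
    Output = graphene.Int

    def mutate(self, info, input):
        return calculate_levenshtein(input.s1, input.s2)

class Mutation(graphene.ObjectType):
    levenshtein = Levenshtein.Field()

schema = graphene.Schema(query=Query, mutation=Mutation)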

We have followed the best practices for overall schema and mutations. Our input object type is written in Graphene as follows:
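A sketch of that input object, bundling the two strings to compare:

class LevenshteinInput(graphene.InputObjectType):
    # The two character sequences to compare
    s1 = graphene.String(required=True)
    s2 = graphene.String(required=True)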

Each time, we execute our mutation in the GraphQL Playground with the following variables:

{
  "input": {
    "s1": "test1",
    "s2": "test2"
  }
}

We obtain the Levenshtein distance between our two input strings. For our simple example of strings test1 and test2, we get 1. We can leverage the well-known NLTK library for natural language processing (NLP). The following code is executed from the resolver:
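NLTK ships a ready-made edit_distance function, so the resolver can simply delegate to it; a minimal sketch:

from nltk.metrics.distance import edit_distance

def calculate_levenshtein(s1: str, s2: str) -> int:
    # NLTK's edit_distance implements the Levenshtein metric
    return edit_distance(s1, s2)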

It is also straightforward to implement the Levenshtein distance ourselves using, for example, an iterative matrix, but I would suggest not reinventing the wheel and using the default NLTK functions.

Serverless GraphQL APIs with Fauna

First off, some introductions before we jump right in. It’s only fair that I give Fauna a proper introduction, as it is about to make our lives a whole lot easier.

Fauna is a serverless database service that handles all optimization and maintenance tasks so that developers don’t have to bother with them and can focus on developing their apps and shipping to market faster.

Again, serverless doesn’t actually mean “no servers.” To put it simply, what serverless means is that you can get things working without necessarily having to set things up from scratch. Some apps that use serverless concepts don’t have a backend service written from scratch; they employ cloud functions, which are technically scripts written on cloud platforms to handle necessary tasks like login, registration, serving data, etc.

Where does Fauna fit into all of this? When we build servers, we need to provision them with a database, and that is usually a running instance of our database. With serverless technology like Fauna, we can shift that workload to the cloud and focus on actually writing our auth systems and implementing business logic for our app. Fauna also manages things like maintenance and scaling, which are usual causes for concern with systems that use conventional databases.

If you are interested in getting more info about Fauna and its features, check the Fauna docs. Let’s get started with building our GraphQL API the serverless way with Fauna.

Requirements

  1. Fauna account: that’s all you need for this session, so click here to go to the sign up page.

Creating a Fauna Database

Log in to your Fauna account once you have created one. Once on the dashboard, you should see a button to create a new database. Click on it, and you should see a little form for filling in the name of the database, resembling the one below:

I call mine “graphqlbyexample” but you can call yours anything you wish. Please ignore the “pre-populate with demo data” option; we don’t need that for this demo. Click “Save” and you should be brought to a new screen, as shown below:

Adding a GraphQL Schema to Fauna

  1. In order to get our GraphQL server up and running, Fauna allows us to upload our own GraphQL schema. On the page we are currently on, you should see a GraphQL option; select it and it will prompt you to upload a schema file. This file contains a raw GraphQL schema and is saved with either the .gql or .graphql file extension. Let’s create our schema and upload it to Fauna to spin up our server.
  2. Create a new file anywhere you like. I’m creating it in the same directory as our previous app, because it has no impact on it. I’m calling it schema.gql, and we will add the following to it:
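The original schema is not reproduced in this excerpt; a minimal sketch matching the description below (the field names are assumptions) would be:

type User {
  name: String!
  email: String! @unique
  password: String!
}

type Note {
  title: String!
  content: String!
  author: User!
}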

Here, we simply define our data types corresponding to our two collections (Note and User). Save this and go back to that page to upload the schema.gql file we just created. Once that is done, Fauna processes it and takes us to a new page: our GraphQL API playground.

We have literally created a GraphQL server by simply uploading that really simple schema to Fauna. To highlight some of the really cool features that Fauna has, observe:

  1. Fauna automatically generates collections for us. If you notice, we did not create any collections (a collection translates to a table, if you are only familiar with relational databases). Fauna is a NoSQL database; collections are technically the same as tables, and documents are the same as rows in tables. If we go to the Collections option and click on it, we see the collections that were auto-generated on our behalf, courtesy of the schema file that we uploaded.
  2. Fauna automatically creates indexes on our behalf. Head over to the Indexes option and see what indexes have been created for the API. Fauna is a document-oriented database and does not have the primary keys or foreign keys you have in relational databases for search and index purposes; instead, we create indexes in Fauna to help with data retrieval.
  3. Fauna automatically generates GraphQL queries and mutations, as well as API docs, on our behalf. This is one of my personal favorites and I can’t seem to get over just how efficiently Fauna does this. Fauna is able to intelligently generate some queries that it thinks you might want in your newly created API. Head back over to the GraphQL option and click on the “Docs” tab to open up the docs on the playground.

As you can see, two queries and a handful of mutations are already auto-generated (even though we did not add them to our schema file); you can click on each one in the docs to see the details.

Testing our server

Let’s test out some of these queries and mutations from the playground. We will also use our server outside of the playground (by the way, it is a fully functional GraphQL server).

Testing from the Playground

  1. First off, we will test by creating a new user, with the predefined createUser mutation, as follows:
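For example (the field values are placeholders):

mutation {
  createUser(data: {
    name: "John Doe"
    email: "john@example.com"
    password: "supersecret"
  }) {
    _id
    name
    email
  }
}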

If we go to the Collections option and choose User, we should have our newly created entry (document, aka row) in our User collection.

  2. Let’s create a new note and associate it with a user as the author via its document ref ID, which is a special ID generated by Fauna for every document for the sake of references like this, much like a key in relational tables. To find the ID for the user we just created, simply navigate to the collection, and from the list of documents you should see the option (a copy icon) to copy its Ref ID:

Once you have this, you can create a new note and associate it as follows:
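A sketch of that mutation; the ref ID is a placeholder for the one you copied:

mutation {
  createNote(data: {
    title: "My first note"
    content: "Some note content goes here"
    author: { connect: "280083115512332802" }
  }) {
    _id
    title
  }
}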

  3. Let’s make a query this time, to get data from the database. Currently, we can fetch a user by ID or fetch a note by its ID. Let’s see that in action:
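For example, using Fauna’s auto-generated findUserByID query (the ID is a placeholder):

query {
  findUserByID(id: "280083115512332802") {
    name
    email
  }
}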

You must have been thinking it: what if we wanted to fetch info on all users? Currently, we can’t do that, because Fauna did not generate it automatically for us, but we can update our schema. Let’s add our custom query to the schema.gql file, as follows. Note that this is an update to the file, so don’t clear everything out; just add this to it:
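A sketch of the addition (the query name allUsers is an assumption; Fauna generates the matching index and resolver for a query field declared like this):

type Query {
  allUsers: [User!]
}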

Once you have added this, save the file and click on the update schema option on the playground to upload the file again. It should take a few seconds to update, and once it’s done we will be able to use our newly created query, as follows:
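Note that Fauna returns list results in pages, so the documents live under a data field:

query {
  allUsers {
    data {
      name
      email
    }
  }
}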

Don’t forget that, as opposed to having all the info about users served (namely: name, email, password), we can choose which fields we want, because it’s GraphQL. And not just that: it’s Fauna’s GraphQL, so feel free to specify more fields if you want.

Testing outside the playground – using Python (requests library)

Now that we’ve seen that our API works from the playground, let’s see how we can actually use it from an application outside the playground environment, using Python’s requests library. If you don’t have it installed, kindly install it using pip as follows:

pip install requests
  • Before we write any code, we need to get our API key from Fauna, which is what will help us communicate with our API from outside the playground. Head over to Security on your dashboard, and on the Keys tab select the option to create a new key; it should bring up a form like this:

Leave the database option as the current one, change the role of the key from “admin” to “server”, and then save. It’ll generate a new key secret that you must copy and save somewhere safe, most likely as an environment variable.

  • For this, I’m going to create a simple script to demonstrate. Add a new file, calling it whatever you wish (I’m calling mine test.py), to your current working directory or anywhere you like. In this file we’ll add the following:
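A sketch of that setup; the environment variable name and the user ID are assumptions:

import os
import requests

# Fauna's GraphQL endpoint, as shown in the GraphQL Playground
url = "https://graphql.fauna.com/graphql"

# One of the automatically generated queries; the ID is a placeholder
query = """
query {
  findUserByID(id: "280083115512332802") {
    name
    email
  }
}
"""

# Read the secret we saved earlier; the variable name is an assumption
token = os.environ.get("FAUNA_SECRET")

# Fauna looks for the secret in the Authorization header
headers = {"Authorization": f"Bearer {token}"}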

Here we add a couple of imports, including the requests library, which we use to send the requests, as well as the os module, used here to load the environment variable where I stored the Fauna secret key from the previous step.

Note the URL where the request is to be sent; this is obtained from the Fauna GraphQL Playground:

Next, we create the query that is to be sent. This example shows a simple fetch query to find a user by ID (one of the automatically generated queries from Fauna). We then retrieve the key from the environment variable and store it in a variable called token, and create a dictionary to represent our headers. This is, after all, an HTTP request, so we can set headers here, and in fact we have to, because Fauna will look for our secret key in the headers of our request.

  • The concluding part of the code shows how we use the requests library to create the request:
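A sketch consistent with the description that follows:

response = requests.post(url, json={"query": query}, headers=headers)

if response.status_code == 200:
    print(response.json())
else:
    print("Request failed with status:", response.status_code)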

We create a request object and check whether the request went through via its status_code, printing the response from the server if it went well; otherwise, we print an error message. Let’s run test.py and see what it returns.

Conclusion

In this article, we covered creating GraphQL servers from scratch and looked at creating servers right from Fauna without having to do much work. We also saw some of the awesome perks that come with using the serverless system that Fauna provides, and we went on to see how we could test our servers and validate that they work.

Hopefully, this was worth your time and taught you a thing or two about GraphQL, serverless, Fauna, Flask, and text analytics. To learn more about Fauna, you can also sign up for a free account and try it out yourself!


By: Adesina Abdrulrahman



Rendering the WordPress philosophy in GraphQL

WordPress is a CMS that’s coded in PHP. But, even though PHP is the foundation, WordPress also holds a philosophy where user needs are prioritized over developer convenience. That philosophy establishes an implicit contract between the developers building WordPress themes and plugins, and the user managing a WordPress site.

GraphQL is an interface that retrieves data from, and can submit data to, the server. A GraphQL server can have its own opinionatedness in how it implements the GraphQL spec, prioritizing certain behavior over another.

Can the WordPress philosophy that depends on server-side architecture co-exist with a JavaScript-based query language that passes data via an API?

Let’s pick that question apart, and explain how the GraphQL API WordPress plugin I authored establishes a bridge between the two architectures.

You may be aware of WPGraphQL. The plugin GraphQL API for WordPress (or “GraphQL API” from now on) is a different GraphQL server for WordPress, with different features.

Reconciling the WordPress philosophy within the GraphQL service

Below is the expected behavior of a WordPress application or plugin, and how it can be interpreted by a GraphQL service running on WordPress:

Accessing data
  • WordPress: Democratizing publishing: any user (irrespective of having technical skills or not) must be able to use the software.
  • GraphQL service: Democratizing data access and publishing: any user (irrespective of having technical skills or not) must be able to visualize and modify the GraphQL schema, and execute a GraphQL query.

Extensibility
  • WordPress: The application must be extensible through plugins.
  • GraphQL service: The GraphQL schema must be extensible through plugins.

Dynamic behavior
  • WordPress: The behavior of the application can be modified through hooks.
  • GraphQL service: The results from resolving a query can be modified through directives.

Localization
  • WordPress: The application must be localized, to be used by people from any region, speaking any language.
  • GraphQL service: The GraphQL schema must be localized, to be used by people from any region, speaking any language.

User interfaces
  • WordPress: Installing and operating functionality must be done through a user interface, resorting to code as little as possible.
  • GraphQL service: Adding new entities (types, fields, directives) to the GraphQL schema, configuring them, executing queries, and defining permissions to access the service must be done through a user interface, resorting to code as little as possible.

Access control
  • WordPress: Access to functionalities can be granted through user roles and permissions.
  • GraphQL service: Access to the GraphQL schema can be granted through user roles and permissions.

Preventing conflicts
  • WordPress: Developers do not know in advance who will use their plugins, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins define the SMTP service), and attempt to prevent them, as much as possible.
  • GraphQL service: Developers do not know in advance who will access and modify the GraphQL schema, or what configuration/environment those sites will run, meaning the plugin must be prepared for conflicts (such as having two plugins with the same name for a type in the GraphQL schema), and attempt to prevent them, as much as possible.

Let’s see how the GraphQL API carries out these ideas.

Accessing data

Similar to REST, a GraphQL service must be coded through PHP functions. Who will do this, and how?

Altering the GraphQL schema through code

The GraphQL schema includes types, fields and directives. These are dealt with through resolvers, which are pieces of PHP code. Who should create these resolvers?

The best strategy is for the GraphQL API to already satisfy the basic GraphQL schema with all known entities in WordPress (including posts, users, comments, categories, and tags), and make it simple to introduce new resolvers, for instance for Custom Post Types (CPTs).

This is how the user entity is already provided by the plugin. The User type is provided through this code:

class UserTypeResolver extends AbstractTypeResolver
{
  public function getTypeName(): string
  {
    return 'User';
  }

  public function getSchemaTypeDescription(): ?string
  {
    return __('Representation of a user', 'users');
  }

  public function getID(object $user)
  {
    return $user->ID;
  }

  public function getTypeDataLoaderClass(): string
  {
    return UserTypeDataLoader::class;
  }
}

The type resolver does not directly load the objects from the database, but instead delegates this task to a TypeDataLoader object (in the example above, UserTypeDataLoader). This decoupling follows the SOLID principles, providing different entities to tackle different responsibilities, so as to make the code maintainable, extensible and understandable.

Adding username, email and url fields to the User type is done via a FieldResolver object:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public static function getClassesToAttachTo(): array
  {
    return [
      UserTypeResolver::class,
    ];
  }

  public static function getFieldNamesToResolve(): array
  {
    return [
      'username',
      'email',
      'url',
    ];
  }

  public function getSchemaFieldDescription(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $descriptions = [
      'username' => __("User's username handle", "graphql-api"),
      'email' => __("User's email", "graphql-api"),
      'url' => __("URL of the user's profile in the website", "graphql-api"),
    ];
    return $descriptions[$fieldName];
  }

  public function getSchemaFieldType(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $types = [
      'username' => SchemaDefinition::TYPE_STRING,
      'email' => SchemaDefinition::TYPE_EMAIL,
      'url' => SchemaDefinition::TYPE_URL,
    ];
    return $types[$fieldName];
  }

  public function resolveValue(
    TypeResolverInterface $typeResolver,
    object $user,
    string $fieldName,
    array $fieldArgs = []
  ) {
    switch ($fieldName) {
      case 'username':
        return $user->user_login;

      case 'email':
        return $user->user_email;

      case 'url':
        return get_author_posts_url($user->ID);
    }

    return null;
  }
}

As can be observed, the definition of a field for the GraphQL schema, and its resolution, has been split into a multitude of functions:

  • getSchemaFieldDescription
  • getSchemaFieldType
  • resolveValue

Other functions are involved as well. This code is more legible than if all functionality were satisfied through a single function, or through a configuration array, thus making it easier to implement and maintain the resolvers.

Retrieving plugin or custom CPT data

What happens when a plugin has not integrated its data to the GraphQL schema by creating new type and field resolvers? Could the user then query data from this plugin through GraphQL? For instance, let’s say that WooCommerce has a CPT for products, but it does not introduce the corresponding Product type to the GraphQL schema. Is it possible to retrieve the product data?

Concerning CPT entities, their data can be fetched via type GenericCustomPost, which acts as a kind of wildcard, to encompass any custom post type installed in the site. The records are retrieved by querying Root.genericCustomPosts(customPostTypes: [cpt1, cpt2, ...]) (in this notation for fields, Root is the type, and genericCustomPosts is the field).

Then, to fetch the product data, corresponding to CPT with name "wc_product", we execute this query:

{
  genericCustomPosts(customPostTypes: ["wc_product"]) {
    id
    title
    url
    date
  }
}

However, the available fields are only those present in every CPT entity: title, url, date, and so on. If the CPT for a product has data for price, a corresponding field price is not available. wc_product refers to a CPT created by the WooCommerce plugin, so for that, either WooCommerce or the website’s developers will have to implement the Product type, and define its own custom fields.

CPTs are often used to manage private data, which must not be exposed through the API. For this reason, the GraphQL API initially only exposes the Page type, and requires defining which other CPTs can have their data publicly queried:

The plugin provides an interface for whitelisting CPTs to be exposed in the API.

Transitioning from REST to GraphQL via persisted queries

While GraphQL is provided as a plugin, WordPress has built-in support for REST, through the WP REST API. In some circumstances, developers working with the WP REST API may find it problematic to transition to GraphQL. For instance, consider these differences:

  • A REST endpoint has its own URL, and can be queried via GET, while GraphQL normally operates through a single endpoint, queried via POST only
  • The REST endpoint can be cached on the server-side (when queried via GET), while the GraphQL endpoint normally cannot

As a consequence, REST provides better out-of-the-box support for caching, making the application more performant and reducing the load on the server. GraphQL, instead, places more emphasis on caching on the client-side, as supported by the Apollo client.

After switching from REST to GraphQL, will the developer need to re-architect the application on the client-side, introducing the Apollo client just to introduce a layer of caching? That would be regrettable.

The “persisted queries” feature provides a solution for this situation. Persisted queries combine REST and GraphQL together, allowing us to:

  • create queries using GraphQL, and
  • publish the queries on their own URL, similar to REST endpoints.

The persisted query endpoint has the same behavior as a REST endpoint: it can be accessed via GET, and it can be cached server-side. But it was created using the GraphQL syntax, and the exposed data has no under/over fetching.

Extensibility

The architecture of the GraphQL API will define how easy it is to add our own extensions.

Decoupling type and field resolvers

The GraphQL API uses the Publish-subscribe pattern to have fields be “subscribed” to types.

Reappraising the field resolver from earlier on:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public static function getClassesToAttachTo(): array
  {
    return [UserTypeResolver::class];
  }

  public static function getFieldNamesToResolve(): array
  {
    return [
      'username',
      'email',
      'url',
    ];
  }
}

The User type does not know in advance which fields it will satisfy, but these (username, email and url) are instead injected to the type by the field resolver.

This way, the GraphQL schema becomes easily extensible. By simply adding a field resolver, any plugin can add new fields to an existing type (such as WooCommerce adding a field for User.shippingAddress), or override how a field is resolved (such as redefining User.url to return the user’s website instead).

Code-first approach

Plugins must be able to extend the GraphQL schema. For instance, they could make available a new Product type, add an additional coauthors field on the Post type, provide a @sendEmail directive, or anything else.

To achieve this, the GraphQL API follows a code-first approach, in which the schema is generated from PHP code, on runtime.

The alternative approach, called SDL-first (Schema Definition Language), requires the schema be provided in advance, for instance, through some .gql file.

The main difference between these two approaches is that, in the code-first approach, the GraphQL schema is dynamic, adaptable to different users or applications. This suits WordPress, where a single site could power several applications (such as a website and a mobile app) and be customized for different clients. The GraphQL API makes this behavior explicit through the “custom endpoints” feature, which enables creating different endpoints, with access to different GraphQL schemas, for different users or applications.

To avoid performance hits, the schema is made static by caching it to disk or memory, and it is re-generated whenever a new plugin extending the schema is installed, or when the admin updates the settings.

Support for novel features

Another benefit of using the code-first approach is that it enables us to provide brand-new features that can be opted into, before these are supported by the GraphQL spec.

For instance, nested mutations have been requested for the spec but not yet approved. The GraphQL API complies with the spec, using types QueryRoot and MutationRoot to deal with queries and mutations respectively, as exposed in the standard schema. However, by enabling the opt-in “nested mutations” feature, the schema is transformed, and both queries and mutations will instead be handled by a single Root type, providing support for nested mutations.

Let’s see this novel feature in action. In this query, we first query the post through Root.post, then execute mutation Post.addComment on it and obtain the created comment object, and finally execute mutation Comment.reply on it and query some of its data (uncomment the first mutation to log the user in, as to be allowed to add comments):

# mutation {
#   loginUser(
#     usernameOrEmail: "test",
#     password: "pass"
#   ) {
#     id
#     name
#   }
# }
mutation {
  post(id: 1459) {
    id
    title
    addComment(comment: "That's really beautiful!") {
      id
      date
      content
      author {
        id
        name
      }
      reply(comment: "Yes, it is!") {
        id
        date
        content
      }
    }
  }
}

Dynamic behavior

WordPress uses hooks (filters and actions) to modify behavior. Hooks are simple pieces of code that can override a value, or execute a custom action, whenever triggered.

Is there an equivalent in GraphQL?

Directives to override functionality

Searching for a similar mechanism for GraphQL, I’ve come to the conclusion that directives could be considered the equivalent of WordPress hooks to some extent: like a filter hook, a directive is a function that modifies the value of a field, thus augmenting some other functionality. For instance, let’s say we retrieve a list of post titles with this query:

query {
  posts {
    title
  }
}

…which produces this response:

{
  "data": {
    "posts": [
      {
        "title": "Scheduled by Leo"
      },
      {
        "title": "COPE with WordPress: Post demo containing plenty of blocks"
      },
      {
        "title": "A lovely tango, not with leo"
      },
      {
        "title": "Hello world!"
      }
    ]
  }
}

These results are in English. How can we translate them to Spanish? With a directive @translate applied on field title (implemented through this directive resolver), which gets the value of the field as an input, calls the Google Translate API to translate it, and has its result override the original input, as in this query:

query {
  posts {
    title @translate(from: "en", to: "es")
  }
}

…which produces this response:

{
  "data": {
    "posts": [
      {
        "title": "Programado por Leo"
      },
      {
        "title": "COPE con WordPress: publica una demostración que contiene muchos bloques"
      },
      {
        "title": "Un tango lindo, no con leo"
      },
      {
        "title": "¡Hola Mundo!"
      }
    ]
  }
}

Please notice how directives are unconcerned with where the input comes from. In this case, it was a Post.title field, but it could’ve been Post.excerpt, Comment.content, or any other field of type String. Thus, resolving fields and overriding their value are cleanly decoupled, and directives are always reusable.

Directives to connect to third parties

As WordPress keeps steadily becoming the OS of the web (currently powering 39% of all sites, more than any other software), it also progressively increases its interactions with external services (think of Stripe for payments, Slack for notifications, AWS S3 for hosting assets, and others).

As we’ve seen above, directives can be used to override the response of a field. But where does the new value come from? It could come from some local function, but it could perfectly well also originate from some external service (as for the directive @translate we’ve seen earlier on, which retrieves the new value from the Google Translate API).

For this reason, GraphQL API has decided to make it easy for directives to communicate with external APIs, enabling those services to transform the data from the WordPress site when executing a query, such as for:

  • translation,
  • image compression,
  • sourcing through a CDN, and
  • sending emails, SMS and Slack notifications.

As a matter of fact, GraphQL API has decided to make directives as powerful as possible, by making them low-level components in the server’s architecture, even having the query resolution itself be based on a directive pipeline. This grants directives the power to perform authorizations, validations, and modification of the response, among others.

Localization

GraphQL servers using the SDL-first approach find it difficult to localize the information in the schema (the corresponding issue for the spec was created more than four years ago, and still has no resolution).

Using the code-first approach, though, the GraphQL API can localize the descriptions in a straightforward manner, through the __('some text', 'domain') PHP function, and the localized strings will be retrieved from a POT file corresponding to the region and language selected in the WordPress admin.

For instance, as we saw earlier on, this code localizes the field descriptions:

class UserFieldResolver extends AbstractDBDataFieldResolver
{
  public function getSchemaFieldDescription(
    TypeResolverInterface $typeResolver,
    string $fieldName
  ): ?string {
    $descriptions = [
      'username' => __("User's username handle", "graphql-api"),
      'email' => __("User's email", "graphql-api"),
      'url' => __("URL of the user's profile in the website", "graphql-api"),
    ];
    return $descriptions[$fieldName];
  }
}

User interfaces

The GraphQL ecosystem is filled with open source tools to interact with the service, many of which provide the same user-friendly experience expected in WordPress.

Visualizing the GraphQL schema is done with GraphQL Voyager:

GraphQL Voyager enables us to interact with the schema, as to get a good grasp of how all entities in the application’s data model relate to each other.

This can prove particularly useful when creating our own CPTs, and checking out how and from where they can be accessed, and what data is exposed for them:

Interacting with the schema

Executing the query against the GraphQL endpoint is done with GraphiQL:

GraphiQL for the admin

However, this tool is not simple enough for everyone, since the user must have knowledge of the GraphQL query syntax. So, in addition, the GraphiQL Explorer is installed on top of it, as to compose the GraphQL query by clicking on fields:

GraphiQL with Explorer for the admin

Access control

WordPress provides different user roles (admin, editor, author, contributor and subscriber) to manage user permissions. Users can be logged into the wp-admin (e.g. the staff), logged into the public-facing site (e.g. clients), or not logged in at all (any visitor). The GraphQL API must account for all of these, allowing granular access to be granted to different users.

Granting access to the tools

The GraphQL API allows configuring who has access to the GraphiQL and Voyager clients to visualize the schema and execute queries against it:

  • Only the admin?
  • The staff?
  • The clients?
  • Openly accessible to everyone?

For security reasons, the plugin, by default, only provides access to the admin, and does not openly expose the service on the Internet.

In the images from the previous section, the GraphiQL and Voyager clients are available in the wp-admin, available to the admin user only. The admin user can grant access to users with other roles (editor, author, contributor) through the settings:


As for granting access to our clients, or anyone on the open Internet, we don’t want to give them access to the WordPress admin. So the settings enable exposing the tools under a new, public-facing URL (such as mywebsite.com/graphiql and mywebsite.com/graphql-interactive). Exposing these public URLs is an opt-in choice, explicitly set by the admin.

Granting access to the GraphQL schema

The WP REST API does not make it easy to customize who has access to some endpoint or field within an endpoint, since no user interface is provided and it must be accomplished through code.

The GraphQL API, instead, makes use of the metadata already available in the GraphQL schema to enable configuration of the service through a user interface (powered by the WordPress editor). As a result, non-technical users can also manage their APIs without touching a line of code.

Managing access control to the different fields (and directives) from the schema is accomplished by clicking on them and selecting, from a dropdown, which users (like those logged in or with specific capabilities) can access them.

Preventing conflicts

Namespacing helps avoid conflicts whenever two plugins use the same name for their types. For instance, if both WooCommerce and Easy Digital Downloads implement a type named Product, it would become ambiguous to execute a query to fetch products. Then, namespacing would transform the type names to WooCommerceProduct and EDDProduct, resolving the conflict.

The likelihood of such a conflict arising, though, is not very high. So the best strategy is to have namespacing disabled by default (to keep the schema as simple as possible), and enable it only if needed.

If enabled, the GraphQL server automatically namespaces types using the corresponding PHP package name (for which all packages follow the PHP Standard Recommendation PSR-4). For instance, for this regular GraphQL schema:

Regular GraphQL schema

…with namespacing enabled, Post becomes PoPSchema_Posts_Post, Comment becomes PoPSchema_Comments_Comment, and so on.

Namespaced GraphQL schema

That’s all, folks

Both WordPress and GraphQL are captivating topics on their own, so I find the integration of WordPress and GraphQL greatly endearing. Having been at it for a few years now, I can say that designing the optimal way to have an old CMS manage content, and a new interface access it, is a challenge worth pursuing.

I could continue describing how the WordPress philosophy can influence the implementation of a GraphQL service running on WordPress, talking about it even for several hours, using plenty of material that I have not included in this write-up. But I need to stop… So I’ll stop now.

I hope this article has managed to provide a good overview of the whys and hows for satisfying the WordPress philosophy in GraphQL, as done by plugin GraphQL API for WordPress.



How to Make GraphQL and DynamoDB Play Nicely Together

Serverless, GraphQL, and DynamoDB are a powerful combination for building websites. The first two are well-loved, but DynamoDB is often misunderstood or actively avoided. It’s often dismissed by folks who consider it only worth the effort “at scale.”

That was my assumption, too, and I tried to stick with a SQL database for my serverless apps. But after learning and using DynamoDB, I see the benefits of it for projects of any scale.

To show you what I mean, let’s build an API from start to finish — without any heavy Object Relational Mapper (ORM) or GraphQL framework to hide what is really going on. Maybe when we’re done you might consider giving DynamoDB a second look. I think it is worth the effort.

The main objections to DynamoDB and GraphQL

The main objection to DynamoDB is that it is hard to learn, but few people argue about its power. I agree the learning curve feels very steep. But SQL databases are not the best fit with serverless applications. Where do you stand up that SQL database? How do you manage connections to it? These things just don’t mesh with the serverless model very well. DynamoDB is serverless-friendly by design. You are trading the up-front pain of learning something hard to save yourself from future pain. Future pain that only grows if your application grows.

The case against using GraphQL with DynamoDB is a little more nuanced. GraphQL seems to fit well with relational databases partly because it is assumed by a lot of the documentation, tutorials, and examples. Alex Debrie is a DynamoDB expert who wrote The DynamoDB Book which is a great resource to deeply learn it. Even he recommends against using the two together, mostly because of the way that GraphQL resolvers are often written as sequential independent database calls that can result in excessive database reads.

Another potential problem is that DynamoDB works best when you know your access patterns beforehand. One of the strengths of GraphQL is that, by design, it can handle arbitrary queries more easily than REST. This is more of a problem with a public API where users can write arbitrary queries. In reality, GraphQL is often used for private APIs where you control both the client and the server, so you know and can control the queries you run. Still, with a GraphQL API it is possible to write queries that clobber any database if you don’t take steps to avoid them.

A basic data model

For this example API, we will model an organization with teams, users, and certifications. The entity-relationship diagram is shown below. Each team has many users, and each user can have many certifications.

Relational database model

Our end goal is to model this data in a DynamoDB table, but if we did model it in a SQL database, it would look like the following diagram:

To represent the many-to-many relationship of users to certifications, we add an intermediate table called “Credential.” The only unique attribute on this table is the expiration date. There would be other attributes for each of the tables, but we reduce each to just a name for simplicity.

Access patterns

The key to designing a data model for DynamoDB is to know your access patterns up front. In a relational database you start with normalized data and perform joins across the data to access it. DynamoDB does not have joins, so we build a data model that matches how we intend to access it. This is an iterative process. The goal is to identify the most frequent patterns to start. Most of these will directly map to a GraphQL query, but some may be only used internally to the back end to authenticate or check permissions, etc. An access pattern that is rarely used, like a check run once a week by an administrator, does not need to be designed. Something very inefficient (like a table scan) can handle these queries.

Most frequently accessed:

  • User by ID or name
  • Team by ID or name
  • Certification by ID or name

Frequently accessed:

  • All Users on a Team by Team ID
  • All Certifications for a given User
  • All Teams
  • All Certifications

Rarely accessed:

  • All Certifications of users on a Team
  • All Users who have a Certification
  • All Users who have a Certification on a Team

DynamoDB single table design

DynamoDB does not have joins and you can only query based on the primary key or predefined indexes. There is no set schema for items imposed by the database, so many different types of items can be stored in a single table. In fact, the recommended best practice for your data schema is to store all items in a single table so that you can access related items together with a single query. Below is a single table model representing our data. To design this schema, you take the access patterns above and choose attributes for the keys and indexes that match.

The primary key here is a composite of the partition/hash key (pk) and the sort key (sk). To retrieve an item in DynamoDB, you must specify the partition key exactly and either a single value or a range of values for the sort key. This allows you to retrieve more than one item if they share a partition key. The indexes here are shown as gsi1pk, gsi1sk, etc. These generic attribute names are used for the indexes (i.e. gsi1pk) so that the same index can be used to access different types of items with different access patterns. With a composite key, the sort key cannot be empty, so we use “#” as a placeholder when the sort key is not needed.

Access pattern                         Query conditions
Team, User, or Certification by ID     Primary key: pk="T#"+ID, sk="#"
Team, User, or Certification by name   Index GSI 1: gsi1pk=type, gsi1sk=name
All Teams, Users, or Certifications    Index GSI 1: gsi1pk=type
All Users on a Team by ID              Index GSI 2: gsi2pk="T#"+teamID
All Certifications for a User by ID    Primary key: pk="U#"+userID, sk="C#"+certID
All Users with a Certification by ID   Index GSI 1: gsi1pk="C#"+certID, gsi1sk="U#"+userID

Database schema

We enforce the “database schema” in the application. The DynamoDB API is powerful, but also verbose and complicated. Many people jump directly to using an ORM to simplify it. Here, we will directly access the database using the helper functions below to create the schema for the Team item.

const DB_MAP = {
  TEAM: {
    get: ({ teamId }) => ({
      pk: 'T#' + teamId,
      sk: '#',
    }),
    put: ({ teamId, teamName }) => ({
      pk: 'T#' + teamId,
      sk: '#',
      gsi1pk: 'Team',
      gsi1sk: teamName,
      _tp: 'Team',
      tn: teamName,
    }),
    parse: ({ pk, tn, _tp }) => {
      if (_tp === 'Team') {
        return {
          id: pk.slice(2),
          name: tn,
        };
      } else return null;
    },
    queryByName: ({ teamName }) => ({
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk', '#s': 'gsi1sk' },
      KeyConditionExpression: '#p = :p AND #s = :s',
      ExpressionAttributeValues: { ':p': 'Team', ':s': teamName },
      ScanIndexForward: true,
    }),
    queryAll: {
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk' },
      KeyConditionExpression: '#p = :p ',
      ExpressionAttributeValues: { ':p': 'Team' },
      ScanIndexForward: true,
    },
  },
  parseList: (list, type) => {
    if (Array.isArray(list)) {
      return list.map(i => DB_MAP[type].parse(i));
    }
    if (Array.isArray(list.Items)) {
      return list.Items.map(i => DB_MAP[type].parse(i));
    }
  },
};

To put a new team item in the database you call:

DB_MAP.TEAM.put({teamId:"t_01",teamName:"North Team"})

This forms the index and key values that are passed to the database API. The parse method takes an item from the database and translates it back to the application model.

GraphQL schema

type Team {
  id: ID!
  name: String
  members: [User]
}

type User {
  id: ID!
  name: String
  team: Team
  credentials: [Credential]
}

type Certification {
  id: ID!
  name: String
}

type Credential {
  id: ID!
  user: User
  certification: Certification
  expiration: String
}

type Query {
  team(id: ID!): Team
  teamByName(name: String!): [Team]
  user(id: ID!): User
  userByName(name: String!): [User]
  certification(id: ID!): Certification
  certificationByName(name: String!): [Certification]
  allTeams: [Team]
  allCertifications: [Certification]
  allUsers: [User]
}

Bridging the gap between GraphQL and DynamoDB with resolvers

Resolvers are where a GraphQL query is executed. You can get a long way in GraphQL without ever writing a resolver. But to build our API, we’ll need to write some. For each query in the GraphQL schema above there is a root resolver below (only the team resolvers are shown here). This root resolver returns either a promise or an object with part of the query results.

If the query returns a Team type as the result, then execution is passed down to the Team type resolver. That resolver has a function for each of the values in a Team. If there is no resolver for a given value (i.e. id), it will look to see if the root resolver already passed it down.

A query takes four arguments. The first, called root or parent, is an object passed down from the resolver above with any partial results. The second, called args, contains the arguments passed to the query. The third, called context, can contain anything the application needs to resolve the query. In this case, we add a reference for the database to the context. The final argument, called info, is not used here. It contains more details about the query (like an abstract syntax tree).

In the resolvers below, ctx.db.singletable is the reference to the DynamoDB table that contains all the data. The get and query methods execute directly against the database, and DB_MAP.TEAM.... translates the schema to the database using the helper functions we wrote earlier. The parse method translates the data back to the form needed for the GraphQL schema.

const resolverMap = {
  Query: {
    team: (root, args, ctx, info) => {
      return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: args.id }))
        .then(data => DB_MAP.TEAM.parse(data));
    },
    teamByName: (root, args, ctx, info) => {
      return ctx.db.singletable
        .query(DB_MAP.TEAM.queryByName({ teamName: args.name }))
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
    allTeams: (root, args, ctx, info) => {
      return ctx.db.singletable.query(DB_MAP.TEAM.queryAll)
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
  },
  Team: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: root.id }))
          .then(data => DB_MAP.TEAM.parse(data).name);
      }
    },
    members: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.USER.queryByTeamId({ teamId: root.id }))
        .then(data => DB_MAP.parseList(data, 'USER'));
    },
  },
  User: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.USER.get({ userId: root.id }))
          .then(data => DB_MAP.USER.parse(data).name);
      }
    },
    credentials: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.CREDENTIAL.queryByUserId({ userId: root.id }))
        .then(data => DB_MAP.parseList(data, 'CREDENTIAL'));
    },
  },
};

Now let’s follow the execution of the query below. First, the team root resolver reads the team by id and returns id and name. Then the Team type resolver reads all the members of that team. Then the User type resolver is called for each user to get all of their credentials and certifications. If there are five members on the team and each member has five credentials, that results in a total of seven reads for the database. You could argue that is too many. In a SQL database this might be reduced to four database calls. I would argue that the seven DynamoDB reads will be cheaper and faster than the four SQL reads in many cases. But this comes with a big dose of “it depends” on a lot of factors.

query {
  team(id: "t_01") {
    id
    name
    members {
      id
      name
      credentials {
        id
        certification {
          id
          name
        }
      }
    }
  }
}

Over-fetching and the N+1 problem

Optimizing a GraphQL API involves balancing a whole lot of tradeoffs that we won’t get into here. But two that weigh heavily in the decision of DynamoDB versus SQL are over-fetching and the N+1 problem. In many ways, these are opposite sides of the same coin. Over-fetching is when a resolver requests more data from the database than it needs to respond to the query. This often happens when you try to make one call to the database in the root resolver or a type resolver (e.g., members in the Team type resolver above) to get as much of the data as you can. If the query did not request the name attribute, it can be seen as wasted effort.

The N+1 problem is almost the opposite. If all the reads are pushed down to the lowest level resolver, then the team root resolver and the members resolver (for Team type) would make only a minimal or no request to the database. They would just pass the IDs down to the Team type and User type resolver. In this case, instead of members making one call to get all five members, it would push down to User to make five separate reads. This would result in potentially 36 or more separate reads for the query above. In practice, this does not happen because an optimized server would use something like the DataLoader library that acts as a middleware to intercept those 36 calls and batch them into probably only four calls to the database. These smaller atomic read requests are needed so that the DataLoader (or similar tool) can efficiently batch them into fewer reads.

So, to optimize a GraphQL API with SQL, it is usually best to have small resolvers at the lowest levels and use something like DataLoader to optimize them. But for a DynamoDB API it is better to have “smarter” resolvers higher up that better match the access patterns your single-table database is written for. The over-fetching that results in this case is usually the lesser of the two evils.

Deploy this example in 60 seconds

This is where you realize the full payoff of using DynamoDB together with serverless GraphQL. I built this example with Architect. It is an open-source tool to build serverless apps on AWS without most of the headaches of directly using AWS. Once you clone the repo and run npm install, you can launch the app for local development (including a built-in local version of the database) with a single command. Not only that, you can also deploy it straight to production infrastructure (including DynamoDB) on AWS with a single command when you are ready.



Let’s Create Our Own Authentication API with Nodejs and GraphQL

Authentication is one of the most challenging tasks for developers just starting with GraphQL. There are a lot of technical considerations, including what ORM would be easy to set up, how to generate secure tokens and hash passwords, and even what HTTP library to use and how to use it. 

In this article, we’ll focus on local authentication. It’s perhaps the most popular way of handling authentication in modern websites and does so by requesting the user’s email and password (as opposed to, say, using Google auth.)

This article uses Apollo Server 2, JSON Web Tokens (JWT), and Sequelize ORM to build an authentication API with Node.

Handling authentication

As in, a login system:

  • Authentication identifies or verifies a user.
  • Authorization is validating the routes (or parts of the app) the authenticated user can have access to. 

The flow for implementing this is:

  1. The user registers using password and email
  2. The user’s credentials are stored in a database
  3. The user is redirected to the login when registration is completed
  4. The user is granted access to specific resources when authenticated
  5. The user’s state is stored in any one of the browser storage mediums (e.g. localStorage, cookies, session) or JWT.

Pre-requisites

Before we dive into the implementation, here are a few things you’ll need to follow along.

Dependencies 

This is a big list, so let’s get into it:

  • Apollo Server: An open-source GraphQL server that is compatible with any kind of GraphQL client. We won’t be using Express for our server in this project. Instead, we will use the power of Apollo Server to expose our GraphQL API.
  • bcryptjs: We want to hash the user passwords in our database. That’s why we will use bcrypt. It relies on Web Crypto API‘s getRandomValues interface to obtain secure random numbers.
  • dotenv: We will use dotenv to load environment variables from our .env file. 
  • jsonwebtoken: Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. jsonwebtoken will be used to generate a JWT which will be used to authenticate users.
  • nodemon: A tool that helps develop Node-based applications by automatically restarting the node application when changes in the directory are detected. We don’t want to be closing and starting the server every time there’s a change in our code. Nodemon inspects changes every time in our app and automatically restarts the server. 
  • mysql2: An SQL client for Node. We need it to connect to our SQL server so we can run migrations.
  • sequelize: Sequelize is a promise-based Node ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server. We will use Sequelize to automatically generate our migrations and models. 
  • sequelize-cli: We will use Sequelize CLI to run Sequelize commands. Install it globally with yarn add --global sequelize-cli in the terminal.

Setup directory structure and dev environment

Let’s create a brand new project. Create a new folder and run this inside of it:

yarn init -y

The -y flag indicates we are selecting yes to all the yarn init questions and using the defaults.

This also puts a package.json file in the folder. Next, let’s install the project dependencies:

yarn add apollo-server bcryptjs dotenv jsonwebtoken nodemon sequelize sqlite3

Next, let’s add Babel to our development environment:

yarn add babel-cli babel-preset-env babel-preset-stage-0 --dev

Now, let’s configure Babel. Run touch .babelrc in the terminal. That creates and opens a Babel config file and, in it, we’ll add this:

{
  "presets": ["env", "stage-0"]
}

It would also be nice if our server starts up and migrates data as well. We can automate that by updating package.json with this:

"scripts": {   "migrate": " sequelize db:migrate",   "dev": "nodemon src/server --exec babel-node -e js",   "start": "node src/server",   "test": "echo "Error: no test specified" && exit 1" },

Here’s our package.json file in its entirety at this point:

{
  "name": "graphql-auth",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "migrate": "sequelize db:migrate",
    "dev": "nodemon src/server --exec babel-node -e js",
    "start": "node src/server",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "apollo-server": "^2.17.0",
    "bcryptjs": "^2.4.3",
    "dotenv": "^8.2.0",
    "jsonwebtoken": "^8.5.1",
    "mysql2": "^2.1.0",
    "nodemon": "^2.0.4",
    "sequelize": "^6.3.5",
    "sqlite3": "^5.0.0"
  },
  "devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0",
    "babel-preset-stage-0": "^6.24.1"
  }
}

Now that our development environment is set up, let’s turn to the database where we’ll be storing things.

Database setup

We will be using MySQL as our database and the Sequelize ORM to manage it. Run sequelize init (assuming you installed the CLI globally earlier). The command should create three folders: /config, /models and /migrations. At this point, our project directory structure is shaping up.

Let’s configure our database. First, create a .env file in the project root directory and paste this:

NODE_ENV=development
DB_HOST=localhost
DB_USERNAME=
DB_PASSWORD=
DB_NAME=

Then go to the /config folder we just created and rename the config.json file in there to config.js. Then, drop this code in there:

require('dotenv').config()

const dbDetails = {
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  host: process.env.DB_HOST,
  dialect: 'mysql'
}

module.exports = {
  development: dbDetails,
  production: dbDetails
}

Here we are reading the database details we set in our .env file. process.env is a global variable injected by Node and it’s used to represent the current state of the system environment.
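For example, once dotenv has loaded the file above, those values can be read anywhere in the app:

require('dotenv').config()

// With the .env file above loaded, this prints "development"
console.log(process.env.NODE_ENV)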

Let’s update our database details with the appropriate data. Open your SQL client and create a database called graphql_auth. I use Laragon as my local server and phpMyAdmin to manage database tables.

Whatever you use, we’ll want to update the .env file with the latest information:

NODE_ENV=development
DB_HOST=localhost
DB_USERNAME=<your_db_username_here>
DB_PASSWORD=
DB_NAME=graphql_auth

Let’s configure Sequelize. Create a .sequelizerc file in the project’s root and paste this:

const path = require('path')

module.exports = {
  config: path.resolve('config', 'config.js')
}

Now let’s integrate our config into the models. Go to the index.js in the /models folder and edit the config variable.

const config = require(__dirname + '/../config/config.js')[env]

Finally, let’s write our model. For this project, we need a User model. Let’s use Sequelize to auto-generate the model. Here’s what we need to run in the terminal to set that up:

sequelize model:generate --name User --attributes username:string,email:string,password:string

Let’s edit the model that command created for us. Go to user.js in the /models folder and replace its contents with this:

'use strict';
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    username: {
      type: DataTypes.STRING,
    },
    email: {
      type: DataTypes.STRING,
    },
    password: {
      type: DataTypes.STRING,
    }
  }, {});
  return User;
};

Here, we created attributes and fields for username, email and password. Let’s run a migration to keep track of changes in our schema:

yarn migrate
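You don’t need to edit it for this tutorial, but for reference, the migration file that model:generate dropped into /migrations looks roughly like this (the exact boilerplate varies by Sequelize version):

'use strict';
module.exports = {
  up: async (queryInterface, Sequelize) => {
    await queryInterface.createTable('Users', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER
      },
      username: {
        type: Sequelize.STRING
      },
      email: {
        type: Sequelize.STRING
      },
      password: {
        type: Sequelize.STRING
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE
      }
    });
  },
  down: async (queryInterface) => {
    await queryInterface.dropTable('Users');
  }
};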

Let’s now write the schema and resolvers.

Integrate schema and resolvers with the GraphQL server 

In this section, we’ll define our schema, write resolver functions and expose them on our server.

The schema

In the src folder, create a new folder called /schema and create a file called schema.js. Paste in the following code:

const { gql } = require('apollo-server')

const typeDefs = gql`
  type User {
    id: Int!
    username: String
    email: String!
  }
  type AuthPayload {
    token: String!
    user: User!
  }
  type Query {
    user(id: Int!): User
    allUsers: [User!]!
    me: User
  }
  type Mutation {
    registerUser(username: String, email: String!, password: String!): AuthPayload!
    login(email: String!, password: String!): AuthPayload!
  }
`

module.exports = typeDefs

Here we’ve imported the gql helper from apollo-server; Apollo Server requires our schema to be wrapped with gql.

The resolvers

In the src folder, create a new folder called /resolvers and create a file in it called resolver.js. Paste in the following code:

const bcrypt = require('bcryptjs')
const jsonwebtoken = require('jsonwebtoken')
const models = require('../../models')
require('dotenv').config()

const resolvers = {
  Query: {
    async me(_, args, { user }) {
      if (!user) throw new Error('You are not authenticated')
      return await models.User.findByPk(user.id)
    },
    async user(root, { id }, { user }) {
      try {
        if (!user) throw new Error('You are not authenticated!')
        return models.User.findByPk(id)
      } catch (error) {
        throw new Error(error.message)
      }
    },
    async allUsers(root, args, { user }) {
      try {
        if (!user) throw new Error('You are not authenticated!')
        return models.User.findAll()
      } catch (error) {
        throw new Error(error.message)
      }
    }
  },
  Mutation: {
    async registerUser(root, { username, email, password }) {
      try {
        const user = await models.User.create({
          username,
          email,
          password: await bcrypt.hash(password, 10)
        })
        const token = jsonwebtoken.sign(
          { id: user.id, email: user.email },
          process.env.JWT_SECRET,
          { expiresIn: '1y' }
        )
        // AuthPayload expects a token and the full user object
        return { token, user }
      } catch (error) {
        throw new Error(error.message)
      }
    },
    async login(_, { email, password }) {
      try {
        const user = await models.User.findOne({ where: { email } })
        if (!user) {
          throw new Error('No user with that email')
        }
        const isValid = await bcrypt.compare(password, user.password)
        if (!isValid) {
          throw new Error('Incorrect password')
        }
        // Return the signed JWT and the user
        const token = jsonwebtoken.sign(
          { id: user.id, email: user.email },
          process.env.JWT_SECRET,
          { expiresIn: '1d' }
        )
        return { token, user }
      } catch (error) {
        throw new Error(error.message)
      }
    }
  }
}

module.exports = resolvers

That’s a lot of code, so let’s see what’s happening in there.

First we imported our models, bcrypt and jsonwebtoken, and then initialized our environment variables.

Next are the resolver functions. In the query resolver, we have three functions (me, user and allUsers):

  • me query fetches the details of the currently logged-in user. It reads the user object from the context argument, then uses the ID stored there to load that user’s record from the database.
  • user query fetches the details of a user by their ID. It accepts id as a query argument, along with the user object from the context.
  • allUsers query returns the details of all the users.

user will be an object if someone is logged in and null if not. We attach this user to the context when we create the server (covered below).

In the mutation resolver, we have two functions (registerUser and login):

  • registerUser accepts the username, email and password of the user and creates a new row with these fields in our database. It’s important to note that we used the bcryptjs package to hash the user’s password with bcrypt.hash(password, 10). jsonwebtoken.sign synchronously signs the given payload into a JSON Web Token string (in this case the user’s id and email). Finally, registerUser returns the JWT string and user object if successful, and throws an error if something goes wrong.
  • login accepts email and password, and checks whether these details match the ones we have on record. First, we check if the email value exists in the user database:

const user = await models.User.findOne({ where: { email } })
if (!user) {
  throw new Error('No user with that email')
}

Then, we use the bcrypt.compare method to check if the password matches:

const isValid = await bcrypt.compare(password, user.password)
if (!isValid) {
  throw new Error('Incorrect password')
}

Then, just like we did previously in registerUser, we use jsonwebtoken.sign to generate a JWT string. The login mutation returns the token and user object.

Now let’s add the JWT_SECRET to our .env file.

JWT_SECRET=somereallylongsecret

The server

Finally, the server! Create a server.js file in the src folder and paste this:

const { ApolloServer } = require('apollo-server')
const jwt = require('jsonwebtoken')
const typeDefs = require('./schema/schema')
const resolvers = require('./resolvers/resolver')
require('dotenv').config()

const { JWT_SECRET, PORT } = process.env

const getUser = token => {
  try {
    if (token) {
      return jwt.verify(token, JWT_SECRET)
    }
    return null
  } catch (error) {
    return null
  }
}

const server = new ApolloServer({
  typeDefs,
  resolvers,
  context: ({ req }) => {
    const token = req.get('Authorization') || ''
    return { user: getUser(token.replace('Bearer ', '')) }
  },
  introspection: true,
  playground: true
})

server.listen({ port: PORT || 4000 }).then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`)
})

Here, we import the schema, resolvers and jwt, and initialize our environment variables. The getUser helper verifies the JWT with jwt.verify, which accepts the token and the JWT secret as parameters and returns the decoded payload (or null).
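For reference, a successfully verified token decodes back to the payload we signed, plus issued-at (iat) and expiry (exp) timestamps that jsonwebtoken adds, along these lines:

{
  id: 15,
  email: 'ekpot@gmail.com',
  iat: 1599240300,
  exp: 1630797900
}

If verification fails (a bad signature or an expired token), jwt.verify throws, which is why getUser wraps it in a try/catch and falls back to null.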

Next, we create our server with an ApolloServer instance that accepts typeDefs, resolvers and a context function, which pulls the token from the Authorization header and attaches the decoded user to every request.

We have a server! Let’s start it up by running yarn dev in the terminal.

Testing the API

Let’s now test the GraphQL API with GraphQL Playground. We should be able to register, login and view all users — including a single user — by ID.

We’ll start by opening up the GraphQL Playground app, or just open http://localhost:4000 in the browser to access it.

Mutation for register user

mutation {
  registerUser(username: "Wizzy", email: "ekpot@gmail.com", password: "wizzyekpot") {
    token
  }
}

We should get something like this:

{
  "data": {
    "registerUser": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzAwLCJleHAiOjE2MzA3OTc5MDB9.gmeynGR9Zwng8cIJR75Qrob9bovnRQT242n6vfBt5PY"
    }
  }
}

Mutation for login 

Let’s now log in with the user details we just created:

mutation {
  login(email: "ekpot@gmail.com", password: "wizzyekpot") {
    token
  }
}

We should get something like this:

{
  "data": {
    "login": {
      "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
    }
  }
}

Awesome!

Query for a single user

For us to query a single user, we need to pass the user’s token as an authorization header. Go to the HTTP Headers tab.

Showing the GraphQL interface with the HTTP Headers tab highlighted in the bottom-left corner of the screen.

…and paste this:

{
  "Authorization": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"
}

Here’s the query:

query myself {
  me {
    id
    email
    username
  }
}

And we should get something like this:

{
  "data": {
    "me": {
      "id": 15,
      "email": "ekpot@gmail.com",
      "username": "Wizzy"
    }
  }
}

Great! Let’s now get a user by ID:

query singleUser {
  user(id: 15) {
    id
    email
    username
  }
}

And here’s the query to get all users:

{
  allUsers {
    id
    username
    email
  }
}

Summary

Authentication is one of the toughest tasks when it comes to building websites that require it. GraphQL enabled us to build an entire authentication API with just one endpoint. Sequelize ORM makes creating relationships with our SQL database so easy, we barely had to worry about our models. It’s also remarkable that we didn’t need an HTTP server library (like Express) with Apollo GraphQL as middleware; Apollo Server 2 enables us to create library-independent GraphQL servers!

Check out the source code for this tutorial on GitHub.


The post Let’s Create Our Own Authentication API with Nodejs and GraphQL appeared first on CSS-Tricks.


A Complete Walkthrough of GraphQL APIs with React and FaunaDB

As a web developer, there is an interesting bit of back and forth that always comes along with setting up a new application. Even using a full stack web framework like Ruby on Rails can be non-trivial to set up and deploy, especially if it’s your first time doing so in a while.

Personally I have always enjoyed being able to dig in and write the actual bit of application logic more so than setting up the apps themselves. Lately I have become a big fan of React applications together with a GraphQL API and storing state with the Apollo library.

Setting up a React application has become very easy in the past few years, but setting up a backend with a GraphQL API? Not so much. So when I was working on a project recently, I decided to look for an easier way to integrate a GraphQL API and was delighted to find FaunaDB.

FaunaDB is a NoSQL database as a service that makes provisioning a GraphQL API an incredibly simple process, and even comes with a free tier. Frankly I was surprised and really impressed with how quickly I was able to go from zero to a working API.

The service also touts its production readiness, with a focus on making scalability much easier than if you were managing your own backend. Although I haven’t explored its more advanced features yet, if it’s anything like my first impression then the prospects and implications of using FaunaDB are quite exciting. For now, I can confirm that for many of my projects, it provides an excellent solution for managing state together with a React application.

While working on my project, I did run into a few configuration issues when making all of the frameworks work together which I think could’ve been addressed with a guide that focuses on walking through standing up an application in its entirety. So in this article, I’m going to do a thorough walkthrough of setting up a small to-do React application on Heroku, then persisting data to that application with FaunaDB using the Apollo library. You can find the full source code here.

Our Application

For this walkthrough, we’re building a to-do list where a user can take the following actions:

  • Add a new item
  • Mark an item as complete
  • Remove an item

From a technical perspective, we’re going to accomplish this by doing the following:

  • Creating a React application
  • Deploying the application to Heroku
  • Provisioning a new FaunaDB database
  • Declaring a GraphQL API schema
  • Provisioning a new database key
  • Configuring Apollo in our React application to interact with our API
  • Writing application logic and consume our API to persist information

Here’s a preview of what the final result will look like:

Creating the React Application

First we’ll create a boilerplate React application and make sure it runs. Assuming you have create-react-app installed, the commands to create a new application are:

create-react-app fauna-todo
cd fauna-todo
yarn start

After which you should be able to head to http://localhost:3000 and see the generated homepage for the application.

Deploying to Heroku

As I mentioned above, deploying React applications has become awesomely easy over the last few years. I’m using Heroku here since it’s been my go-to platform as a service for a while now, but you could just as easily use another service like Netlify (though of course the configuration will be slightly different). Assuming you have a Heroku account and the Heroku CLI installed, then this article shows that you only need a few lines of code to create and deploy a React application.

git init
heroku create -b https://github.com/mars/create-react-app-buildpack.git
git push heroku master

And your app is deployed! To view it you can run:

heroku open

Provisioning a FaunaDB Database

Now that we have a React app up and running, let’s add persistence to the application using FaunaDB. Head to fauna.com to create a free account. After you have an account, click “New Database” on the dashboard, and enter in a name of your choosing:

Creating an API via GraphQL Schema in FaunaDB

In this example, we’re going to declare a GraphQL schema then use that file to automatically generate our API within FaunaDB. As a first stab at the schema for our to-do application, let’s suppose that there is simply a collection of “Items” with “name” as its sole field. Since I plan to build upon this schema later and like being able to see the schema itself at a glance, I’m going to create a schema.graphql file and add it to the top level of my React application. Here is the content for this file:

type Item {
  name: String
}

type Query {
  allItems: [Item!]
}

If you’re unfamiliar with the concept of defining a GraphQL schema, think of it as a manifest for declaring what kinds of objects and queries are available within your API. In this case, we’re saying that there is going to be an Item type with a name string field and that we are going to have an explicit query allItems to look up all item records. You can read more about schemas in this Apollo article and types in this graphql.org article. FaunaDB also provides a reference document for declaring and importing a schema file.

We can now upload this schema.graphql file and use it to generate a GraphQL API. Head to the FaunaDB dashboard again and click on “GraphQL” then upload your newly created schema file here:

Congratulations! You have created a fully functional GraphQL API. This page turns into a “GraphQL Playground” which lets you interact with your API. Click on the “Docs” tab in the sidebar to see the available queries and mutations.

Note that in addition to our allItems query, FaunaDB has generated the following queries/mutations automatically on our behalf:

  • findItemByID
  • createItem
  • updateItem
  • deleteItem

All of these were derived by declaring the Item type in our schema file. Pretty cool right? Let’s give these queries and mutations a spin to familiarize ourselves with them. We can execute queries and mutations directly in the “GraphQL Playground.” Let’s first run a query for items. Enter this query into the left pane of the playground:

query MyItemQuery {
  allItems {
    data {
      name
    }
  }
}

Then click on the play button to run it:

The result is listed on the right pane, and unsurprisingly returns no results since we haven’t created any items yet. Fortunately createItem was one of the mutations that was automatically generated from the schema and we can use that to populate a sample item. Let’s run this mutation:

mutation MyItemCreation {
  createItem(data: { name: "My first todo item" }) {
    name
  }
}

You can see the result of the mutation in the right pane. It seems like our item was created successfully, but just to double check we can re-run our first query and see the result:

If we add our first query back to the left pane of the playground, the play button gives us a choice of which operation to perform. Finally, note in step 3 of the screenshot above that our item was indeed created successfully.

In addition to running the query above, we can also look in the “Collections” tab of FaunaDB to view the collection directly:

Provisioning a New Database Key

Now that we have the database itself configured, we need a way for our React application to access it.

For the sake of simplicity in this application, this will be done with a secret key that we can add as an environment variable to our React application. We aren’t going to have authentication for individual users. Instead we’ll generate an application key which has permission to create, read, update, and delete items.

Authentication and authorization are substantial topics on their own — if you would like to learn more on how FaunaDB handles them as a follow up exercise to this guide, you can read all about the topic here.

The application key we generate has an associated set of permissions that are grouped together in a “role.” Let’s begin by first defining a role that has permission to perform CRUD operations on items, as well as perform the allItems query. Start by going to the “Security” tab, then clicking on “Manage Roles”:

There are two built-in roles, admin and server. We could in theory use these roles for our key, but that’s a bad idea: those keys would allow whoever holds them to perform database-level operations such as creating new collections or even destroying the database itself. So instead, let’s create a new role by clicking the “New Custom Role” button:

You can name the role whatever you’d like, here we’re using the name ItemEditor and giving the role permission to read, write, create, and delete items — as well as permission to read the allItems index.

Save this role, then head to the “Security” tab and create a new key:

When creating a key, make sure to select “ItemEditor” for the role and whatever name you please:

Next you’ll be presented with your secret key which you’ll need to copy:

In order for React to load the key’s value as an environment variable, create a new file called .env.local which lives at the root level of your React application. In this file, add an entry for the generated key:

REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: Since it’s not good practice to store secrets directly in source control in plain text, make sure that you also have a .gitignore file in your project’s root directory that contains .env.local so that your secrets won’t be added to your git repo and shared with others.

It’s critical that this variable’s name starts with “REACT_APP_”, otherwise it won’t be recognized when the application is started. By adding the value to the .env.local file, it will still be loaded for the application when running locally. You’ll have to explicitly stop and restart your application with yarn start in order for these changes to take effect.
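If you want to sanity-check that the variable is actually loaded, a quick temporary trick is to log it from any component and check the browser console; just remember to delete the line afterward so the secret isn’t left in your code:

// e.g. at the top of src/App.js, temporarily:
console.log(process.env.REACT_APP_FAUNA_SECRET);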

If you’re interested in reading more about how environment variables are loaded in apps created via create-react-app, there is a full explanation here. We’ll cover adding this secret as an environment variable in Heroku later on in this article.

Connecting to FaunaDB in React with Apollo

In order for our React application to interact with our GraphQL API, we need some sort of GraphQL client library. Fortunately for us, the Apollo client provides an elegant interface for making API requests as well as caching and interacting with the results.

To install the relevant Apollo packages we’ll need, run:

yarn add @apollo/client graphql @apollo/react-hooks

Now in your src directory of your application, add a new file named client.js with the following content:

import { ApolloClient, InMemoryCache } from "@apollo/client";

export const client = new ApolloClient({
  uri: "https://graphql.fauna.com/graphql",
  headers: {
    authorization: `Bearer ${process.env.REACT_APP_FAUNA_SECRET}`,
  },
  cache: new InMemoryCache(),
});

What we’re doing here is configuring Apollo to make requests to our FaunaDB database. Specifically, the uri makes the request to Fauna itself, then the authorization header indicates that we’re connecting to the specific database instance for the provided key that we generated earlier.

There are two important implications from this snippet of code:

  1. The authorization header contains the key with the “ItemEditor” role, and is currently hard coded to use the same header regardless of which user is looking at our application. If you were to update this application to show a different to-do list for each user, you would need to login for each user and generate a token which could instead be passed in this header. Again, the FaunaDB documentation covers this concept if you care to learn more about it.
  2. As with any time you add a layer of caching to a system (as we are doing here with Apollo), you introduce the potential to have stale data. FaunaDB’s operations are strongly consistent, and you can configure Apollo’s fetchPolicy to minimize the potential for stale data. In order to prevent stale reads to our cache, we’ll use a combination of refetch queries and specifying response fields in our mutations (a small example follows this list).
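As a quick illustration of that second point, Apollo accepts a fetchPolicy option per query for cases where a query should bypass the cache entirely. Our app won’t end up needing it, so treat this as a sketch (ITEMS_QUERY stands in for the query we define later):

import { useQuery } from "@apollo/react-hooks";

// Always go to the network for this query instead of reading the cache
const { data, loading } = useQuery(ITEMS_QUERY, {
  fetchPolicy: "network-only",
});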

Next we’ll replace the contents of the home page’s component. Head to App.js and replace its content with:

import React from "react";
import { ApolloProvider } from "@apollo/client";
import { client } from "./client";

function App() {
  return (
    <ApolloProvider client={client}>
      <div style={{ padding: "5px" }}>
        <h3>My Todo Items</h3>
        <div>items to get loaded here</div>
      </div>
    </ApolloProvider>
  );
}

export default App;

Note: For this sample application I’m focusing on functionality over presentation, so you’ll see some inline styles. While I certainly wouldn’t recommend this for a production-grade application, I think it does at least demonstrate any added styling in the most straightforward manner within a small demo.

Visit http://localhost:3000 again and you’ll see:

Which contains the hard coded values we’ve set in our jsx above. What we would really like to see however is the to-do item we created in our database. In the src directory, let’s create a component called ItemList which lists out any items in our database:

import React from "react";
import gql from "graphql-tag";
import { useQuery } from "@apollo/react-hooks";

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

export function ItemList() {
  const { data, loading } = useQuery(ITEMS_QUERY);

  if (loading) {
    return "Loading...";
  }

  return (
    <ul>
      {data.allItems.data.map((item) => {
        return <li key={item._id}>{item.name}</li>;
      })}
    </ul>
  );
}

Then update App.js to render this new component (see the full commit in this example’s source code for this step in its entirety). Previewing the app again, you’ll see that your to-do item has loaded:

Now is a good time to commit your progress in git. And since we’re using Heroku, deploying is a snap:

git push heroku master
heroku open

When you run heroku open though, you’ll see that the page is blank. If we inspect the network traffic and request to FaunaDB, we’ll see an error about how the database secret is missing:

Which makes sense since we haven’t configured this value in Heroku yet. Let’s set it by going to the Heroku dashboard, selecting your application, then clicking on the “Settings” tab. In there you should add the REACT_APP_FAUNA_SECRET key and value used in the .env.local file earlier. Reusing this key is done for demonstration purposes. In a “real” application, you would probably have a separate database and separate keys for each environment.

If you would prefer to use the command line rather than Heroku’s web interface, you can use the following command and replace the secret with your key instead:

heroku config:set REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: as noted in the Heroku docs, you need to trigger a deploy in order for this environment variable to apply in your app:

git commit --allow-empty -m 'Add REACT_APP_FAUNA_SECRET env var'
git push heroku master
heroku open

After running this last command, your Heroku-hosted app should appear and load the items from your database.

Adding New To-Do Items

We now have all of the pieces in place for accessing our FaunaDB database both locally and in a hosted Heroku environment. Now adding items is as simple as calling the mutation we used in the GraphQL Playground earlier. Here is the code for an AddItem component, which uses a bare-bones HTML form to call the createItem mutation:

import React from "react";
import gql from "graphql-tag";
import { useMutation } from "@apollo/react-hooks";

const CREATE_ITEM = gql`
  mutation CreateItem($data: ItemInput!) {
    createItem(data: $data) {
      _id
    }
  }
`;

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

export function AddItem() {
  const [showForm, setShowForm] = React.useState(false);
  const [newItemName, setNewItemName] = React.useState("");

  const [createItem, { loading }] = useMutation(CREATE_ITEM, {
    refetchQueries: [{ query: ITEMS_QUERY }],
    onCompleted: () => {
      setNewItemName("");
      setShowForm(false);
    },
  });

  if (showForm) {
    return (
      <form
        onSubmit={(e) => {
          e.preventDefault();
          createItem({ variables: { data: { name: newItemName } } });
        }}
      >
        <label>
          <input
            disabled={loading}
            type="text"
            value={newItemName}
            onChange={(e) => setNewItemName(e.target.value)}
            style={{ marginRight: "5px" }}
          />
        </label>
        <input disabled={loading} type="submit" value="Add" />
      </form>
    );
  }

  return <button onClick={() => setShowForm(true)}>Add Item</button>;
}

After adding a reference to AddItem in our App component, we can verify that adding items works as expected. Again, you can see the full commit in the demo app for a recap of this step.

Deleting New To-Do Items

Similar to how we called the automatically generated createItem mutation to add new items, we can call the generated deleteItem mutation to remove items from our list. Here’s what our updated ItemList component looks like after adding this mutation:

import React from "react";
import gql from "graphql-tag";
import { useMutation, useQuery } from "@apollo/react-hooks";

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

const DELETE_ITEM = gql`
  mutation DeleteItem($id: ID!) {
    deleteItem(id: $id) {
      _id
    }
  }
`;

export function ItemList() {
  const { data, loading } = useQuery(ITEMS_QUERY);

  const [deleteItem, { loading: deleteLoading }] = useMutation(DELETE_ITEM, {
    refetchQueries: [{ query: ITEMS_QUERY }],
  });

  if (loading) {
    return <div>Loading...</div>;
  }

  return (
    <ul>
      {data.allItems.data.map((item) => {
        return (
          <li key={item._id}>
            {item.name}{" "}
            <button
              disabled={deleteLoading}
              onClick={(e) => {
                e.preventDefault();
                deleteItem({ variables: { id: item._id } });
              }}
            >
              Remove
            </button>
          </li>
        );
      })}
    </ul>
  );
}

Reloading our app and adding another item should result in a page that looks like this:

If you click on the “Remove” button for any item, the DELETE_ITEM mutation is fired and, upon completion, the entire list of items is refetched, as specified by the refetchQueries option.

One thing you may have noticed is that in our ITEMS_QUERY, we’re specifying _id as one of the fields we’d like in the result set from our query. This _id field is automatically generated by FaunaDB as a unique identifier for each document in a collection, and should be used when updating or deleting a record.

Marking Items as Complete

This wouldn’t be a fully functional to-do list without the ability to mark items as complete! So let’s add that now. By the time we’re done, we expect the app to look like this:

The first step we need to take is updating our Item schema within FaunaDB since right now the only information we store about an item is its name. Heading to our schema.graphql file, we can add a new field to track the completion state for an item:

type Item {
  name: String
  isComplete: Boolean
}

type Query {
  allItems: [Item!]
}

Now head to the GraphQL tab in the FaunaDB console and click on the “Update Schema” link to upload the newly updated schema file.

Note: there is also an “Override Schema” option, which can be used to rewrite your database’s schema from scratch if you’d like. One consideration when overriding the schema completely is that the data is deleted from your database. That may be fine for a development environment, but a staging or production environment would require a proper database migration instead.

Since the changes we’re making here are additive, there won’t be any conflict with the existing schema so we can keep our existing data.

You can view the mutation itself and its expected schema in the GraphQL Playground in FaunaDB:

This tells us that we can call the updateItem mutation with an id and a data parameter of type ItemInput. As with our other requests, we could test this mutation in the playground if we wanted. For now, we can add it directly to the application. In ItemList.js, let’s add this mutation with this code as outlined in the example repository.
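The commit has the full diff, but the heart of it is a mutation along these lines; consider this a sketch (the updateItem signature comes from the schema FaunaDB generated for us):

const UPDATE_ITEM = gql`
  mutation UpdateItem($id: ID!, $data: ItemInput!) {
    updateItem(id: $id, data: $data) {
      _id
      isComplete
    }
  }
`;

// Inside the ItemList component:
const [updateItem] = useMutation(UPDATE_ITEM);

// Toggling an item fires the mutation. Because the response includes
// _id and isComplete, Apollo can update the cached item in place.
updateItem({
  variables: {
    id: item._id,
    data: { name: item.name, isComplete: !item.isComplete },
  },
});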

The references to UPDATE_ITEM are the most relevant changes we’ve made. It’s interesting to note as well that we don’t need the refetchQueries parameter for this mutation. When the update mutation returns, Apollo updates the corresponding item in the cache based on its identifier field (_id in this case) so our React component re-renders appropriately as the cache updates.

We now have all of the functionality for an initial version of a to-do application. As a final step, push your branch one more time to Heroku:

git push heroku master
heroku open

Conclusion

Take a moment to pat yourself on the back! You’ve created a brand-new React application, added persistence at a database level with FaunaDB, and can deploy to the entire world with the push of a branch to Heroku.

Now that we’ve covered some of the concepts behind provisioning and interacting with FaunaDB, setting up any similar project in the future is an amazingly fast process. Being able to provision a GraphQL-accessible database in minutes is a dream for me when it comes to spinning up a new project. Not only that, but this is a production grade database that you don’t have to worry about configuring or scaling — and instead get to focus on writing the rest of your application instead of playing the role of a database administrator.


The post A Complete Walkthrough of GraphQL APIs with React and FaunaDB appeared first on CSS-Tricks.


Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.

That’s the basic idea of REST. Now let’s look at the two concepts we’ll be leaning on here: GraphQL and serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level, which would be api.example.com/graphql in our example, and tell it what kind of information we want back with a query that looks like a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123,
      "title" : "The Greatest CSS Tricks Vol. I",
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: NodeJS and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our project in a fully featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, then running npm init on it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI as a global dependency so we can use Netlify Dev:

npm i -g netlify-cli

File structure

There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:

[build]
  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js. Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming into our API. It’s called a “root” query because it’s accessed at the root of our API, say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
Putting that together, here’s the schema with a message field filled in:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
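For example, in a larger app the resolver might be asynchronous and pull its answer from a data source. Here’s a sketch of what that could look like, where db.findGreeting is a hypothetical data-access helper rather than anything in this project:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        // Resolvers may return a promise; GraphQL waits for it to settle
        resolve: async () => {
          const row = await db.findGreeting({ locale: "en" });
          return row.text;
        },
      },
    }),
  }),
});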

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{
  "data": {
    "message": "Hello World"
  }
}

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions was that they run on the /.netlify/functions path. It wasn’t ideal to type or remember, and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
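With the redirect in place, clients can reach the API at the friendlier path. For example, here’s a hypothetical browser-side call that posts our { message } query as JSON, which express-graphql accepts:

fetch("/api", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query: "{ message }" }),
})
  .then((res) => res.json())
  .then((json) => console.log(json.data.message)); // "Hello World"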

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working more with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource to go one step deeper into GraphQL. However, it’s also focused on a more traditional server-full Express.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.

The post Building Serverless GraphQL API in Node with Express and Netlify appeared first on CSS-Tricks.


Get Started Building GraphQL APIs With Node

We all have a number of interests and passions. For example, I’m interested in JavaScript, 90’s indie rock and hip hop, obscure jazz, the city of Pittsburgh, pizza, coffee, and movies starring John Lurie. We also have family members, friends, acquaintances, classmates, and colleagues who also have their own social relationships, interests, and passions. Some of these relationships and interests overlap, like my friend Riley who shares my interest in 90’s hip hop and pizza. Others do not, like my colleague Harrison, who prefers Python to JavaScript, only drinks tea, and prefers current pop music. All together, we each have a connected graph of the people in our lives, and the ways that our relationships and interests overlap.

These types of interconnected data are exactly the challenge that GraphQL initially set out to solve in API development. By writing a GraphQL API we are able to efficiently connect data, which reduces the complexity and number of requests, while allowing us to serve the client precisely the data that it needs. (If you’re into more GraphQL metaphors, check out Meeting GraphQL at a Cocktail Mixer.)

In this article, we’ll build a GraphQL API in Node.js, using the Apollo Server package. To do so, we’ll explore fundamental GraphQL topics, write a GraphQL schema, develop code to resolve our schema functions, and access our API using the GraphQL Playground user interface.

What is GraphQL?

GraphQL is an open source query and data manipulation language for APIs. It was developed with the goal of providing single endpoints for data, allowing applications to request exactly the data that is needed. This has the benefit of not only simplifying our UI code, but also improving performance by limiting the amount of data that needs to be sent over the wire.

What we’re building

To follow along with this tutorial, you’ll need Node v8.x or later and some familiarity with working with the command line. 

We’re going to build an API application for book highlights, allowing us to store memorable passages from the things that we read. Users of the API will be able to perform “CRUD” (create, read, update, delete) operations against their highlights:

  • Create a new highlight
  • Read an individual highlight as well as a list of highlights
  • Update a highlight’s content
  • Delete a highlight

Getting started

To get started, first create a new directory for our project, initialize a new node project, and install the dependencies that we’ll need:

# make the new directory
mkdir highlights-api

# change into the directory
cd highlights-api

# initiate a new node project
npm init -y

# install the project dependencies
npm install apollo-server graphql

# install the development dependencies
npm install nodemon --save-dev

Before moving on, let’s break down our dependencies:

  • apollo-server is a library that enables us to work with GraphQL within our Node application. We’ll be using it as a standalone library, but the team at Apollo has also created middleware for working with existing Node web applications in Express, hapi, Fastify, and Koa.
  • graphql includes the GraphQL language and is a required peer dependency of apollo-server.
  • nodemon is a helpful library that will watch our project for changes and automatically restart our server.

With our packages installed, let’s next create our application’s root file, named index.js. For now, we’ll console.log() a message in this file:

console.log("📚 Hello Highlights");

To make our development process simpler, we’ll update the scripts object within our package.json file to make use of the nodemon package:

"scripts": {   "start": "nodemon index.js" },

Now, we can start our application by typing npm start in the terminal application. If everything is working properly, you will see 📚 Hello Highlights logged to your terminal.

GraphQL schema types

A schema is a written representation of our data and interactions. By requiring a schema, GraphQL enforces a strict plan for our API. This is because the API can only return data and perform interactions that are defined within the schema. The fundamental component of GraphQL schemas are object types. GraphQL contains five built-in types:

  • String: A string with UTF-8 character encoding
  • Boolean: A true or false value
  • Int: A 32-bit integer
  • Float: A floating-point value
  • ID: A unique identifier

We can construct a schema for an API with these basic components. In a file named schema.js, we can import the gql library and prepare the file for our schema syntax:

const { gql } = require('apollo-server');

const typeDefs = gql`
  # The schema will go here
`;

module.exports = typeDefs;

To write our schema, we first define the type. Let’s consider how we might define a schema for our highlights application. To begin, we would create a new type with a name of Highlight:

const typeDefs = gql`
  type Highlight {
  }
`;

Each highlight will have a unique ID, some content, a title, and an author. The Highlight schema will look something like this:

const typeDefs = gql`
  type Highlight {
    id: ID
    content: String
    title: String
    author: String
  }
`;

We can make some of these fields required by adding an exclamation point:

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
`;

Though we’ve defined an object type for our highlights, we also need to provide a description of how a client will fetch that data. This is called a query. We’ll dive more into queries shortly, but for now let’s describe in our schema the ways in which someone will retrieve highlights. When requesting all of our highlights, the data will be returned as an array (represented as [Highlight]) and when we want to retrieve a single highlight we will need to pass an ID as a parameter.

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
  type Query {
    highlights: [Highlight]!
    highlight(id: ID!): Highlight
  }
`;

Now, in the index.js file, we can import our type definitions and set up Apollo Server:

const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');

const server = new ApolloServer({ typeDefs });

server.listen().then(({ url }) => {
  console.log(`📚 Highlights server ready at ${url}`);
});

If we’ve kept the node process running, the application will have automatically updated and relaunched, but if not, typing npm start  from the project’s directory in the terminal window will start the server. If we look at the terminal, we should see that nodemon is watching our files and the server is running on a local port:

[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node index.js`
📚 Highlights server ready at http://localhost:4000/

Visiting the URL in the browser will launch the GraphQL Playground application, which provides a user interface for interacting with our API.
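The Playground is just a convenience, though: under the hood, every GraphQL request is a plain HTTP POST with a JSON body. As a quick sanity check, here is a minimal sketch of querying the running server from code (assuming a recent Node version with a global fetch, or a drop-in such as node-fetch):

// A GraphQL request is an HTTP POST with a { query } JSON body.
// This introspection query asks the server to describe itself.
fetch('http://localhost:4000/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ __schema { queryType { name } } }' })
})
  .then(response => response.json())
  .then(result => console.log(result.data));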

GraphQL Resolvers

Though we’ve developed our project with an initial schema and Apollo Server setup, we can’t yet interact with our API. To do so, we’ll introduce resolvers. Resolvers perform exactly the action their name implies; they resolve the data that the API user has requested. We will write these resolvers by first defining them in our schema and then implementing the logic within our JavaScript code. Our API will contain two types of resolvers: queries and mutations.

Let’s first add some data to interact with. In an application, this would typically be data that we’re retrieving and writing to from a database, but for our example let’s use an array of objects. In the index.js file add the following:

let highlights = [
  {
    id: '1',
    content: 'One day I will find the right words, and they will be simple.',
    title: 'Dharma Bums',
    author: 'Jack Kerouac'
  },
  {
    id: '2',
    content: 'In the limits of a situation there is humor, there is grace, and everything else.',
    title: 'Arbitrary Stupid Goal',
    author: 'Tamara Shopsin'
  }
]

Queries

A query requests specific data from an API, in its desired format. The query will then return an object containing the data that the API user has requested. A query never modifies the data; it only accesses it. We’ve already written two queries in our schema. The first returns an array of highlights and the second returns a specific highlight. The next step is to write the resolvers that will return the data.

In the index.js file, we can add a resolvers object, which can contain our queries:

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  }
};

The highlights query returns the full array of highlights data. The highlight query accepts two parameters: parent and args. The parent is the first parameter of any GraphQL resolver in Apollo Server and provides a way of accessing the context of the query. The args parameter allows us to access the user-provided arguments. In this case, users of the API will be supplying an id argument to access a specific highlight.
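As an aside, Apollo Server actually passes four positional arguments to every resolver: parent, args, context, and info. We only need the first two here, but a fuller signature looks like this (a sketch; the comments describe each argument):

// Every Apollo Server resolver receives four positional arguments.
const resolvers = {
  Query: {
    highlight: (parent, args, context, info) => {
      // parent: the result of the enclosing (parent) resolver
      // args: the arguments supplied in the query, e.g. { id: "1" }
      // context: shared per-request state (auth info, data sources, etc.)
      // info: metadata about the query's execution state
      return highlights.find(highlight => highlight.id === args.id);
    }
  }
};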

We can then update our Apollo Server configuration to include the resolvers:

const server = new ApolloServer({ typeDefs, resolvers });

With our query resolvers written and Apollo Server updated, we can now query the API using the GraphQL Playground. To access the GraphQL Playground, visit http://localhost:4000 in your web browser.

A query is formatted like so:

query {
  queryName {
    field
    field
  }
}

With this in mind, we can write a query that requests the ID, content, title, and author for each of our highlights:

query {
  highlights {
    id
    content
    title
    author
  }
}

Let’s say that we had a page in our UI that lists only the titles and authors of our highlighted texts. We wouldn’t need to retrieve the content for each of those highlights. Instead, we could write a query that only requests the data that we need:

query {
  highlights {
    title
    author
  }
}

We’ve also written a resolver to query for an individual highlight by passing an ID parameter with our query. We can do so as follows:

query {
  highlight(id: "1") {
    content
  }
}

Mutations

We use a mutation when we want to modify the data in our API. In our highlight example, we will want to write a mutation to create a new highlight, one to update an existing highlight, and a third to delete a highlight. Similar to a query, a mutation is also expected to return a result in the form of an object, typically the end result of the performed action.

The first step to updating anything in GraphQL is to write the schema. We can include mutations in our schema by adding a Mutation type to our schema.js file:

type Mutation {
  newHighlight (content: String! title: String author: String): Highlight!
  updateHighlight(id: ID! content: String!): Highlight!
  deleteHighlight(id: ID!): Highlight!
}

Our newHighlight mutation will take the required value of content along with optional title and author values and return a Highlight. The updateHighlight mutation will require that a highlight id and content be passed as argument values and will return the updated Highlight. Finally, the deleteHighlight mutation will accept an ID argument, and will return the deleted Highlight.

With the schema updated to include mutations, we can now update the resolvers in our index.js file to perform these actions. Each mutation will update our highlights array of data.

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  },
  Mutation: {
    newHighlight: (parent, args) => {
      const highlight = {
        id: String(highlights.length + 1),
        title: args.title || '',
        author: args.author || '',
        content: args.content
      };
      highlights.push(highlight);
      return highlight;
    },
    updateHighlight: (parent, args) => {
      const index = highlights.findIndex(highlight => highlight.id === args.id);
      const highlight = {
        id: args.id,
        content: args.content,
        author: highlights[index].author,
        title: highlights[index].title
      };
      highlights[index] = highlight;
      return highlight;
    },
    deleteHighlight: (parent, args) => {
      const deletedHighlight = highlights.find(
        highlight => highlight.id === args.id
      );
      highlights = highlights.filter(highlight => highlight.id !== args.id);
      return deletedHighlight;
    }
  }
};

With these mutations written, we can use the GraphQL Playground to practice mutating the data. The structure of a mutation is nearly identical to that of a query, specifying the name of the mutation, passing the argument values, and requesting specific data in return. Let’s start by adding a new highlight:

mutation {
  newHighlight(author: "Adam Scott" title: "JS Everywhere" content: "GraphQL is awesome") {
    id
    author
    title
    content
  }
}

We can then write mutations to update a highlight:

mutation {
  updateHighlight(id: "3" content: "GraphQL is rad") {
    id
    content
  }
}

And to delete a highlight:

mutation {
  deleteHighlight(id: "3") {
    id
  }
}

Wrapping up

Congratulations! You’ve now successfully built a GraphQL API, using Apollo Server, and can run GraphQL queries and mutations against an in-memory data object. We’ve established a solid foundation for exploring the world of GraphQL API development.

From here, there are plenty of potential next steps to level up, such as persisting the highlights to a real database or adding user authentication.


Instant GraphQL Backend with Fine-grained Security Using FaunaDB

GraphQL is becoming popular and developers are constantly looking for frameworks that make it easy to set up a fast, secure and scalable GraphQL API. In this article, we will learn how to create a scalable and fast GraphQL API with authentication and fine-grained data-access control (authorization). As an example, we’ll build an API with register and login functionality. The API will be about users and confidential files so we’ll define advanced authorization rules that specify whether a logged-in user can access certain files. 

By using FaunaDB’s native GraphQL and security layer, we receive all the necessary tools to set up such an API in minutes. FaunaDB has a free tier so you can easily follow along by creating an account at https://dashboard.fauna.com/. Since FaunaDB automatically provides the necessary indexes and translates each GraphQL query to one FaunaDB query, your API is also as fast as it can be (no n+1 problems!).

Setting up the API is simple: drop in a schema and we are ready to start. So let’s get started!  

The use-case: users and confidential files

We need an example use-case that demonstrates how security and GraphQL API features can work together. In this example, there are users and files. Some files can be accessed by all users, and some are only meant to be accessed by managers. The following GraphQL schema will define our model:

type User {
  username: String! @unique
  role: UserRole!
}

enum UserRole {
  MANAGER
  EMPLOYEE
}

type File {
  content: String!
  confidential: Boolean!
}

input CreateUserInput {
  username: String!
  password: String!
  role: UserRole!
}

input LoginUserInput {
  username: String!
  password: String!
}

type Query {
  allFiles: [File!]!
}

type Mutation {
  createUser(input: CreateUserInput): User! @resolver(name: "create_user")
  loginUser(input: LoginUserInput): String! @resolver(name: "login_user")
}

When looking at the schema, you might notice that the createUser and loginUser Mutation fields have been annotated with a special directive named @resolver. This is a directive provided by the FaunaDB GraphQL API, which allows us to define a custom behavior for a given Query or Mutation field. Since we’ll be using FaunaDB’s built-in authentication mechanisms, we will need to define this logic in FaunaDB after we import the schema. 

Importing the schema

First, let’s import the example schema into a new database. Log into the FaunaDB Cloud Console with your credentials. If you don’t have an account yet, you can sign up for free in a few seconds.

Once logged in, click the “New Database” button from the home page:

Choose a name for the new database, and click the “Save” button: 

Next, we will import the GraphQL schema listed above into the database we just created. To do so, create a file named schema.gql containing the schema definition. Then, select the GRAPHQL tab from the left sidebar, click the “Import Schema” button, and select the newly-created file: 

The import process creates all of the necessary database elements, including collections and indexes, for backing up all of the types defined in the schema. It automatically creates everything your GraphQL API needs to run efficiently. 

You now have a fully functional GraphQL API which you can start testing out in the GraphQL playground. But we do not have data yet. More specifically, we would like to create some users to start testing our GraphQL API. However, since users will be part of our authentication, they are special: they have credentials and can be impersonated. Let’s see how we can create some users with secure credentials!

Custom resolvers for authentication

Remember the createUser and loginUser mutation fields that were annotated with the special @resolver directive? createUser is exactly what we need to start creating users. However, the schema did not really define how a user needs to be created; instead, it was tagged with a @resolver tag.

By tagging a specific mutation with a custom resolver such as @resolver(name: "create_user") we are informing FaunaDB that this mutation is not implemented yet but will be implemented by a User-defined function (UDF). Since our GraphQL schema does not know how to express this, the import process will only create a function template which we still have to fill in.

A UDF is a custom FaunaDB function, similar to a stored procedure, that enables users to define a tailor-made operation in Fauna’s Query Language (FQL). This function is then used as the resolver of the annotated field. 
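If you want to poke at a UDF directly (once its body is implemented below), FQL’s Call function invokes it by name, which is exactly what the GraphQL resolver will do behind the scenes. Here is a sketch using the faunadb JavaScript driver; the driver is my assumption, since this article otherwise sticks to the web shell:

const faunadb = require('faunadb');
const q = faunadb.query;

// Placeholder secret: use a server/admin key for your database.
const client = new faunadb.Client({ secret: 'YOUR_FAUNA_SECRET' });

// Call invokes the UDF by name with a single "input" argument,
// mirroring the Lambda(["input"], ...) signature defined below.
client
  .query(q.Call(q.Function('create_user'), {
    username: 'test.user',
    password: 'hunter2',
    role: 'EMPLOYEE'
  }))
  .then(result => console.log(result))
  .catch(err => console.error(err));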

We will need a custom resolver since we will take advantage of the built-in authentication capabilities which can not be expressed in standard GraphQL. FaunaDB allows you to set a password on any database entity. This password can then be used to impersonate this database entity with the Login function which returns a token with certain permissions. The permissions that this token holds depend on the access rules that we will write.

Let’s continue to implement the UDF for the createUser field resolver so that we can create some test users. First, select the Shell tab from the left sidebar:

As explained before, a template UDF has already been created during the import process. When called, this template UDF prints an error message stating that it needs to be updated with a proper implementation. In order to update it with the intended behavior, we are going to use FQL’s Update function.

So, let’s copy the following FQL query into the web-based shell, and click the “Run Query” button:

Update(Function("create_user"), {   "body": Query(     Lambda(["input"],       Create(Collection("User"), {         data: {           username: Select("username", Var("input")),           role: Select("role", Var("input")),         },         credentials: {           password: Select("password", Var("input"))         }       })       )   ) });

Your screen should look similar to:

The create_user UDF will be in charge of properly creating a User document along with a password value. The password is stored in the document within a special object named credentials that is encrypted and cannot be retrieved back by any FQL function. As a result, the password is securely saved in the database making it impossible to read from either the FQL or the GraphQL APIs. The password will be used later for authenticating a User through a dedicated FQL function named Login, as explained next.

Now, let’s add the proper implementation for the UDF backing up the loginUser field resolver through the following FQL query:

Update(Function("login_user"), {   "body": Query(     Lambda(["input"],       Select(         "secret",         Login(           Match(Index("unique_User_username"), Select("username", Var("input"))),            { password: Select("password", Var("input")) }         )       )     )   ) });

Copy the query listed above and paste it into the shell’s command panel, and click the “Run Query” button:

The login_user UDF will attempt to authenticate a User with the given username and password credentials. As mentioned before, it does so via the Login function. The Login function verifies that the given password matches the one stored along with the User document being authenticated. Note that the password stored in the database is not output at any point during the login process. Finally, in case the credentials are valid, the login_user UDF returns an authorization token called a secret which can be used in subsequent requests for validating the User’s identity.
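The secret is not limited to GraphQL requests, either. As a sketch (again assuming the faunadb JavaScript driver), you can instantiate a client with the returned secret, and every subsequent query then runs with that user’s permissions, which will matter once we define the access rules below:

const faunadb = require('faunadb');
const q = faunadb.query;

const adminClient = new faunadb.Client({ secret: 'YOUR_ADMIN_SECRET' });

adminClient
  // Log in through the UDF; it returns the user's secret.
  .query(q.Call(q.Function('login_user'), {
    username: 'peter.gibbons',
    password: 'abcdef'
  }))
  .then(secret => {
    // A client built from that secret is scoped by whichever
    // roles include this User document in their membership.
    const userClient = new faunadb.Client({ secret });
    return userClient.query(q.Paginate(q.Match(q.Index('allFiles'))));
  })
  .then(files => console.log(files))
  .catch(err => console.error(err));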

With the resolvers in place, we will continue with creating some sample data. This will let us try out our use case and help us better understand how the access rules are defined later on.

Creating sample data

First, we are going to create a manager user. Select the GraphQL tab from the left sidebar, copy the following mutation into the GraphQL Playground, and click the “Play” button:

mutation CreateManagerUser {
  createUser(input: {
    username: "bill.lumbergh"
    password: "123456"
    role: MANAGER
  }) {
    username
    role
  }
}

Your screen should look like this:

Next, let’s create an employee user by running the following mutation through the GraphQL Playground editor:

mutation CreateEmployeeUser {
  createUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
    role: EMPLOYEE
  }) {
    username
    role
  }
}

You should see the following response:

Now, let’s create a confidential file by running the following mutation:

mutation CreateConfidentialFile {
  createFile(data: {
    content: "This is a confidential file!"
    confidential: true
  }) {
    content
    confidential
  }
}

As a response, you should get the following:

And lastly, create a public file with the following mutation:

mutation CreatePublicFile {
  createFile(data: {
    content: "This is a public file!"
    confidential: false
  }) {
    content
    confidential
  }
}

If successful, it should prompt the following response:

Now that all the sample data is in place, we need access rules since this article is about securing a GraphQL API. The access rules determine how the sample data we just created can be accessed, since by default a user can only access his own user entity. In this case, we are going to implement the following access rules: 

  1. Allow employee users to read public files only.
  2. Allow manager users to read both public files and, only during weekdays, confidential files.

As you might have already noticed, these access rules are highly specific. We will see however that the ABAC system is powerful enough to express very complex rules without getting in the way of the design of your GraphQL API.

Such access rules are not part of the GraphQL specification so we will define the access rules in the Fauna Query Language (FQL), and then verify that they are working as expected by executing some queries from the GraphQL API. 

But what is this “ABAC” system that we just mentioned? What does it stand for, and what can it do?

What is ABAC?

ABAC stands for Attribute-Based Access Control. As its name indicates, it’s an authorization model that establishes access policies based on attributes. In simple words, it means that you can write security rules that involve any of the attributes of your data. If our data contains users we could use the role, department, and clearance level to grant or refuse access to specific data. Or we could use the current time, day of the week, or location of the user to decide whether he can access a specific resource. 

In essence, ABAC allows the definition of fine-grained access control policies based on environmental properties and your data. Now that we know what it can do, let’s define some access rules to give you concrete examples.

Defining the access rules

In FaunaDB, access rules are defined in the form of roles. A role consists of the following data:

  • name —  the name that identifies the role
  • privileges — specific actions that can be executed on specific resources 
  • membership — specific identities that should have the specified privileges

Roles are created through the CreateRole FQL function, as shown in the following example snippet:

CreateRole({
  name: "role_name",
  membership: [
    // ...
  ],
  privileges: [
    // ...
  ]
})

You can see two important concepts in this role: membership and privileges. Membership defines who receives the privileges of the role, and privileges defines what those permissions are. Let’s write a simple example rule to start with: “Any user can read all files.”

Since the rule applies on all users, we would define the membership like this: 

membership: {
  resource: Collection("User")
}

Simple, right? We then continue to define the “Can read all files” privilege for all of these users.

privileges: [
  {
    resource: Collection("File"),
    actions: { read: true }
  }
]

The direct effect of this is that any token that you receive by logging in with a user via our loginUser GraphQL mutation can now access all files. 

This is the simplest rule that we can write, but in our example we want to limit access to some confidential files. To do that, we can replace the {read: true} syntax with a function. Since we have defined that the resource of the privilege is the “File” collection, this function will take each file that would be accessed by a query as the first parameter. You can then write rules such as: “A user can only access a file if it is not confidential”. In FaunaDB’s FQL, such a function is written by using Query(Lambda('x', … <logic that uses Var('x')>)).

Below is the privilege that would only provide read access to non-confidential files: 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      // Read and establish rule based on action attribute
      read: Query(
        // Read and establish rule based on resource attribute
        Lambda("fileRef",
          Not(Select(["data", "confidential"], Get(Var("fileRef"))))
        )
      )
    }
  }
]

This directly uses properties of the “File” resource we are trying to access. Since it’s just a function, we could also take into account environmental properties like the current time. For example, let’s write a rule that only allows access on weekdays. 

privileges: [
  {
    resource: Collection("File"),
    actions: {
      read: Query(
        Lambda("fileRef",
          Let(
            {
              dayOfWeek: DayOfWeek(Now())
            },
            And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))
          )
        )
      )
    }
  }
]

As mentioned in our rules, confidential files should only be accessible by managers. Managers are also users, so we need a rule that applies to a specific segment of our collection of users. Luckily, we can also define the membership as a function; for example, the following Lambda only considers users who have the MANAGER role to be part of the role membership. 

membership: {
  resource: Collection("User"),
  predicate: Query(
    // Read and establish rule based on user attribute
    Lambda("userRef",
      Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
    )
  )
}

In sum, FaunaDB roles are very flexible entities that allow defining access rules based on all of the system elements’ attributes, with different levels of granularity. The place where the rules are defined — privileges or membership — determines their granularity and the attributes that are available, and will differ with each particular use case.

Now that we have covered the basics of how roles work, let’s continue by creating the access rules for our example use case!

In order to keep things neat and tidy, we’re going to create two roles: one for each of the access rules. This will allow us to extend the roles with further rules in an organized way if required later. Nonetheless, be aware that all of the rules could also have been defined together within just one role if needed.

Let’s implement the first rule: 

“Allow employee users to read public files only.”

In order to create a role meeting these conditions, we are going to use the following query:

CreateRole({
  name: "employee_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has EMPLOYEE role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "EMPLOYEE")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve the
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
              },
              // Resource attribute based rule:
              // It grants access to public files only.
              Not(Select(["data", "confidential"], Var("file")))
            )
          )
        )
      }
    }
  ]
})

Select the Shell tab from the left sidebar, copy the above query into the command panel, and click the “Run Query” button:

Next, let’s implement the second access rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

In this case, we are going to use the following query:

CreateRole({
  name: "manager_role",
  membership: {
    resource: Collection("User"),
    predicate: Query(
      Lambda("userRef",
        // User attribute based rule:
        // It grants access only if the User has MANAGER role.
        // If so, further rules specified in the privileges
        // section are applied next.
        Equals(Select(["data", "role"], Get(Var("userRef"))), "MANAGER")
      )
    )
  },
  privileges: [
    {
      // Note: 'allFiles' Index is used to retrieve
      // documents from the File collection. Therefore,
      // read access to the Index is required here as well.
      resource: Index("allFiles"),
      actions: { read: true }
    },
    {
      resource: Collection("File"),
      actions: {
        // Action attribute based rule:
        // It grants read access to the File collection.
        read: Query(
          Lambda("fileRef",
            Let(
              {
                file: Get(Var("fileRef")),
                dayOfWeek: DayOfWeek(Now())
              },
              Or(
                // Resource attribute based rule:
                // It grants access to public files.
                Not(Select(["data", "confidential"], Var("file"))),
                // Resource and environmental attribute based rule:
                // It grants access to confidential files only on weekdays.
                And(
                  Select(["data", "confidential"], Var("file")),
                  And(GTE(Var("dayOfWeek"), 1), LTE(Var("dayOfWeek"), 5))
                )
              )
            )
          )
        )
      }
    }
  ]
})

Copy the query into the command panel, and click the “Run Query” button:

At this point, we have created all of the necessary elements for implementing and trying out our example use case! Let’s continue with verifying that the access rules we just created are working as expected…

Putting everything in action

Let’s start by checking the first rule: 

“Allow employee users to read public files only.”

The first thing we need to do is log in as an employee user so that we can verify which files can be read on its behalf. In order to do so, execute the following mutation from the GraphQL Playground console:

mutation LoginEmployeeUser {
  loginUser(input: {
    username: "peter.gibbons"
    password: "abcdef"
  })
}

As a response, you should get a secret access token. The secret represents that the user has been authenticated successfully:

At this point, it’s important to remember that the access rules we defined earlier are not directly associated with the secret that is generated as a result of the login process. Unlike other authorization models, the secret token itself does not contain any authorization information on its own, but it’s just an authentication representation of a given document.

As explained before, access rules are stored in roles, and roles are associated with documents through their membership configuration. After authentication, the secret token can be used in subsequent requests to prove the caller’s identity and determine which roles are associated with it. This means that access rules are effectively verified in every subsequent request and not only during authentication. This model enables us to modify access rules dynamically without requiring users to authenticate again.
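Because the check happens on every request, any HTTP client can present the secret as a Bearer token; the Playground steps below are just a friendly wrapper around this. Here is a sketch in code, assuming Fauna’s standard GraphQL endpoint URL:

fetch('https://graphql.fauna.com/graphql', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Placeholder: the secret returned by the loginUser mutation.
    'Authorization': 'Bearer USER_SECRET_FROM_LOGIN'
  },
  body: JSON.stringify({
    query: '{ allFiles { data { content confidential } } }'
  })
})
  .then(response => response.json())
  .then(result => console.log(result.data));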

Now, we will use the secret issued in the previous step to validate the identity of the caller in our next query. In order to do so, we need to include the secret as a Bearer Token as part of the request. To achieve this, we have to modify the Authorization header value set by the GraphQL Playground. Since we don’t want to miss the admin secret that is being used as default, we’re going to do this in a new tab.

Click the plus (+) button to create a new tab, and select the HTTP HEADERS panel on the bottom left corner of the GraphQL Playground editor. Then, modify the value of the Authorization header to include the secret obtained earlier, as shown in the following example. Make sure to change the scheme value from Basic to Bearer as well:

{   "authorization": "Bearer fnEDdByZ5JACFANyg5uLcAISAtUY6TKlIIb2JnZhkjU-SWEaino" }

With the secret properly set in the request, let’s try to read all of the files on behalf of the employee user. Run the following query from the GraphQL Playground: 

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

In the response, you should see the public file only:

Since the role we defined for employee users does not allow them to read confidential files, they have been correctly filtered out from the response!

Let’s move on now to verifying our second rule:

“Allow manager users to read both public files and, only during weekdays, confidential files.”

This time, we are going to log in as the manager user. Since the login mutation requires an admin secret token, we have to go back first to the original tab containing the default authorization configuration. Once there, run the following mutation:

mutation LoginManagerUser {
  loginUser(input: {
    username: "bill.lumbergh"
    password: "123456"
  })
}

You should get a new secret as a response:

Copy the secret, create a new tab, and modify the Authorization header to include the secret as a Bearer Token as we did before. Then, run the following query in order to read all of the files on behalf of the manager user:

query ReadFiles {
  allFiles {
    data {
      content
      confidential
    }
  }
}

As long as you’re running this query on a weekday (if not, feel free to update this rule to include weekends), you should be getting both the public and the confidential file in the response:

And, finally, we have verified that all of the access rules are working successfully from the GraphQL API!

Conclusion

In this post, we have learned how a comprehensive authorization model can be implemented on top of the FaunaDB GraphQL API using FaunaDB’s built-in ABAC features. We have also reviewed ABAC’s distinctive capabilities, which allow defining complex access rules based on the attributes of each system component.

While access rules can only be defined through the FQL API at the moment, they are effectively verified for every request executed against the FaunaDB GraphQL API. Providing support for specifying access rules as part of the GraphQL schema definition is already planned for the future.

In short, FaunaDB provides a powerful mechanism for defining complex access rules on top of the GraphQL API covering most common use cases without the need for third-party services.


Apollo GraphQL without JavaScript

It’s cool to see progressive enhancement being done even while using the fanciest of the fancy front-end technologies.

This is a button in a JSX React component that has a click handler applied directly to it that fires a data mutation Ajax request through Apollo GraphQL. That is about the least friendly environment for progressive enhancement I can imagine.

Hugo Giraudel writes that they do server-side rendering already, so the next tricky part is the click handler. Without JavaScript, the only mechanism we have for posting data is a <form>, so that’s what they do. It submits to the /graphql endpoint with the data it needs to perform the mutation via hidden inputs, plus additional data on where to redirect upon success or failure.

Pretty neat.

Direct Link to ArticlePermalink


Practice GraphQL Queries With the State of JavaScript API

Learning how to build GraphQL APIs can be quite challenging. But you can learn how to use GraphQL APIs in 10 minutes! And it so happens I’ve got the perfect API for that: the brand new, fresh-off-the-VS-Code State of JavaScript GraphQL API.

The State of JavaScript survey is an annual survey of the JavaScript landscape. We’ve been doing it for four years now, and the most recent edition reached over 20,000 developers.

We’ve always relied on Gatsby to build our showcase site, but until this year, we were feeding our data to Gatsby in the form of static YAML files generated through some kind of arcane magic known to mere mortals as “ElasticSearch.”

But since Gatsby poops out all the data sources it eats as GraphQL anyway, we thought we might as well skip the middleman and feed it GraphQL directly! Yes I know, this metaphor is getting grosser by the second and I already regret it. My point is: we built an internal GraphQL API for our data, and now we’re making it available to everybody so that you too can easily exploit our dataset!

“But wait,” you say. “I’ve spent all my life studying the blade which has left me no time to learn GraphQL!” Not to worry: that’s where this article comes in.

What is GraphQL?

At its core, GraphQL is a syntax that lets you specify what data you want to receive from an API. Note that I said API, and not database. Unlike SQL, a GraphQL query does not go to your database directly but to your GraphQL API endpoint which, in turn, can connect to a database or any other data source.

The big advantage of GraphQL over older paradigms like REST is that it lets you ask for what you want. For example:

query {
  user(id: "foo123") {
    name
  }
}

Would get you a user object with a single name field. Also need the email? Just do:

query {
  user(id: "foo123") {
    name
    email
  }
}

As you can see, the user field in this example supports an id argument. And now we come to the coolest feature of GraphQL, nesting:

query {
  user(id: "foo123") {
    name
    email
    posts {
      title
      body
    }
  }
}

Here, we’re saying that we want to find the user’s posts, and load their title and body. The nice thing about GraphQL is that our API layer can do the work of figuring out how to fetch that extra information in that specific format since we’re not talking to the database directly, even if it’s not stored in a nested format inside our actual database.
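To make that concrete, here is a hypothetical sketch of how a server might resolve that nested posts field. The data and helper names are invented for illustration, but the shape is typical of Apollo-style resolver maps:

// Hypothetical in-memory data, standing in for a real database.
const users = [{ id: 'foo123', name: 'Ada', email: 'ada@example.com' }];
const posts = [{ author: 'foo123', title: 'Hello', body: 'World' }];

const resolvers = {
  Query: {
    // Resolves the top-level user(id: ...) field.
    user: (parent, args) => users.find(user => user.id === args.id)
  },
  User: {
    // Runs only when a query actually asks for user { posts { ... } },
    // so posts could just as well live in another table or service.
    posts: (user) => posts.filter(post => post.author === user.id)
  }
};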

Sebastian Scholl does a wonderful job explaining GraphQL as if you were meeting it for the first time at a cocktail mixer.

Introducing GraphiQL

Building our first query with GraphiQL, the IDE for GraphQL

GraphiQL (note the “i” in there) is the most common GraphQL IDE out there, and it’s the tool we’ll use to explore the State of JavaScript API. You can launch it right now at graphiql.stateofjs.com and it’ll automatically connect to our endpoint (which is api.stateofjs.com/graphql). The UI consists of three main elements: the Explorer panel, the Query Builder and the Results panels. We’ll later add the Docs panels to that but let’s keep it simple for now.

The Explorer tab is part of a turbo-boosted version of GraphiQL developed and maintained by OneGraph. Much thanks to them for helping us integrate it. Be sure to check out their example repo if you want to deploy your own GraphiQL instance.

Don’t worry, I’m not going to make you write any code just yet. Instead, let’s start from an existing GraphQL query, such as the one corresponding to developer experience with React over the past four years.

Remember how I said we were using GraphQL internally to build our site? Not only are we exposing the API, we’re also exposing the queries themselves. Click the little “Export” button, copy the query in the “GraphQL” tab, paste it inside GraphiQL’s query builder window, and click the “Play” button.

The GraphQL tab in the modal that triggers when clicking Export.

If everything went according to plan, you should see your data appear in the Results panel. Let’s take a moment to analyze the query.

query react_experienceQuery {
  survey(survey: js) {
    tool(id: react) {
      id
      entity {
        homepage
        name
        github {
          url
        }
      }
      experience {
        allYears {
          year
          total
          completion {
            count
            percentage
          }
          awarenessInterestSatisfaction {
            awareness
            interest
            satisfaction
          }
          buckets {
            id
            count
            percentage
          }
        }
      }
    }
  }
}

First comes the query keyword which defines the start of our GraphQL query, along with the query’s name, react_experienceQuery. Query names are optional in GraphQL, but can be useful for debugging purposes.

We then have our first field, survey, which takes a survey argument. (We also have a State of CSS survey so we needed to specify the survey in question.) We then have a tool field which takes an id argument. And everything after that is related to the API results for that specific tool. entity gives you information on the specific tool selected (e.g. React) while experience contains the actual statistical data.

Now, rather than keep going through all those fields here, I’m going to teach you a little trick: Command + click (or Control + click) any of those fields inside GraphiQL, and it will bring up the Docs panel. Congrats, you’ve just witnessed another one of GraphQL’s nifty tricks, self-documentation! You can write documentation directly into your API definition and GraphiQL will in turn make it available to end users.

Changing variables

Let’s tweak things a bit: in the Query Builder, replace “react” with “vuejs” and you should notice another cool GraphQL thing: auto-completion. This is quite helpful to avoid making mistakes or to save time! Press “Play” again and you’ll get the same data, but for Vue this time.

Using the Explorer

We’ll now unlock one more GraphQL power tool: the Explorer. The Explorer is basically a tree of your entire API that not only lets you visualize its structure, but also build queries without writing a single line of code! So, let’s try recreating our React query using the explorer this time.

First, let’s open a new browser tab and load graphiql.stateofjs.com in it to start fresh. Click the survey node in the Explorer, and under it the tool node, click “Play.” The tool’s id field should be automatically added to the results and — by the way — this is a good time to change the default argument value (“typescript”) to “react.”

Next, let’s keep drilling down. If you add entity without any subfields, you should see a little squiggly red line underneath it letting you know you need to also specify at least one or more subfields. So, let’s add id, name and homepage at a minimum. Another useful trick: you can automatically tell GraphiQL to add all of a field’s subfields by option+control-clicking it in the Explorer.

Next up is experience. Keep adding fields and subfields until you get something that approaches the query you initially copied from the State of JavaScript site. Any item you select is instantly reflected inside the Query Builder panel. There you go, you just wrote your first GraphQL query!

Filtering data

You might have noticed a purple filters item under experience. This is actually the key reason why you’d want to use our GraphQL API as opposed to simply browsing our site: any aggregation provided by the API can be filtered by a number of factors, such as the respondent’s gender, company size, salary, or country.

Expand filters and select companySize and then eq and range_more_than_1000. You’ve just calculated React’s popularity among large companies! Select range_1 instead and you can now compare it with the same datapoint among freelancers and independent developers.

It’s important to note that GraphQL only defines very low-level primitives, such as fields and arguments, so these eq, in, nin, etc., filters are not part of GraphQL itself, but simply arguments we’ve defined ourselves when setting up the API. This can be a lot of work at first, but it does give you total control over how clients can query your API.
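For illustration, here is a hypothetical sketch of how filter arguments like these could be declared in a schema. All of the names are invented and this is not the actual State of JS implementation:

const { gql } = require('apollo-server');

// Hypothetical sketch only. eq/in/nin are plain input fields that
// the server's resolvers interpret; GraphQL itself knows nothing
// about filtering.
const typeDefs = gql`
  input StringFilter {
    eq: String
    in: [String]
    nin: [String]
  }

  input Filters {
    companySize: StringFilter
    country: StringFilter
    gender: StringFilter
  }

  type Experience {
    total: Int
  }

  type Query {
    experience(toolId: String!, filters: Filters): Experience
  }
`;

module.exports = typeDefs;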

Conclusion

Hopefully you’ve seen that querying a GraphQL API isn’t that big a deal, especially with awesome tools like GraphiQL to help you do it. Now sure, actually integrating GraphQL data into a real-world app is another matter, but this is mostly due to the complexity of handling data transfers between client and server. The GraphQL part itself is actually quite easy!

Whether you’re hoping to get started with GraphQL or just learn enough to query our data and come up with some amazing new insights, I hope this guide will have proven useful!

And if you’re interested in taking part in our next survey (which should be the State of CSS 2020) then be sure to sign up for our mailing list so you can be notified when we launch it.

State of JavaScript API Reference

You can find more info about the API (including links to the actual endpoint and the GitHub repo) at api.stateofjs.com.
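And if you’d rather hit the API from code than from GraphiQL, here is a minimal sketch, assuming a recent Node with a global fetch; the query mirrors a slimmed-down react_experienceQuery from earlier:

// POST a GraphQL query to the public State of JS endpoint.
fetch('https://api.stateofjs.com/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: `
      query {
        survey(survey: js) {
          tool(id: react) {
            id
            experience {
              allYears {
                year
                total
              }
            }
          }
        }
      }
    `
  })
})
  .then(response => response.json())
  .then(result => console.log(JSON.stringify(result.data, null, 2)));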

Here’s a quick glossary of the terms used inside the State of JS API.

Top-Level Fields

  • Demographics: Regroups all demographics info such as gender, company size, salary, etc.
  • Entity: Gives access to more info about a specific “entity” (library, framework, programming language, etc.).
  • Feature: Usage data for a specific JavaScript or CSS feature.
  • Features: Same, but across a range of features.
  • Matrices: Gives access to the data used to populate our cross-referential heatmaps.
  • Opinion: Opinion data for a specific question (e.g. “Do you think JavaScript is moving in the right direction?”).
  • OtherTools: Data for the “other tools” section (text editors, browsers, bundlers, etc.).
  • Resources: Data for the “resources” section (sites, blogs, podcasts, etc.).
  • Tool: Experience data for a specific tool (library, framework, etc.).
  • Tools: Same, but across a range of tools.
  • ToolsRankings: Rankings (awareness, interest, satisfaction) across a range of tools.

Common Fields

  • Completion: Which proportion of survey respondents answered any given question.
  • Buckets: The array containing the actual data.
  • Year/allYears: Whether to get the data for a specific survey year; or an array containing all years.
