
Create a Tag Cloud with some Simple CSS and even Simpler JavaScript

I’ve always liked tag clouds. I like the UX of gauging which tags are most popular on a website from their relative font sizes, popular tags being bigger. They seem to have fallen out of fashion, though you do often see versions of them used in illustrations in tools like Wordle.

How difficult is it to make a tag cloud? Not very difficult at all. Let’s see!

Let’s start with the markup

For our HTML, we’re going to put each of our tags into a list, <ul class="tags"></ul>. We’ll be injecting into that with JavaScript.

If your tag cloud is already in HTML, and you are just looking to do the relative font-size thing, that’s good! Progressive enhancement! You should be able to adapt the JavaScript later on so it does just that part, rather than building and injecting the tags themselves. There’s a rough sketch of that idea below.
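For what it’s worth, here’s a minimal sketch of that enhancement-only approach. It assumes your server-rendered list items carry their article counts in a hypothetical data-count attribute — that attribute name and the 1em floor are my assumptions, not part of the markup we build in this article:

// Enhance an existing server-rendered tag list, e.g.
// <li class="tag"><a href="/tags/animation" data-count="9">animation</a></li>
const tagLinks = document.querySelectorAll(".tags a[data-count]");
const counts = [...tagLinks].map((link) => Number(link.dataset.count));
const highestValue = Math.max(...counts);
const maxFontSizeForTag = 6;

tagLinks.forEach((link) => {
  const scaled = (Number(link.dataset.count) / highestValue) * maxFontSizeForTag;
  // Clamp to 1em so the smallest tags stay readable
  link.style.fontSize = `${Math.max(1, scaled).toFixed(2)}em`;
});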

I have mocked out some JSON with a certain number of articles tagged with each property. Let’s write some JavaScript to go grab that JSON feed and do three things.

First, we’ll create an <li> from each entry for our list. Imagine the HTML, so far, is like this:

<ul class="tags">   <li>align-content</li>   <li>align-items</li>   <li>align-self</li>   <li>animation</li>   <li>...</li>   <li>z-index</li> </ul>

Second, we’ll put the number of articles each property has in parentheses inside each list item. So now, the markup is like this:

<ul class="tags">   <li>align-content (2)</li>   <li>align-items (2)</li>   <li>align-self (2)</li>   <li>animation (9)</li>   <li>...</li>   <li>z-index (4)</li> </ul>

Third, and last, we’ll create a link around each tag that goes to the correct place. This is where we can set the font-size property for each item depending on how many articles that property is tagged with, so animation, which has 9 articles, will be much bigger than background-color, which only has one article.

<li class="tag">   <a     class="tag__link"     href="https://example.com/tags/animation"     style="font-size: 5em">     animation (9)   </a> </li>

The JavaScript part

Let’s have a look at the JavaScript to do this.

const dataURL =
  "https://gist.githubusercontent.com/markconroy/536228ed416a551de8852b74615e55dd/raw/9b96c9049b10e7e18ee922b4caf9167acb4efdd6/tags.json";
const tags = document.querySelector(".tags");
const fragment = document.createDocumentFragment();
const maxFontSizeForTag = 6;

fetch(dataURL)
  .then(function (res) {
    return res.json();
  })
  .then(function (data) {
    // 1. Create a new array from data
    let orderedData = data.map((x) => x);
    // 2. Order it by number of articles each tag has
    orderedData.sort(function (a, b) {
      return a.tagged_articles.length - b.tagged_articles.length;
    });
    orderedData = orderedData.reverse();
    // 3. Get a value for the tag with the most articles
    const highestValue = orderedData[0].tagged_articles.length;
    // 4. Create a list item for each result from data.
    data.forEach((result) => handleResult(result, highestValue));
    // 5. Append the full list of tags to the tags element
    tags.appendChild(fragment);
  });

The JavaScript above uses the Fetch API to fetch the URL where tags.json is hosted. Once it gets this data, it returns it as JSON. Here we copy the data into a new array called orderedData (so we don’t mutate the original array), sort it, and find the tag with the most articles. We’ll use this value later on in a font scale so all other tags will have a font-size relative to it. Then, for each result in the response, we call a function I have named handleResult() and pass the result and the highestValue to this function as parameters. The script also creates:

  • a variable called tags which is what we will use to inject each list item that we create from the results,
  • a variable for a fragment to hold the result of each iteration of the loop, which we will later append to the tags, and
  • a variable for the max font size, which we’ll use in our font scale later.

Next up, the handleResult(result, highestValue) function:

function handleResult(result, highestValue) {
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${result.href}" style="font-size: ${result.tagged_articles.length * 1.25}em">${result.title} (${result.tagged_articles.length})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

This is a pretty simple function that creates a list element stored in a variable named tag and then adds a .tag class to it. Once that’s created, it sets the innerHTML of the list item to be a link and populates the values of that link with values from the JSON feed, such as result.href for the link to the tag. When each li is created, it’s appended to the fragment, which we will later append to the tags element. The most important item here is the inline style attribute that uses the number of articles, result.tagged_articles.length, to set a relative font size using em units for this list item. Later, we’ll change that value to a formula that uses a basic font scale.

I find this JavaScript just a little bit ugly and hard on the eyes, so let’s create some variables and a simple font scale formula for each of our properties to tidy it up and make it easier to read.

function handleResult(result, highestValue) {
  // Set our variables
  const name = result.title;
  const link = result.href;
  const numberOfArticles = result.tagged_articles.length;
  let fontSize = numberOfArticles / highestValue * maxFontSizeForTag;
  fontSize = +fontSize.toFixed(2);
  const fontSizeProperty = `${fontSize}em`;

  // Create a list element for each tag and inline the font size
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

By setting some variables before we get into creating our HTML, the code is a lot easier to read. And it also makes our code a little bit more DRY, as we can use the numberOfArticles variable in more than one place.

Once each of the tags has been returned in this .forEach loop, they are collected together in the fragment. After that, we use appendChild() to add them to the tags element. This means the DOM is manipulated only once, instead of being manipulated each time the loop runs, which is a nice performance boost if we happen to have a large number of tags.

Font scaling

What we have now will work fine for us, and we could start writing our CSS. However, our formula for the fontSize variable means that the tag with the most articles (which is “flex” with 25) will be 6em (25 / 25 * 6 = 6), but the tags with only one article are going to be 1/25th the size of that (1 / 25 * 6 = 0.24), making the content unreadable. If we had a tag with 100 articles, the smaller tags would fare even worse (1 / 100 * 6 = 0.06).

To get around this, I have added a simple if statement: if the fontSize that is returned is less than 1, set it to 1; otherwise, keep it at its computed size. Now, all the tags will be within a font scale of 1em to 6em, rounded off to two decimal places. To increase the size of the largest tag, just change the value of maxFontSizeForTag. You can decide what works best for you based on the amount of content you are dealing with.

function handleResult(result, highestValue) {
  // Set our variables
  const numberOfArticles = result.tagged_articles.length;
  const name = result.title;
  const link = result.href;
  let fontSize = numberOfArticles / highestValue * maxFontSizeForTag;
  fontSize = +fontSize.toFixed(2);

  // Make sure our font size will be at least 1em
  if (fontSize < 1) {
    fontSize = 1;
  }
  const fontSizeProperty = `${fontSize}em`;

  // Then, create a list element for each tag and inline the font size.
  const tag = document.createElement("li");
  tag.classList.add("tag");
  tag.innerHTML = `<a class="tag__link" href="${link}" style="font-size: ${fontSizeProperty}">${name} (${numberOfArticles})</a>`;

  // Append each tag to the fragment
  fragment.appendChild(tag);
}

Now the CSS!

We’re using flexbox for our layout since each of the tags can be of varying width. We then center-align them with justify-content: center, and remove the list bullets.

.tags {
  display: flex;
  flex-wrap: wrap;
  justify-content: center;
  max-width: 960px;
  margin: auto;
  padding: 2rem 0 1rem;
  list-style: none;
  border: 2px solid white;
  border-radius: 5px;
}

We’ll also use flexbox for the individual tags. This allows us to vertically align them with align-items: center since they will have varying heights based on their font sizes.

.tag {
  display: flex;
  align-items: center;
  margin: 0.25rem 1rem;
}

Each link in the tag cloud has a small bit of padding, just to allow it to be clickable slightly outside of its strict dimensions.

.tag__link {
  padding: 5px 5px 0;
  transition: 0.3s;
  text-decoration: none;
}

I find this is handy on small screens especially for people who might find it harder to tap on links. The initial text-decoration is removed as I think we can assume each item of text in the tag cloud is a link and so a special decoration is not needed for them.

I’ll just drop in some colors to style things up a bit more:

.tag:nth-of-type(4n+1) .tag__link {
  color: #ffd560;
}
.tag:nth-of-type(4n+2) .tag__link {
  color: #ee4266;
}
.tag:nth-of-type(4n+3) .tag__link {
  color: #9e88f7;
}
.tag:nth-of-type(4n+4) .tag__link {
  color: #54d0ff;
}

The color scheme for this was stolen directly from Chris’ blogroll, where every fourth tag starting at tag one is yellow, every fourth tag starting at tag two is red, every fourth tag starting at tag three is purple, and every fourth tag starting at tag four is blue.

Screenshot of the blogroll on Chris Coyier's personal website, showing lots of brightly colored links with the names of blogs included in the blogroll.

We then set the focus and hover states for each link:

.tag:nth-of-type(4n+1) .tag__link:focus,
.tag:nth-of-type(4n+1) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #ffd560;
}
.tag:nth-of-type(4n+2) .tag__link:focus,
.tag:nth-of-type(4n+2) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #ee4266;
}
.tag:nth-of-type(4n+3) .tag__link:focus,
.tag:nth-of-type(4n+3) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #9e88f7;
}
.tag:nth-of-type(4n+4) .tag__link:focus,
.tag:nth-of-type(4n+4) .tag__link:hover {
  box-shadow: inset 0 -1.3em 0 0 #54d0ff;
}

I could probably have created a custom property for the colors at this stage, like --yellow: #ffd560, etc., but decided to go with the longhand approach for IE 11 support. I love the box-shadow hover effect. It’s a very small amount of code to achieve something much more visually appealing than a standard underline or bottom border. Using em units here means we have decent control over how large the shadow is in relation to the text it needs to cover.

OK, let’s top this off by setting every tag link to be black on hover:

.tag:nth-of-type(4n+1) .tag__link:focus,
.tag:nth-of-type(4n+1) .tag__link:hover,
.tag:nth-of-type(4n+2) .tag__link:focus,
.tag:nth-of-type(4n+2) .tag__link:hover,
.tag:nth-of-type(4n+3) .tag__link:focus,
.tag:nth-of-type(4n+3) .tag__link:hover,
.tag:nth-of-type(4n+4) .tag__link:focus,
.tag:nth-of-type(4n+4) .tag__link:hover {
  color: black;
}

And we’re done! Here’s the final result:



Let’s Create a Lightweight Native Event Bus in JavaScript

An event bus is a design pattern (and while we’ll be talking about JavaScript here, it’s a design pattern in any language) that can be used to simplify communications between different components. It can also be thought of as publish/subscribe or pubsub.

The idea is that components can listen to the event bus to know when to do the things they do. For example, a “tab panel” component might listen for events telling it to change the active tab. Sure, that might happen from a click on one of the tabs, and thus handled entirely within that component. But with an event bus, some other elements could tell the tab to change. Imagine a form submission which causes an error that the user needs to be alerted to within a specific tab, so the form sends a message to the event bus telling the tabs component to change the active tab to the one with the error. That’s what it looks like aboard an event bus.

Pseudo-code for that situation would be like…

// Tab Component
Tabs.changeTab = id => {
  // DOM work to change the active tab.
}
MyEventBus.subscribe("change-tab", Tabs.changeTab(id));

// Some other component...
// something happens, then:
MyEventBus.publish("change-tab", 2);

Do you need a JavaScript library to do this? (Trick question: you never need a JavaScript library.) Well, there are lots of options out there:

Also, check out Mitt, which is a library that’s only 200 bytes gzipped. There is something about this simple pattern that inspires people to tackle it themselves in the most succinct way possible.

Let’s do that ourselves! We’ll use no third-party library at all and leverage an event listening system that is already built into JavaScript with the addEventListener we all know and love.

First, a little context

The addEventListener API in JavaScript is a member function of the EventTarget class. The reason we can bind a click event to a button is because the prototype interface of <button> (HTMLButtonElement) inherits from EventTarget indirectly.

Source: MDN Web Docs

Unlike most other DOM interfaces, EventTarget can be created directly using the new keyword. It is supported in all modern browsers, but only fairly recently. As we can see in the screenshot above, Node inherits from EventTarget, thus all DOM nodes have the addEventListener method.
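As a quick illustration (not code from the original article), a bare EventTarget instance already behaves like a tiny bus in browsers that support the constructor:

const bus = new EventTarget();
bus.addEventListener("ping", (event) => console.log(event.type));
bus.dispatchEvent(new Event("ping")); // logs "ping"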

Here’s the trick

I’m suggesting an extremely lightweight Node type to act as our event-listening bus: an HTML comment (<!-- comment -->).

To a browser rendering engine, HTML comments are just notes in the code that have no functionality other than descriptive text for developers. But since comments are still written in HTML, they end up in the DOM as real nodes and have their own prototype interface—Comment—which inherits Node.

The Comment class can be instantiated with new directly, just like EventTarget can:

const myEventBus = new Comment('my-event-bus');

We could also use the ancient, but widely-supported document.createComment API. It requires a data parameter, which is the content of the comment. It can even be an empty string:

const myEventBus = document.createComment('my-event-bus');

Now we can emit events using dispatchEvent, which accepts an Event Object. To pass user-defined event data, use CustomEvent, where the detail field can be used to contain any data.

myEventBus.dispatchEvent(
  new CustomEvent('event-name', {
    detail: 'event-data'
  })
);

Internet Explorer 9-11 supports CustomEvent, but none of the versions support new CustomEvent. It’s complex to simulate it using document.createEvent, so if IE support is important to you, there’s a way to polyfill it.
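The polyfill itself isn’t shown in the article, but a sketch along the lines of the well-known MDN polyfill looks like this:

(function () {
  // Modern browsers already support `new CustomEvent`
  if (typeof window.CustomEvent === "function") return;

  function CustomEvent(type, params) {
    params = params || { bubbles: false, cancelable: false, detail: null };
    // IE 9-11 support document.createEvent + initCustomEvent,
    // just not the `new CustomEvent` constructor
    var event = document.createEvent("CustomEvent");
    event.initCustomEvent(type, params.bubbles, params.cancelable, params.detail);
    return event;
  }

  CustomEvent.prototype = window.Event.prototype;
  window.CustomEvent = CustomEvent;
})();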

Now we can bind event listeners:

myEventBus.addEventListener('event-name', ({ detail }) => {
  console.log(detail); // => event-data
});

If an event is intended to be triggered only once, we may pass { once: true } for one-time binding. The other addEventListener options don’t really apply here. To remove event listeners, we can use the native removeEventListener.
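For example, a small sketch reusing the myEventBus from above:

// Fires at most once, then unbinds itself automatically
myEventBus.addEventListener("ready", () => console.log("ready!"), { once: true });

// Removal requires the same function reference that was bound
const onEventName = ({ detail }) => console.log(detail);
myEventBus.addEventListener("event-name", onEventName);
myEventBus.removeEventListener("event-name", onEventName);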

Debugging

The number of events bound to a single event bus can be huge. There can also be memory leaks if you forget to remove them. What if we want to know how many events are bound to myEventBus?

myEventBus is a DOM node, so it can be inspected by DevTools in the browser. From there, we can find the events in the Elements → Event Listeners tab. Be sure to uncheck “Ancestors” to hide events bound on document and window.
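In Chromium-based browsers, the DevTools console also exposes a getEventListeners() utility (a console-only helper, not available to page scripts) that works on our comment node:

// Run this in the DevTools console, not in page code
getEventListeners(myEventBus);
// => { "event-name": [{ listener: ƒ, once: false, … }] }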

An example

One drawback is that the syntax of EventTarget is slightly verbose. We can write a simple wrapper for it. Here is a demo in TypeScript:

class EventBus<DetailType = any> {
  private eventTarget: EventTarget;
  constructor(description = '') {
    this.eventTarget = document.appendChild(document.createComment(description));
  }
  on(type: string, listener: (event: CustomEvent<DetailType>) => void) {
    this.eventTarget.addEventListener(type, listener);
  }
  once(type: string, listener: (event: CustomEvent<DetailType>) => void) {
    this.eventTarget.addEventListener(type, listener, { once: true });
  }
  off(type: string, listener: (event: CustomEvent<DetailType>) => void) {
    this.eventTarget.removeEventListener(type, listener);
  }
  emit(type: string, detail?: DetailType) {
    return this.eventTarget.dispatchEvent(new CustomEvent(type, { detail }));
  }
}

// Usage
const myEventBus = new EventBus<string>('my-event-bus');
myEventBus.on('event-name', ({ detail }) => {
  console.log(detail);
});

myEventBus.once('event-name', ({ detail }) => {
  console.log(detail);
});

myEventBus.emit('event-name', 'Hello'); // => Hello, Hello
myEventBus.emit('event-name', 'World'); // => World

The following demo provides the compiled JavaScript.


And there we have it! We just created a dependency-free event-listening bus where one component can inform another component of changes to trigger an action. It doesn’t take a full library to do this sort of stuff, and the possibilities it opens up are pretty endless.



How to Create a Favicon That Changes Automatically

I found this Free Favicon Maker the other day. It’s a nice tool to make a favicon (true to its name), but unlike other favicon generators, this one lets you create one from scratch starting with a character or an emoji. Naturally, I was curious to look at the code to see how it works and, while I did, it got me thinking in different directions. I was reminded of something I read that says it is possible to dynamically change the favicon of a website. That’s actually what some websites use as a sort of notification for users: change the favicon to a red dot or some other indicator that communicates something is happening or has changed on the page.

I started browsing the emojis via emojipedia.org for inspiration and that’s when it struck: why not show the time with a clock emoji (🕛) and other related favicons? The idea is to check the time every minute and set the favicon with the corresponding clock emoji indicating the current time.

We’re going to do exactly that in this article, and it will work with plain JavaScript. Since I often use Gatsby for static sites though, we will also show how to do it in React. From there, the ideas should be usable regardless of what you want to do with your favicon and how.

Here’s a function that takes an emoji as a parameter and returns a valid data URL that can be used as an image (or favicon!) source:

// Thanks to https://formito.com/tools/favicon
const faviconHref = emoji =>
  `data:image/svg+xml,<svg xmlns=%22http://www.w3.org/2000/svg%22 width=%22256%22 height=%22256%22 viewBox=%220 0 100 100%22><text x=%2250%%22 y=%2250%%22 dominant-baseline=%22central%22 text-anchor=%22middle%22 font-size=%2280%22>${emoji}</text></svg>`

And here’s a function that targets the favicon <link> in the <head> and changes it to that emoji:

const changeFavicon = emoji => {
  // Ensure we have access to the document, i.e. we are in the browser.
  if (typeof window === 'undefined') return

  const link =
    window.document.querySelector("link[rel*='icon']") ||
    window.document.createElement("link")
  link.type = "image/svg+xml"
  link.rel = "shortcut icon"
  link.href = faviconHref(emoji)

  window.document.getElementsByTagName("head")[0].appendChild(link)
}

(Shout out to this StackOverflow answer for that nice little trick on creating the link if it doesn’t exist.)

Feel free to give those a try! Open up the DevTools JavaScript console, copy and paste the two functions above, and call changeFavicon("💃"). You can do that right on this website and you’ll see the favicon change to that awesome dancing emoji.

Back to our clock/time project… If we want to show the emoji clock showing the correct time, we need to determine it from the current time. For example, if it’s 10:00 we want to show 🕙. If it’s 4:30 we want to show 🕟. There aren’t emojis for every single time, so we’ll show the closest one we have. For example, from 9:45 to 10:14 we want to show the clock that shows 10:00; from 10:15 to 10:44 we want to show the clock that marks 10:30, etc.

We can do it with this function:

const currentEmoji = () => {
  // Add 15 minutes and round down to closest half hour
  const time = new Date(Date.now() + 15 * 60 * 1000)

  const hours = time.getHours() % 12
  const minutes = time.getMinutes() < 30 ? 0 : 30

  return {
    "0.0": "🕛",
    "0.30": "🕧",
    "1.0": "🕐",
    "1.30": "🕜",
    "2.0": "🕑",
    "2.30": "🕝",
    "3.0": "🕒",
    "3.30": "🕞",
    "4.0": "🕓",
    "4.30": "🕟",
    "5.0": "🕔",
    "5.30": "🕠",
    "6.0": "🕕",
    "6.30": "🕡",
    "7.0": "🕖",
    "7.30": "🕢",
    "8.0": "🕗",
    "8.30": "🕣",
    "9.0": "🕘",
    "9.30": "🕤",
    "10.0": "🕙",
    "10.30": "🕥",
    "11.0": "🕚",
    "11.30": "🕦",
  }[`${hours}.${minutes}`]
}

Now we just have to call changeFavicon(currentEmoji()) every minute or so. If we had to do it with plain JavaScript, a simple setInterval would do the trick:

// One minute
const delay = 60 * 1000

// Change the favicon when the page gets loaded...
const emoji = currentEmoji()
changeFavicon(emoji)

// ... and update it every minute
setInterval(() => {
  const emoji = currentEmoji()
  changeFavicon(emoji)
}, delay)

The React part

Since my blog is powered by Gatsby, I want to be able to use this code inside a React component, changing as little as possible. It is inherently imperative, as opposed to the declarative nature of React, and moreover has to be called every minute. How can we do that?

Enter Dan Abramov and his amazing blog post. Dan is a great writer who can explain complex things in a clear way, and I strongly suggest checking out this article, especially if you want a better understanding of React Hooks. You don’t necessarily need to understand everything in it — one of the strengths of hooks is that they can be used even without fully grasping the internal implementation. The important thing is to know how to use it. Here’s how that looks:

import { useEffect } from "react"
import useInterval from "./useInterval"

const delay = 60 * 1000

const useTimeFavicon = () => {
  // Change the favicon when the component gets mounted...
  useEffect(() => {
    const emoji = currentEmoji()
    changeFavicon(emoji)
  }, [])

  // ... and update it every minute
  useInterval(() => {
    const emoji = currentEmoji()
    changeFavicon(emoji)
  }, delay)
}
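The useInterval hook imported above comes from Dan’s article; for reference, a minimal version of it looks roughly like this:

// useInterval.js: a minimal version of Dan Abramov's declarative interval hook
import { useEffect, useRef } from "react"

function useInterval(callback, delay) {
  const savedCallback = useRef()

  // Remember the latest callback without resetting the interval
  useEffect(() => {
    savedCallback.current = callback
  }, [callback])

  // Set up the interval and clear it when the component unmounts
  useEffect(() => {
    const id = setInterval(() => savedCallback.current(), delay)
    return () => clearInterval(id)
  }, [delay])
}

export default useInterval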

Finally, just call useTimeFavicon() in your root component, and you are good to go! Wanna see it work? Here’s a deployed CodePen Project where you can see it and here’s the project code itself.

Wrapping up

What we did here was cobble together three pieces of code from three different sources to get the result we want. Ancient Romans would say Divide et Impera. (I’m Italian, so please indulge me a little bit of Latin!). That means “divide and conquer.” If you look at the task as a single entity, you may be a little anxious about it: “How can I show a favicon with the current time, always up to date, on my React website?” Getting all the details right is not trivial.

The good news is, you don’t always have to personally tackle all the details at the same moment: it is much more effective to divide the problem into sub-problems, and if any of these have already been solved by others, so much the better!

Sounds like web development, eh? There’s nothing wrong with using code written by others, as long as it’s done wisely. As they say, there is no need to reinvent the wheel and what we got here is a nice enhancement for any website — whether it’s displaying notifications, time updates, or whatever you can think of.



How to create a client-serverless Jamstack app using Netlify, Gatsby and Fauna

The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.

The key aspects of a Jamstack application are the following:

  • The entire app runs on a CDN (or ADN). CDN stands for Content Delivery Network and an ADN is an Application Delivery Network.
  • Everything lives in Git.
  • Automated builds run with a workflow when developers push the code.
  • There’s automatic deployment of the prebuilt markup to the CDN/ADN.
  • Reusable APIs make for hassle-free integrations with many services. To take a few examples: Stripe for payment and checkout, Mailgun for email services, etc. We can also write custom APIs targeted at a specific use case. We will see such examples of custom APIs in this article.
  • It’s practically serverless. To put it more clearly, we do not maintain any servers; rather, we make use of already existing services (like email, media, database, search, and so on) or serverless functions.

In this article, we will learn how to build a Jamstack application that has:

  • A global data store with GraphQL support to store and fetch data with ease. We will use Fauna to accomplish this.
  • Serverless functions that also act as the APIs to fetch data from the Fauna data store. We will use Netlify serverless functions for this.
  • We will build the client side of the app using a static site generator called Gatsby.js.
  • Finally, we will deploy the app on a CDN configured and managed by Netlify.

So, what are we building today?

We all love shopping. How cool would it be to manage all of our shopping notes in a centralized place? So we’ll be building an app called ‘shopnote’ that allows us to manage shop notes. We can also add one or more items to a note, mark them as done, mark them as urgent, etc.

At the end of this article, our shopnote app will look like this,

TL;DR

We will learn things with a step-by-step approach in this article. If you want to jump into the source code or demonstration sooner, here are links to them.

Set up Fauna

Fauna is the data API for client-serverless applications. If you are familiar with any traditional RDBMS, a major difference with Fauna would be that it is a relational NoSQL system that gives all the capabilities of a legacy RDBMS. It is very flexible without compromising scalability and performance.

Fauna supports multiple APIs for data-access,

  • GraphQL: An open source data query and manipulation language. If you are new to the GraphQL, you can find more details from here, https://graphql.org/
  • Fauna Query Language (FQL): An API for querying Fauna. FQL has language specific drivers which makes it flexible to use with languages like JavaScript, Java, Go, etc. Find more details of FQL from here.

In this article we will explain the usage of GraphQL for the ShopNote application.

First things first, sign up using this URL. Please select the free plan, which comes with a generous daily usage quota that is more than enough for our usage.

Next, create a database by providing a database name of your choice. I have used shopnotes as the database name.

After creating the database, we will be defining the GraphQL schema and importing it into the database. A GraphQL schema defines the structure of the data. It defines the data types and the relationship between them. With schema we can also specify what kind of queries are allowed.

At this stage, let us create our project folder. Create a project folder somewhere on your hard drive with the name shopnote. Inside it, create a file named shopnotes.gql with the following content:

type ShopNote {
  name: String!
  description: String
  updatedAt: Time
  items: [Item!] @relation
}

type Item {
  name: String!
  urgent: Boolean
  checked: Boolean
  note: ShopNote!
}

type Query {
  allShopNotes: [ShopNote!]!
}

Here we have defined the schema for a shopnote list and item, where each ShopNote contains a name, description, update time, and a list of Items. Each Item type has properties like name, urgent, checked, and which shopnote it belongs to.

Note the @relation directive here. You can annotate a field with the @relation directive to mark it for participating in a bi-directional relationship with the target type. In this case, ShopNote and Item are in a one-to-many relationship. It means, one ShopNote can have multiple Items, where each Item can be related to a maximum of one ShopNote.

You can read more about the @relation directive here. More on GraphQL relations can be found here.

As a next step, upload the shopnotes.gql file from the Fauna dashboard using the IMPORT SCHEMA button.

Upon importing a GraphQL schema, FaunaDB will automatically create, maintain, and update the following resources:

  • Collections for each non-native GraphQL Type; in this case, ShopNote and Item.
  • Basic CRUD queries/mutations for each collection created by the schema, e.g. createShopNote and allShopNotes, each of which is powered by FQL.
  • For specific GraphQL directives: custom Indexes or FQL for establishing relationships (i.e. @relation), uniqueness (@unique), and more!

Behind the scenes, Fauna will also help to create the documents automatically. We will see that in a while.

Fauna supports a schema-free object relational data model. A database in Fauna may contain a group of collections. A collection may contain one or more documents. Each of the data records are inserted into the document. This forms a hierarchy which can be visualized as:
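Roughly, it is a simple containment hierarchy:

Database
 └─ Collection(s)
     └─ Document(s)
         └─ data (the actual records)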

Here the data record can be arrays, objects, or of any other supported types. With the Fauna data model we can create indexes and enforce constraints. Fauna indexes can combine data from multiple collections and are capable of performing computations.

At this stage, Fauna has already created a couple of collections for us, ShopNote and Item. As we start inserting records, we will see the documents getting created too. We will be able to view and query the records and utilize the power of indexes. In a while, you may see the data model structure appearing in your Fauna dashboard like this.

A point to note here: each of the documents is identified by a unique ref attribute. There is also a ts field which returns the timestamp of the most recent modification to the document. The data record is part of the data field. This understanding is really important when you interact with collections, documents, and records using FQL built-in functions. However, in this article we will interact with them using GraphQL queries with Netlify Functions.
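To make that concrete, a fetched document has a shape along these lines (the ref and ts values here are invented for illustration):

{
  ref: Ref(Collection("ShopNote"), "268431417906561543"), // unique reference to this document
  ts: 1603459157980000, // timestamp of the most recent modification, in microseconds
  data: {
    name: "My Shopping List",
    description: "This is my today's list to buy from Tom's shop"
  }
}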

With all this understanding, let us start using our shopnotes database, which has been created successfully and is ready for use.

Let us try some queries

Even though we have imported the schema and the underlying things are in place, we do not have a document yet. Let us create one. To do that, copy the following GraphQL mutation query into the left panel of the GraphQL playground screen and execute it.

mutation {
  createShopNote(data: {
    name: "My Shopping List"
    description: "This is my today's list to buy from Tom's shop"
    items: {
      create: [
        { name: "Butter - 1 pk", urgent: true }
        { name: "Milk - 2 ltrs", urgent: false }
        { name: "Meat - 1lb", urgent: false }
      ]
    }
  }) {
    _id
    name
    description
    items {
      data {
        name,
        urgent
      }
    }
  }
}

Note, as Fauna already created the GraphQL mutation classes in the background, we can directly use one like createShopNote. Once the mutation is successfully executed, you can see the response of the ShopNote creation on the right side of the editor.

The newly created ShopNote document has all the required details we have passed while creating it. We have seen ShopNote has a one-to-many relation with Item. You can see the shopnote response has the item data nested within it. In this case, one shopnote has three items. This is really powerful. Once the schema and relation are defined, the document will be created automatically keeping that relation in mind.

Now, let us try fetching all the shopnotes. Here is the GraphQL query:

query {
  allShopNotes {
    data {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
}

Let’s try the query in the playground as before:

Now we have a database with a schema and it is fully operational with create and fetch functionality. Similarly, we can create queries for adding, updating, and removing items from a shopnote, and also for updating and deleting a shopnote. These queries will be used at a later point in time when we create the serverless functions.

If you are interested in running other queries in the GraphQL editor, you can find them here.

Create a Server Secret Key

Next, we need to create a secured server key to make sure the access to the database is authenticated and authorized.

Click on the SECURITY option available in the FaunaDB interface to create the key, like so,

On successful creation of the key, you will be able to view the key’s secret. Make sure to copy and save it somewhere safe.

We do not want anyone else to know about this key. It is not even a good idea to commit it to the source code repository. To maintain this secrecy, create an empty file called .env at the root level of your project folder.

Edit the .env file and add the following line to it (paste the generated server key in the place of, <YOUR_FAUNA_KEY_SECRET>).

FAUNA_SERVER_SECRET=<YOUR_FAUNA_KEY_SECRET>

Add a .gitignore file and write the following content to it. This is to make sure we do not commit the .env file to the source code repo accidentally. We are also ignoring node_modules as a best practice.

.env
node_modules

We are done with all we had to do for Fauna’s setup. Let us move to the next phase to create serverless functions and APIs to access data from the Fauna data store. At this stage, the directory structure may look like this,

Set up Netlify Serverless Functions

Netlify is a great platform to create hassle-free serverless functions. These functions can interact with databases, file-system, and in-memory objects.

Netlify functions are powered by AWS Lambda. Setting up AWS Lambdas on our own can be a fairly complex job. With Netlify, we simply set up a folder and drop in our functions. The simple functions we write automatically become APIs.

First, create an account with Netlify. This is free and just like the FaunaDB free tier, Netlify is also very flexible.

Now we need to install a few dependencies using either npm or yarn. Make sure you have Node.js installed. Open a command prompt at the root of the project folder. Use the following command to initialize the project with node dependencies:

npm init -y

Install the netlify-cli utility so that we can run the serverless function locally.

npm install netlify-cli -g

Now we will install two important libraries, axios and dotenv. axios will be used for making the HTTP calls and dotenv will help to load the FAUNA_SERVER_SECRET environment variable from the .env file into process.env.

yarn add axios dotenv

Or:

npm i axios dotenv

Create serverless functions

Create a folder with the name, functions at the root of the project folder. We are going to keep all serverless functions under it.

Now create a subfolder called utils under the functions folder. Create a file called query.js under the utils folder. We will need some common code to query the data store for all the serverless functions. The common code will be in the query.js file.

First we import the axios library functionality and load the .env file. Next, we export an async function that takes the query and variables. Inside the async function, we make calls using axios with the secret key. Finally, we return the response.

// query.js

const axios = require("axios");
require("dotenv").config();

module.exports = async (query, variables) => {
  const result = await axios({
    url: "https://graphql.fauna.com/graphql",
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FAUNA_SERVER_SECRET}`
    },
    data: {
      query,
      variables
    }
  });

  return result.data;
};

Create a file with the name, get-shopnotes.js under the functions folder. We will perform a query to fetch all the shop notes.

// get-shopnotes.js

const query = require("./utils/query");

const GET_SHOPNOTES = `
  query {
    allShopNotes {
      data {
        _id
        name
        description
        updatedAt
        items {
          data {
            _id,
            name,
            checked,
            urgent
          }
        }
      }
    }
  }
`;

exports.handler = async () => {
  const { data, errors } = await query(GET_SHOPNOTES);

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnotes: data.allShopNotes.data })
  };
};

Time to test the serverless function like an API. We need to do a one time setup here. Open a command prompt at the root of the project folder and type:

netlify login

This will open a browser tab and ask you to login and authorize access to your Netlify account. Please click on the Authorize button.

Next, create a file called, netlify.toml at the root of your project folder and add this content to it,

[build]
  functions = "functions"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200

This is to tell Netlify the location of the functions we have written so that it is known at build time.

Netlify automatically provides the APIs for the functions. The URL to access the API is in this form, /.netlify/functions/get-shopnotes which may not be very user-friendly. We have written a redirect to make it like, /api/get-shopnotes.

Ok, we are done. Now in command prompt type,

netlify dev

By default the app will run on localhost:8888, where we can access the serverless functions as an API.

Open a browser tab and try this URL, http://localhost:8888/api/get-shopnotes:

Congratulations!!! You have got your first serverless function up and running.

Let us now write the next serverless function, to create a ShopNote. This is going to be simple. Create a file named create-shopnote.js under the functions folder. We need to write a mutation, passing the required parameters.

//create-shopnote.js

const query = require("./utils/query");

const CREATE_SHOPNOTE = `
  mutation($name: String!, $description: String!, $updatedAt: Time!, $items: ShopNoteItemsRelation!) {
    createShopNote(data: {name: $name, description: $description, updatedAt: $updatedAt, items: $items}) {
      _id
      name
      description
      updatedAt
      items {
        data {
          name,
          checked,
          urgent
        }
      }
    }
  }
`;

exports.handler = async event => {
  const { name, description, updatedAt, items } = JSON.parse(event.body);
  const { data, errors } = await query(
    CREATE_SHOPNOTE, { name, description, updatedAt, items });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ shopnote: data.createShopNote })
  };
};

Please give your attention to the parameter type, ShopNoteItemsRelation. As we created a relation between ShopNote and Item in our schema, we need to maintain that while writing the query as well.

We have destructured the payload to get the required information from it. Once we have those values, we just call the query method to create a ShopNote.

Alright, let’s test it out. You can use Postman or any other tool of your choice to test it like an API. Here is the screenshot from Postman.
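If you prefer testing from code rather than Postman, a quick fetch call against the local dev server might look like this (the payload shape mirrors the mutation variables above; the item values are just sample data):

fetch("http://localhost:8888/api/create-shopnote", {
  method: "POST",
  body: JSON.stringify({
    name: "My Shopping List",
    description: "Things to buy this week",
    updatedAt: new Date().toISOString(), // Fauna's Time type accepts ISO 8601 strings
    items: {
      create: [{ name: "Milk - 2 ltrs", urgent: false, checked: false }]
    }
  })
})
  .then((res) => res.json())
  .then(({ shopnote }) => console.log(shopnote));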

Great, we can create a ShopNote with all the items we want to buy from a shopping mart. What if we want to add an item to an existing ShopNote? Let us create an API for it. With the knowledge we have so far, it is going to be really quick.

Remember, ShopNote and Item are related? So to create an item, we must specify which ShopNote it is going to be part of. Here is our next serverless function to add an item to an existing ShopNote.

//add-item.js

const query = require("./utils/query");

const ADD_ITEM = `
  mutation($name: String!, $urgent: Boolean!, $checked: Boolean!, $note: ItemNoteRelation!) {
    createItem(data: {name: $name, urgent: $urgent, checked: $checked, note: $note}) {
      _id
      name
      urgent
      checked
      note {
        name
      }
    }
  }
`;

exports.handler = async event => {
  const { name, urgent, checked, note } = JSON.parse(event.body);
  const { data, errors } = await query(
    ADD_ITEM, { name, urgent, checked, note });

  if (errors) {
    return {
      statusCode: 500,
      body: JSON.stringify(errors)
    };
  }

  return {
    statusCode: 200,
    body: JSON.stringify({ item: data.createItem })
  };
};

We are passing the item properties: the name, whether it is urgent, the checked value, and the note the item should be part of. Let’s see how this API can be called using Postman.

As you see, we are passing the id of the note while creating an item for it.
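The request body for that call would look something like this. Fauna’s generated relation input types accept a connect field pointing at an existing document’s ID (the ID below is just a placeholder):

{
  "name": "Eggs - 1 dozen",
  "urgent": true,
  "checked": false,
  "note": {
    "connect": "268431417906561543"
  }
}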

We won’t bother writing the rest of the API capabilities in this article, like updating and deleting a shop note, or updating and deleting items. In case you are interested, you can look into those functions in the GitHub repository.

However, after creating the rest of the API, you should have a directory structure like this,

We have successfully created a data store with Fauna, set it up for use, created an API backed by serverless functions, using Netlify Functions, and tested those functions/routes.

Congratulations, you did it. Next, let us build some user interfaces to show the shop notes and add items to them. To do that, we will use Gatsby.js (aka Gatsby), a super cool, React-based static site generator.

The following section requires you to have basic knowledge of ReactJS. If you are new to it, you can learn it from here. If you are familiar with any other user interface library, like Angular or Vue, feel free to skip the next section and build your own UI using the APIs explained so far.

Set up the User Interfaces using Gatsby

We can set up a Gatsby project either using the starter projects or initialize it manually. We will build things from scratch to understand it better.

Install gatsby-cli globally. 

npm install -g gatsby-cli

Install gatsby, react and react-dom

yarn add gatsby react react-dom

Edit the scripts section of the package.json file to add a script for develop.

"scripts": {   "develop": "gatsby develop"  }

Gatsby projects need a special configuration file called, gatsby-config.js. Please create a file named, gatsby-config.js at the root of the project folder with the following content,

module.exports = {
  // keep it empty
}

Let’s create our first page with Gatsby. Create a folder named, src at the root of the project folder. Create a subfolder named pages under src. Create a file named, index.js under src/pages with the following content:

import React, { useEffect, useState } from 'react';

export default () => {
  const [loading, setLoading] = useState(false);
  const [shopnotes, setShopnotes] = useState(null);

  return (
    <>
      <h1>Shopnotes to load here...</h1>
    </>
  )
}

Let’s run it. We generally use the command gatsby develop to run the app locally, but as we have to run the client-side application along with the Netlify functions, we will continue to use the netlify dev command.

netlify dev

That’s all. Try accessing the page at http://localhost:8888. You should see something like this,

A Gatsby project build creates a couple of output folders which you may not want to push to the source code repository. Let us add a few entries to the .gitignore file so that we do not get unwanted noise.

Add .cache, node_modules and public to the .gitignore file. Here is the full content of the file:

.cache
public
node_modules
*.env

At this stage, your project directory structure should match with the following:

Thinking of the UI components

We will create small logical components to achieve the ShopNote user interface. The components are:

  • Header: A header component consisting of the logo, heading and the create button to create a shopnote.
  • Shopnotes: This component will contain the list of shop notes (Note components).
  • Note: This is an individual note. Each of the notes will contain one or more items.
  • Item: Each of the items. It consists of the item name and actions to add, remove, or edit an item.

You can see the sections marked in the picture below:

Install a few more dependencies

We will install a few more dependencies required for the user interfaces to be functional and look better. Open a command prompt at the root of the project folder and install these dependencies,

yarn add bootstrap lodash moment react-bootstrap react-feather shortid

Let’s load all the Shop Notes

We will use React’s useEffect hook to make the API call and update the shopnotes state variable. Here is the code to fetch all the shop notes.

useEffect(() => {
  axios("/api/get-shopnotes").then(result => {
    if (result.status !== 200) {
      console.error("Error loading shopnotes");
      console.error(result);
      return;
    }
    setShopnotes(result.data.shopnotes);
    setLoading(true);
  });
}, [loading]);

Finally, let us change the return section to use the shopnotes data. Here we are checking if the data is loaded. If so, we render the Shopnotes component, passing the data we have received from the API.

return (
  <div className="main">
    <Header />
    {
      loading ? <Shopnotes data={ shopnotes } /> : <h1>Loading...</h1>
    }
  </div>
);

You can find the entire index.js file code here. The index.js file creates the initial route (/) for the user interface. It uses other components, like Shopnotes, Note and Item, to make the UI fully operational. We will not go to great lengths to understand each of these UI components. You can create a folder called components under the src folder and copy the component files from here.

Finally, the index.css file

Now we just need a CSS file to make things look better. Create a file called index.css under the pages folder and copy the content from this CSS file into it. Then import the stylesheets at the top of index.js:

import 'bootstrap/dist/css/bootstrap.min.css';
import './index.css'

That’s all. We are done. You should have the app up and running with all the shop notes created so far. To keep this article from getting too lengthy, we are not getting into an explanation of each of the actions on items and notes. You can find all the code in the GitHub repo. At this stage, the directory structure may look like this,

A small exercise

I have not included the Create Note UI implementation in the GitHub repo. However, we have created the API already. How about you build the front end to add a shopnote? I suggest implementing a button in the header which, when clicked, creates a shopnote using the API we’ve already defined. Give it a try!

Let’s Deploy

All good so far. But there is one issue. We are running the app locally. While productive, it’s not ideal for the public to access. Let’s fix that with a few simple steps.

Make sure to commit all the code changes to the Git repository, say, shopnote. You have an account with Netlify already. Please log in and click on the New site from Git button.

Next, select the relevant Git services where your project source code is pushed. In my case, it is GitHub.

Browse the project and select it.

Provide the configuration details like the build command and publish directory, as shown in the image below. Then click on the button to provide advanced configuration information. In this case, we will pass the FAUNA_SERVER_SECRET key-value pair from the .env file. Please copy and paste it into the respective fields. Click on deploy.

You should see the build successful in a couple of minutes and the site will be live right after that.

In Summary

To summarize:

  • The Jamstack is a modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
  • 70% – 80% of the features that once required a custom back end can now be done either on the front end, or with existing APIs and services we can take advantage of.
  • Fauna provides the data API for the client-serverless applications. We can use GraphQL or Fauna’s FQL to talk to the store.
  • Netlify serverless functions can be easily integrated with Fauna using GraphQL mutations and queries. This approach may be useful when you need custom authentication built with Netlify functions and a flexible solution like Auth0.
  • Gatsby and other static site generators are great contributors to the Jamstack to give a fast end user experience.

Thank you for reading this far! Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.



How to Create a Timeline Task List Component Using SVG

I’m thoroughly convinced that SVG unlocks a whole entire world of building interfaces on the web. It might seem daunting to learn SVG at first, but you have a spec that was designed to create shapes and yet, still has elements, like text, links, and aria labels available to you. You can accomplish some of the same effects in CSS, but it’s a little more particular to get positioning just right, especially across viewports and for responsive development.

What’s special about SVG is that all the positioning is based on a coordinate system, a little like the game Battleship. That means deciding where everything goes and how it’s drawn, as well as how it’s relative to each other, can be really straightforward to reason about. CSS positioning is for layout, which is great because you have things that correspond to one another in terms of the flow of the document. This otherwise positive trait is harder to work with if you’re making a component that’s very particular, with overlapping and precisely placed elements.

Truly, once you learn SVG, you can draw anything, and have it scale on any device. Even this very site uses SVG for custom UI elements, such as my avatar, above (meta!).

That little half circle below the author image is just SVG markup.

We won’t cover everything about SVGs in this post (you can learn some of those fundamentals here, here, here and here), but in order to illustrate the possibilities that SVG opens up for UI component development, let’s talk through one particular use case and break down how we would think about building something custom.

The timeline task list component

Recently, I was working on a project with my team at Netlify. We wanted to show viewers which video in a course’s series of videos they were currently watching. In other words, we wanted to make some sort of thing that’s like a todo list, but shows overall progress as items are completed. (We made a free space-themed learning platform and it’s hella cool. Yes, I said hella.)

Here’s how that looks:

So how would we go about this? I’ll show an example in both Vue and React so that you can see how it might work in both frameworks.

The Vue version

We decided to make the platform in Next.js for dogfooding purposes (i.e. trying out our own Next on Netlify build plugin), but I’m more fluent in Vue so I wrote the initial prototype in Vue and ported it over to React.

Here is the full CodePen demo:

Let’s walk through this code a bit. First off, this is a single file component (SFC), so the template HTML, reactive script, and scoped styles are all encapsulated in this one file.

We’ll store some dummy tasks in data, including whether each task is completed or not. We’ll also make a method we can call on a click directive so that we can toggle whether the state is done or not.

<script>
export default {
  data() {
    return {
      tasks: [
        {
          name: 'thing',
          done: false
        },
        // ...
      ]
    };
  },
  methods: {
    selectThis(index) {
      this.tasks[index].done = !this.tasks[index].done
    }
  }
};
</script>

Now, what we want to do is create an SVG that has a flexible viewBox depending on the amount of elements. We also want to tell screen readers that this a presentational element and that we will provide a title with a unique id of timeline. (Get more information on creating accessible SVGs.)

<template>   <div id="app">     <div>       <svg :viewBox="`0 0 30 $ {tasks.length * 50}`"            xmlns="http://www.w3.org/2000/svg"             width="30"             stroke="currentColor"             fill="white"            aria-labelledby="timeline"            role="presentation">            <title id="timeline">timeline element</title>         <!-- ... -->       </svg>     </div>   </div> </template>

The stroke is set to currentColor to allow for some flexibility — if we want to reuse the component in multiple places, it will inherit whatever color is used on the encapsulating div.

Next, inside the SVG, we want to create a vertical line that’s the length of the task list. Lines are fairly straightforward. We have x1 and x2 values (where the line is plotted on the x-axis), and similarly, y1 and y2.

<line x1="10" x2="10" :y1="num2" :y2="tasks.length * num1 - num2" />

The x-axis stays consistently at 10 because we’re drawing a line downward rather than left-to-right. We’ll store two numbers in data: the amount we want our spacing to be, which will be num1, and the amount we want our margin to be, which will be num2.

data() {
  return {
    num1: 32,
    num2: 15,
    // ...
  }
}

The y-axis starts with num2, the margin, and the line ends at the number of tasks multiplied by the spacing (num1), minus the margin (num2). With four tasks, for example, the line runs from y=15 down to y=113 (4 × 32 − 15).

Now, we’ll need the circles that lie on the line. Each circle is an indicator for whether a task has been completed or not. We’ll need one circle for each task, so we’ll use v-for with a unique key, the task name (safe to use here as the tasks will never reorder). We’ll connect the click directive with our method and pass in the index as a param as well.

Circles in SVG are made up of three attributes. The middle of the circle is plotted at cx and cy, and then we draw a radius with r. Like the line, cx stays consistently at 10. The radius is 4 because that’s what’s readable at this scale. cy is spaced like the line: the index times the spacing (num1), plus the margin (num2).

Finally, we’ll use a ternary to set the fill. If the task is done, it will be filled with currentColor. If not, it will be filled with white (or whatever the background is). This could be filled with a prop that gets passed in the background, for instance, where you have light and dark circles.

<circle
  @click="selectThis(i)"
  v-for="(task, i) in tasks"
  :key="task.name"
  cx="10"
  r="4"
  :cy="i * num1 + num2"
  :fill="task.done ? 'currentColor' : 'white'"
  class="select"/>

Finally, we are using CSS grid to align a div with the names of tasks. This is laid out much in the same way, where we’re looping through the tasks, and are also tied to that same click event to toggle the done state.

<template>
  <div>
    <div
      @click="selectThis(i)"
      v-for="(task, i) in tasks"
      :key="task.name"
      class="select">
      {{ task.name }}
    </div>
  </div>
</template>

The React version

Here is where we ended up with the React version. We’re working towards open sourcing this so that you can see the full code and its history. Here are a few modifications:

  • We’re using CSS modules rather than the SFCs in Vue
  • We’re importing the Next.js link, so that rather than toggling a “done” state, we’re taking a user to a dynamic page in Next.js
  • The tasks we’re using are actually stages of the course, or “Missions” as we call them, which are passed in here rather than held by the component.

Most of the other functionality is the same 🙂

import styles from './MissionTracker.module.css';
import React, { useState } from 'react';
import Link from 'next/link';

function MissionTracker({ currentMission, currentStage, stages }) {
  const [tasks, setTasks] = useState([...stages]);
  const num1 = 32;
  const num2 = 15;

  const updateDoneTasks = (index) => () => {
    let tasksCopy = [...tasks];
    tasksCopy[index].done = !tasksCopy[index].done;
    setTasks(tasksCopy);
  };

  const taskTextStyles = (task) => {
    const baseStyles = `${styles['tracker-select']} ${styles['task-label']}`;

    if (currentStage === task.slug.current) {
      return baseStyles + ` ${styles['is-current-task']}`;
    } else {
      return baseStyles;
    }
  };

  return (
    <div className={styles.container}>
      <section>
        {tasks.map((task, index) => (
          <div
            key={`mt-${task.slug}-${index}`}
            className={taskTextStyles(task)}
          >
            <Link href={`/learn/${currentMission}/${task.slug.current}`}>
              {task.title}
            </Link>
          </div>
        ))}
      </section>

      <section>
        <svg
          viewBox={`0 0 30 ${tasks.length * 50}`}
          className={styles['tracker-svg']}
          xmlns="http://www.w3.org/2000/svg"
          width="30"
          stroke="currentColor"
          fill="white"
          aria-labelledby="timeline"
          role="presentation"
        >
          <title id="timeline">timeline element</title>

          <line x1="10" x2="10" y1={num2} y2={tasks.length * num1 - num2} />
          {tasks.map((task, index) => (
            <circle
              key={`mt-circle-${task.name}-${index}`}
              onClick={updateDoneTasks(index)}
              cx="10"
              r="4"
              cy={index * num1 + num2}
              fill={
                task.slug.current === currentStage ? 'currentColor' : 'black'
              }
              className={styles['tracker-select']}
            />
          ))}
        </svg>
      </section>
    </div>
  );
}

export default MissionTracker;

Final version

You can see the final working version here:

This component is flexible enough to accommodate lists small and large, multiple browsers, and responsive sizing. It also gives the user a better understanding of where they are in their progress through the course.

But this is just one component. You can make any number of UI elements this way: knobs, controls, progress indicators, loaders… the sky’s the limit. You can style them with CSS or with inline styles, and you can have them update based on props, on context, or on reactive data. I hope this opens some doors on how you yourself can develop more engaging UI elements for the web.



How to Create a Commenting Engine with Next.js and Sanity

One of the arguments against the Jamstack approach for building websites is that developing features gets complex and often requires a number of other services. Take commenting, for example. To set up commenting for a Jamstack site, you often need a third-party solution such as Disqus, Facebook, or even just a separate database service. That third-party solution usually means your comments live disconnected from their content.

When we use third-party systems, we have to live with the trade-offs of using someone else’s code. We get a plug-and-play solution, but at what cost? Ads displayed to our users? Unnecessary JavaScript that we can’t optimize? The fact that the comments content is owned by someone else? These are definitely things worth considering.

Monolithic services, like WordPress, have solved this by having everything housed under the same application. What if we could house our comments in the same database and CMS as our content, query it in the same way we query our content, and display it with the same framework on the front end?

It would make this particular Jamstack application feel much more cohesive, both for our developers and our editors.

Let’s make our own commenting engine

In this article, we’ll use Next.js and Sanity.io to create a commenting engine that meets those needs. One unified platform for content, editors, commenters, and developers.

Why Next.js?

Next.js is a meta-framework for React, built by the team at Vercel. It has built-in functionality for serverless functions, static site generation, and server-side rendering.

For our work, we’ll mostly be using its built-in “API routes” for serverless functions and its static site generation capabilities. The API routes will simplify the project considerably, but if you’re deploying to something like Netlify, these can be converted to serverless functions or we can use Netlify’s next-on-netlify package.

It’s this intersection of static, server-rendered, and serverless functions that makes Next.js a great solution for a project like this.

Why Sanity?

Sanity.io is a flexible platform for structured content. At its core, it is a data store that encourages developers to think about content as structured data. It often comes paired with an open-source CMS solution called the Sanity Studio.

We’ll be using Sanity to keep the author’s content together with any user-generated content, like comments. In the end, Sanity is a content platform with a strong API and a configurable CMS that allows for the customization we need to tie these things together.

Setting up Sanity and Next.js

We’re not going to start from scratch on this project. We’ll begin by using the simple blog starter created by Vercel to get working with a Next.js and Sanity integration. Since the Vercel starter repository has the front end and Sanity Studio separate, I’ve created a simplified repository that includes both.

We’ll clone this repository, and use it to create our commenting base. Want to see the final code? This “Starter” will get you set up with the repository, Vercel project, and Sanity project all connected.

The starter repo comes in two parts: the front end powered by Next.js, and Sanity Studio. Before we go any further, we need to get these running locally.

To get started, we need to set up our content and our CMS for Next to consume the data. First, we need to install the dependencies required for running the Studio and connecting to the Sanity API.

# Install the Sanity CLI globally
npm install -g @sanity/cli

# Move into the Studio directory and install the Studio's dependencies
cd studio
npm install

Once these finish installing, from within the /studio directory, we can set up a new project with the CLI.

# If you're not logged into Sanity via the CLI already
sanity login

# Run init to set up a new project (or connect an existing project)
sanity init

The init command asks us a few questions to set everything up. Because the Studio code already has some configuration values, the CLI will ask us if we want to reconfigure it. We do.

From there, it will ask us which project to connect to, or if we want to configure a new project.

We’ll configure a new project with a descriptive project name. It will ask us to name the “dataset” we’re creating. This defaults to “production” which is perfectly fine, but can be overridden with whatever name makes sense for your project.

The CLI will modify the file ~/studio/sanity.json with the project’s ID and dataset name. These values will be important later, so keep this file handy.
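For reference, the relevant portion of ~/studio/sanity.json should end up looking something like this, with the placeholder values swapped for your own:

{
  "api": {
    "projectId": "yourProjectId",
    "dataset": "production"
  }
}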

For now, we’re ready to run the Studio locally.

# From within /studio
npm run start

After the Studio compiles, it can be opened in the browser at http://localhost:3333.

At this point, it makes sense to go into the admin and create some test content. To make the front end work properly, we’ll need at least one blog post and one author, but additional content is always nice for getting a feel for things. Note that the content is synced to the data store in real time, even when you’re working from the Studio on localhost, and it becomes instantly available to query. Don’t forget to hit Publish so that the content is publicly available.

Once we have some content, it’s time to get our Next.js project running.

Getting set up with Next.js

Most things needed for Next.js are already set up in the repository. The main thing we need to do is connect our Sanity project to Next.js. To do this, there’s an example set of environment variables in /blog-frontend/.env.local.example. Remove .example from that file name and then we’ll fill in the environment variables with the proper values.

We need an API token from our Sanity project. To create this value, let’s head over to the Sanity dashboard. In the dashboard, locate the current project and navigate to the Settings → API area. From here, we can create new tokens to use in our project. In many projects, creating a read-only token is all we need. In our project, we’ll be posting data back to Sanity, so we’ll need to create a Read+Write token.

[Screenshot: Adding a new read and write token in the Sanity dashboard]

When clicking “Add New Token,” we receive a pop-up with the token value. Once it’s closed, we can’t retrieve the token again, so be sure to grab it!

This string goes in our .env.local file as the value for SANITY_API_TOKEN. Since we’re already logged into manage.sanity.io, we can also grab the project ID from the top of the project page and paste it as the value of NEXT_PUBLIC_SANITY_PROJECT_ID. The SANITY_PREVIEW_SECRET matters when we want to run Next.js in “preview mode,” but for the purposes of this demo, we don’t need to fill it out.
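Once those are filled in, .env.local should look roughly like this (these values are placeholders, not real credentials):

NEXT_PUBLIC_SANITY_PROJECT_ID=yourProjectId
SANITY_API_TOKEN=yourReadWriteToken
SANITY_PREVIEW_SECRET=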

We’re almost ready to run our Next front-end. While we still have our Sanity Dashboard open, we need to make one more change to our Settings → API view. We need to allow our Next.js localhost server to make requests.

In the CORS Origins section, we’ll add a new origin and populate it with the current localhost port: http://localhost:3000. We don’t need to send authenticated requests, so we can leave that option off. When this goes live, we’ll need to add an additional origin with the production URL so that the live site can make requests as well.

Our blog is now ready to run locally!

# From inside /blog-frontend
npm run dev

After running the command above, we now have a blog up and running on our computer with data pulling from the Sanity API. We can visit http://localhost:3000 to view the site.

Creating the schema for comments

To add comments to our database with a view in our Studio, we need to set up our schema for the data.

To add our schema, we’ll add a new file in our /studio/schemas directory named comment.js. This JavaScript file will export an object containing the definition of the overall data structure. This tells the Studio how to display the data, as well as how to structure the data that will be returned to our front end.

In the case of a comment, we’ll want what might be considered the “defaults” of the commenting world. We’ll have a field for a user’s name, their email, and a text area for a comment string. Along with those basics, we’ll also need a way of attaching the comment to a specific post. In Sanity’s API, the field type is a “reference” to another type of data.

If we wanted our site to get spammed, we could end there, but it would probably be a good idea to add an approval process. We can do that by adding a boolean field to our comment that will control whether or not to display a comment on our site.

export default {
  name: 'comment',
  type: 'document',
  title: 'Comment',
  fields: [
    {
      name: 'name',
      type: 'string',
    },
    {
      title: 'Approved',
      name: 'approved',
      type: 'boolean',
      description: "Comments won't show on the site without approval"
    },
    {
      name: 'email',
      type: 'string',
    },
    {
      name: 'comment',
      type: 'text',
    },
    {
      name: 'post',
      type: 'reference',
      to: [
        {type: 'post'}
      ]
    }
  ],
}

After we add this document, we also need to add it to our /studio/schemas/schema.js file to register it as a new document type.

import createSchema from 'part:@sanity/base/schema-creator'
import schemaTypes from 'all:part:@sanity/base/schema-type'
import blockContent from './blockContent'
import category from './category'
import post from './post'
import author from './author'
import comment from './comment' // <- Import our new schema

export default createSchema({
  name: 'default',
  types: schemaTypes.concat([
    post,
    author,
    category,
    comment, // <- Use our new schema
    blockContent
  ])
})

Once these changes are made, when we look into our Studio again, we’ll see a comment section in our main content list. We can even go in and add our first comment for testing (since we haven’t built any UI for it in the front end yet).

An astute developer will notice that, after adding the comment, the preview of our comments in the list view is not very helpful. Now that we have data, we can provide a custom preview for that list view.

Adding a CMS preview for comments in the list view

After the fields array, we can specify a preview object. The preview object will tell Sanity’s list views what data to display and in what configuration. We’ll add a property and a method to this object. The select property is an object that we can use to gather data from our schema. In this case, we’ll take the comment’s name, comment, and post.title values. We pass these new variables into our prepare() method and use that to return a title and subtitle for use in list views.

export default {
  // ... Fields information
  preview: {
    select: {
      name: 'name',
      comment: 'comment',
      post: 'post.title'
    },
    prepare({name, comment, post}) {
      return {
        title: `${name} on ${post}`,
        subtitle: comment
      }
    }
  }
}

The title will display large and the subtitle will be smaller and more faded. In this preview, we’ll make the title a string that contains the comment author’s name and the comment’s post, with a subtitle of the comment body itself. You can configure the previews to match your needs.

The data now exists, and our CMS preview is ready, but it’s not yet pulling into our site. We need to modify our data fetch to pull our comments onto each post.

Displaying each post’s comments

In this repository, we have a file dedicated to functions we can use to interact with Sanity’s API. The /blog-frontend/lib/api.js file has specific exported functions for the use cases of various routes in our site. We need to update the getPostAndMorePosts function in this file, which pulls the data for each post. It returns the proper data for posts associated with the current page’s slug, as well as a selection of new posts to display alongside it.

In this function, there are two queries: one to grab the data for the current post and one for the additional posts. The request we need to modify is the first request.

Changing the returned data with a GROQ projection

The query is made in the open-source graph-based querying language GROQ, used by Sanity for pulling data out of the data store. The query comes in three parts:

  • The filter – what set of data to find and send back: *[_type == "post" && slug.current == $slug]
  • An optional pipeline component — a modification to the data returned by the component to its left: | order(_updatedAt desc)
  • An optional projection — the specific data elements to return for the query. In our case, everything between the curly braces ({}).

In this example, we have a variable list of fields that most of our queries need, as well as the body data for the blog post. Directly following the body, we want to pull all the comments associated with this post.

In order to do this, we create a named property on the object returned called 'comments' and then run a new query to return the comments that contain the reference to the current post context.

The entire filter looks like this:

*[_type == "comment" && post._ref == ^._id && approved == true]

The filter matches all documents that meet the criteria inside the square brackets ([]). In this case, we’ll find all documents of _type == "comment". We’ll then test whether the comment’s post._ref matches the current post’s _id (the ^._id refers back to the post in the outer query). Finally, we check that the comment is approved == true.

Once we have that data, we select the data we want to return using an optional projection. Without the projection, we’d get all the data for each comment. Not important in this example, but a good habit to be in.

curClient.fetch(
  `*[_type == "post" && slug.current == $slug] | order(_updatedAt desc) {
    ${postFields}
    body,
    'comments': *[_type == "comment" && post._ref == ^._id && approved == true]{
      _id,
      name,
      email,
      comment,
      _createdAt
    }
  }`,
  { slug }
)
.then((res) => res?.[0]),

Sanity returns an array of data in the response. This can be helpful in many cases but, for us, we just need the first item in the array, so we’ll limit the response to just the zero position in the index.

Adding a Comment component to our post

Our individual posts are rendered using code found in the /blog-frontend/pages/posts/[slug].js file. The components in this file are already receiving the updated data in our API file. The main Post() function returns our layout. This is where we’ll add our new component.

Comments typically appear after the post’s content, so let’s add this immediately following the closing </article> tag.

// ... The rest of the component
</article>

// The comments list component with comments being passed in
<Comments comments={post?.comments} />

We now need to create our component file. The component files in this project live in the /blog-frontend/components directory. We’ll follow the standard pattern for the components. The main functionality of this component is to take the array passed to it and create an unordered list with proper markup.

Since we already have a <Date /> component, we can use that to format our date properly.

// /blog-frontend/components/comments.js

import Date from './date'

export default function Comments({ comments = [] }) {
  return (
    <>
      <h2 className="mt-10 mb-4 text-4xl lg:text-6xl leading-tight">Comments:</h2>
      <ul>
        {comments?.map(({ _id, _createdAt, name, email, comment }) => (
          <li key={_id} className="mb-5">
            <hr className="mb-5" />
            <h4 className="mb-2 leading-tight">
              <a href={`mailto:${email}`}>{name}</a> (<Date dateString={_createdAt} />)
            </h4>
            <p>{comment}</p>
            <hr className="mt-5 mb-5" />
          </li>
        ))}
      </ul>
    </>
  )
}

Back in our /blog-frontend/pages/posts/[slug].js file, we need to import this component at the top, and then we have a comment section displayed for posts that have comments.

import Comments from '../../components/comments'

We now have our manually-entered comment listed. That’s great, but not very interactive. Let’s add a form to the page to allow users to submit a comment to our dataset.

Adding a comment form to a blog post

For our comment form, why reinvent the wheel? We’re already in the React ecosystem with Next.js, so we might as well take advantage of it. We’ll use the react-hook-form package, but any form or form component will do.

First, we need to install our package.

npm install react-hook-form

While that installs, we can go ahead and set up our Form component. In the Post component, we can add a <Form /> component right after our new <Comments /> component.

// ... Rest of the component
<Comments comments={post.comments} />
<Form _id={post._id} />

Note that we’re passing the current post _id value into our new component. This is how we’ll tie our comment to our post.

As we did with our comment component, we need to create a file for this component at /blog-frontend/components/form.js.

import { useState } from 'react'
import { useForm } from 'react-hook-form'

export default function Form ({_id}) {
  // Sets up basic data state
  const [formData, setFormData] = useState()

  // Sets up our form states
  const [isSubmitting, setIsSubmitting] = useState(false)
  const [hasSubmitted, setHasSubmitted] = useState(false)

  // Prepares the functions from react-hook-form
  const { register, handleSubmit, watch, errors } = useForm()

  // Function for handling the form submission
  const onSubmit = async data => {
    // ... Submit handler
  }

  if (isSubmitting) {
    // Returns a "Submitting comment" state if being processed
    return <h3>Submitting comment…</h3>
  }

  if (hasSubmitted) {
    // Returns the data that the user submitted for them to preview after submission
    return (
      <>
        <h3>Thanks for your comment!</h3>
        <ul>
          <li>
            Name: {formData.name} <br />
            Email: {formData.email} <br />
            Comment: {formData.comment}
          </li>
        </ul>
      </>
    )
  }

  return (
    // Sets up the Form markup
  )
}

This code is primarily boilerplate for handling the various states of the form. The form itself will be the markup that we return.

// Sets up the Form markup
<form onSubmit={handleSubmit(onSubmit)} className="w-full max-w-lg">
  <input ref={register} type="hidden" name="_id" value={_id} />

  <label className="block mb-5">
    <span className="text-gray-700">Name</span>
    <input name="name" ref={register({required: true})} className="form-input mt-1 block w-full" placeholder="John Appleseed" />
  </label>

  <label className="block mb-5">
    <span className="text-gray-700">Email</span>
    <input name="email" type="email" ref={register({required: true})} className="form-input mt-1 block w-full" placeholder="your@email.com" />
  </label>

  <label className="block mb-5">
    <span className="text-gray-700">Comment</span>
    <textarea ref={register({required: true})} name="comment" className="form-textarea mt-1 block w-full" rows="8" placeholder="Enter some long form content."></textarea>
  </label>

  {/* errors will return when field validation fails */}
  {errors.exampleRequired && <span>This field is required</span>}

  <input type="submit" className="shadow bg-purple-500 hover:bg-purple-400 focus:shadow-outline focus:outline-none text-white font-bold py-2 px-4 rounded" />
</form>

In this markup, we’ve got a couple of special cases. First, our <form> element has an onSubmit attribute that accepts the handleSubmit() hook. handleSubmit(), provided by our package, wraps the function that will handle the submission of our form.

The very first input in our comment form is a hidden field that contains the _id of our post. Any required form field will use the ref attribute to register with react-hook-form’s validation. When our form is submitted we need to do something with the data submitted. That’s what our onSubmit() function is for.

// Function for handling the form submission
const onSubmit = async data => {
  setIsSubmitting(true)
  setFormData(data)

  try {
    await fetch('/api/createComment', {
      method: 'POST',
      // We send the body as a plain string (no JSON content-type header)
      // so the API route can JSON.parse() req.body itself
      body: JSON.stringify(data),
    })
    setIsSubmitting(false)
    setHasSubmitted(true)
  } catch (err) {
    setFormData(err)
  }
}

This function has two primary goals:

  1. Set state for the form through the process of submitting with the state we created earlier
  2. Submit the data to a serverless function via a fetch() request. Next.js comes with fetch() built in, so we don’t need to install an extra package.

We can take the data submitted from the form — the data argument for our form handler — and submit that to a serverless function that we need to create.

We could post this directly to the Sanity API, but that requires an API key with write access and you should protect that with environment variables outside of your front-end. A serverless function lets you run this logic without exposing the secret token to your visitors.

Submitting the comment to Sanity with a Next.js API route

In order to protect our credentials, we’ll write our form handler as a serverless function. In Next.js, we can use “API routes” to create serverless functions. These live alongside our page routes, in the /blog-frontend/pages/api directory. We can create a new file here called createComment.js.

To write to the Sanity API, we first need to set up a client that has write permissions. Earlier in this demo, we set up a read+write token and put it in /blog-frontend/.env.local. This environment variable is already in use in a client object from /blog-frontend/lib/sanity.js. There’s a read+write client set up with the name previewClient that uses the token to fetch unpublished changes for preview mode.

At the top of our createComment.js file, we can import that object for use in our serverless function. A Next.js API route needs to export its handler as a default function with request and response arguments. Inside our function, we’ll destructure our form data from the request object’s body and use that to create a new document.

Sanity’s JavaScript client has a create() method which accepts a data object. The data object should have a _type that matches the type of document we wish to create along with any data we wish to store. In our example, we’ll pass it the name, email, and comment.

We need to do a little extra work to turn our post’s _id into a reference to the post in Sanity. We’ll define the post property as a reference and give the _id as the _ref property on this object. After we submit it to the API, we can return either a success status or an error status depending on our response from Sanity.

// This Next.js template is already configured to write with this Sanity client
import {previewClient} from '../../lib/sanity'

export default async function createComment(req, res) {
  // Destructure the pieces of our request
  const { _id, name, email, comment } = JSON.parse(req.body)
  try {
    // Use our client to create a new document in Sanity with an object
    await previewClient.create({
      _type: 'comment',
      post: {
        _type: 'reference',
        _ref: _id,
      },
      name,
      email,
      comment
    })
  } catch (err) {
    console.error(err)
    return res.status(500).json({message: `Couldn't submit comment`, err})
  }

  return res.status(200).json({ message: 'Comment submitted' })
}

Once this serverless function is in place, we can navigate to our blog post and submit a comment via the form. Since we have an approval process in place, after we submit a comment, we can view it in the Sanity Studio and choose to approve it, deny it, or leave it as pending.

Take the commenting engine further

This gets us the basic functionality of a commenting system and it lives directly with our content. There is a lot of potential when you control both sides of this flow. Here are a few ideas for taking this commenting engine further.



Create an FAQ Slack app with Netlify functions and FaunaDB

Sometimes, when you’re looking for a quick answer, it’s really useful to have an FAQ system in place, rather than waiting for someone to respond to a question. Wouldn’t it be great if Slack could just answer these FAQs for us? In this tutorial, we’re going to be making just that: a slash command for Slack that will answer user FAQs. We’ll be storing our answers in FaunaDB, using FQL to search the database, and utilising a Netlify function to provide a serverless endpoint to connect Slack and FaunaDB.

Prerequisites

This tutorial assumes you have the following requirements:

  • GitHub account, used to log in to Netlify and Fauna, as well as to store our code
  • Slack workspace with permission to create and install new apps
  • Node.js v12

Create npm package

To get started, create a new folder and initialise an npm package by running npm init -y inside it with your package manager of choice. After the package has been created, we have a few npm packages to install.

Run this to install all the packages we will need for this tutorial:

npm install express body-parser faunadb encoding serverless-http netlify-lambda

These packages are explained below, but if you are already familiar with them, feel free to skip ahead.

Encoding has been installed due to a plugin error occurring in @netlify/plugin-functions-core at the time of writing and may not be needed when you follow this tutorial.

Packages

Express is a web application framework that will allow us to simplify writing multiple endpoints for our function. Netlify functions require handlers for each endpoint, but express combined with serverless-http will allow us to write the endpoints all in one place.

Body-parser is an express middleware which will take care of the application/x-www-form-urlencoded data Slack will be sending to our function.

Faunadb is an npm module that allows us to interact with the database through the FaunaDB JavaScript driver. It lets us pass queries from our function to the database in order to get the answers.

Serverless-http is a module that wraps Express applications into the format expected by Netlify functions, meaning we won’t have to rewrite our code when we shift from local development to Netlify.

Netlify-lambda is a tool which will allow us to build and serve our functions locally, in the same way they will be built and deployed on Netlify. This means we can develop locally before pushing our code to Netlify, increasing the speed of our workflow.

Create a function

With our npm packages installed, it’s time to begin work on the function. We’ll be using serverless-http to wrap an Express app, which will allow us to deploy it to Netlify later. To get started, create a file called netlify.toml, and add the following into it:

[build]
  functions = "functions"

We will use a .gitignore file, to prevent our node_modules and functions folders from being added to git later. Create a file called .gitignore, and add the following:

functions/

node_modules/

We will also need a folder called src, and a file inside it called server.js. Your final file structure should look like this:

.
├── netlify.toml
├── .gitignore
├── package.json
└── src
    └── server.js

With this in place, create a basic express app by inserting the code below into server.js:

const express = require("express");
const bodyParser = require("body-parser");
const fauna = require("faunadb");
const serverless = require("serverless-http");

const app = express();

module.exports.handler = serverless(app);

Check out the final line; it looks a little different to a regular express app. Rather than listening on a port, we’re passing our app into serverless and using this as our handler, so that Netlify can invoke our function.

Let’s set up our body parser to use application/x-www-form-urlencoded data, as well as putting a router in place. Add the following to server.js after defining app: 

const router = express.Router();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));
app.use("/.netlify/functions/server", router);

Notice that the router is using /.netlify/functions/server as an endpoint. This is so that Netlify will be able to correctly deploy the function later in the tutorial. This means we will need to add this to any base URLs, in order to invoke the function.

Create a test route

With a basic app in place, let’s create a test route to check everything is working. Insert the following code to create a simple GET route that returns a JSON object:

router.get("/test", (req, res) => {
  res.json({ hello: "world" });
});

With this route in place, let’s spin up our function on localhost, and check that we get a response. We’ll be using netlify-lambda to serve our app, so that we can imitate a Netlify function locally on port 9000. In our package.json, add the following lines into the scripts section:

"start": "./node_modules/.bin/netlify-lambda serve src",    "build": "./node_modules/.bin/netlify-lambda build src"

With this in place, after saving the file, we can run npm start to begin netlify-lambda on port 9000.

The build command will be used when we deploy to Netlify later.

Once it is up and running, we can visit http://localhost:9000/.netlify/functions/server/test to check our function is working as expected.

The great thing about netlify-lambda is it will listen for changes to our code, and automatically recompile whenever we update something, so we can leave it running for the duration of this tutorial.

Start ngrok URL

Now we have a test route working on our local machine, let’s make it available online. To do this, we’ll be using ngrok, an npm package that provides a public URL for our function. If you don’t have ngrok installed already, first run npm install -g ngrok to install it globally on your machine. Then run ngrok http 9000, which will automatically direct traffic to our function running on port 9000.

After starting ngrok, you should see a forwarding URL in the terminal, which we can visit to confirm our server is available online. Copy this base URL to your browser, and follow it with /.netlify/functions/server/test. You should see the same result as when we made our calls on localhost, which means we can now use this URL as an endpoint for Slack!

Each time you restart ngrok, it creates a new URL, so if you need to stop it at any point, you will need to update your URL endpoint in Slack.

Setting up Slack

Now that we have a function in place, it’s time to move to Slack to create the app and slash command. We will have to deploy this app to our workspace, as well as make a few updates to our code to connect our function. For a more in-depth set of instructions on how to create a new slash command, you can follow the official Slack documentation. For a streamlined set of instructions, follow along below:

Create a new Slack app

First off, let’s create our new Slack app for these FAQs. Visit https://api.slack.com/apps and select Create New App to begin. Give your app a name (I used Fauna FAQ), and select a development workspace for the app.

Create a slash command

After creating the app, we need to add a slash command to it, so that we can interact with the app. Select slash commands from the menu after the app has been created, then create a new command. Fill in the following form with the name of your command (I used /faq) as well as providing the URL from ngrok. Don’t forget to add /.netlify/functions/server/ to the end!

Install app to workspace

Once you have created your slash command, click on basic information in the sidebar on the left to return to the app’s main page. From here, select the dropdown “Install app to your workspace” and click the button to install it.

Once you have allowed access, the app will be installed, and you’ll be able to start using the slash command in your workspace.

Update the function

With our new app in place, we’ll need to create a new endpoint for Slack to send the requests to. For this, we’ll use the root endpoint for simplicity. The endpoint will need to be able to take a post request with application/x-www-form-urlencoded data, then return a 200 status response with a message. To do this, let’s create a new post route at the root by adding the following code to server.js:

router.post("/", async (req, res) => {

});

Now that we have our endpoint, we can also extract and view the text that has been sent by Slack by adding the following lines before we set the status:

const text = req.body.text;
console.log(`Input text: ${text}`);

For now, we’ll just pass this text into the response and send it back instantly, to ensure the Slack app and the function are communicating.

res.status(200);
res.send(text);

Now, when you type /faq <somequestion> in a Slack channel, you should get back the same message from the slash command.

Formatting the response

Rather than just sending back plaintext, we can make use of Slack’s Block Kit to use specialised UI elements to improve the look of our answers. If you want to create a more complex layout, Slack provides a Block Kit builder to visually design your layout.

For now, we’re going to keep things simple, and just provide a response where each answer is separated by a divider. Add the following function to your server.js file after the post route:

const format = (answers) => {
  if (answers.length == 0) {
    answers = ["No answers found"];
  }

  let formatted = {
    blocks: [],
  };

  for (const answer of answers) {
    formatted["blocks"].push({
      type: "divider",
    });
    formatted["blocks"].push({
      type: "section",
      text: {
        type: "mrkdwn",
        text: answer,
      },
    });
  }

  return formatted;
};

With this in place, we now need to pass our answers into this function, to format the answers before returning them to Slack. Update the following in the root post route:

let answers = text;
const formattedAnswers = format(answers);

Now when we enter the same command in Slack, we should get back the same message, but this time in a formatted version!

Setting up Fauna

With our slack app in place, and a function to connect to it, we now need to start working on the database to store our answers. If you’ve never set up a database with FaunaDB before, there is some great documentation on how to quickly get started. A brief step-by-step overview for the database and collection is included below:

Create database

First, we’ll need to create a new database. After logging into the Fauna dashboard online, click New Database. Give your new database a name you’ll remember (I used “slack-faq”) and save the database.

Create collection

With this database in place, we now need a collection. Click the “New Collection” button that should appear on your dashboard, and give your collection a name (I used “faq”). The history days and TTL values can be left as their defaults, but you should ensure you don’t add a value to the TTL field, as we don’t want our documents to be removed automatically after a certain time.

Add question / answer documents

Now we have a database and collection in place, we can start adding some documents to it. Each document should follow the structure:

{
  question: "a question string",
  answer: "an answer string",
  qTokens: [
    "first token",
    "second token",
    "third token"
  ]
}

The qToken values should be key terms in the question, as we will use them for a tokenized search when we can’t match a question exactly. You can add as many qTokens as you like for each question. The more relevant the tokens are, the more accurate results will be. For example, if our question is “where are the bathrooms”, we should include the qTokens “bathroom”, “bathrooms”, “toilet”, “toilets” and any other terms you may think people will search for when trying to find information about a bathroom.

The questions I used to develop a proof of concept are as follows:

{
  question: "where is the lobby",
  answer: "On the third floor",
  qTokens: ["lobby", "reception"],
},
{
  question: "when is payday",
  answer: "On the first Monday of each month",
  qTokens: ["payday", "pay", "paid"],
},
{
  question: "when is lunch",
  answer: "Lunch break is *12 - 1pm*",
  qTokens: ["lunch", "break", "eat"],
},
{
  question: "where are the bathrooms",
  answer: "Next to the elevators on each floor",
  qTokens: ["toilet", "bathroom", "toilets", "bathrooms"],
},
{
  question: "when are my breaks",
  answer: "You can take a break whenever you want",
  qTokens: ["break", "breaks"],
}

Feel free to take this time to add as many documents as you like, and as many qTokens as you think each question needs, then we’ll move on to the next step.
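If you’d rather script this than click through the dashboard UI, documents can also be created from Fauna’s online shell. A sketch mirroring the structure above, assuming the collection is named “faq”:

Create(Collection("faq"), {
  data: {
    question: "where is the lobby",
    answer: "On the third floor",
    qTokens: ["lobby", "reception"]
  }
})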

Creating Indexes

With these questions in place, we will create two indexes to allow us to search the database. First, create an index called “answers_by_question”, selecting question as the term and answer as the value. This will allow us to search all answers by their associated question.

Then, create an index called “answers_by_qTokens”, selecting qTokens as the term and answer as the value. We will use this index to allow us to search through the qTokens of all items in the database.
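For reference, the shell equivalent of the first index looks roughly like this; the second is analogous, swapping question for qTokens in the terms:

CreateIndex({
  name: "answers_by_question",
  source: Collection("faq"),
  terms: [{ field: ["data", "question"] }],
  values: [{ field: ["data", "answer"] }]
})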

Searching the database

To run a search in our database, we will do two things. First, we’ll run a search for an exact match to the question, so we can provide a single answer to the user. Second, if this search doesn’t find a result, we’ll do a search on the qTokens each answer has, returning any results that provide a match. We’ll use Fauna’s online shell to demonstrate and explain these queries, before using them in our function.

Exact Match

Before searching the tokens, we’ll test whether we can match the input question exactly, as this will allow for the best answer to what the user has asked. To search our questions, we will match against the “answers_by_question” index, then paginate our answers. Copy the following code into the online Fauna shell to see this in action:

q.Paginate(q.Match(q.Index("answers_by_question"), "where is the lobby"))

If you have a question matching the “where is the lobby” example above, you should see the expected answer of “On the third floor” as a result.
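Paginate wraps its results in a data array, so the shell response should look something like:

{ data: ["On the third floor"] }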

Searching the tokens

For cases where there is no exact match on the database, we will have to use our qTokens to find any relevant answers. For this, we will match against the “answers_by_qTokens” index we created and again paginate our answers. Copy the following into the online shell to see how this works:

q.Paginate(q.Match(q.Index("answers_by_qTokens"), "break"))

If you have any questions with the qToken “break” from the example questions, you should see all answers returned as a result.

Connect function to Fauna

We have our searches figured out, but currently we can only run them from the online shell. To use these in our function, there is some configuration required, as well as an update to our function’s code.

Function configuration

To connect to Fauna from our function, we will need to create a server key. From your database’s dashboard, select Security in the left-hand sidebar, and create a new key. Give your new key a name you will recognise, and ensure that the dropdown has Server selected, not Admin. Finally, once the key has been created, add the following code to server.js before the test route, replacing the <secretKey> value with the secret provided by Fauna.

const q = fauna.query;
const client = new fauna.Client({
  secret: "<secretKey>",
});

It would be preferable to store this key in an environment variable in Netlify, rather than directly in the code, but that is beyond the scope of this tutorial. If you would like to use environment variables, this Netlify post explains how to do so.
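As a sketch, with a hypothetical FAUNADB_SECRET variable configured in Netlify’s environment settings, the client setup would become:

const q = fauna.query;
const client = new fauna.Client({
  // FAUNADB_SECRET is a hypothetical variable name; set it in Netlify's
  // environment settings rather than committing the key to the repo
  secret: process.env.FAUNADB_SECRET,
});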

Update function code

To include our new search queries in the function, copy the following code into server.js after the post route:

const searchText = async (text) => {
  console.log("Beginning searchText");
  const answer = await client.query(
    q.Paginate(q.Match(q.Index("answers_by_question"), text))
  );
  console.log(`searchText response: ${answer.data}`);
  return answer.data;
};

const getTokenResponse = async (text) => {
  console.log("Beginning getTokenResponse");
  let answers = [];
  const questionTokens = text.split(/[ ]+/);
  console.log(`Tokens: ${questionTokens}`);
  for (const token of questionTokens) {
    // Search the qTokens index with each individual token
    const tokenResponse = await client.query(
      q.Paginate(q.Match(q.Index("answers_by_qTokens"), token))
    );
    answers = [...answers, ...tokenResponse.data];
  }
  console.log(`Token answers: ${answers}`);
  return answers;
};

These functions replicate the same functionality as the queries we previously ran in the online Fauna shell, but now we can utilise them from our function.
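The tutorial doesn’t show the final wiring of these helpers into the root post route, but a sketch of how they might tie together with format() looks like this: exact match first, token search as the fallback.

router.post("/", async (req, res) => {
  const text = req.body.text;
  // Try an exact question match first
  let answers = await searchText(text);
  // Fall back to the tokenized search if nothing matched exactly
  if (answers.length === 0) {
    answers = await getTokenResponse(text);
  }
  res.status(200);
  res.json(format(answers));
});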

Deploy to Netlify

Now the function is searching the database, the only thing left to do is put it on the cloud, rather than a local machine. To do this, we’ll be making use of a Netlify function deployed from a GitHub repository.

First things first, add a new repo on GitHub, and push your code to it. Once the code is there, go to Netlify and either sign up or log in using your GitHub profile. From the Netlify home page, select “New site from git” to deploy a new site using the repo you’ve just created on GitHub.

If you have never deployed a site in Netlify before, this post explains the process to deploy from git.

Ensure while you are creating the new site, that your build command is set to npm run build, to have Netlify build the function before deployment. The publish directory can be left blank, as we are only deploying a function, rather than any pages.

Netlify will now build and deploy your repo, generating a unique URL for the site deployment. We can use this base URL to access the test endpoint of our function from earlier, to ensure things are working.

The last thing to do is update the Slack endpoint to our new URL! Navigate to your app, then select ‘slash commands’ in the left sidebar. Click on the pencil icon to edit the slash command and paste in the new URL for the function. Finally, you can use your new slash command in any authorised Slack channels!

Conclusion

There you have it: an entirely serverless, functional Slack slash command. We have used FaunaDB to store our answers and connected to it through a Netlify function. Also, by using Express, we have the flexibility to add further endpoints to the function for adding new questions, or anything else you can think up to extend this project further! Hopefully now, instead of waiting around for someone to answer your questions, you can just use /faq and get the answer instantly!


Matthew Williams is a software engineer from Melbourne, Australia who believes the future of technology is serverless. If you’re interested in more from him, check out his Medium articles, or his GitHub repos.



How to Create a Realistic Motion Blur with CSS Transitions

Before we delve into making a realistic motion blur in CSS, it’s worth doing a quick dive into what motion blur is, so we can have a better idea of what we’re trying to reproduce.

Have you ever taken a photo of something moving quickly, especially under low light, and it turned into a blurry streak? Or maybe the whole camera shook and the whole shot became a series of streaks? This is motion blur, and it’s a byproduct of how a camera works.

Motion Blur 101

Imagine a camera. It’s got a shutter, a door that opens to let light in, and then closes to stop the light from coming in. From the time it opens, to when it closes, is a single photograph, or a single frame of a moving image.

[Photo: Real motion blur in action, a blurry cyclist in a red shirt speeding through a forest. (Kevin Erdvig, Unsplash)]

If the subject of the frame is moving during the time the shutter is open, we end up taking a photo of an object moving through the frame. On film, this shows up as a steady smear, with the subject appearing in an infinite number of places between its starting point and its end point. The moving object also ends up semi-transparent, with parts of the background visible behind it.

What computers do to fake this is model several subframes, and then composite them together at a fraction of the opacity. Putting lots of copies of the same object in slightly different places along the path of motion creates a pretty convincing facsimile of a motion blur.

Video compositing apps tend to have settings for how many subdivisions their motion blur should have. If you set this value really low, you can see exactly how the technique works, like this, a frame of an animation of a simple white dot at four samples per frame:

[Image: Four overlapping semi-transparent white circles on a black background.] Four samples per frame.

[Image: Twelve overlapping semi-transparent white circles on a black background.] Here is 12 samples per frame.

[Image: Thirty-two overlapping semi-transparent white circles on a black background.] And by the time we’re at 32 samples per frame, it’s pretty close to fully real, especially when seen at multiple frames per second.

The number of samples it takes to make a convincing motion blur is entirely relative to the content. Something small with sharp edges that’s moving super fast will need a lot of subframes; but something blurry moving slowly might need only a few. In general, using more will create a more convincing effect.

Doing this in CSS

In order to approximate this effect in CSS, we need to create a ton of identical elements, make them semi-transparent, and offset their animations by a tiny fraction of a second.

First, we’ll set up the base with the animation we want using a CSS transition. We’ll go with a simple black dot, and assign it a transform on hover (or tap, if you’re on mobile). We’ll also animate the border radius and color to show the flexibility of this approach.

Here is the base animation without motion blur:

Now, let’s make 20 identical copies of the black dot, all placed in the exact same place with absolute positioning. Each copy has an opacity of 10%, which is a little more than might be mathematically correct, but I find we need to make them more opaque to look solid enough.

The next step is where the magic happens. We add a slightly-increasing transition-delay value for each clone of our dot object. They’ll all run the exact same animation, but they’ll each be offset by three milliseconds. 
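Since the embedded demos don’t carry over here, a stripped-down sketch of the pattern (the class names are hypothetical, not from the demo) looks like this:

.dot {
  position: absolute;
  width: 40px;
  height: 40px;
  border-radius: 50%;
  background: black;
  opacity: 0.1; /* 20 clones at 10% opacity read as roughly solid when stacked */
  transition: transform 1s ease-in-out;
}

/* Each clone starts its transition 3ms after the previous one */
.dot:nth-child(2) { transition-delay: 3ms; }
.dot:nth-child(3) { transition-delay: 6ms; }
/* ...and so on, up to the 20th clone */

.stage:hover .dot {
  transform: translateX(300px);
}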

The beauty of this approach is that it creates a pseudo-motion blur effect that works for a ton of different animations. We can throw color changes on there, scaling transitions, odd timings, and the motion blur effect still works.

Using 20 object clones will work for plenty of fast and slow animations, but fewer can still produce a reasonable sense of motion blur. You may need to tweak the number of cloned objects, their opacity, and the amount of transition delay to work with your particular animation. The demo we just looked at has the blur effect slightly overclocked to make it more prominent.


Eventually, with the progress of computer power, I expect that some of the major browsers might start offering this effect natively. Then we can do away with the ridiculousness of having 20 identical objects. In the meantime, this is a reasonable way to approximate a realistic motion blur.



Let’s Create Our Own Authentication API with Nodejs and GraphQL

Authentication is one of the most challenging tasks for developers just starting with GraphQL. There are a lot of technical considerations, including what ORM would be easy to set up, how to generate secure tokens and hash passwords, and even what HTTP library to use and how to use it. 

In this article, we’ll focus on local authentication. It’s perhaps the most popular way of handling authentication in modern websites and does so by requesting the user’s email and password (as opposed to, say, using Google auth.)

This article uses Apollo Server 2, JSON Web Tokens (JWT), and Sequelize ORM to build an authentication API with Node.

Handling authentication

As in, a login system:

  • Authentication identifies or verifies a user.
  • Authorization is validating the routes (or parts of the app) the authenticated user can have access to. 

The flow for implementing this is:

  1. The user registers using password and email
  2. The user’s credentials are stored in a database
  3. The user is redirected to the login when registration is completed
  4. The user is granted access to specific resources when authenticated
  5. The user’s state is stored in any one of the browser storage mediums (e.g. localStorage, cookies, session) or JWT.

Pre-requisites

Before we dive into the implementation, here are a few things you’ll need to follow along.

Dependencies 

This is a big list, so let’s get into it:

  • Apollo Server: An open-source GraphQL server that is compatible with any kind of GraphQL client. We won’t be using Express for our server in this project. Instead, we will use the power of Apollo Server to expose our GraphQL API.
  • bcryptjs: We want to hash the user passwords in our database. That’s why we will use bcrypt. It relies on Web Crypto API‘s getRandomValues interface to obtain secure random numbers.
  • dotenv: We will use dotenv to load environment variables from our .env file. 
  • jsonwebtoken: Once the user is logged in, each subsequent request will include the JWT, allowing the user to access routes, services, and resources that are permitted with that token. jsonwebtokenwill be used to generate a JWT which will be used to authenticate users.
  • nodemon: A tool that helps develop Node-based applications by automatically restarting the node application when changes in the directory are detected. We don’t want to be closing and starting the server every time there’s a change in our code. Nodemon inspects changes every time in our app and automatically restarts the server. 
  • mysql2: An SQL client for Node. We need it connect to our SQL server so we can run migrations.
  • sequelize: Sequelize is a promise-based Node ORM for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server. We will use Sequelize to automatically generate our migrations and models. 
  • sequelize-cli: We will use the Sequelize CLI to run Sequelize commands. Install it globally with yarn add --global sequelize-cli in the terminal.

Setup directory structure and dev environment

Let’s create a brand new project. Create a new folder and run this inside of it:

yarn init -y

The -y flag indicates we are selecting yes to all the yarn init questions and using the defaults.

We should also put a package.json file in the folder, so let’s install the project dependencies:

yarn add apollo-server bcryptjs dotenv jsonwebtoken nodemon sequelize mysql2

Next, let’s add Babel to our development environment:

yarn add babel-cli babel-preset-env babel-preset-stage-0 --dev

Now, let’s configure Babel. Run touch .babelrc in the terminal. That creates and opens a Babel config file and, in it, we’ll add this:

{
  "presets": ["env", "stage-0"]
}

It would also be nice if our server starts up and migrates data as well. We can automate that by updating package.json with this:

"scripts": {   "migrate": " sequelize db:migrate",   "dev": "nodemon src/server --exec babel-node -e js",   "start": "node src/server",   "test": "echo "Error: no test specified" && exit 1" },

Here’s our package.json file in its entirety at this point:

{
  "name": "graphql-auth",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "migrate": "sequelize db:migrate",
    "dev": "nodemon src/server --exec babel-node -e js",
    "start": "node src/server",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "apollo-server": "^2.17.0",
    "bcryptjs": "^2.4.3",
    "dotenv": "^8.2.0",
    "jsonwebtoken": "^8.5.1",
    "mysql2": "^2.2.5",
    "nodemon": "^2.0.4",
    "sequelize": "^6.3.5"
  },
  "devDependencies": {
    "babel-cli": "^6.26.0",
    "babel-preset-env": "^1.7.0",
    "babel-preset-stage-0": "^6.24.1"
  }
}

Now that our development environment is set up, let’s turn to the database where we’ll be storing things.

Database setup

We will be using MySQL as our database and the Sequelize ORM for our relationships. Run sequelize init (assuming you installed the CLI globally earlier). The command should create three folders: /config, /models and /migrations. At this point, our project directory structure is shaping up.

Let’s configure our database. First, create a .env file in the project root directory and paste this:

NODE_ENV=development
DB_HOST=localhost
DB_USERNAME=
DB_PASSWORD=
DB_NAME=

Then go to the /config folder we just created and rename the config.json file in there to config.js. Then, drop this code in there:

require('dotenv').config()

const dbDetails = {
  username: process.env.DB_USERNAME,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  host: process.env.DB_HOST,
  dialect: 'mysql'
}

module.exports = {
  development: dbDetails,
  production: dbDetails
}

Here we are reading the database details we set in our .env file. process.env is a global variable injected by Node and it’s used to represent the current state of the system environment.

Let’s update our database details with the appropriate data. Open the SQL database and create a table called graphql_auth. I use Laragon as my local server and phpmyadmin to manage database tables.

What ever you use, we’ll want to update the .env file with the latest information:

NODE_ENV=development DB_HOST=localhost DB_USERNAME=graphql_auth DB_PASSWORD= DB_NAME=<your_db_username_here>

Let’s configure Sequelize. Create a .sequelizerc file in the project’s root and paste this:

const path = require('path') 
 module.exports = {   config: path.resolve('config', 'config.js') }

Now let’s integrate our config into the models. Go to the index.js in the /models folder and edit the config variable.

const config = require(__dirname + '/../../config/config.js')[env]

Finally, let’s write our model. For this project, we need a User model. Let’s use Sequelize to auto-generate the model. Here’s what we need to run in the terminal to set that up:

sequelize model:generate --name User --attributes username:string,email:string,password:string

Let’s edit the model that creates for us. Go to user.js in the /models folder and paste this:

'use strict'; module.exports = (sequelize, DataTypes) => {   const User = sequelize.define('User', {     username: {       type: DataTypes.STRING,     },     email: {       type: DataTypes.STRING,       },     password: {       type: DataTypes.STRING,     }   }, {});   return User; };

Here, we created attributes and fields for username, email and password. Let’s run a migration to keep track of changes in our schema:

yarn migrate

Let’s now write the schema and resolvers.

Integrate schema and resolvers with the GraphQL server 

In this section, we’ll define our schema, write resolver functions and expose them on our server.

The schema

In the src folder, create a new folder called /schema and create a file called schema.js. Paste in the following code:

const { gql } = require('apollo-server') const typeDefs = gql`   type User {     id: Int!     username: String     email: String!   }   type AuthPayload {     token: String!     user: User!   }   type Query {     user(id: Int!): User     allUsers: [User!]!     me: User   }   type Mutation {     registerUser(username: String, email: String!, password: String!): AuthPayload!     login (email: String!, password: String!): AuthPayload!   } ` module.exports = typeDefs

Here we’ve imported graphql-tag from apollo-server. Apollo Server requires wrapping our schema with gql

The resolvers

In the src folder, create a new folder called /resolvers and create a file in it called resolver.js. Paste in the following code:

const bcrypt = require('bcryptjs') const jsonwebtoken = require('jsonwebtoken') const models = require('../models') require('dotenv').config() const resolvers = {     Query: {       async me(_, args, { user }) {         if(!user) throw new Error('You are not authenticated')         return await models.User.findByPk(user.id)       },       async user(root, { id }, { user }) {         try {           if(!user) throw new Error('You are not authenticated!')           return models.User.findByPk(id)         } catch (error) {           throw new Error(error.message)         }       },       async allUsers(root, args, { user }) {         try {           if (!user) throw new Error('You are not authenticated!')           return models.User.findAll()         } catch (error) {           throw new Error(error.message)         }       }     },     Mutation: {       async registerUser(root, { username, email, password }) {         try {           const user = await models.User.create({             username,             email,             password: await bcrypt.hash(password, 10)           })           const token = jsonwebtoken.sign(             { id: user.id, email: user.email},             process.env.JWT_SECRET,             { expiresIn: '1y' }           )           return {             token, id: user.id, username: user.username, email: user.email, message: "Authentication succesfull"           }         } catch (error) {           throw new Error(error.message)         }       },       async login(_, { email, password }) {         try {           const user = await models.User.findOne({ where: { email }})           if (!user) {             throw new Error('No user with that email')           }           const isValid = await bcrypt.compare(password, user.password)           if (!isValid) {             throw new Error('Incorrect password')           }           // return jwt           const token = jsonwebtoken.sign(             { id: user.id, email: user.email},             process.env.JWT_SECRET,             { expiresIn: '1d'}           )           return {            token, user           }       } catch (error) {         throw new Error(error.message)       }     }   }, 
 } module.exports = resolvers

That’s a lot of code, so let’s see what’s happening in there.

First we imported our models, bcrypt and  jsonwebtoken, and then initialized our environmental variables. 

Next are the resolver functions. In the query resolver, we have three functions (me, user and allUsers):

  • me query fetches the details of the currently loggedIn user. It accepts a user object as the context argument. The context is used to provide access to our database which is used to load the data for a user by the ID provided as an argument in the query.
  • user query fetches the details of a user based on their ID. It accepts id as the context argument and a user object. 
  • alluser query returns the details of all the users.

user would be an object if the user state is loggedIn and it would be null, if the user is not. We would create this user in our mutations. 

In the mutation resolver, we have two functions (registerUser and loginUser):

  • registerUser accepts the username, email  and password of the user and creates a new row with these fields in our database. It’s important to note that we used the bcryptjs package to hash the users password with bcrypt.hash(password, 10). jsonwebtoken.sign synchronously signs the given payload into a JSON Web Token string (in this case the user id and email). Finally, registerUser returns the JWT string and user profile if successful and returns an error message if something goes wrong.
  • login accepts email and password , and checks if these details match with the one that was supplied. First, we check if the email value already exists somewhere in the user database.
models.User.findOne({ where: { email }}) if (!user) {   throw new Error('No user with that email') }

Then, we use bcrypt’s bcrypt.compare method to check if the password matches. 

const isValid = await bcrypt.compare(password, user.password) if (!isValid) {   throw new Error('Incorrect password') }

Then, just like we did previously in registerUser, we use jsonwebtoken.sign to generate a JWT string. The login mutation returns the token and user object.

Now let’s add the JWT_SECRET to our .env file.

JWT_SECRET=somereallylongsecret

The server

Finally, the server! Create a server.js in the project’s root folder and paste this:

const { ApolloServer } = require('apollo-server') const jwt =  require('jsonwebtoken') const typeDefs = require('./schema/schema') const resolvers = require('./resolvers/resolvers') require('dotenv').config() const { JWT_SECRET, PORT } = process.env const getUser = token => {   try {     if (token) {       return jwt.verify(token, JWT_SECRET)     }     return null   } catch (error) {     return null   } } const server = new ApolloServer({   typeDefs,   resolvers,   context: ({ req }) => {     const token = req.get('Authorization') || ''     return { user: getUser(token.replace('Bearer', ''))}   },   introspection: true,   playground: true }) server.listen({ port: process.env.PORT || 4000 }).then(({ url }) => {   console.log(`🚀 Server ready at $ https://css-tricks.com/lets-create-our-own-authentication-api-with-nodejs-and-graphql/`); });

Here, we import the schema, resolvers and jwt, and initialize our environment variables. First, we verify the JWT token with verify. jwt.verify accepts the token and the JWT secret as parameters.

Next, we create our server with an ApolloServer instance that accepts typeDefs and resolvers.

We have a server! Let’s start it up by running yarn dev in the terminal.

Testing the API

Let’s now test the GraphQL API with GraphQL Playground. We should be able to register, login and view all users — including a single user — by ID.

We’ll start by opening up the GraphQL Playground app or just open localhost://4000 in the browser to access it.

Mutation for register user

mutation {   registerUser(username: "Wizzy", email: "ekpot@gmail.com", password: "wizzyekpot" ){     token   } }

We should get something like this:

{   "data": {     "registerUser": {       "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzAwLCJleHAiOjE2MzA3OTc5MDB9.gmeynGR9Zwng8cIJR75Qrob9bovnRQT242n6vfBt5PY"     }   } }

Mutation for login 

Let’s now log in with the user details we just created:

mutation {   login(email:"ekpot@gmail.com" password:"wizzyekpot"){     token   } }

We should get something like this:

{   "data": {     "login": {       "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc"     }   } }

Awesome!

Query for a single user

For us to query a single user, we need to pass the user token as authorization header. Go to the HTTP Headers tab.

Showing the GraphQL interface with the HTTP Headers tab highlighted in red in the bottom left corner of the screen,

…and paste this:

{   "Authorization": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpZCI6MTUsImVtYWlsIjoiZWtwb3RAZ21haWwuY29tIiwiaWF0IjoxNTk5MjQwMzcwLCJleHAiOjE1OTkzMjY3NzB9.PDiBKyq58nWxlgTOQYzbtKJ-HkzxemVppLA5nBdm4nc" }

Here’s the query:

query myself{   me {     id     email     username   } }

And we should get something like this:

{   "data": {     "me": {       "id": 15,       "email": "ekpot@gmail.com",       "username": "Wizzy"     }   } }

Great! Let’s now get a user by ID:

query singleUser{   user(id:15){     id     email     username   } }

And here’s the query to get all users:

{   allUsers{     id     username     email   } }

Summary

Authentication is one of the toughest tasks when it comes to building websites that require it. GraphQL enabled us to build an entire Authentication API with just one endpoint. Sequelize ORM makes creating relationships with our SQL database so easy, we barely had to worry about our models. It’s also remarkable that we didn’t require a HTTP server library (like Express) and use Apollo GraphQL as middleware. Apollo Server 2, now enables us to create our own library-independent GraphQL servers!

Check out the source code for this tutorial on GitHub.


The post Let’s Create Our Own Authentication API with Nodejs and GraphQL appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks

, , , ,
[Top]

The Trickery it Takes to Create eBook-Like Text Columns

There’s some interesting CSS trickery in Jason Pamental’s latest Web Fonts & Typography News. Jason wanted to bring swipeable columns to his digital book experience on mobile. Which brings up an interesting question right away… how do you set full-width columns that add columns horizontally, as-needed ? Well that’s a good trick right there, and it’s a one-liner:

columns: 100vw auto;

But it gets more complicated and disappointing from there.

With just a smidge more formatting to the columns:

main {   columns: 100vw auto;   column-gap: 2rem;   overflow-x: auto;   height: calc(100vh - 2rem);   font: 120%/1.4 Georgia; }

We get this:

Which is so close to being perfect!

We probably wouldn’t apply this effect on desktop, but hey, that’s what media queries are for. On mobile we get…

That herky-jerky scrolling makes this a bad experience right there. We can smooth that out with -webkit-overflow-scrolling: touch;

The smoothness is maybe better, but the fact that the columns don’t snap into place makes it almost just as bad of a reading experience. That’s what scroll-snap is for, but alas:

Unfortunately it turns out you need a block-level element to which you can snap, and the artificially-created columns don’t count as such.

Oh noooooo. So close! But so far!

If we actually want scroll snapping, the content will need to be in block-level elements (like <div>). It’s easy enough to set up a horizontal row of <div> elements with flexbox like…

main {   display: flex; } main > div {   flex: 0 0 100vw; }

But… how many divs do we need? Who knows! This is arbitrary content that might change. And even if we did know, how would we flow content naturally between the divs? That’s not a thing. That’s why it sucks that CSS regions never happened. So to make this nice swiping experience possible in CSS, we either need to:

  • Allow scroll snapping to work on columns
  • Have some kind of CSS regions that is capable of auto-generating repeating block level elements as-needed by content

Neither of which is possible right now.

Jason didn’t stop there! He used JavaScript to figure out something that stops well short of some heavy scrolljacking thing. First, he figures out how many “pages” wide the CSS columns technique produces. Then, he adds spacer-divs to the scrolling element, each one the width of the page, and those are the things the scrolling element can scroll-snap to. Very clever.

At the moment, you can experience it at the book site by flipping on an optional setting.

The post The Trickery it Takes to Create eBook-Like Text Columns appeared first on CSS-Tricks.

CSS-Tricks

, , , , ,
[Top]