Tag: Building

Building a Blog with Next.js

In this article, we will use Next.js to build a static blog framework with the design and structure inspired by Jekyll. I’ve always been a big fan of how Jekyll makes it easy for beginners to set up a blog, while also giving advanced users a great degree of control over every aspect of it.

With the introduction of Next.js in recent years, combined with the popularity of React, there is a new avenue to explore for static blogs. Next.js makes it super easy to build static websites based on the file system itself with little to no configuration required.

The directory structure of a typical bare-bones Jekyll blog looks like this:

.
├─── _posts/          ...blog posts in markdown
├─── _layouts/        ...layouts for different pages
├─── _includes/       ...re-usable components
├─── index.md         ...homepage
└─── config.yml       ...blog config

The idea is to design our framework around this directory structure as much as possible, so that migrating a blog from Jekyll becomes a matter of simply reusing the posts and configs defined in that blog.

For those unfamiliar with Jekyll, it is a static site generator that can transform your plain text into static websites and blogs. Refer to the quick start guide to get up and running with Jekyll.

This article also assumes that you have a basic knowledge of React. If not, React’s getting started page is a good place to start.

Installation

Next.js is powered by React and runs on Node.js, so we need Node.js (which includes npm) installed first, before adding next, react and react-dom to the project.

mkdir nextjs-blog && cd $_
npm init -y
npm install next react react-dom --save

To run Next.js scripts on the command line, we have to add the next command to the scripts section of our package.json.

"scripts": {
  "dev": "next"
}

We can now run npm run dev on the command line for the first time. Let’s see what happens.

$ npm run dev

> nextjs-blog@1.0.0 dev /~user/nextjs-blog
> next

ready - started server on http://localhost:3000
Error: > Couldn't find a `pages` directory. Please create one under the project root

The compiler is complaining about a missing pages directory in the root of the project. We’ll learn about the concept of pages in the next section.

Concept of pages

Next.js is built around the concept of pages. Each page is a React component, in a .js or .jsx file, that is mapped to a route based on its filename. For example:

File                            Route
----                            -----
/pages/about.js                 /about
/pages/projects/work1.js        /projects/work1
/pages/index.js                 /

Let’s create the pages directory in the root of the project and populate our first page, index.js, with a basic React component.

// pages/index.js
export default function Blog() {
  return <div>Welcome to the Next.js blog</div>
}

Run npm run dev once again to start the server and navigate to http://localhost:3000 in the browser to view your blog for the first time.

Screenshot of the homepage in the browser. The content says welcome to the next.js blog.

Out of the box, we get:

  • Hot reloading so we don’t have to refresh the browser for every code change.
  • Static generation of all pages inside the /pages/** directory.
  • Static file serving for assets living in the /public/** directory.
  • 404 error page.

Navigate to a random path on localhost to see the 404 page in action. If you need a custom 404 page, the Next.js docs have great information.

Screenshot of the 404 page. It says 404 This page could not be found.
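For reference (the rest of this article doesn’t depend on it), a custom 404 in Next.js 9.3 and later can be as simple as adding a pages/404.js file; a minimal sketch:

// pages/404.js: a minimal sketch of a custom 404 page (optional, assumes Next.js 9.3+)
export default function NotFound() {
  return <div>Sorry, there is nothing to read here.</div>
}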

Dynamic pages

Pages with static routes are useful to build the homepage, about page, etc. However, to dynamically build all our posts, we will use the dynamic route capability of Next.js. For example:

File                        Route
----                        -----
/pages/posts/[slug].js      /posts/1
                            /posts/abc
                            /posts/hello-world

Any route, like /posts/1, /posts/abc, etc., will be matched by /posts/[slug].js and the slug parameter will be sent as a query parameter to the page. This is especially useful for our blog posts because we don’t want to create one file per post; instead we could dynamically pass the slug to render the corresponding post.
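We’ll wire this up properly with getStaticProps later in the article; purely to illustrate how the slug reaches the page, a quick client-side sketch could look like this:

// pages/posts/[slug].js (illustrative sketch only; the real post page is built later in this article)
import { useRouter } from 'next/router'

export default function Post() {
  // for the route /posts/hello-world, router.query.slug is "hello-world"
  const { slug } = useRouter().query
  return <div>Viewing post: {slug}</div>
}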

Anatomy of a blog

Now that we understand the basic building blocks of Next.js, let’s define the anatomy of our blog.

.
├─ api
│  └─ index.js             # fetch posts, load configs, parse .md files etc.
├─ _includes
│  ├─ footer.js            # footer component
│  └─ header.js            # header component
├─ _layouts
│  ├─ default.js           # default layout for static pages like index, about
│  └─ post.js              # post layout inherits from the default layout
├─ pages
│  ├─ index.js             # homepage
│  └─ posts                # posts will be available on the route /posts/
│     └─ [slug].js         # dynamic page to build posts
└─ _posts
   ├─ welcome-to-nextjs.md
   └─ style-guide-101.md

Blog API

A basic blog framework needs two API functions: 

  • A function to fetch the metadata of all the posts in _posts directory
  • A function to fetch a single post for a given slug with the complete HTML and metadata

Optionally, we would also like all the site’s configuration defined in config.yml to be available across all the components. So we need a function that will parse the YAML config into a native object.

Since we’ll be dealing with a lot of non-JavaScript files, like Markdown (.md) and YAML (.yml), we’ll use the raw-loader library to load such files as strings, which makes them easier to process.

npm install raw-loader --save-dev

Next, we need to tell Next.js to use raw-loader when we import .md and .yml files by creating a next.config.js file in the root of the project (the Next.js docs cover custom webpack configuration in more detail).

module.exports = {
  target: 'serverless',
  webpack: function (config) {
    config.module.rules.push({ test: /\.md$/, use: 'raw-loader' })
    config.module.rules.push({ test: /\.yml$/, use: 'raw-loader' })
    return config
  }
}

Next.js 9.4 introduced aliases for relative imports which helps clean up the import statement spaghetti caused by relative paths. To use aliases, create a jsconfig.json file in the project’s root directory specifying the base path and all the module aliases needed for the project.

{
  "compilerOptions": {
    "baseUrl": "./",
    "paths": {
      "@includes/*": ["_includes/*"],
      "@layouts/*": ["_layouts/*"],
      "@posts/*": ["_posts/*"],
      "@api": ["api/index"]
    }
  }
}

For example, this allows us to import our layouts by just using:

import DefaultLayout from '@layouts/default'

Fetch all the posts

This function will read all the Markdown files in the _posts directory, parse the front matter defined at the beginning of the post using gray-matter and return the array of metadata for all the posts.

// api/index.js
import matter from 'gray-matter'

export async function getAllPosts() {
  const context = require.context('../_posts', false, /\.md$/)
  const posts = []
  for (const key of context.keys()) {
    const post = key.slice(2)
    const content = await import(`../_posts/${post}`)
    const meta = matter(content.default)
    posts.push({
      slug: post.replace('.md', ''),
      title: meta.data.title
    })
  }
  return posts
}

A typical Markdown post looks like this:

---
title: "Welcome to Next.js blog!"
---

**Hello world**, this is my first Next.js blog post and it is written in Markdown. I hope you like it!

The section delimited by --- is called the front matter; it holds the post’s metadata, like title, permalink, tags, etc. Here’s the output:

[
  { slug: 'style-guide-101', title: 'Style Guide 101' },
  { slug: 'welcome-to-nextjs', title: 'Welcome to Next.js blog!' }
]

Make sure you install the gray-matter library from npm first using the command npm install gray-matter --save-dev.

Fetch a single post

For a given slug, this function will locate the file in the _posts directory, parse the Markdown with the marked library and return the output HTML with metadata.

// api/index.js
import matter from 'gray-matter'
import marked from 'marked'

export async function getPostBySlug(slug) {
  const fileContent = await import(`../_posts/${slug}.md`)
  const meta = matter(fileContent.default)
  const content = marked(meta.content)
  return {
    title: meta.data.title,
    content: content
  }
}

Sample output:

{
  title: 'Style Guide 101',
  content: '<p>Incididunt cupidatat eiusmod ...</p>'
}

Make sure you install the marked library from npm first using the command npm install marked --save-dev.

Config

In order to re-use the Jekyll config for our Next.js blog, we’ll parse the YAML file using the js-yaml library and export this config so that it can be used across components.

// config.yml
title: "Next.js blog"
description: "This blog is powered by Next.js"

// api/index.js
import yaml from 'js-yaml'

export async function getConfig() {
  const config = await import(`../config.yml`)
  return yaml.safeLoad(config.default)
}

Make sure you install js-yaml from npm first using the command npm install js-yaml --save-dev.

Includes

Our _includes directory contains two basic React components, <Header> and <Footer>, which will be used in the different layout components defined in the _layouts directory.

// _includes/header.js
export default function Header() {
  return <header><p>Blog | Powered by Next.js</p></header>
}

// _includes/footer.js
export default function Footer() {
  return <footer><p>©2020 | Footer</p></footer>
}

Layouts

We have two layout components in the _layouts directory. One is the <DefaultLayout> which is the base layout on top of which every other layout component will be built.

// _layouts/default.js
import Head from 'next/head'
import Header from '@includes/header'
import Footer from '@includes/footer'

export default function DefaultLayout(props) {
  return (
    <main>
      <Head>
        <title>{props.title}</title>
        <meta name='description' content={props.description}/>
      </Head>
      <Header/>
      {props.children}
      <Footer/>
    </main>
  )
}

The second layout is the <PostLayout> component that will override the title defined in the <DefaultLayout> with the post title and render the HTML of the post. It also includes a link back to the homepage.

// _layouts/post.js
import DefaultLayout from '@layouts/default'
import Head from 'next/head'
import Link from 'next/link'

export default function PostLayout(props) {
  return (
    <DefaultLayout>
      <Head>
        <title>{props.title}</title>
      </Head>
      <article>
        <h1>{props.title}</h1>
        <div dangerouslySetInnerHTML={{__html: props.content}}/>
        <div><Link href='/'><a>Home</a></Link></div>
      </article>
    </DefaultLayout>
  )
}

next/head is a built-in component to append elements to the <head> of the page. next/link is a built-in component that handles client-side transitions between the routes defined in the pages directory.

Homepage

As part of the index page, aka homepage, we will list all the posts inside the _posts directory. The list will contain the post title and the permalink to the individual post page. The index page will use the <DefaultLayout> and we’ll import the config in the homepage to pass the title and description to the layout.

// pages/index.js
import DefaultLayout from '@layouts/default'
import Link from 'next/link'
import { getConfig, getAllPosts } from '@api'

export default function Blog(props) {
  return (
    <DefaultLayout title={props.title} description={props.description}>
      <p>List of posts:</p>
      <ul>
        {props.posts.map(function(post, idx) {
          return (
            <li key={idx}>
              <Link href={'/posts/' + post.slug}>
                <a>{post.title}</a>
              </Link>
            </li>
          )
        })}
      </ul>
    </DefaultLayout>
  )
}

export async function getStaticProps() {
  const config = await getConfig()
  const allPosts = await getAllPosts()
  return {
    props: {
      posts: allPosts,
      title: config.title,
      description: config.description
    }
  }
}

getStaticProps is called at build time to pre-render pages by passing props to the page’s default component. We use this function to fetch the list of all posts at build time and render the posts archive on the homepage.

Screenshot of the homepage showing the page title, a list with two post titles, and the footer.

Post page

This page will render the title and contents of the post for the slug supplied as part of the context. The post page will use the <PostLayout> component.

// pages/posts/[slug].js
import PostLayout from '@layouts/post'
import { getPostBySlug, getAllPosts } from "@api"

export default function Post(props) {
  return <PostLayout title={props.title} content={props.content}/>
}

export async function getStaticProps(context) {
  return {
    props: await getPostBySlug(context.params.slug)
  }
}

export async function getStaticPaths() {
  let paths = await getAllPosts()
  paths = paths.map(post => ({
    params: { slug: post.slug }
  }))
  return {
    paths: paths,
    fallback: false
  }
}

If a page has dynamic routes, Next.js needs to know all the possible paths at build time. getStaticPaths supplies the list of paths that have to be rendered to HTML at build time. Setting the fallback property to false ensures that visiting a route that does not exist in the list of paths returns a 404 page.

Screenshot of the blog page showing a welcome header and a hello world blue above the footer.

Production ready

Add the following build and start commands to the scripts section of package.json, then run npm run build followed by npm run start to build the static blog and start the production server.

// package.json
"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start"
}

The entire source code in this article is available on this GitHub repository. Feel free to clone it locally and play around with it. The repository also includes some basic placeholders to apply CSS to your blog.

Improvements

The blog, although functional, is perhaps too basic for most use cases. It would be nice to extend the framework, or submit a patch, to include more features like:

  • Pagination
  • Syntax highlighting
  • Categories and Tags for posts
  • Styling

Overall, Next.js seems very promising for building static websites like a blog. Combined with its ability to export static HTML, we can build a truly standalone app without the need for a server!
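As a sketch of that export step (it isn’t covered in this article), a next export script could be added alongside the existing ones; by default the static output lands in an out/ directory:

// package.json (sketch: the "export" script is an addition, not part of the article's setup)
"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start",
  "export": "next build && next export"
}

Running npm run export would then emit plain HTML files that can be hosted on any static file server.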


Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.
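As a quick sketch (the api.example.com endpoints are hypothetical), consuming those REST URLs from the browser might look like this:

// fetching the hypothetical REST endpoints described above
fetch("https://api.example.com/rest/books")
  .then(resp => resp.json())
  .then(books => console.log(books));   // the whole list of books

fetch("https://api.example.com/rest/books/123")
  .then(resp => resp.json())
  .then(book => console.log(book));     // the data for book #123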

That’s the backdrop for the two concepts we’ll be looking at here: GraphQL and Serverless.

GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/graphql/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123,
      "title" : "The Greatest CSS Tricks Vol. I",
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: Node.js and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our app in a fully featured, production-like environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.
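For comparison, a single-purpose Netlify function without Express is just a module that exports a handler; a minimal sketch (not used in this tutorial):

// functions/msg.js: a sketch of a single-purpose function, no Express involved
exports.handler = async () => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hello World" })
});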

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, then running npm init in it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI as a global dependency, since it gives us the netlify dev command we’ll use later:

npm i -g netlify-cli

File structure

There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:

[build]

  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API— say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only  returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
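To make that last point concrete, here’s a sketch (not part of the final code) of a field whose resolver reads from a data source; the books array is a stand-in for a database call:

// sketch only: a resolver backed by data instead of a hard-coded string
const books = [
  { id: 123, title: "The Greatest CSS Tricks Vol. I", author: "Chris Coyier" }
];

const schemaWithData = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      firstBookTitle: {
        type: GraphQLString,
        // in a real app this might query a database
        resolve: () => books[0].title,
      },
    }),
  }),
});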

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{
  "data": {
    "message": "Hello World"
  }
}

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember it and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
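With the redirect in place, the front end can hit the friendlier path; as a sketch (express-graphql also accepts queries sent over GET):

// calling the API through the redirected /api path
fetch("/api?query=" + encodeURIComponent("{ message }"))
  .then(resp => resp.json())
  .then(data => console.log(data)); // { data: { message: "Hello World" } }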

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource to go one step deeper into GraphQL. However, it’s also focused on a more traditional server-full Express.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reason you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.


Building a hexagonal grid using CSS grid

I think of grids as arrangements of rectangles with vertical and horizontal lines running through. And they are, but that doesn’t mean we can’t still do clever things in how we place things on those grids and what we do with the elements afterwards.

In this demo by Jesse Breneman, a grid of hexagons is created by setting up the grid columns with math such that each block can span over three columns and two rows so that the blocks overlap in a way that allows a clip-path to be applied around them. This carves a block into a hexagon that is evenly spaced with the others. Very clever.
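The demo has all the details, but as a rough sketch of the clip-path part (these are generic values, not Jesse’s exact ones), a hexagon can be carved out of a grid item like this:

/* rough sketch: generic hexagon clip-path values, not taken from the demo */
.hex {
  grid-column: span 3;
  grid-row: span 2;
  clip-path: polygon(25% 0%, 75% 0%, 100% 50%, 75% 100%, 25% 100%, 0% 50%);
}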

And, ha, that’s a hell of a domain name Jesse. Personally, I don’t know anything about blogging about CSS at a super cheesy domain name.



Building Your First Serverless Service With AWS Lambda Functions

Many developers are at least marginally familiar with AWS Lambda functions. They’re reasonably straightforward to set up, but the vast AWS landscape can make it hard to see the big picture. With so many different pieces it can be daunting, and frustratingly hard to see how they fit seamlessly into a normal web application.

The Serverless framework is a huge help here. It streamlines the creation, deployment, and most significantly, the integration of Lambda functions into a web app. To be clear, it does much, much more than that, but these are the pieces I’ll be focusing on. Hopefully, this post strikes your interest and encourages you to check out the many other things Serverless supports. If you’re completely new to Lambda you might first want to check out this AWS intro.

There’s no way I can cover the initial installation and setup better than the quick start guide, so start there to get up and running. Assuming you already have an AWS account, you might be up and running in 5–10 minutes; and if you don’t, the guide covers that as well.

Your first Serverless service

Before we get to cool things like file uploads and S3 buckets, let’s create a basic Lambda function, connect it to an HTTP endpoint, and call it from an existing web app. The Lambda won’t do anything useful or interesting, but this will give us a nice opportunity to see how pleasant it is to work with Serverless.

First, let’s create our service. Open any new, or existing web app you might have (create-react-app is a great way to quickly spin up a new one) and find a place to create our services. For me, it’s my lambda folder. Whatever directory you choose, cd into it from terminal and run the following command:

sls create -t aws-nodejs --path hello-world

That creates a new directory called hello-world. Let’s crack it open and see what’s in there.

If you look in handler.js, you should see an async function that returns a message. We could hit sls deploy in our terminal right now, and deploy that Lambda function, which could then be invoked. But before we do that, let’s make it callable over the web.

Working with AWS manually, we’d normally need to go into the AWS API Gateway, create an endpoint, then create a stage, and tell it to proxy to our Lambda. With serverless, all we need is a little bit of config.

Still in the hello-world directory? Open the serverless.yaml file that was created in there.

The config file actually comes with boilerplate for the most common setups. Let’s uncomment the http entries, and add a more sensible path. Something like this:

functions:
  hello:
    handler: handler.hello
#   The following are a few example events you can configure
#   NOTE: Please make sure to change your handler code to work with those events
#   Check the event documentation for details
    events:
      - http:
          path: msg
          method: get

That’s it. Serverless does all the grunt work described above.

CORS configuration 

Ideally, we want to call this from front-end JavaScript code with the Fetch API, but that unfortunately means we need CORS to be configured. This section will walk you through that.

Under the http event in the configuration above, add cors: true, like this:

functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: msg
          method: get
          cors: true

That’s the section! CORS is now configured on our API endpoint, allowing cross-origin communication.

CORS Lambda tweak

While our HTTP endpoint is configured for CORS, it’s up to our Lambda to return the right headers. That’s just how CORS works. Let’s automate that by heading back into handler.js, and adding this function:

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

Before returning from the Lambda, we’ll send the return value through that function. Here’s the entirety of handler.js with everything we’ve done up to this point:

'use strict';
const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

module.exports.hello = async event => {
  return CorsResponse("HELLO, WORLD!");
};

Let’s run it. Type sls deploy into your terminal from the hello-world folder.

When that runs, we’ll have deployed our Lambda function to an HTTP endpoint that we can call via Fetch. But… where is it? We could crack open our AWS console, find the gateway API that serverless created for us, then find the Invoke URL. It would look something like this.

The AWS console showing the Settings tab which includes Cache Settings. Above that is a blue notice that contains the invoke URL.

Fortunately, there is an easier way, which is to type sls info into our terminal:

Just like that, we can see that our Lambda function is available at the following path:

https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg

Woot, now let’s call it!

Now let’s open up a web app and try fetching it. Here’s what our Fetch will look like:

fetch("https://6xpmc3g0ch.execute-api.us-east-1.amazonaws.com/dev/msg")   .then(resp => resp.json())   .then(resp => {     console.log(resp);   });

We should see our message in the dev console.

Console output showing Hello World.

Now that we’ve gotten our feet wet, let’s repeat this process. This time, though, let’s make a more interesting, useful service. Specifically, let’s make the canonical “resize an image” Lambda, but instead of being triggered by a new S3 bucket upload, let’s let the user upload an image directly to our Lambda. That’ll remove the need to bundle any kind of aws-sdk resources in our client-side bundle.

Building a useful Lambda

OK, from the start! This particular Lambda will take an image, resize it, then upload it to an S3 bucket. First, let’s create a new service. I’m calling it cover-art but it could certainly be anything else.

sls create -t aws-nodejs --path cover-art

As before, we’ll add a path to our HTTP endpoint (which in this case will be a POST, instead of GET, since we’re sending the file instead of receiving it) and enable CORS:

# Same as before
  events:
    - http:
        path: upload
        method: post
        cors: true

Next, let’s grant our Lambda access to whatever S3 buckets we’re going to use for the upload. Look in your YAML file — there should be an iamRoleStatements section that contains boilerplate code that’s been commented out. We can leverage some of that by uncommenting it. Here’s the config we’ll use to enable the S3 buckets we want:

iamRoleStatements:
  - Effect: "Allow"
    Action:
      - "s3:*"
    Resource: ["arn:aws:s3:::your-bucket-name/*"]

Note the /* on the end. We don’t list specific bucket names in isolation, but rather paths to resources; in this case, that’s any resources that happen to exist inside your-bucket-name.

Since we want to upload files directly to our Lambda, we need to make one more tweak. Specifically, we need to configure the API endpoint to accept multipart/form-data as a binary media type. Locate the provider section in the YAML file:

provider:
  name: aws
  runtime: nodejs12.x

…and modify it to:

provider:
  name: aws
  runtime: nodejs12.x
  apiGateway:
    binaryMediaTypes:
      - 'multipart/form-data'

For good measure, let’s give our function an intelligent name. Replace handler: handler.hello with handler: handler.upload, then change module.exports.hello to module.exports.upload in handler.js.

Now we get to write some code

First, let’s grab some helpers.

npm i jimp uuid lambda-multipart-parser

Wait, what’s Jimp? It’s the library I’m using to resize uploaded images. uuid will be for creating new, unique file names of the sized resources, before uploading to S3. Oh, and lambda-multipart-parser? That’s for parsing the file info inside our Lambda.

Next, let’s make a convenience helper for S3 uploading:

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  const params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };

  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

Lastly, we’ll plug in some code that reads the upload files, resizes them with Jimp (if needed) and uploads the result to S3. The final result is below.

'use strict';
const AWS = require("aws-sdk");
const { S3 } = AWS;
const path = require("path");
const Jimp = require("jimp");
const uuid = require("uuid/v4");
const awsMultiPartParser = require("lambda-multipart-parser");

const CorsResponse = obj => ({
  statusCode: 200,
  headers: {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Headers": "*",
    "Access-Control-Allow-Methods": "*"
  },
  body: JSON.stringify(obj)
});

const uploadToS3 = (fileName, body) => {
  const s3 = new S3({});
  var params = { Bucket: "your-bucket-name", Key: `/${fileName}`, Body: body };
  return new Promise(res => {
    s3.upload(params, function(err, data) {
      if (err) {
        return res(CorsResponse({ error: true, message: err }));
      }
      res(CorsResponse({
        success: true,
        url: `https://${params.Bucket}.s3.amazonaws.com/${params.Key}`
      }));
    });
  });
};

module.exports.upload = async event => {
  const formPayload = await awsMultiPartParser.parse(event);
  const MAX_WIDTH = 50;
  return new Promise(res => {
    Jimp.read(formPayload.files[0].content, function(err, image) {
      if (err || !image) {
        return res(CorsResponse({ error: true, message: err }));
      }
      const newName = `${uuid()}${path.extname(formPayload.files[0].filename)}`;
      if (image.bitmap.width > MAX_WIDTH) {
        image.resize(MAX_WIDTH, Jimp.AUTO);
        image.getBuffer(image.getMIME(), (err, body) => {
          if (err) {
            return res(CorsResponse({ error: true, message: err }));
          }
          return res(uploadToS3(newName, body));
        });
      } else {
        image.getBuffer(image.getMIME(), (err, body) => {
          if (err) {
            return res(CorsResponse({ error: true, message: err }));
          }
          return res(uploadToS3(newName, body));
        });
      }
    });
  });
};

I’m sorry to dump so much code on you but — this being a post about Amazon Lambda and serverless — I’d rather not belabor the grunt work within the serverless function. Of course, yours might look completely different if you’re using an image library other than Jimp.

Let’s run it by uploading a file from our client. I’m using the react-dropzone library, so my JSX looks like this:

<Dropzone
  onDrop={files => onDrop(files)}
  multiple={false}
>
  <div>Click or drag to upload a new cover</div>
</Dropzone>

The onDrop function looks like this:

const onDrop = files => {
  let request = new FormData();
  request.append("fileUploaded", files[0]);

  fetch("https://yb1ihnzpy8.execute-api.us-east-1.amazonaws.com/dev/upload", {
    method: "POST",
    mode: "cors",
    body: request
  })
  .then(resp => resp.json())
  .then(res => {
    if (res.error) {
      // handle errors
    } else {
      // success - woo hoo - update state as needed
    }
  });
};

And just like that, we can upload a file and see it appear in our S3 bucket! 

Screenshot of the AWS interface for buckets showing an uploaded file in a bucket that came from the Lambda function.

An optional detour: bundling

There’s one optional enhancement we could make to our setup. Right now, when we deploy our service, Serverless is zipping up the entire services folder and sending all of it to our Lambda. The content currently weighs in at 10MB, since all of our node_modules are getting dragged along for the ride. We can use a bundler to drastically reduce that size. Not only that, but a bundler will cut deploy time, data usage, cold start performance, etc. In other words, it’s a nice thing to have.

Fortunately for us, there’s a plugin that easily integrates webpack into the serverless build process. Let’s install it with:

npm i serverless-webpack --save-dev

…and add it via our YAML config file. We can drop this in at the very end:

# Same as before
plugins:
  - serverless-webpack

Naturally, we need a webpack.config.js file, so let’s add that to the mix:

const path = require("path");

module.exports = {
  entry: "./handler.js",
  output: {
    libraryTarget: 'commonjs2',
    path: path.join(__dirname, '.webpack'),
    filename: 'handler.js',
  },
  target: "node",
  mode: "production",
  externals: ["aws-sdk"],
  resolve: {
    mainFields: ["main"]
  }
};

Notice that we’re setting target: node so Node-specific assets are treated properly. Also note that you may need to set the output filename to handler.js. I’m also adding aws-sdk to the externals array so webpack doesn’t bundle it at all; instead, it’ll leave the call to const AWS = require("aws-sdk"); alone, allowing it to be handled by our Lambda at runtime. This is OK since Lambdas already have the aws-sdk available implicitly, meaning there’s no need for us to send it over the wire. Finally, the mainFields: ["main"] is to tell webpack to ignore any ESM module fields. This is necessary to fix some issues with the Jimp library.

Now let’s re-deploy, and hopefully we’ll see webpack running.

Now our code is bundled nicely into a single file that’s 935K, which zips down further to a mere 337K. That’s a lot of savings!

Odds and ends

If you’re wondering how you’d send other data to the Lambda, you’d add what you want to the request object, of type FormData, from before. For example:

request.append("xyz", "Hi there");

…and then read formPayload.xyz in the Lambda. This can be useful if you need to send a security token, or other file info.
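Inside the Lambda, that extra field shows up on the parsed payload alongside files; a sketch:

// inside the Lambda handler (sketch): non-file fields are exposed as plain properties
const formPayload = await awsMultiPartParser.parse(event);
console.log(formPayload.xyz); // "Hi there"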

If you’re wondering how you might configure env variables for your Lambda, you might have guessed by now that it’s as simple as adding some fields to your serverless.yaml file. It even supports reading the values from an external file (presumably not committed to git). This blog post by Philipp Müns covers it well.
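For a rough idea of what that looks like (the variable name here is made up), the Serverless Framework lets you declare environment variables under the provider section and read them with process.env in the Lambda:

# serverless.yml (sketch: BUCKET_NAME is a hypothetical variable)
provider:
  name: aws
  runtime: nodejs12.x
  environment:
    BUCKET_NAME: your-bucket-name

Inside handler.js, the value would then be available as process.env.BUCKET_NAME.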

Wrapping up

Serverless is an incredible framework. I promise, we’ve barely scratched the surface. Hopefully this post has shown you its potential, and motivated you to check it out even further.

If you’re interested in learning more, I’d recommend the learning materials from David Wells, an engineer at Netlify and former member of the serverless team, as well as the Serverless Handbook by Swizec Teller.


CSS Animation Timelines: Building a Rube Goldberg Machine

If you’re going to build a multi-step CSS animation or transition, you have a particular conundrum. The second step needs a delay that is equal to the duration of the first step. And the third step is equal to the duration of the first two steps, plus any delay in between. It gets more and more complicated until you might just be like, nahhhhh I’ll use more technology to help me.

Paul Hebert:

Lately I’ve been using custom properties to plan out pure CSS timelines for complex animations.

Cool. And it can get completely nuts.
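The gist of the approach, sketched here in generic terms rather than Paul’s actual code, is to store each step’s duration in a custom property and derive the delays with calc():

/* rough sketch of the idea; step names and keyframes are placeholders */
:root {
  --step-1-duration: 0.5s;
  --step-2-duration: 1s;
  --step-3-duration: 0.75s;

  --step-2-delay: var(--step-1-duration);
  --step-3-delay: calc(var(--step-2-delay) + var(--step-2-duration));
}

.step-2 { animation: slide var(--step-2-duration) var(--step-2-delay) both; }
.step-3 { animation: fade var(--step-3-duration) var(--step-3-delay) both; }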



Building a Scalable CSS Architecture With BEM and Utility Classes

Maintaining a large-scale CSS project is hard. Over the years, we’ve witnessed different approaches aimed at easing the process of writing scalable CSS. In the end, we all try to meet the following two goals:

  1. Efficiency: we want to reduce the time spent thinking about how things should be done and increase the time doing things.
  2. Consistency: we want to make sure all developers are on the same page.

For the past year and a half, I’ve been working on a component library and a front-end framework called CodyFrame. We currently have 220+ components. These components are not isolated modules: they’re reusable patterns, often merged into each other to create complex templates.

The challenges of this project have forced our team to develop a way of building scalable CSS architectures. This method relies on CSS globals, BEM, and utility classes.

I’m happy to share it! 👇

CSS Globals in 30 seconds

Globals are CSS files containing rules that apply crosswise to all components (e.g., spacing scale, typography scale, colors, etc.). Globals use tokens to keep the design consistent across all components and reduce the size of their CSS.

Here’s an example of typography global rules:

/* Typography | Global */
:root {
  /* body font size */
  --text-base-size: 1em;

  /* type scale */
  --text-scale-ratio: 1.2;
  --text-xs: calc((1em / var(--text-scale-ratio)) / var(--text-scale-ratio));
  --text-sm: calc(var(--text-xs) * var(--text-scale-ratio));
  --text-md: calc(var(--text-sm) * var(--text-scale-ratio) * var(--text-scale-ratio));
  --text-lg: calc(var(--text-md) * var(--text-scale-ratio));
  --text-xl: calc(var(--text-lg) * var(--text-scale-ratio));
  --text-xxl: calc(var(--text-xl) * var(--text-scale-ratio));
}

@media (min-width: 64rem) { /* responsive decision applied to all text elements */
  :root {
    --text-base-size: 1.25em;
    --text-scale-ratio: 1.25;
  }
}

h1, .text-xxl   { font-size: var(--text-xxl, 2.074em); }
h2, .text-xl    { font-size: var(--text-xl, 1.728em); }
h3, .text-lg    { font-size: var(--text-lg, 1.44em); }
h4, .text-md    { font-size: var(--text-md, 1.2em); }
.text-base      { font-size: 1em; }
small, .text-sm { font-size: var(--text-sm, 0.833em); }
.text-xs        { font-size: var(--text-xs, 0.694em); }

BEM in 30 seconds

BEM (Blocks, Elements, Modifiers) is a naming methodology aimed at creating reusable components.

Here’s an example:

<header class="header">
  <a href="#0" class="header__logo"><!-- ... --></a>
  <nav class="header__nav">
    <ul>
      <li><a href="#0" class="header__link header__link--active">Homepage</a></li>
      <li><a href="#0" class="header__link">About</a></li>
      <li><a href="#0" class="header__link">Contact</a></li>
    </ul>
  </nav>
</header>
  • A block is a reusable component
  • An element is a child of the block (e.g., .block__element)
  • A modifier is a variation of a block/element (e.g., .block--modifier, .block__element--modifier).
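To tie the naming back to the stylesheet, the CSS for the header example above stays completely flat; the declarations here are only illustrative:

/* flat, low-specificity selectors paired with the markup above (illustrative values) */
.header { display: flex; align-items: center; }
.header__logo { margin-right: auto; }
.header__link { text-decoration: none; }
.header__link--active { font-weight: bold; }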

Utility classes in 30 seconds

A utility class is a CSS class meant to do only one thing. For example:

<section class="padding-md">
  <h1>Title</h1>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</section>

<style>
  .padding-sm { padding: 0.75em; }
  .padding-md { padding: 1.25em; }
  .padding-lg { padding: 2em; }
</style>

You can potentially build entire components out of utility classes:

<article class="padding-md bg radius-md shadow-md">
  <h1 class="text-lg color-contrast-higher">Title</h1>
  <p class="text-sm color-contrast-medium">Lorem ipsum dolor sit amet consectetur adipisicing elit.</p>
</article>

You can connect utility classes to CSS globals:

/* Spacing | Global */
:root {
  --space-unit: 1em;
  --space-xs:   calc(0.5 * var(--space-unit));
  --space-sm:   calc(0.75 * var(--space-unit));
  --space-md:   calc(1.25 * var(--space-unit));
  --space-lg:   calc(2 * var(--space-unit));
  --space-xl:   calc(3.25 * var(--space-unit));
}

/* responsive rule affecting all spacing variables */
@media (min-width: 64rem) {
  :root {
    --space-unit: 1.25em; /* 👇 this responsive decision affects all margins and paddings */
  }
}

/* margin and padding util classes - apply spacing variables */
.margin-xs { margin: var(--space-xs); }
.margin-sm { margin: var(--space-sm); }
.margin-md { margin: var(--space-md); }
.margin-lg { margin: var(--space-lg); }
.margin-xl { margin: var(--space-xl); }

.padding-xs { padding: var(--space-xs); }
.padding-sm { padding: var(--space-sm); }
.padding-md { padding: var(--space-md); }
.padding-lg { padding: var(--space-lg); }
.padding-xl { padding: var(--space-xl); }

A real-life example

Explaining a methodology using basic examples doesn’t bring up the real issues nor the advantages of the method itself.

Let’s build something together! 

We’ll create a gallery of card elements. First, we’ll do it using only the BEM approach, and we’ll point out the issues you may face by going BEM only. Next, we’ll see how Globals reduce the size of your CSS. Finally, we’ll make the component customizable introducing utility classes to the mix.

Here’s a look at the final result:

Let’s start this experiment by creating the gallery using only BEM:

<div class="grid">
  <article class="card">
    <a class="card__link" href="#0">
      <figure>
        <img class="card__img" src="/image.jpg" alt="Image description">
      </figure>

      <div class="card__content">
        <h1 class="card__title-wrapper"><span class="card__title">Title of the card</span></h1>

        <p class="card__description">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempore, totam?</p>
      </div>

      <div class="card__icon-wrapper" aria-hidden="true">
        <svg class="card__icon" viewBox="0 0 24 24"><!-- icon --></svg>
      </div>
    </a>
  </article>

  <article class="card"><!-- card --></article>
  <article class="card"><!-- card --></article>
  <article class="card"><!-- card --></article>
</div>

In this example, we have two components: .grid and .card. The first one is used to create the gallery layout. The second one is the card component.
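The article focuses on the card, so the .grid styles aren’t shown; as a minimal sketch of how such a layout component might be defined (the values are illustrative, not the actual CodyFrame code):

/* sketch of a possible .grid layout component */
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(16rem, 1fr));
  grid-gap: var(--space-md);
}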

First of all, let me point out the main advantages of using BEM: low specificity and scope.

/* without BEM */
.grid {}
.card {}
.card > a {}
.card img {}
.card-content {}
.card .title {}
.card .description {}

/* with BEM */
.grid {}
.card {}
.card__link {}
.card__img {}
.card__content {}
.card__title {}
.card__description {}

If you don’t use BEM (or a similar naming method), you end up creating inheritance relationships (.card > a).

/* without BEM */
.card > a.active {} /* high specificity */

/* without BEM, when things go really bad */
div.container main .card.is-featured > a.active {} /* good luck with that 😦 */

/* with BEM */
.card__link--active {} /* low specificity */

Dealing with inheritance and specificity in big projects is painful. That feeling when your CSS doesn’t seem to be working, and you find out it’s been overwritten by another class 😡! BEM, on the other hand, creates some kind of scope for your components and keeps specificity low.

But… there are two main downsides of using only BEM:

  1. Naming too many things is frustrating
  2. Minor customizations are not easy to do or maintain

In our example, to stylize the components, we’ve created the following classes:

.grid {}
.card {}
.card__link {}
.card__img {}
.card__content {}
.card__title-wrapper {}
.card__title {}
.card__description {}
.card__icon-wrapper {}
.card__icon {}

The number of classes is not the issue. The issue is coming up with so many meaningful names (and having all your teammates use the same naming criteria).

For example, imagine you have to modify the card component by including an additional, smaller paragraph:

<div class="card__content">
  <h1 class="card__title-wrapper"><span class="card__title">Title of the card</span></h1>
  <p class="card__description">Lorem ipsum dolor...</p>
  <p class="card__description card__description--small">Lorem ipsum dolor...</p> <!-- 👈 -->
</div>

What do you call it? You could consider it a variation of the .card__description element and go for .card__description .card__description--small. Or, you could create a new element, something like .card__small, .card__small-p, or .card__tag. See where I’m going? No one wants to spend time thinking about class names. BEM is great as long as you don’t have to name too many things.

The second issue is dealing with minor customizations. For example, imagine you have to create a variation of the card component where the text is center-aligned.

You’ll probably do something like this:

<div class="card__content card__content--center"> <!-- 👈 -->
  <h1 class="card__title-wrapper"><span class="card__title">Title of the card</span></h1>
  <p class="card__description">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempore, totam?</p>
</div>

<style>
  .card__content--center { text-align: center; }
</style>

One of your teammates, working on another component (.banner), is facing the same problem. They create a variation for their component as well:

<div class="banner banner--text-center"></div>

<style>
  .banner--text-center { text-align: center; }
</style>

Now imagine you have to include the banner component into a page. You need the variation where the text is aligned in the center. Without checking the CSS of the banner component, you may instinctively write something like banner banner--center in your HTML, because you always use --center when you create variations where the text is center-aligned. Not working! Your only option is to open the CSS file of the banner component, inspect the code, and find out what class should be applied to align the text in the center.

How long would it take, 5 minutes? Multiply 5 minutes by all the times this happens in a day, to you and all your teammates, and you realize how much time is wasted. Plus, adding new classes that do the same thing contributes to bloating your CSS.

CSS Globals and utility classes to the rescue

The first advantage of setting global styles is having a set of CSS rules that apply to all the components.

For example, if we set responsive rules in the spacing and typography globals, these rules will affect the grid and card components as well. In CodyFrame, we increase the body font size at a specific breakpoint; because we use “em” units for all margins and paddings, the whole spacing system is updated at once generating a cascade effect.

Spacing and typography responsive rules — no media queries on a component level 

As a consequence, in most cases, you won’t need to use media queries to increase the font size or the values of margins and paddings!

/* without globals */
.card { padding: 1em; }

@media (min-width: 48rem) {
  .card { padding: 2em; }
  .card__content { font-size: 1.25em; }
}

/* with globals (responsive rules intrinsically applied) */
.card { padding: var(--space-md); }
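To make that cascade effect a bit more concrete, here’s a minimal sketch of how such spacing globals could be defined. The variable names and values are just an illustration of the idea, not CodyFrame’s actual source:

/* globals/_spacing.scss — illustrative sketch */
:root {
  --space-unit: 1em;
  --space-sm: calc(0.75 * var(--space-unit));
  --space-md: calc(1.25 * var(--space-unit));
  --space-lg: calc(2 * var(--space-unit));
}

/* bump the body font size at a breakpoint; because the spacing
   variables are em-based, margins and paddings scale along with it */
@media (min-width: 48rem) {
  body { font-size: 1.25em; }
}

Any component using var(--space-md) for its padding picks up the change automatically, with no media query of its own.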

Not just that! You can use the globals to store behavioral components that can be combined with all other components. For example, in CodyFrame, we define a .text-component class that is used as a “text wrapper.” It takes care of line height, vertical spacing, basic styling, and other things.

If we go back to our card example, the .card__content element could be replaced with the following:

<!-- without globals -->
<div class="card__content">
  <h1 class="card__title-wrapper"><span class="card__title">Title of the card</span></h1>
  <p class="card__description">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempore, totam?</p>
</div>

<!-- with globals -->
<div class="text-component">
  <h1 class="text-lg"><span class="card__title">Title of the card</span></h1>
  <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempore, totam?</p>
</div>

The text component will take care of the text formatting, and make it consistent across all the text blocks in your project. Plus, we’ve already eliminated a couple of BEM classes.
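To give you an idea, a text component global might look something like the sketch below — the exact rules here are an assumption for illustration, not CodyFrame’s implementation:

/* globals/_typography.scss — illustrative sketch */
.text-component {
  line-height: 1.5;
}

/* vertical rhythm between the text blocks inside the component */
.text-component > * + * {
  margin-top: 0.75em;
}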

Finally, let’s introduce the utility classes to the mix!

Utility classes are particularly useful if you want the ability to customize the component later on without having to check its CSS.

Here’s how the structure of the card component changes if we swap some BEM classes with utility classes:

<article class="card radius-lg">
  <a href="#0" class="block color-inherit text-decoration-none">
    <figure>
      <img class="block width-100%" src="image.jpg" alt="Image description">
    </figure>

    <div class="text-component padding-md">
      <h1 class="text-lg"><span class="card__title">Title of the card</span></h1>
      <p class="color-contrast-medium">Lorem ipsum dolor sit amet consectetur adipisicing elit. Tempore, totam?</p>
    </div>

    <div class="card__icon-wrapper" aria-hidden="true">
      <svg class="icon icon--sm color-white" viewBox="0 0 24 24"><!-- icon --></svg>
    </div>
  </a>
</article>

The number of BEM (component) classes has shrunk from 9 to 3:

.card {} .card__title {} .card__icon-wrapper {}

That means you won’t deal much with naming things. That said, we can’t avoid the naming issue entirely: even if you create Vue/React/SomeOtherFramework components out of utility classes, you still have to name the components.

All the other BEM classes have been replaced by utility classes. What if you have to make a card variation with a bigger title? Replace text-lg with text-xl. What if you want to change the icon color? Replace color-white with color-primary. How about aligning the text in the center? Add text-center to the text-component element. Less time thinking, more time doing!
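If you’re wondering what these utilities boil down to, they’re typically single-purpose, single-rule classes. Here’s a minimal sketch — the class names mirror the ones used above, but the values are assumptions:

/* globals/_util.scss — illustrative sketch */
.block { display: block; }
.width-100\% { width: 100%; }
.text-center { text-align: center; }
.text-lg { font-size: 1.25em; }
.text-xl { font-size: 1.5em; }
.color-white { color: white; }
.color-inherit { color: inherit; }
.text-decoration-none { text-decoration: none; }
.padding-md { padding: var(--space-md); }
.radius-lg { border-radius: 0.5em; }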

Why don’t we just use utility classes?

Utility classes speed up the design process and make it easier to customize things. So why don’t we forget about BEM and use only utility classes? Two main reasons:

The first is readability: by using BEM together with utility classes, the HTML stays easier to read and customize.

Use BEM for:

  • DRY-ing the HTML from the CSS you don’t plan on customizing (e.g., behavioral CSS like transitions, positioning, and hover/focus effects),
  • advanced animations/effects.

Use utility classes for:

  • the “frequently-customized” properties, often used to create component variations (like padding, margin, text-alignment, etc.),
  • elements that are hard to identify with a new, meaningful class name (e.g., you need a parent element with a position: relative → create <div class="position-relative"><div class="my-component"></div></div>).

Example: 

<!-- use only Utility classes -->
<article class="position-relative overflow-hidden bg radius-lg transition-all duration-300 hover:shadow-md col-6@sm col-4@md">
  <!-- card content -->
</article>

<!-- use BEM + Utility classes -->
<article class="card radius-lg col-6@sm col-4@md">
  <!-- card content -->
</article>

For these reasons, we suggest that you don’t add the !important rule to your utility classes. Using utility classes doesn’t need to be like using a hammer. Do you think it would be beneficial to access and modify a CSS property in the HTML? Use a utility class. Do you need a bunch of rules that won’t need editing? Write them in your CSS. This process doesn’t need to be perfect the first time you do it: you can tweak the component later on if required. It may sound laborious having to decide each time, but it’s quite straightforward once you put it into practice.

The second reason is that utility classes are not your best ally when it comes to creating unique effects and animations.

Think about working with pseudo-elements, or crafting unique motion effects that require custom bezier curves. For those, you still need to open your CSS file.

Consider, for example, the animated background effect of the card we’ve designed. How hard would it be to create such an effect using utility classes?

The same goes for the icon animation, which requires animation keyframes to work:

.card:hover .card__title {
  background-size: 100% 100%;
}

.card:hover .card__icon-wrapper .icon {
  animation: card-icon-animation .3s;
}

.card__title {
  background-image: linear-gradient(transparent 50%, alpha(var(--color-primary), 0.2) 50%);
  background-repeat: no-repeat;
  background-position: left center;
  background-size: 0% 100%;
  transition: background .3s;
}

.card__icon-wrapper {
  position: absolute;
  top: 0;
  right: 0;
  width: 3em;
  height: 3em;
  background-color: alpha(var(--color-black), 0.85);
  border-bottom-left-radius: var(--radius-lg);
  display: flex;
  justify-content: center;
  align-items: center;
}

@keyframes card-icon-animation {
  0%, 100% {
    opacity: 1;
    transform: translateX(0%);
  }
  50% {
    opacity: 0;
    transform: translateX(100%);
  }
  51% {
    opacity: 0;
    transform: translateX(-100%);
  }
}

Final result

Here’s the final version of the cards gallery. It also includes grid utility classes to customize the layout.

File structure

Here’s what the structure of a project built using the method described in this article might look like:

project/
└── main/
    ├── assets/
    │   ├── css/
    │   │   ├── components/
    │   │   │   ├── _card.scss
    │   │   │   ├── _footer.scss
    │   │   │   └── _header.scss
    │   │   ├── globals/
    │   │   │   ├── _accessibility.scss
    │   │   │   ├── _breakpoints.scss
    │   │   │   ├── _buttons.scss
    │   │   │   ├── _colors.scss
    │   │   │   ├── _forms.scss
    │   │   │   ├── _grid-layout.scss
    │   │   │   ├── _icons.scss
    │   │   │   ├── _reset.scss
    │   │   │   ├── _spacing.scss
    │   │   │   ├── _typography.scss
    │   │   │   ├── _util.scss
    │   │   │   ├── _visibility.scss
    │   │   │   └── _z-index.scss
    │   │   ├── _globals.scss
    │   │   ├── style.css
    │   │   └── style.scss
    │   └── js/
    │       ├── components/
    │       │   └── _header.js
    │       └── util.js
    └── index.html

You can store the CSS (or SCSS) of each component into a separate file (and, optionally, use PostCSS plugins to compile each new /component/componentName.css file into style.css). Feel free to organize the globals as you prefer; you could also create a single globals.css file and avoid separating the globals in different files.
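As a rough sketch of how the entry point could tie everything together (the file names follow the tree above; the import order is an assumption):

// style.scss — illustrative sketch
@import 'globals';           // pulls in the partials listed in _globals.scss

// components
@import 'components/card';
@import 'components/header';
@import 'components/footer';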

Conclusion

Working on large-scale projects requires a solid architecture if you want to be able to open your files months later and not get lost. There are many methods out there that tackle this issue (CSS-in-JS, utility-first, atomic design, etc.).

The method I’ve shared with you today relies on creating crosswise rules (globals), using utility classes for rapid development, and BEM for modular (behavioral) classes.

You can learn in more detail about this method on CodyHouse. Any feedback is welcome!

The post Building a Scalable CSS Architecture With BEM and Utility Classes appeared first on CSS-Tricks.


Performant Expandable Animations: Building Keyframes on the Fly

Animations have come a long way, continuously providing developers with better tools. CSS Animations, in particular, have defined the ground floor to solve the majority of uses cases. However, there are some animations that require a little bit more work.

You probably know that animations should run on the composite layer. (I won’t extend myself here, but if you want to know more, check this article.) That means animating transform or opacity properties that don’t trigger layout or paint layers. Animating properties like height and width is a big no-no, as they trigger those layers, which force the browser to recalculate styles.

On top of that, even when animating transform properties, if you want to truly hit 60 FPS animations, you probably should get a little help from JavaScript, using the FLIP technique for even smoother animations!

However, the problem of using transform for expandable animations is that the scale function isn’t exactly the same as animating width/height properties. It creates a skewed effect on the content, as all elements get stretched (when scaling up) or squeezed (when scaling down).

So, because of that, my go-to solution has been (and probably still is, for reasons I will detail later) technique #3 from Brandon Smith’s article. This still has a transition on height, but uses JavaScript to calculate the content size and force a transition using requestAnimationFrame. At OutSystems, we actually used this to build the animation for the OutSystems UI Accordion Pattern.

Generating keyframes with JavaScript

Recently, I stumbled on another great article from Paul Lewis, that details a new solution for expanding and collapsing animations, which motivated me to write this article and spread this technique around.

Using his words, the main idea consists of generating dynamic keyframes, stepping…

[…] from 0 to 100 and calculate what scale values would be needed for the element and its contents. These can then be boiled down to a string, which can be injected into the page as a style element. 

To achieve this, there are three main steps.

Step 1: Calculate the start and end states

We need to calculate the correct scale value for both states. That means we use getBoundingClientRect() on the element that will serve as a proxy for the start state, and divide it by the value from the end state. It should be something like this:

function calculateStartScale () {
  const start = startElement.getBoundingClientRect();
  const end = endElement.getBoundingClientRect();
  return {
    x: start.width / end.width,
    y: start.height / end.height
  };
}

Step 2: Generate the Keyframes

Now, we need to run a for loop, using the number of frames needed as the length. (It shouldn’t really be less than 60 to ensure a smooth animation.) Then, in each iteration, we calculate the correct easing value, using an ease function:

function ease (v, pow=4) {   return 1 - Math.pow(1 - v, pow); }  let easedStep = ease(i / frame);

With that value, we’ll get the scale of the element on the current step, using the following math:

const xScale = x + (1 - x) * easedStep; const yScale = y + (1 - y) * easedStep;

And then we add the step to the animation string:

animation += `${step}% {
  transform: scale(${xScale}, ${yScale});
}`;

To avoid the content getting stretched or skewed, we should perform a counter animation on it, using the inverted values:

const invXScale = 1 / xScale;
const invYScale = 1 / yScale;

inverseAnimation += `${step}% {
  transform: scale(${invXScale}, ${invYScale});
}`;

Finally, we can return the completed animations, or directly inject them in a newly created style tag.
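Putting those pieces together, the injection step might look something like this sketch — the function and animation names here are placeholders, not Paul Lewis’s exact code:

// illustrative sketch: wrap the generated keyframe steps in @keyframes
// rules and inject them into the document head
function injectAnimations(animation, inverseAnimation) {
  const style = document.createElement('style');
  style.textContent = `
    @keyframes expandAnimation { ${animation} }
    @keyframes expandContentsAnimation { ${inverseAnimation} }
  `;
  document.head.appendChild(style);
}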

Step 3: Enable the CSS animations 

On the CSS side of things, we need to enable the animations on the correct elements:

.element--expanded {   animation-name: animation;   animation-duration: 300ms;   animation-timing-function: step-end; }  .element-contents--expanded {   animation-name: inverseAnimation ;   animation-duration: 300ms;   animation-timing-function: step-end; }

You can check out the example of a menu from Paul Lewis’s article on CodePen (courtesy of Chris Coyier):

Building an expandable section 

After grasping these baseline concepts, I wanted to check if I could apply this technique to a different use case, like an expandable section.

In this case, we only need to animate the height, which changes the function that calculates the scales. We get the Y value from the section title, to serve as the collapsed state, and from the whole section, to represent the expanded state:

_calculateScales () {
  var collapsed = this._sectionItemTitle.getBoundingClientRect();
  var expanded = this._section.getBoundingClientRect();

  // create a CSS variable with the collapsed height, to apply on the wrapper
  this._sectionWrapper.style.setProperty('--title-height', collapsed.height + 'px');

  this._collapsed = {
    y: collapsed.height / expanded.height
  }
}

Since we want the expanded section to have absolute positioning (in order to avoid it taking space when in a collapsed state), we are setting the CSS variable for it with the collapsed height, applied on the wrapper. That will be the only element with relative positioning.

Next comes the function to create the keyframes: _createEaseAnimations(). This doesn’t differ much from what was explained above. For this use case, we actually need to create four animations:

  1. The animation to expand the wrapper
  2. The counter-expand animation on the content
  3. The animation to collapse the wrapper
  4. The counter-collapse animation on the content

We follow the same approach as before, running a for loop with a length of 60 (to get a smooth 60 FPS animation), and create a keyframe percentage, based on the eased step. Then, we push it to the final animations strings:

outerAnimation.push(`
  ${percentage}% {
    transform: scaleY(${yScale});
  }`);

innerAnimation.push(`
  ${percentage}% {
    transform: scaleY(${invScaleY});
  }`);

We start by creating a style tag to hold the finished animations. As this is built as a constructor, to make it easy to add multiple patterns, we want to have all these generated animations in the same stylesheet. So, first, we check if the element exists. If not, we create it and add a meaningful class name. Otherwise, you would end up with a stylesheet for each expandable section, which is not ideal.

 var sectionEase = document.querySelector('.section-animations');  if (!sectionEase) {   sectionEase = document.createElement('style');   sectionEase.classList.add('section-animations');  }

Speaking of that, you may already be wondering, “Hmm, if we have multiple expandable sections, wouldn’t they still be using the same-named animation, with possibly wrong values for their content?” 

You’re absolutely right! So, to prevent that, we are also generating dynamic animation names. Cool, right?

We make use of the index passed to the constructor from the for loop when making the querySelectorAll('.section') to add a unique element to the name:

var sectionExpandAnimationName = "sectionExpandAnimation" + index; var sectionExpandContentsAnimationName = "sectionExpandContentsAnimation" + index;

Then we use this name to set a CSS variable on the current expandable section. As this variable is only in this scope, we just need to set the animation to the new variable in the CSS, and each pattern will get its respective animation-name value.
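In code, that step might look something like this — a sketch based on the names used above, not necessarily the exact implementation:

// illustrative sketch: scope the generated animation names to this section
// (custom properties inherit, so the content element can read them too)
this._section.style.setProperty('--sectionExpandAnimation', sectionExpandAnimationName);
this._section.style.setProperty('--sectionExpandContentsAnimation', sectionExpandContentsAnimationName);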

.section.is--expanded {   animation-name: var(--sectionExpandAnimation); }  .is--expanded .section-item {   animation-name: var(--sectionExpandContentsAnimation); }  .section.is--collapsed {   animation-name: var(--sectionCollapseAnimation); }  .is--collapsed .section-item {   animation-name: var(--sectionCollapseContentsAnimation); }

The rest of the script is related to adding event listeners, functions to toggle the collapse/expand status and some accessibility improvements.

About the HTML and CSS: it needs a little bit of extra work to make the expandable functionality work. We need an extra wrapper to be the relative element that doesn’t animate. The expandable children have an absolute position so that they don’t occupy space when collapsed.

Remember, since we need to run counter animations on the content, it gets scaled back to full size in order to avoid the skew effect.

.section-item-wrapper {   min-height: var(--title-height);   position: relative; }  .section {   animation-duration: 300ms;   animation-timing-function: step-end;   contain: content;   left: 0;   position: absolute;   top: 0;   transform-origin: top left;   will-change: transform; }  .section-item {   animation-duration: 300ms;   animation-timing-function: step-end;   contain: content;   transform-origin: top left;   will-change: transform;   }

I would like to highlight the importance of the animation-timing-function property. It should be set to linear or step-end to avoid easing between each keyframe.

The will-change property — as you probably know — will enable GPU acceleration for the transform animation for an even smoother experience. And using the contain property, with a value of content, will help the browser treat the element independently from the rest of the DOM tree, limiting the area it needs to recalculate for layout, style, paint, and size.

We use visibility and opacity to hide the content, and to prevent screen readers from accessing it, when collapsed.

.section-item-content {   opacity: 1;   transition: opacity 500ms ease; }  .is--collapsed .section-item-content {   opacity: 0;   visibility: hidden; }

And finally, we have our expandable section! Here’s the complete code and demo for you to check out:

Performance check

Anytime we work with animations, performance ought to be in the back of our mind. So, let’s use developer tools to check if all this work was worthy, performance-wise. Using the Performance tab (I’m using Chrome DevTools), we can analyze the FPS and the CPU usage, during the animations.

And the results are great!

The higher the green bar, the higher the frames. And there’s no jank either, which would be signaled by red sections.

Using the FPS meter tool to check the values in greater detail, we can see that it constantly hits the 60 FPS mark, even with abusive usage.

Final considerations

So, what’s the verdict? Does this replace all other methods? Is this the “Holy Grail” solution?

In my opinion, no. 

But… that’s OK, really! It’s another solution on the list. And, as with any other method, you should analyze whether it’s the best approach for your use case.

This technique definitely has its merits. As Paul Lewis says, this does take a lot of work to prepare. But, on the flip side, we only need to do it once, when the page loads. During interactions, we are merely toggling classes (and attributes in some cases, for accessibility).

However, this brings some limitations for the UI of the elements. As you could see with the expandable section element, the counter-scale makes it much more reliable for absolute and off-canvas elements, like floating actions or menus. It’s also difficult to style borders because it uses overflow: hidden.

Nevertheless, I think there’s tons of potential with this approach. Let me know what you think!

The post Performant Expandable Animations: Building Keyframes on the Fly appeared first on CSS-Tricks.


Building a Real-Time Chat App with React and Firebase

In this article, we’ll cover key concepts for authenticating a user with Firebase in a real-time chat application. We’ll integrate third-party auth providers (e.g. Google, Twitter and GitHub) and, once users are signed in, we’ll learn how to store user chat data in the Firebase Realtime Database, where we can sync data with a NoSQL cloud database.

The client application is going to be built in React, as it is one of the most popular JavaScript frameworks out there, but the concepts can also be applied to other frameworks.

But first, what is Firebase?

Firebase is Google’s mobile platform for quickly developing apps. Firebase provides a suite of tools for authenticating applications, building reactive client apps, reporting analytics, as well as a host of other helpful resources for managing apps in general. It also provides back-end management for web, iOS, Android, and Unity, a 3D development platform.

Out of the box, Firebase is packaged with features that help developers like ourselves focus on building apps while it handles all server-side logic. Things like:

  • Authentication: This includes support for email and password authentication as well as single sign-on capabilities (via Facebook, Twitter and Google).
  • Realtime database: This is a “NoSQL” database that updates in real time.
  • Cloud functions: These run extra server-side logic.
  • Static hosting: This is a means of serving assets pre-built instead of rendering at runtime.
  • Cloud storage: This gives us a place to store media assets.

Firebase offers a generous free tier that includes authentication and access to their Realtime Database. The authentication providers we’ll be covering — email and password, Google, and GitHub — are free on that tier as well. The Realtime Database allows up to 100 simultaneous connections and 1 gigabyte of storage per month. A full table of pricing can be found on the Firebase website.

Here’s what we’re making

We’re going to build an application called Chatty. It will allow only authenticated users to send and read messages, and users can sign up by providing their email and creating a password, or by authenticating through a Google or GitHub account. Check out the source code if you want to refer to it or take a peek as we get started.

We’ll end up with something like this:

Setting up

You’re going to need a Google account to use Firebase, so snag one if you haven’t already. And once you do, we can officially kick the tires on this thing.

First off, head over to the Firebase Console and click the “Add project” option.

Next, let’s enter a name for the project. I’m going with Chatty.

You can choose to add analytics to your project, but it’s not required. Either way, click continue to proceed and Firebase will take a few seconds to delegate resources for the project.

Once that spins up, we are taken to the Firebase dashboard. But before we can start using Firebase in our web app, we have to get the configuration details down for our project. So, click on the web icon in the dashboard.

Then, enter a name for the app and click Register app.

Next up, we’ll copy and store the configuration details on the next screen in a safe place. That will come in handy in the next step.

Again, we’re going to authenticate users via email and password, with additional options for single sign-on with a Google or GitHub account. We need to enable these from the Authentication tab in the dashboard, but we’ll go through each of them one at a time.

Email and password authentication

There’s a Sign-in method tab in the Firebase dashboard. Click the Email/Password option and enable it.

Now we can use it in our app!

Setting up the web app

For our web app, we’ll be using React, but most of the concepts can be applied to any other framework. We’ll need Node.js for a React setup, so download and install it if you haven’t already.

We’ll use create-react-app to bootstrap a new React project. This downloads and installs the necessary packages required for a React application. In the terminal, cd into where you’d like our Chatty project to go and run this to initialize it:

npx create-react-app chatty

This command does the initial setup for our react app and installs the dependencies in package.json. We’ll also install some additional packages. So, let’s cd into the project itself and add packages for React Router and Firebase.

cd chatty yarn add react-router-dom firebase

We already know why we need Firebase, but why React Router? Our chat app will have a couple of views, and we can use React Router to handle navigating between them.

With that done, we can officially start the app:

yarn start

This starts a development server and opens a URL in your default browser. If everything got installed correctly, you should see a screen like this:

Looking at the folder structure, you would see something similar to this:

For our chat app, this is the folder structure we’ll be using:

  • /components: contains reusable widgets used in different pages
  • /helpers: a set of reusable functions
  • /pages: the app views
  • /services: third-party services that we’re using (e.g. Firebase)
  • App.js: the root component

Anything else in the folder is unnecessary for this project and can safely be removed. From here, let’s add some code to src/services/firebase.js so the app can talk with Firebase.

import firebase from 'firebase';

Let’s get Firebase into the app

We’ll import and initialize Firebase using the configuration details we copied earlier when registering the app in the Firebase dashboard. Then, we’ll export the authentication and database modules.

const config = {   apiKey: "ADD-YOUR-DETAILS-HERE",   authDomain: "ADD-YOUR-DETAILS-HERE",   databaseURL: "ADD-YOUR-DETAILS-HERE" }; firebase.initializeApp(config); export const auth = firebase.auth; export const db = firebase.database();

Let’s import our dependencies in src/App.js:

import React, { Component } from 'react'; import {   Route,   BrowserRouter as Router,   Switch,   Redirect, } from "react-router-dom"; import Home from './pages/Home'; import Chat from './pages/Chat'; import Signup from './pages/Signup'; import Login from './pages/Login'; import { auth } from './services/firebase';

These are ES6 imports. Specifically, we’re importing React and other packages needed to build out the app. We’re also importing all the pages of our app that we’ll configure later to our router.

Next up is routing

Our app has public routes (accessible without authentication) and a private route (accessible only with authentication). Because React doesn’t provide a way to check the authenticated state, we’ll create higher-order components (HOCs) for both types of routes.

Our HOCs will:

  • wrap a <Route>,
  • pass props from the router to the <Route>,
  • render the component depending on the authenticated state, and
  • redirect the user to a specified route if the condition is not met

Let’s write the code for our <PrivateRoute> HOC.

function PrivateRoute({ component: Component, authenticated, ...rest }) {   return (     <Route       {...rest}       render={(props) => authenticated === true         ? <Component {...props} />         : <Redirect to={{ pathname: '/login', state: { from: props.location } }} />}     />   ) }

It receives three props: the component to render if the condition is true, the authenticated state, and the remaining props passed from the router (collected with the ES6 rest syntax). It checks if authenticated is true and renders the component passed; otherwise, it redirects to /login.

function PublicRoute({ component: Component, authenticated, ...rest }) {   return (     <Route       {...rest}       render={(props) => authenticated === false         ? <Component {...props} />         : <Redirect to='/chat' />}     />   ) }

The <PublicRoute> is pretty much the same. It renders our public routes and redirects to the /chat path if the authenticated state becomes true. We can use the HOCs in our render method:

render() {   return this.state.loading === true ? <h2>Loading...</h2> : (     <Router>       <Switch>         <Route exact path="/" component={Home}></Route>         <PrivateRoute path="/chat" authenticated={this.state.authenticated} component={Chat}></PrivateRoute>         <PublicRoute path="/signup" authenticated={this.state.authenticated} component={Signup}></PublicRoute>         <PublicRoute path="/login" authenticated={this.state.authenticated} component={Login}></PublicRoute>       </Switch>     </Router>   ); }

Checking for authentication

It would be nice to show a loading indicator while we verify if the user is authenticated. Once the check is complete, we render the appropriate route that matches the URL. We have three public routes — <Home>, <Login> and <Signup> — and a private one called <Chat>.

Let’s write the logic to check if the user is indeed authenticated.

class App extends Component {   constructor() {     super();     this.state = {       authenticated: false,       loading: true,     };   } }  export default App;

Here we’re setting the initial state of the app. Then, we’re using the componentDidMount lifecycle hook to check if the user is authenticated. So, let’s add this after the constructor:

componentDidMount() {   this.removelistener = auth().onAuthStateChanged((user) => {     if (user) {       this.setState({         authenticated: true,         loading: false,       });     } else {       this.setState({         authenticated: false,         loading: false,       });     }   }) }

Firebase provides an intuitive method called onAuthStateChanged that is triggered when the authenticated state changes. We use this to update our initial state. user is null if the user is not authenticated. If user exists, we set authenticated to true; otherwise we set it to false. We also set loading to false either way.
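One small extra: onAuthStateChanged returns an unsubscribe function (stored above as this.removelistener), so it’s a good idea to detach the listener when the component unmounts. A minimal sketch:

componentWillUnmount() {
  // detach the auth state listener created in componentDidMount
  if (this.removelistener) {
    this.removelistener();
  }
}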

Registering users with email and password

Users will be able to register for Chatty through email and password. The helpers folder contains a set of methods that we’ll use to handle some authentication logic. Inside this folder, let’s create a new file called auth.js and add this:

import { auth } from "../services/firebase";

We import the auth module from the service we created earlier.

export function signup(email, password) {
  return auth().createUserWithEmailAndPassword(email, password);
}

export function signin(email, password) {
  return auth().signInWithEmailAndPassword(email, password);
}

We have two methods here: signup and signin:

  • signup will create a new user using their email and password. 
  • signin will log in an existing user created with email and password.

Let’s create our <Signup> page by adding a new Signup.js file in the pages folder. This is the markup for the UI:

import React, { Component } from 'react';
import { Link } from 'react-router-dom';
import { signup } from '../helpers/auth';

export default class SignUp extends Component {

  render() {
    return (
      <div>
        <form onSubmit={this.handleSubmit}>
          <h1>
            Sign Up to <Link to="/">Chatty</Link>
          </h1>
          <p>Fill in the form below to create an account.</p>
          <div>
            <input placeholder="Email" name="email" type="email" onChange={this.handleChange} value={this.state.email}></input>
          </div>
          <div>
            <input placeholder="Password" name="password" onChange={this.handleChange} value={this.state.password} type="password"></input>
          </div>
          <div>
            {this.state.error ? <p>{this.state.error}</p> : null}
            <button type="submit">Sign up</button>
          </div>
          <hr></hr>
          <p>Already have an account? <Link to="/login">Login</Link></p>
        </form>
      </div>
    )
  }
}
Email? Check. Password? Check. Submit button? Check. Our form is looking good.

The form and input fields are bound to a method we haven’t created yet, so let’s sort that out. Just before the render() method, we’ll add the following:

constructor(props) {   super(props);   this.state = {     error: null,     email: '',     password: '',   };   this.handleChange = this.handleChange.bind(this);   this.handleSubmit = this.handleSubmit.bind(this); }

We’re setting the initial state of the page. We’re also binding the handleChange and handleSubmit methods to the component’s this scope.

handleChange(event) {   this.setState({     [event.target.name]: event.target.value   }); }

Next up, we’ll add the handleChange method that our input fields are bound to. The method uses computed properties to dynamically determine the key and set the corresponding state variable.

async handleSubmit(event) {   event.preventDefault();   this.setState({ error: '' });   try {     await signup(this.state.email, this.state.password);   } catch (error) {     this.setState({ error: error.message });   } }

For handleSubmit, we’re preventing the default behavior for form submissions (which simply reloads the browser, among other things). We’re also clearing up the error state variable, then using the signup() method imported from helpers/auth to pass the email and password entered by the user.

If the registration is successful, users get redirected to the /chat route. This is possible with the combination of onAuthStateChanged and the HOCs we created earlier. If registration fails, we set the error variable, which displays a message to users.

Authenticating users with email and password

The login page is identical to the signup page. The only difference is we’ll be using the signin method from the helpers we created earlier. That said, let’s create yet another new file in the pages directory, this time called Login.js, with this code in it:

import React, { Component } from "react";
import { Link } from "react-router-dom";
import { signin, signInWithGoogle, signInWithGitHub } from "../helpers/auth";

export default class Login extends Component {
  constructor(props) {
    super(props);
    this.state = {
      error: null,
      email: "",
      password: ""
    };
    this.handleChange = this.handleChange.bind(this);
    this.handleSubmit = this.handleSubmit.bind(this);
  }

  handleChange(event) {
    this.setState({
      [event.target.name]: event.target.value
    });
  }

  async handleSubmit(event) {
    event.preventDefault();
    this.setState({ error: "" });
    try {
      await signin(this.state.email, this.state.password);
    } catch (error) {
      this.setState({ error: error.message });
    }
  }

  render() {
    return (
      <div>
        <form autoComplete="off" onSubmit={this.handleSubmit}>
          <h1>
            Login to <Link to="/">Chatty</Link>
          </h1>
          <p>Fill in the form below to login to your account.</p>
          <div>
            <input placeholder="Email" name="email" type="email" onChange={this.handleChange} value={this.state.email} />
          </div>
          <div>
            <input placeholder="Password" name="password" onChange={this.handleChange} value={this.state.password} type="password" />
          </div>
          <div>
            {this.state.error ? <p>{this.state.error}</p> : null}
            <button type="submit">Login</button>
          </div>
          <hr />
          <p>Don't have an account? <Link to="/signup">Sign up</Link></p>
        </form>
      </div>
    );
  }
}

Again, very similar to before. When the user successfully logs in, they’re redirected to /chat.

Authenticating with a Google account

Firebase allows us to authenticate users with a valid Google account. We’ve got to enable it in the Firebase dashboard just like we did for email and password.

Select the Google option and enable it in the settings.

On that same page, we also need to scroll down to add a domain to the list of domains that are authorized to access the feature. This way, we avoid spam from any domain that is not whitelisted. For development purposes, our domain is localhost, so we’ll go with that for now.

We can switch back to our editor now. We’ll add a new method to helpers/auth.js to handle Google authentication.

export function signInWithGoogle() {   const provider = new auth.GoogleAuthProvider();   return auth().signInWithPopup(provider); }

Here, we’re creating an instance of the GoogleAuthProvider. Then, we’re calling signInWithPopup with the provider as a parameter. When this method is called, a pop up will appear and take the user through the Google sign in flow before redirecting them back to the app. You’ve likely experienced it yourself at some point in time.

Let’s use it in our signup page by importing the method:

import { signup, signInWithGoogle } from "../helpers/auth";

Then, let’s add a button to trigger the method, just under the Sign up button:

<p>Or</p> <button onClick={this.googleSignIn} type="button">   Sign up with Google </button>

Next, we’ll add the onClick handler:

async googleSignIn() {   try {     await signInWithGoogle();   } catch (error) {     this.setState({ error: error.message });   } }

Oh, and we should remember to bind the handler to the component:

constructor() {
  // ...
  this.googleSignIn = this.googleSignIn.bind(this);
}

That’s all we need! When the button is clicked, it takes users through the Google sign in flow and, if successful, the app redirects the user to the chat route.

Authenticating with a GitHub account

We’re going to do the same thing with GitHub. May as well give folks more than one choice of account.

Let’s walk through the steps. First, we’ll enable GitHub sign in on Firebase dashboard, like we did for email and Google.

You will notice both the client ID and client secret fields are empty, but we do have our authorization callback URL at the bottom. Copy that, because we’ll use it when we do our next thing, which is register the app on GitHub.

Once that’s done, we’ll get a client ID and secret which we can now add to the Firebase console.

Let’s switch back to the editor and add a new method to helpers/auth.js:

export function signInWithGitHub() {   const provider = new auth.GithubAuthProvider();   return auth().signInWithPopup(provider); }

It’s similar to the Google sign in interface, but this time we’re creating a GithubAuthProvider. Then, we’ll call signInWithPopup with the provider.

In pages/Signup.js, we update our imports to include the signInWithGitHub method:

import { signup, signInWithGoogle, signInWithGitHub } from "../helpers/auth";

We add a button for GitHub sign up:

<button type="button" onClick={this.githubSignIn}>   Sign up with GitHub </button>

Then we add a click handler for the button which triggers the GitHub sign up flow:

async githubSignIn() {   try {     await signInWithGitHub();   } catch (error) {     this.setState({ error: error.message });   } }

Let’s remember again to bind the handler to the component:

constructor() {
  // ...
  this.githubSignIn = this.githubSignIn.bind(this);
}

Now we’ll get the same sign-in and authentication flow that we have with Google, but with GitHub.

Reading data from Firebase

Firebase has two types of databases: A product they call Realtime Database and another called Cloud Firestore. Both databases are NoSQL-like databases, meaning the database is structured as key-value pairs. For this tutorial, we’ll use the Realtime Database.

This is the structure we’ll be using for our app. We have a root node chats with children nodes. Each child has a content, timestamp, and user ID. One of the tabs you’ll notice is Rules which is how we set permissions on the contents of the database.
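As a rough sketch of that shape (the push keys and uid values below are just placeholders for the IDs Firebase generates), the data could look something like this:

{
  "chats": {
    "-M4aGxp...": {
      "content": "Hey there!",
      "timestamp": 1585907979200,
      "uid": "9VXPXdQeGvQ..."
    },
    "-M4aH2k...": {
      "content": "Hi! How are you?",
      "timestamp": 1585908013501,
      "uid": "Tr5cAq7Rw1b..."
    }
  }
}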

Firebase database rules are defined as key-value pairs as well. Here, we’ll set our rules to allow only authenticated users to read and write to the chats node. There are a lot more Firebase rules worth checking out.
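For reference, a rule set along those lines might look like this (a minimal sketch — tighten it further for a production app):

{
  "rules": {
    "chats": {
      ".read": "auth != null",
      ".write": "auth != null"
    }
  }
}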

Let’s write code to read from the database. First, create a new file called Chat.js  in the pages  folder and add this code to import React, Firebase authentication, and Realtime Database:

import React, { Component } from "react"; import { auth } from "../services/firebase"; import { db } from "../services/firebase"

Next, let’s define the initial state of the app:

export default class Chat extends Component {   constructor(props) {     super(props);     this.state = {       user: auth().currentUser,       chats: [],       content: '',       readError: null,       writeError: null     };   }   async componentDidMount() {     this.setState({ readError: null });     try {       db.ref("chats").on("value", snapshot => {         let chats = [];         snapshot.forEach((snap) => {           chats.push(snap.val());         });         this.setState({ chats });       });     } catch (error) {       this.setState({ readError: error.message });     }   } }

The main logic takes place in componentDidMount. db.ref("chats") is a reference to the chats path in the database. We listen to the value event, which is triggered anytime a new value is added to the chats node. What is returned from the database is an array-like object that we loop through, pushing each object into an array. Then, we set the chats state variable to the resulting array. If there is an error, we set the readError state variable to the error message.

One thing to note here is that a connection is created between the client and our Firebase database because we used the .on() method. This means that any time a new value is added to the database, the client app is updated in real time, and users can see new chats without a page refresh. Nice!

After componentDidMount, we can render our chats like so:

render() {
  return (
    <div>
      <div className="chats">
        {this.state.chats.map(chat => {
          return <p key={chat.timestamp}>{chat.content}</p>
        })}
      </div>
      <div>
        Logged in as: <strong>{this.state.user.email}</strong>
      </div>
    </div>
  );
}

This renders the array of chats, along with the email of the currently logged-in user.

Writing data to Firebase

At the moment, users can only read from the database but are unable to send messages. What we need is a form with an input field that accepts a message and a button to send the message to the chat.

So, let’s modify the markup like so:

render() {
  return (
    <div>
      <div className="chats">
        {this.state.chats.map(chat => {
          return <p key={chat.timestamp}>{chat.content}</p>
        })}
      </div>
      {/* message form */}
      <form onSubmit={this.handleSubmit}>
        <input onChange={this.handleChange} value={this.state.content}></input>
        {this.state.writeError ? <p>{this.state.writeError}</p> : null}
        <button type="submit">Send</button>
      </form>
      <div>
        Logged in as: <strong>{this.state.user.email}</strong>
      </div>
    </div>
  );
}

We have added a form with an input field and a button. The value of the input field is bound to our state variable content and we call handleChange when its value changes.

handleChange(event) {   this.setState({     content: event.target.value   }); }

handleChange gets the value from the input field and sets on our state variable. To submit the form, we call handleSubmit:

async handleSubmit(event) {   event.preventDefault();   this.setState({ writeError: null });   try {     await db.ref("chats").push({       content: this.state.content,       timestamp: Date.now(),       uid: this.state.user.uid     });     this.setState({ content: '' });   } catch (error) {     this.setState({ writeError: error.message });   } }

We set any previous errors to null. Then we create a reference to the chats node in the database and use push() to create a unique key and push the object to it.

As always, we have to bind our methods to the component:

constructor(props) {
  // ...
  this.handleChange = this.handleChange.bind(this);
  this.handleSubmit = this.handleSubmit.bind(this);
}

Now a user can add new messages to the chats and see them in real-time! How cool is that?

Demo time!

Enjoy your new chat app!

Congratulations! You have just built a chat tool that authenticates users with email and password, along with options to authenticate through a Google or GitHub account.

I hope this gives you a good idea of how handy Firebase can be for getting up and running with authentication in an app. We worked on a chat app, but the real gem is the signup and sign-in methods we created to get into it. That’s something useful for many apps.

Questions? Thoughts? Feedback? Let me know in the comments!

The post Building a Real-Time Chat App with React and Firebase appeared first on CSS-Tricks.


Get Started Building GraphQL APIs With Node

We all have a number of interests and passions. For example, I’m interested in JavaScript, 90’s indie rock and hip hop, obscure jazz, the city of Pittsburgh, pizza, coffee, and movies starring John Lurie. We also have family members, friends, acquaintances, classmates, and colleagues who also have their own social relationships, interests, and passions. Some of these relationships and interests overlap, like my friend Riley who shares my interest in 90’s hip hop and pizza. Others do not, like my colleague Harrison, who prefers Python to JavaScript, only drinks tea, and prefers current pop music. All together, we each have a connected graph of the people in our lives, and the ways that our relationships and interests overlap.

These types of interconnected data are exactly the challenge that GraphQL initially set out to solve in API development. By writing a GraphQL API we are able to efficiently connect data, which reduces the complexity and number of requests, while allowing us to serve the client precisely the data that it needs. (If you’re into more GraphQL metaphors, check out Meeting GraphQL at a Cocktail Mixer.)

In this article, we’ll build a GraphQL API in Node.js, using the Apollo Server package. To do so, we’ll explore fundamental GraphQL topics, write a GraphQL schema, develop code to resolve our schema functions, and access our API using the GraphQL Playground user interface.

What is GraphQL?

GraphQL is an open source query and data manipulation language for APIs. It was developed with the goal of providing single endpoints for data, allowing applications to request exactly the data that is needed. This has the benefit of not only simplifying our UI code, but also improving performance by limiting the amount of data that needs to be sent over the wire.

What we’re building

To follow along with this tutorial, you’ll need Node v8.x or later and some familiarity with working with the command line. 

We’re going to build an API application for book highlights, allowing us to store memorable passages from the things that we read. Users of the API will be able to perform “CRUD” (create, read, update, delete) operations against their highlights:

  • Create a new highlight
  • Read an individual highlight as well as a list of highlights
  • Update a highlight’s content
  • Delete a highlight

Getting started

To get started, first create a new directory for our project, initialize a new node project, and install the dependencies that we’ll need:

# make the new directory
mkdir highlights-api
# change into the directory
cd highlights-api
# initiate a new node project
npm init -y
# install the project dependencies
npm install apollo-server graphql
# install the development dependencies
npm install nodemon --save-dev

Before moving on, let’s break down our dependencies:

  • apollo-server is a library that enables us to work with GraphQL within our Node application. We’ll be using it as a standalone library, but the team at Apollo has also created middleware for working with existing Node web applications in Express, hapi, Fastify, and Koa.
  • graphql includes the GraphQL language and is a required peer dependency of apollo-server.
  • nodemon is a helpful library that will watch our project for changes and automatically restart our server.

With our packages installed, let’s next create our application’s root file, named index.js. For now, we’ll console.log() a message in this file:

console.log("📚 Hello Highlights");

To make our development process simpler, we’ll update the scripts object within our package.json file to make use of the nodemon package:

"scripts": {   "start": "nodemon index.js" },

Now, we can start our application by typing npm start in the terminal application. If everything is working properly, you will see 📚 Hello Highlights logged to your terminal.

GraphQL schema types

A schema is a written representation of our data and interactions. By requiring a schema, GraphQL enforces a strict plan for our API. This is because the API can only return data and perform interactions that are defined within the schema. The fundamental components of GraphQL schemas are object types. GraphQL contains five built-in scalar types:

  • String: A string with UTF-8 character encoding
  • Boolean: A true or false value
  • Int: A 32-bit integer
  • Float: A floating-point value
  • ID: A unique identifier

We can construct a schema for an API with these basic components. In a file named schema.js, we can import the gql library and prepare the file for our schema syntax:

const { gql } = require('apollo-server');  const typeDefs = gql`   # The schema will go here `;  module.exports = typeDefs;

To write our schema, we first define the type. Let’s consider how we might define a schema for our highlights application. To begin, we would create a new type with a name of Highlight:

const typeDefs = gql`   type Highlight {   } `;

Each highlight will have a unique ID,  some content, a title, and an author. The Highlight schema will look something like this:

const typeDefs = gql`   type Highlight {     id: ID     content: String     title: String     author: String   } `;

We can make some of these fields required by adding an exclamation point:

const typeDefs = gql`   type Highlight {     id: ID!     content: String!     title: String     author: String   } `;

Though we’ve defined an object type for our highlights, we also need to provide a description of how a client will fetch that data. This is called a query. We’ll dive more into queries shortly, but for now let’s describe in our schema the ways in which someone will retrieve highlights. When requesting all of our highlights, the data will be returned as an array (represented as [Highlight]) and when we want to retrieve a single highlight we will need to pass an ID as a parameter.

const typeDefs = gql`   type Highlight {     id: ID!     content: String!     title: String     author: String   }   type Query {     highlights: [Highlight]!     highlight(id: ID!): Highlight   } `;

Now, in the index.js file, we can import our type definitions and set up Apollo Server:

const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');

const server = new ApolloServer({ typeDefs });

server.listen().then(({ url }) => {
  console.log(`📚 Highlights server ready at ${url}`);
});

If we’ve kept the node process running, the application will have automatically updated and relaunched, but if not, typing npm start  from the project’s directory in the terminal window will start the server. If we look at the terminal, we should see that nodemon is watching our files and the server is running on a local port:

[nodemon] 2.0.2 [nodemon] to restart at any time, enter `rs` [nodemon] watching dir(s): *.* [nodemon] watching extensions: js,mjs,json [nodemon] starting `node index.js` 📚 Highlights server ready at http://localhost:4000/

Visiting the URL in the browser will launch the GraphQL Playground application, which provides a user interface for interacting with our API.

GraphQL Resolvers

Though we’ve developed our project with an initial schema and Apollo Server setup, we can’t yet interact with our API. To do so, we’ll introduce resolvers. Resolvers perform exactly the action their name implies; they resolve the data that the API user has requested. We will write these resolvers by first defining them in our schema and then implementing the logic within our JavaScript code. Our API will contain two types of resolvers: queries and mutations.

Let’s first add some data to interact with. In an application, this would typically be data that we’re retrieving and writing to from a database, but for our example let’s use an array of objects. In the index.js file add the following:

let highlights = [   {     id: '1',     content: 'One day I will find the right words, and they will be simple.',     title: 'Dharma Bums',     author: 'Jack Kerouac'   },   {     id: '2',     content: 'In the limits of a situation there is humor, there is grace, and everything else.',     title: 'Arbitrary Stupid Goal',     author: 'Tamara Shopsin'   } ]

Queries

A query requests specific data from an API in its desired format. The query then returns an object containing the data that the API user has requested. A query never modifies the data; it only accesses it. We’ve already written two queries in our schema. The first returns an array of highlights and the second returns a specific highlight. The next step is to write the resolvers that will return the data.

In the index.js file, we can add a resolvers object, which can contain our queries:

const resolvers = {   Query: {     highlights: () => highlights,     highlight: (parent, args) => {       return highlights.find(highlight => highlight.id === args.id);     }   } };

The highlights query returns the full array of highlights data. The highlight query accepts two parameters: parent and args. The parent is the first parameter of any GraphQL query in Apollo Server and provides a way of accessing the context of the query. The args parameter allows us to access the user-provided arguments. In this case, users of the API will be supplying an id argument to access a specific highlight.

We can then update our Apollo Server configuration to include the resolvers:

const server = new ApolloServer({ typeDefs, resolvers });

With our query resolvers written and Apollo Server updated, we can now query the API using GraphQL Playground. To access GraphQL Playground, visit http://localhost:4000 in your web browser.

A query is formatted as so:

query {   queryName {       field       field     } }

With this in mind, we can write a query that requests the ID, content, title, and author for each of our highlights:

query {   highlights {     id     content     title     author   } }

Let’s say that we had a page in our UI that lists only the titles and authors of our highlighted texts. We wouldn’t need to retrieve the content for each of those highlights. Instead, we could write a query that only requests the data that we need:

query {   highlights {     title     author   } }

We’ve also written a resolver to query for an individual highlight by including an ID parameter with our query. We can do so as follows:

query {   highlight(id: "1") {     content   } }

Mutations

We use a mutation when we want to modify the data in our API. In our highlight example, we will want to write a mutation to create a new highlight, one to update an existing highlight, and a third to delete a highlight. Similar to a query, a mutation is also expected to return a result in the form of an object, typically the end result of the performed action.

The first step to updating anything in GraphQL is to write the schema. We can include mutations in our schema by adding a Mutation type to our schema.js file:

type Mutation {   newHighlight (content: String! title: String author: String): Highlight!   updateHighlight(id: ID! content: String!): Highlight!   deleteHighlight(id: ID!): Highlight! }

Our newHighlight mutation will take the required value of content along with optional title and author values and return a Highlight. The updateHighlight mutation will require that a highlight id and content be passed as argument values and will return the updated Highlight. Finally, the deleteHighlight mutation will accept an ID argument, and will return the deleted Highlight.

With the schema updated to include mutations, we can now update the resolvers in our index.js file to perform these actions. Each mutation will update our highlights array of data.

const resolvers = {   Query: {     highlights: () => highlights,     highlight: (parent, args) => {       return highlights.find(highlight => highlight.id === args.id);     }   },   Mutation: {     newHighlight: (parent, args) => {       const highlight = {         id: String(highlights.length + 1),         title: args.title || '',         author: args.author || '',         content: args.content       };       highlights.push(highlight);       return highlight;     },     updateHighlight: (parent, args) => {       const index = highlights.findIndex(highlight => highlight.id === args.id);       const highlight = {         id: args.id,         content: args.content,         author: highlights[index].author,         title: highlights[index].title       };       highlights[index] = highlight;       return highlight;     },     deleteHighlight: (parent, args) => {       const deletedHighlight = highlights.find(         highlight => highlight.id === args.id       );       highlights = highlights.filter(highlight => highlight.id !== args.id);       return deletedHighlight;     }   } };

With these mutations written, we can use the GraphQL Playground to practice mutating the data. The structure of a mutation is nearly identical to that of a query, specifying the name of the mutation, passing the argument values, and requesting specific data in return. Let’s start by adding a new highlight:

mutation {   newHighlight(author: "Adam Scott" title: "JS Everywhere" content: "GraphQL is awesome") {     id     author     title     content   } }

We can then write mutations to update a highlight:

mutation {   updateHighlight(id: "3" content: "GraphQL is rad") {     id     content   } }

And to delete a highlight:

mutation {   deleteHighlight(id: "3") {     id   } }

Wrapping up

Congratulations! You’ve now successfully built a GraphQL API, using Apollo Server, and can run GraphQL queries and mutations against an in-memory data object. We’ve established a solid foundation for exploring the world of GraphQL API development.

From here, there are plenty of potential next steps you can take to level up.

The post Get Started Building GraphQL APIs With Node appeared first on CSS-Tricks.


Building an Images Gallery using PixiJS and WebGL

Sometimes, we have to go a little further than HTML, CSS and JavaScript to create the UI we need, and instead use other resources, like SVG, WebGL, canvas and others.

For example, the most amazing effects can be created with WebGL, because it’s a JavaScript API designed to render interactive 2D and 3D graphics within any compatible web browser, allowing GPU-accelerated image processing.

That said, working with WebGL can be very complex, so there’s a variety of libraries that make it relatively easier, such as PixiJS, Three.js, and Babylon.js, among others. We’re going to work with a specific one of those, PixiJS, to create a gallery of random images inspired by this fragment of a Dribbble shot by Zhenya Rynzhuk.

This looks hard, but you actually don’t need to have advanced knowledge of WebGL or even PixiJS to follow along, though some basic knowledge of JavaScript (ES6) will come in handy. You might even want to start by getting familiar with the basic concept of fragment shaders used in WebGL, with The Book of Shaders as a good starting point.

With that, let’s dig into using PixiJS to create this WebGL effect!

Initial setup

Here’s what we’ll need to get started:

  1. Add the PixiJS library as a script in the HTML.
  2. Have a <canvas> element (or add it dynamically from JavaScript), to render the application.
  3. Initialize the application with new PIXI.Application(options).

See, nothing too crazy yet. Here’s the JavaScript we can use as a boilerplate:

// Get canvas view const view = document.querySelector('.view') let width, height, app  // Set dimensions function initDimensions () {   width = window.innerWidth   height = window.innerHeight }  // Init the PixiJS Application function initApp () {   // Create a PixiJS Application, using the view (canvas) provided   app = new PIXI.Application({ view })   // Resizes renderer view in CSS pixels to allow for resolutions other than 1   app.renderer.autoDensity = true   // Resize the view to match viewport dimensions   app.renderer.resize(width, height) }  // Init everything function init () {   initDimensions()   initApp() }  // Initial call init()

When executing this code, the only thing we will see is a black screen, along with a message like this if we open up the console:
PixiJS 5.0.2 - WebGL 2 - http://www.pixijs.com/.

We are ready to start drawing on the canvas using PixiJS and WebGL!

Creating the grid background with a WebGL Shader

Next we will create a background that contains a grid, which will allow us to clearly visualize the distortion effect we’re after. But first, we must know what a shader is and how it works. I recommended The Book of Shaders earlier as a starting point to learn about them, and this is where those concepts will come into play. If you have not done so yet, I strongly recommend that you review that material, and only then continue here.

We are going to create a fragment shader that prints a grid background on the screen:

// It is required to set the float precision for fragment shaders in OpenGL ES // More info here: https://stackoverflow.com/a/28540641/4908989 #ifdef GL_ES precision mediump float; #endif  // This function returns 1 if `coord` correspond to a grid line, 0 otherwise float isGridLine (vec2 coord) {   vec2 pixelsPerGrid = vec2(50.0, 50.0);   vec2 gridCoords = fract(coord / pixelsPerGrid);   vec2 gridPixelCoords = gridCoords * pixelsPerGrid;   vec2 gridLine = step(gridPixelCoords, vec2(1.0));   float isGridLine = max(gridLine.x, gridLine.y);   return isGridLine; }  // Main function void main () {   // Coordinates for the current pixel   vec2 coord = gl_FragCoord.xy;   // Set `color` to black   vec3 color = vec3(0.0);   // If it is a grid line, change blue channel to 0.3   color.b = isGridLine(coord) * 0.3;   // Assing the final rgba color to `gl_FragColor`   gl_FragColor = vec4(color, 1.0); }

This code is drawn from a demo on Shadertoy, which is a great source of inspiration and resources for shaders.

In order to use this shader, we must first load the code from the file it lives in and, only after it has loaded correctly, initialize the app.

// Loaded resources will be here const resources = PIXI.Loader.shared.resources  // Load resources, then init the app PIXI.Loader.shared.add([   'shaders/backgroundFragment.glsl' ]).load(init)

Now, for our shader to work where we can see the result, we will add a new element (an empty Sprite) to the stage, which we will use to define a filter. This is the way PixiJS lets us execute custom shaders like the one we just created.

// Init the gridded background function initBackground () {   // Create a new empty Sprite and define its size   background = new PIXI.Sprite()   background.width = width   background.height = height   // Get the code for the fragment shader from the loaded resources   const backgroundFragmentShader = resources['shaders/backgroundFragment.glsl'].data   // Create a new Filter using the fragment shader   // We don't need a custom vertex shader, so we set it as `undefined`   const backgroundFilter = new PIXI.Filter(undefined, backgroundFragmentShader)   // Assign the filter to the background Sprite   background.filters = [backgroundFilter]   // Add the background to the stage   app.stage.addChild(background) }

And now we see the gridded background with blue lines. Look closely because the lines are a little faint against the dark background color.

The distortion effect 

Our background is now ready, so let’s see how we can add the desired effect (Cubic Lens Distortion) to the whole stage, including the background and any other element that we add later, like images. For this, we need to create a new filter and add it to the stage. Yes, we can also define filters that affect the entire stage of PixiJS!

This time, we have based the code of our shader on this awesome Shadertoy demo that implements the distortion effect using different configurable parameters.

#ifdef GL_ES precision mediump float; #endif  // Uniforms from Javascript uniform vec2 uResolution; uniform float uPointerDown;  // The texture is defined by PixiJS varying vec2 vTextureCoord; uniform sampler2D uSampler;  // Function used to get the distortion effect vec2 computeUV (vec2 uv, float k, float kcube) {   vec2 t = uv - 0.5;   float r2 = t.x * t.x + t.y * t.y;   float f = 0.0;   if (kcube == 0.0) {     f = 1.0 + r2 * k;   } else {     f = 1.0 + r2 * (k + kcube * sqrt(r2));   }   vec2 nUv = f * t + 0.5;   nUv.y = 1.0 - nUv.y;   return nUv; }  void main () {   // Normalized coordinates   vec2 uv = gl_FragCoord.xy / uResolution.xy;    // Settings for the effect   // Multiplied by `uPointerDown`, a value between 0 and 1   float k = -1.0 * uPointerDown;   float kcube = 0.5 * uPointerDown;   float offset = 0.02 * uPointerDown;      // Get each channel's color using the texture provided by PixiJS   // and the `computeUV` function   float red = texture2D(uSampler, computeUV(uv, k + offset, kcube)).r;   float green = texture2D(uSampler, computeUV(uv, k, kcube)).g;   float blue = texture2D(uSampler, computeUV(uv, k - offset, kcube)).b;      // Assing the final rgba color to `gl_FragColor`   gl_FragColor = vec4(red, green, blue, 1.0); }

We are using two uniforms this time. Uniforms are variables that we pass to the shader via JavaScript:

  • uResolution: This is a JavaScript object that includes {x: width, y: height}. This uniform allows us to normalize the coordinates of each pixel in the range [0, 1].
  • uPointerDown: This is a float in the range [0, 1], which allows us to animate the distortion effect, increasing its intensity proportionally.

Let’s see the code that we have to add to our JavaScript to see the distortion effect caused by our new shader:

// Target for pointer. If down, value is 1, else value is 0 // Here we set it to 1 to see the effect, but initially it will be 0 let pointerDownTarget = 1 let uniforms  // Set initial values for uniforms function initUniforms () {   uniforms = {     uResolution: new PIXI.Point(width, height),     uPointerDown: pointerDownTarget   } }  // Set the distortion filter for the entire stage const stageFragmentShader = resources['shaders/stageFragment.glsl'].data const stageFilter = new PIXI.Filter(undefined, stageFragmentShader, uniforms) app.stage.filters = [stageFilter]

We can already enjoy our distortion effect!

This effect is static at the moment, so it’s not terribly fun just yet. Next, we’ll see how we can make the effect dynamically respond to pointer events.

Listening to pointer events

PixiJS makes it surprisingly simple to listen to events, even multiple events that respond equally to mouse and touch interactions. In this case, we want our animation to work just as well on desktop as on a mobile device, so we must listen to the events corresponding to both platforms.

PixiJS provides an interactive attribute that lets us do just that. We apply it to an element and start listening to events with an API similar to jQuery:

// Start listening events function initEvents () {   // Make stage interactive, so it can listen to events   app.stage.interactive = true    // Pointer & touch events are normalized into   // the `pointer*` events for handling different events   app.stage     .on('pointerdown', onPointerDown)     .on('pointerup', onPointerUp)     .on('pointerupoutside', onPointerUp)     .on('pointermove', onPointerMove) }

From here, we will start using a third uniform (uPointerDiff), which will allow us to explore the image gallery using drag and drop. Its value will be equal to the translation of the scene as we explore the gallery. Below is the code corresponding to each of the event handling functions:

// On pointer down, save coordinates and set pointerDownTarget function onPointerDown (e) {   console.log('down')   const { x, y } = e.data.global   pointerDownTarget = 1   pointerStart.set(x, y)   pointerDiffStart = uniforms.uPointerDiff.clone() }  // On pointer up, set pointerDownTarget function onPointerUp () {   console.log('up')   pointerDownTarget = 0 }  // On pointer move, calculate coordinates diff function onPointerMove (e) {   const { x, y } = e.data.global   if (pointerDownTarget) {     console.log('dragging')     diffX = pointerDiffStart.x + (x - pointerStart.x)     diffY = pointerDiffStart.y + (y - pointerStart.y)   } }
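
Note that these handlers rely on a few pieces of state that aren’t shown above: the uPointerDiff uniform itself, plus pointerStart, pointerDiffStart, diffX, and diffY. A minimal sketch of that wiring might look like the following; this is inferred to make the snippet self-contained, not code copied from the original demo:

// Extra state used by the pointer handlers (inferred wiring, not verbatim from the demo)
let pointerStart = new PIXI.Point()
let pointerDiffStart = new PIXI.Point()
let diffX = 0
let diffY = 0

// initUniforms extended to include the third uniform
function initUniforms () {
  uniforms = {
    uResolution: new PIXI.Point(width, height),
    uPointerDown: pointerDownTarget,
    uPointerDiff: new PIXI.Point() // translation of the scene while dragging
  }
}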

We still won’t see any animation if we look at our work, but we can confirm that the messages we defined in each event handler function are being printed correctly in the console.

Let’s now turn to implementing our animations!

Animating the distortion effect and the drag and drop functionality

The first thing we need to start an animation with PixiJS (or any canvas-based animation) is an animation loop. It usually consists of a function that is called continuously, using requestAnimationFrame, which in each call renders the graphics on the canvas element, thus producing the desired animation.
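
In plain JavaScript, such a loop is typically just a function that schedules itself with requestAnimationFrame. Here’s a generic sketch for illustration only; we won’t use this directly, since PixiJS’s ticker handles it for us:

// Bare-bones animation loop (illustration only; we'll use app.ticker below)
function loop () {
  // ...update values and render the current frame here...
  requestAnimationFrame(loop)
}
requestAnimationFrame(loop)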

We can implement our own animation loop in PixiJS, or we can use the utilities included in the library. In this case, we will use the add method of app.ticker, which allows us to pass a function that will be executed in each frame. At the end of the init function we will add this:

// Animation loop // Code here will be executed on every animation frame app.ticker.add(() => {   // Multiply the values by a coefficient to get a smooth animation   uniforms.uPointerDown += (pointerDownTarget - uniforms.uPointerDown) * 0.075   uniforms.uPointerDiff.x += (diffX - uniforms.uPointerDiff.x) * 0.2   uniforms.uPointerDiff.y += (diffY - uniforms.uPointerDiff.y) * 0.2 })

Meanwhile, in the Filter constructor for the background, we will pass the same uniforms object used in the stage filter. This allows us to simulate the translation effect of the background with this tiny modification in the corresponding shader:

uniform vec2 uPointerDiff;  void main () {   // Coordinates minus the `uPointerDiff` value   vec2 coord = gl_FragCoord.xy - uPointerDiff;    // ... more code here ... }

And now we can see the distortion effect in action, including the drag and drop functionality for the grid background. Play with it!

Randomly generate a masonry grid layout

To make our UI more interesting, we can randomly generate the sizing and dimensions of the grid cells. That is, each image can have different dimensions, creating a kind of masonry layout.

Let’s use Unsplash Source, which will allow us to get random images from Unsplash at whatever dimensions we define. This makes it easier to create a random masonry layout, since the images can have any dimensions we want, which means we can generate the layout beforehand.
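
For example, requesting a random image at a given size is just a matter of building a URL with the dimensions we want (the values here are illustrative):

// Random 600x400 image from Unsplash Source (dimensions are illustrative)
const url = 'https://source.unsplash.com/random/600x400'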

To achieve this, we will use an algorithm that executes the following steps:

  1. We will start with a list of rectangles.
  2. We will select the first rectangle in the list and divide it into two rectangles with random dimensions, as long as both resulting rectangles have dimensions equal to or greater than the minimum established limit. We’ll add a check to make sure it’s possible and, if it is, add both resulting rectangles to the list.
  3. If the list is empty, we will finish executing. If not, we’ll go back to step two.

I think you’ll get a much better understanding of how the algorithm works in this next demo. Use the buttons to see how it runs: Next will execute step two, All will execute the entire algorithm, and Reset will reset to step one.
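
Here is a minimal sketch of how that splitting algorithm could be implemented in JavaScript. The Grid class used later in the article presumably works along these lines; treat this as an illustration rather than the actual implementation:

// Sketch of the random splitting algorithm (illustrative, not the actual Grid class)
// columns, rows: total grid dimensions in cells; min: minimum cell size in cells
function generateRects (columns, rows, min) {
  const rects = []                                     // finished cells
  const queue = [{ x: 0, y: 0, w: columns, h: rows }]  // rects still to process
  while (queue.length) {
    const rect = queue.shift()
    const canSplitW = rect.w >= min * 2
    const canSplitH = rect.h >= min * 2
    if (canSplitW || canSplitH) {
      // Choose a direction that keeps both halves at or above `min`
      const splitW = canSplitW && (!canSplitH || Math.random() < 0.5)
      const size = splitW ? rect.w : rect.h
      const cut = min + Math.floor(Math.random() * (size - min * 2 + 1))
      if (splitW) {
        queue.push({ x: rect.x, y: rect.y, w: cut, h: rect.h })
        queue.push({ x: rect.x + cut, y: rect.y, w: rect.w - cut, h: rect.h })
      } else {
        queue.push({ x: rect.x, y: rect.y, w: rect.w, h: cut })
        queue.push({ x: rect.x, y: rect.y + cut, w: rect.w, h: rect.h - cut })
      }
    } else {
      rects.push(rect) // too small to split further; keep as a final cell
    }
  }
  return rects
}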

Drawing solid rectangles

Now that we can properly generate our random grid layout, we will use the list of rectangles generated by the algorithm to draw solid rectangles in our PixiJS application. That way, we can see if it works and make adjustments before adding the images using the Unsplash Source API.

To draw those rectangles, we will generate a random grid layout that is five times bigger than the viewport and position it in the center of the stage. That allows us to move with some freedom in any direction within the gallery.

// Variables and settings for grid const gridSize = 50 const gridMin = 3 let gridColumnsCount, gridRowsCount, gridColumns, gridRows, grid let widthRest, heightRest, centerX, centerY, rects  // Initialize the random grid layout function initGrid () {   // Getting columns   gridColumnsCount = Math.ceil(width / gridSize)   // Getting rows   gridRowsCount = Math.ceil(height / gridSize)   // Make the grid 5 times bigger than viewport   gridColumns = gridColumnsCount * 5   gridRows = gridRowsCount * 5   // Create a new Grid instance with our settings   grid = new Grid(gridSize, gridColumns, gridRows, gridMin)   // Calculate the center position for the grid in the viewport   widthRest = Math.ceil(gridColumnsCount * gridSize - width)   heightRest = Math.ceil(gridRowsCount * gridSize - height)   centerX = (gridColumns * gridSize / 2) - (gridColumnsCount * gridSize / 2)   centerY = (gridRows * gridSize / 2) - (gridRowsCount * gridSize / 2)   // Generate the list of rects   rects = grid.generateRects() }

So far, we have generated the list of rectangles. To add them to the stage, it is convenient to create a container, since then we can add the images to the same container and facilitate the movement when we drag the gallery.

Creating a container in PixiJS looks like this:

let container  // Initialize a Container element for solid rectangles and images function initContainer () {   container = new PIXI.Container()   app.stage.addChild(container) }

Now we can add the rectangles to the container so they can be displayed on the screen.

// Padding for rects and images const imagePadding = 20  // Add solid rectangles and images // So far, we will only add rectangles function initRectsAndImages () {   // Create a new Graphics element to draw solid rectangles   const graphics = new PIXI.Graphics()   // Select the color for rectangles   graphics.beginFill(0xAA22CC)   // Loop over each rect in the list   rects.forEach(rect => {     // Draw the rectangle     graphics.drawRect(       rect.x * gridSize,       rect.y * gridSize,       rect.w * gridSize - imagePadding,       rect.h * gridSize - imagePadding     )   })   // Ends the fill action   graphics.endFill()   // Add the graphics (with all drawn rects) to the container   container.addChild(graphics) }

Note that we have added a padding (imagePadding) to the calculations for each rectangle. This way, the images will have some space between them.

Finally, in the animation loop, we need to add the following code to properly define the position for the container:

// Set position for the container container.x = uniforms.uPointerDiff.x - centerX container.y = uniforms.uPointerDiff.y - centerY

And now we get the following result:

But there are still some details to fix, like defining limits for the drag and drop feature. Let’s add this to the onPointerMove event handler, where we effectively check the limits according to the size of the grid we have calculated:

diffX = diffX > 0 ? Math.min(diffX, centerX + imagePadding) : Math.max(diffX, -(centerX + widthRest)) diffY = diffY > 0 ? Math.min(diffY, centerY + imagePadding) : Math.max(diffY, -(centerY + heightRest))

Another small detail that makes things more refined is to add an offset to the grid background. That keeps the blue grid lines intact. We just have to add the desired offset (imagePadding / 2 in our case) to the background shader this way:

// Coordinates minus the `uPointerDiff` value, and plus an offset vec2 coord = gl_FragCoord.xy - uPointerDiff + vec2(10.0);

And we will get the final design for our random grid layout:

Adding images from Unsplash Source

We have our layout ready, so we are all set to add images to it. To add an image in PixiJS, we need a Sprite, which uses the image as its Texture. There are multiple ways of doing this. In our case, we will first create an empty Sprite for each image and, only when the Sprite is inside the viewport, load the image, create the Texture and add it to the Sprite. Sound like a lot? We’ll go through it step-by-step.

To create the empty sprites, we will modify the initRectsAndImages function. Please pay attention to the comments for a better understanding:

// For the list of images let images = []  // Add solid rectangles and images function initRectsAndImages () {   // Create a new Graphics element to draw solid rectangles   const graphics = new PIXI.Graphics()   // Select the color for rectangles   graphics.beginFill(0x000000)   // Loop over each rect in the list   rects.forEach(rect => {     // Create a new Sprite element for each image     const image = new PIXI.Sprite()     // Set image's position and size     image.x = rect.x * gridSize     image.y = rect.y * gridSize     image.width = rect.w * gridSize - imagePadding     image.height = rect.h * gridSize - imagePadding     // Set it's alpha to 0, so it is not visible initially     image.alpha = 0     // Add image to the list     images.push(image)     // Draw the rectangle     graphics.drawRect(image.x, image.y, image.width, image.height)   })   // Ends the fill action   graphics.endFill()   // Add the graphics (with all drawn rects) to the container   container.addChild(graphics)   // Add all image's Sprites to the container   images.forEach(image => {     container.addChild(image)   }) }

So far, we only have empty sprites. Next, we will create a function that’s responsible for downloading an image and assigning it as Texture to the corresponding Sprite. This function will only be called if the Sprite is inside the viewport so that the image only downloads when necessary.

On the other hand, if the gallery is dragged and a Sprite is no longer inside the viewport during the course of the download, that request may be aborted, since we are going to use an AbortController (more on this on MDN). In this way, we will cancel the unnecessary requests as we drag the gallery, giving priority to the requests corresponding to the sprites that are inside the viewport at every moment.

Let’s look at the code to make these ideas more concrete:

// To store image's URL and avoid duplicates let imagesUrls = {}  // Load texture for an image, giving its index function loadTextureForImage (index) {   // Get image Sprite   const image = images[index]   // Set the url to get a random image from Unsplash Source, given image dimensions   const url = `https://source.unsplash.com/random/${image.width}x${image.height}`   // Get the corresponding rect, to store more data needed (it is a normal Object)   const rect = rects[index]   // Create a new AbortController, to abort fetch if needed   const { signal } = rect.controller = new AbortController()   // Fetch the image   fetch(url, { signal }).then(response => {     // Get image URL, and if it was downloaded before, load another image     // Otherwise, save image URL and set the texture     const id = response.url.split('?')[0]     if (imagesUrls[id]) {       loadTextureForImage(index)     } else {       imagesUrls[id] = true       image.texture = PIXI.Texture.from(response.url)       rect.loaded = true     }   }).catch(() => {     // Catch errors silently, for not showing the following error message if it is aborted:     // AbortError: The operation was aborted.   }) }

Now we need to call the loadTextureForImage function for each image whose corresponding Sprite is intersecting with the viewport. In addition, we will cancel the fetch requests that are no longer needed, and we will add an alpha transition when the rectangles enter or leave the viewport.

// Check if rects intersects with the viewport // and loads corresponding image function checkRectsAndImages () {   // Loop over rects   rects.forEach((rect, index) => {     // Get corresponding image     const image = images[index]     // Check if the rect intersects with the viewport     if (rectIntersectsWithViewport(rect)) {       // If rect just has been discovered       // start loading image       if (!rect.discovered) {         rect.discovered = true         loadTextureForImage(index)       }       // If image is loaded, increase alpha if possible       if (rect.loaded && image.alpha < 1) {         image.alpha += 0.01       }     } else { // The rect is not intersecting       // If the rect was discovered before, but the       // image is not loaded yet, abort the fetch       if (rect.discovered && !rect.loaded) {         rect.discovered = false         rect.controller.abort()       }       // Decrease alpha if possible       if (image.alpha > 0) {         image.alpha -= 0.01       }     }   }) }

And the function that verifies if a rectangle is intersecting with the viewport is the following:

// Check if a rect intersects the viewport function rectIntersectsWithViewport (rect) {   return (     rect.x * gridSize + container.x <= width &&     0 <= (rect.x + rect.w) * gridSize + container.x &&     rect.y * gridSize + container.y <= height &&     0 <= (rect.y + rect.h) * gridSize + container.y   ) }

Last, we have to add the checkRectsAndImages function to the animation loop:

// Animation loop app.ticker.add(() => {   // ... more code here ...    // Check rects and load/cancel images as needed   checkRectsAndImages() })

Our animation is nearly ready!

Handling changes in viewport size

When initializing the application, we resized the renderer so that it occupies the whole viewport, but if the viewport changes its size for any reason (for example, the user rotates their mobile device), we should re-adjust the dimensions and restart the application.

// On resize, reinit the app (clean and init) // But first debounce the calls, so we don't call init too often let resizeTimer function onResize () {   if (resizeTimer) clearTimeout(resizeTimer)   resizeTimer = setTimeout(() => {     clean()     init()   }, 200) } // Listen to resize event window.addEventListener('resize', onResize)

The clean function will clean any residuals of the animation that we were executing before the viewport changed its dimensions:

// Clean the current Application function clean () {   // Stop the current animation   app.ticker.stop()   // Remove event listeners   app.stage     .off('pointerdown', onPointerDown)     .off('pointerup', onPointerUp)     .off('pointerupoutside', onPointerUp)     .off('pointermove', onPointerMove)   // Abort all fetch calls in progress   rects.forEach(rect => {     if (rect.discovered && !rect.loaded) {       rect.controller.abort()     }   }) }

In this way, our application will respond properly to the dimensions of the viewport, no matter how it changes. This gives us the full and final result of our work!

Some final thoughts

Thanks for taking this journey with me! We covered a lot of concepts along the way and came out with a pretty neat piece of UI. You can check the code on GitHub, or play with demos on CodePen.

If you have worked with WebGL before (with or without using other libraries), I hope you saw how nice it is to work with PixiJS. It abstracts away the complexity associated with the WebGL world in a great way, allowing us to focus on what we want to do rather than the technical details to make it work.

The bottom line is that PixiJS brings the world of WebGL within closer reach of front-end developers, opening up a lot of possibilities beyond HTML, CSS and JavaScript.

The post Building an Images Gallery using PixiJS and WebGL appeared first on CSS-Tricks.
