Tag: Building

Building a Settings Component

This is a tremendous CSS-focused tutorial from Adam Argyle. I really like the “just for gap” concept here. Grid is extremely powerful, but you don’t have to use all its abilities every time you reach for it. Here, Adam reaches for it for very light reasons like using it as an in-between border alternative as well as more generic spacing. I guess he’s putting money where his mouth is in terms of gap superseding margin!

I also really like calling out Una Kravets’ awesome name for flexible grids: RAM. Perhaps you’ve seen the flexible-number-of-columns trick with CSS grid? The bonus trick here (which I first saw from Evan Minto) is to use min(). That way, not only are large layouts covered, but even the very smallest layouts have no hard-coded minimum (like if 100% is smaller than 10ch here):

.el {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(min(10ch, 100%), 35ch));
}

There is a ton more trickery in the blog post. The “color pops” with :focus-within is fun and clever. So much practical CSS in building something so practical! 🧡 more blog posts like this, please. Fortunately, we don’t have to wait, as Adam has other component-focused posts like this one on Tabs and this one on Sidenav.



Building a Full-Stack Geo-Distributed Serverless App with Macrometa, GatsbyJS, & GitHub Pages

In this article, we walk through building out a full-stack real-time and completely serverless application that allows you to create polls! All of the app’s static bits (HTML, CSS, JS, & Media) will be hosted and globally distributed via the GitHub Pages CDN (Content Delivery Network). All of the data and dynamic requests for data (i.e., the back end) will be globally distributed and stateful via the Macrometa GDN (Global Data Network).

Macrometa is a geo-distributed stateful serverless platform designed from the ground up to be lightning-fast no matter where the client is located, optimized for both reads and writes, and elastically scalable. We will use it as a database for data collection and maintaining state and stream to subscribe to database updates for real-time action.

We will be using Gatsby to manage our app and deploy it to Github Pages. Let’s do this!

Intro

This demo uses the Macrometa c8db-source-plugin to get some of the data as markdown and then transform it to HTML to display directly in the browser, and the Macrometa JSC8 SDK to keep an open socket for real-time fun and to manage working with Macrometa’s API.

Getting started

  1. Node.js and npm must be installed on your machine.
  2. After you have that done, install the Gatsby CLI: npm install -g gatsby-cli
  3. If you don’t have one already, go ahead and sign up for a free Macrometa developer account.
  4. Once you’re logged in to Macrometa, create a document collection called markdownContent. Then create a single document with title and content fields in markdown format. This creates the data model the app will use for its static content.

Here’s an example of what the markdownContent collection should look like:

{
  "title": "## Real-Time Polling Application",
  "content": "Full-Stack Geo-Distributed Serverless App Built with GatsbyJS and Macrometa!"
}

The content and title keys in the document are in markdown format. Once they go through gatsby-source-c8db, the data in title is converted to <h2></h2>, and content to <p></p>.

  1. Now create a second document collection called polls. This is where the poll data will be stored.

In the polls collection each poll will be stored as a separate document. A sample document is mentioned below:

{
  "pollName": "What is your spirit animal?",
  "polls": [
    {
      "editing": false,
      "id": "975e41",
      "text": "dog",
      "votes": 2
    },
    {
      "editing": false,
      "id": "b8aa60",
      "text": "cat",
      "votes": 1
    },
    {
      "editing": false,
      "id": "b8aa42",
      "text": "unicorn",
      "votes": 10
    }
  ]
}

Setting up auth

Your Macrometa login details, along with the collection to be used and the markdown transformations, have to be provided in the application’s gatsby-config.js like below:

{
  resolve: "gatsby-source-c8db",
  options: {
    config: "https://gdn.paas.macrometa.io",
    auth: {
      email: "<my-email>",
      password: process.env.MM_PW
    },
    geoFabric: "_system",
    collection: "markdownContent",
    map: {
      markdownContent: {
        title: "text/markdown",
        content: "text/markdown"
      }
    }
  }
}

Under password, you will notice that it says process.env.MM_PW. Instead of putting your password there, we are going to create some .env files and make sure those files are listed in our .gitignore file so we don’t accidentally push our Macrometa password to GitHub. In your root directory, create .env.development and .env.production files.

You will only have one thing in each of those files: MM_PW='<your-password-here>'
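For the process.env.MM_PW reference to actually resolve at build time, Gatsby projects typically load these files with the dotenv package at the very top of gatsby-config.js. A minimal sketch of that pattern, assuming dotenv is available as a dependency:

// At the top of gatsby-config.js: load .env.development or .env.production
// depending on the current NODE_ENV, so process.env.MM_PW is defined below.
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
});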

Running the app locally

We have the frontend code already done, so you can fork the repo, set up your Macrometa account as described above, add your password as described above, and then deploy. Go ahead and do that and then I’ll walk you through how the app is set up so you can check out the code.

In the terminal of your choice:

  1. Fork this repo and clone your fork onto your local machine
  2. Run npm install
  3. Once that’s done, run npm run develop to start the local server. This will start the local development server at http://localhost:<some_port> and the GraphQL server at http://localhost:<some_port>/___graphql.

How to deploy app (UI) on GitHub Pages

Once you have the app running as expected in your local environment simply run npm run deploy!

Gatsby will automatically generate the static code for the site, create a branch called gh-pages, and deploy it to GitHub.

Now you can access your site at <your-github-username>.github.io/tutorial-jamstack-pollingapp

If your app isn’t showing up there for some reason, go check your repo’s settings and make sure GitHub Pages is enabled and configured to run on your gh-pages branch.

Walking through the code

First, we made a file that loaded the Macrometa JSC8 Driver, made sure we opened up a socket to Macrometa, and then defined the various API calls we will be using in the app. Next, we made the config available to the whole app.

After that we wrote the functions that handle various front-end events. Here’s the code for handling a vote submission:

onVote = async (onSubmitVote, getPollData, establishLiveConnection) => {
  const { title } = this.state;
  const { selection } = this.state;
  this.setState({ loading: true }, () => {
    onSubmitVote(selection)
      .then(async () => {
        const pollData = await getPollData();
        this.setState({ loading: false, hasVoted: true, options: Object.values(pollData) }, () => {
          // open socket connections for live updates
          const onmessage = msg => {
            const { payload } = JSON.parse(msg);
            const decoded = JSON.parse(atob(payload));
            this.setState({ options: decoded[title] });
          }
          establishLiveConnection(onmessage);
        });
      })
      .catch(err => console.log(err))
  });
}

You can check out a live example of the app here

You can create your own poll. To allow multiple people to vote on the same topic just share the vote URL with them.



Building an Ethereum app using Redwood.js and Fauna

With the recent climb of Bitcoin’s price above $20k USD, and its recent break above $30k, I thought it was worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning, open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realized that there will be some new people stumbling onto this post with no prior experience with Ethereum, and it would behoove me to point these readers in some direction. Thankfully, as of the time of this rewriting, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend that newcomers go through it!

That said, I will be providing relevant specific details as we go along so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in the blog post he wrote for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for its database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides us with a high availability solution, with each globally located server containing a partition of our database and replicating our data asynchronously with each request or transaction made.

Some of the benefits to using Fauna can be summarized as: 

  • Transactional 
  • Multi-document 
  • Geo-distributed 

In short, Fauna frees the developer from worrying about single- or multi-document solutions. It guarantees consistent data without burdening the developer with modeling their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as: 

  • Firebase 
  • Cassandra 
  • MongoDB 

But these options don’t give us the ACID guarantees that Fauna does without compromising scalability. ACID stands for:

  • Atomic: all transactions are a single unit of truth; either they all pass or none do. If we have multiple transactions in the same request, then either both succeed or neither does; one cannot fail while the other succeeds. 
  • Consistent: a transaction can only bring the database from one valid state to another. That is, any data written to the database must follow the rules set out by the database, which ensures that all transactions are legal. 
  • Isolated: when a transaction is made or created, concurrent transactions leave the state of the database the same as it would be if each request were made sequentially. 
  • Durable: any transaction that is made and committed to the database is persisted in the database, regardless of downtime or failure of the system.

Redwood.js

Since I’ve used Fauna several times, I can vouch for Fauna’s database first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server!! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript (not production-ready) serverless framework which comes prepackaged with Apollo Client/Server! 

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, do be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folk may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows or txn streams to enable real-time financial applications; that is, apps where the user no longer needs to wait for txn confirmations and can immediately follow up with the next set of computational actions. 

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing for a user to “mint a continuous supply of NFTs”. This stream of NFTs can then be sold via auctions. Another interesting part of the emanator app is that these NFTs are for creators, artists 👩‍🎨 , or musicians 🎼 . 

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator `monorepo`, if you want to learn more.  

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo in your terminal, then open it in your favorite text editor or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and setup environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually the PROJECT ID of your Infura app. 

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it into your .env file as the value for INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug in the code where this field is actually missing from the monorepo. So, we must add it to get our app working!

We are adding line 33, a contentHash field with the type `String` so that our Auctions can be added to our database and then shown to the user.

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create, and then mint your first NFT! 🎉 🎉 

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details, like the one shown below:

You’ll also notice in your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing Ctrl+C (on macOS). Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down for now until we’ve finished adding Fauna. 

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I will not walk through how to do that, but this helpful article does a good job of covering it!). Once you have copied your key secret, paste it into your .env file by replacing <FAUNA_SECRET_KEY>:

Make sure to leave the quotation marks in place! 

Importing GraphQL Schema to Fauna

To import our project’s GraphQL schema to Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file, api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Comment out to bypass Fauna 'Import your GraphQL schema' error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll remove this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our 3 separate schema files api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js to correspond to our new Fauna GraphQL schema!! This is important to maintain consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!  

Otherwise, if you want to just get up and running quickly, this section is for you. 

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver, faunadb, and the graphql-request package. We will use both of these for our main modifications to our database scripts folder to add Fauna. 

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit  api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:

Then, we must make a small update to our api/src/functions/graphql.js file like so:

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:
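The file itself appears as a screenshot in the original post, so its exact contents aren’t reproduced here. A minimal sketch of what such a client file might look like, assuming the faunadb package from step 1 and that the secret from your .env file is exposed as process.env.FAUNA_SECRET (the actual variable name may differ):

// api/src/lib/fauna-client.js (sketch): export the Fauna client and the FQL helpers.
const faunadb = require("faunadb");

// q exposes the FQL functions (Get, Match, Create, etc.); client runs the queries.
const q = faunadb.query;
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });

module.exports = { client, q };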

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js we just created. This part doesn’t seem straightforward initially, but with some careful thought, all the necessary changes come down to understanding how Fauna’s FQL commands work. 

FQL (Fauna Query Language) is Fauna’s native API for querying Fauna. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/services/auctions/auctions.js, we’ll do the following:

To break this down a bit, first, we import the client variables and `db` instance from the proper project file paths. Then, we remove line 11, and replace it with lines 13 – 28 (you can ignore the comments for now, but if you really want to see the rest of these, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions Index of our Fauna Indexes to get all the auctions data from our Fauna database. You can test this out by running console.log(auctionsRaw).
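The diff itself isn’t reproduced here, but a read like the one described, written with the faunadb driver’s FQL helpers, looks roughly like this. The index name auctions comes from the prose above; treating db as the Fauna client and q as faunadb.query is an assumption based on the earlier setup:

// Sketch: fetch every document referenced by the "auctions" index
// (this runs inside the async auctions() service function).
const auctionsRaw = await db.query(
  q.Map(
    q.Paginate(q.Match(q.Index("auctions"))),
    q.Lambda("ref", q.Get(q.Var("ref")))
  )
);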

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object, but we want an array, we’ll add the following in the next line after finishing the declaration of const auctionsRaw:
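That line is only visible in the screenshot, but given that the faunadb driver returns paginated results as an object with a data property, it is presumably something close to:

// Sketch: pull the array of records out of the paginated result object.
const auctionsDataObjects = auctionsRaw.data;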

Now we can see that we’re getting the right data format.

Next, let’s update the call instance of `auctionsRaw` to our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statement of both the auction and createAuction functions. The changes we make are actually quite similar. So, let’s update our auction function like so:

Again, you can ignore the comments, as this comment just notes the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”
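As a rough sketch of that kind of lookup with the faunadb driver, assuming an index named auction_by_address exists with the auction’s address as its term (the index name and the db/q helpers are assumptions, not the post’s exact code):

// Sketch: fetch the single Auction document whose address term matches.
const auction = async ({ address }) => {
  const result = await db.query(
    q.Get(q.Match(q.Index("auction_by_address"), address))
  );
  return result.data; // the document's fields
};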

This next step to complete the createAuction function is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, this resulted in having to make many minor changes to get the app to properly run the creation of an auction. So, instead of walking in detail through this section, I will first show you the diff, then briefly summarize the purpose of the changes. 

Looking at the green lines of 100 and 101, we can see that the functional commands we’re using here are not that much different; here, we’re just creating a new document in our Auction collection, instead of reading from the Indexes. 

Turning back to the data fields of this createAuction function, we can see that we are given an input as an argument, which actually refers to the UI input fields of the new NFT auction form on the Home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the other variables I created, id, dateTime, status, and highBid, are variables I more or less hardcoded so that this function could complete successfully. 
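Putting that together, the write boils down to an FQL Create call on the Auction collection. A rough sketch, with the hardcoded defaults shown as placeholders rather than the post’s exact values:

// Sketch: create a new Auction document from the form input plus placeholder defaults.
const createAuction = async ({ input }) => {
  const doc = await db.query(
    q.Create(q.Collection("Auction"), {
      data: {
        ...input,                                // address, name, owner, winLength, description, contentHash
        id: Math.floor(Math.random() * 1000000), // placeholder id
        createdAt: new Date().toISOString(),     // placeholder dateTime
        status: "started",                       // placeholder status
        highBid: 0,                              // placeholder highBid
      },
    })
  );
  return doc.data;
};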

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊 , phew!

Completing GraphQL services

By now, you may be feeling a bit tired from updating the GraphQL services (I know I was while I was trying to learn the necessary changes to make!). So, to save you time getting this app to work, instead of walking through them entirely, I will share the git diffs again from the integrating-fauna branch that I already have working in the repo. After sharing them, I will summarize the changes that were made.

First file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one final change in web/src/components/AuctionCell/AuctionCell.js:

So, back to Fauna not supporting custom scalars. Since Fauna doesn’t support custom scalars, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out from our GraphQL schemas). 

The last change that was made in web/src/components/AuctionCell/AuctionCell.js is another hacky change to make the newly created NFT address domains (you can navigate to these when you click on the hyperlink of the NFT name, located on the right of the home page after you create a new NFT) clickable without throwing an error. 😄 

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page. So, we will do this just like we did with our original Emanator monorepo version. 

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏 

 And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT is still not renderable due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! This repo, redwood-eth-with-fauna is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (as it currently does 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage this app further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help you! 😊

That’s all for this tut, and I hope you enjoyed! Feel free to reach out with any questions on GitHub!



Building Flexible Components With Transparency

Good thinking from Paul Herbert on the Cloud Four blog about colorizing a component. You might look at a design comp and see a card component with a header background of #dddddd, content background of #ffffff, on an overall background of #eeeeee. OK, easy enough. But what if the overall background becomes #dddddd? Now your header looks lost within it.

That darker header? Design-wise, it’s not being exactly #dddddd that’s important; it’s about looking slightly darker than the background. When that’s the case, a background of, say, rgba(0, 0, 0, 0.135) is more resilient.

That will then remain resilient against backgrounds of any kind.


A Community-Driven Site with Eleventy: Building the Site

In the last article, we learned what goes into planning for a community-driven site. We saw just how many considerations are needed to start accepting user submissions, using what I learned from my experience building Style Stage as an example.

Now that we’ve covered planning, let’s get to some code! Together, we’re going to develop an Eleventy setup that you can use as a starting point for your own community (or personal) site.

Article Series:

  1. Preparing for Contributions
  2. Building the Site (You are here!)

This article will cover:

  • How to initialize Eleventy and create useful develop and build scripts
  • Recommended setup customizations
  • How to define custom data and combine multiple data sources
  • Creating layouts with Nunjucks and Eleventy layout chaining
  • Deploying to Netlify

The vision

Let’s assume we want to let folks submit their dogs and cats and pit them against one another in cuteness contests.

Screenshot of the site, showing a Meow vs. Bow Wow heading above a Weekly Battle subheading, followed by a photo of a tabby cat named Fluffy and one of a happy dog named Lexi.
Live demo

We’re not going to get into user voting in this article. That would be so cool (and totally possible with serverless functions) but our focus is on the pet submissions themselves. In other words, users can submit profile details for their cats and dogs. We’ll use those submissions to create a weekly battle that puts a random cat up against a random dog on the home page to duke it out over which is the most purrrfect (or woof-tastic, if you prefer).

Let’s spin up Eleventy

We’ll start by initializing a new project by running npm init on any directory you’d like, then installing Eleventy into it with:

npm install @11ty/eleventy

While it’s totally optional, I like to open up the package.json file that’s added to the directory and replace the scripts section with this:

"scripts": {   "develop": "eleventy --serve",   "build": "eleventy" },

This allows us to start developing Eleventy in a development environment (npm run develop) that includes Browsersync hot-reloading for local development. It also adds a command that compiles and builds our work (npm run build) for deployment on a production server.

If you’re thinking, “npm what?” what we’re doing is calling on Node (which is something Eleventy requires). The commands noted here are intended to be run in your preferred terminal, which may be an additional program or built-in to your code editor, like it is in VS Code.

We’ll need one more npm package, fast-glob, that will come in handy a little later for combining data. We may as well install it now:

npm install --save-dev fast-glob

Let’s configure our directory

Eleventy allows customizing the input directory (where we work) and output directory (where our built work goes) to provide a little extra organization.

To configure this, we’ll create the .eleventy.js file at the root of the project directory. Then we’ll tell Eleventy where we want our input and output directories to go. In this case, we’re going to use a src directory for the input and a public directory for the output.

module.exports = function (eleventyConfig) {
  return {
    dir: {
      input: "src",
      output: "public"
    },
  };
};

Next, we’ll create a directory called pets where we’ll store the pets data we get from user submissions. We can even break that directory down a little further to reduce merge conflicts and clearly distinguish cat data from dog data with cat and dog subdirectories:

pets/
  cats/
  dogs/

What’s the data going to look like? Users will send in a JSON file that follows this schema, where each property is a data point about the pet:

{
  "name": "",
  "petColor": "",
  "favoriteFood": "",
  "favoriteToy": "",
  "photoURL": "",
  "ownerName": "",
  "ownerTwitter": ""
}

To make the submission process crystal clear for users, we can create a CONTRIBUTING.md file at the root of the project and write out the guidelines for submissions. GitHub takes the content in this file and displays it in the repo. This way, we can provide guidance on this schema, such as a note that favoriteFood, favoriteToy, and ownerTwitter are optional fields.

A README.md file would be just as fine if you’d prefer to go that route. It’s just nice that there’s a standard file that’s meant specifically for contributions.

Notice photoURL is one of those properties. We could’ve made this a file but, for the sake of security and hosting costs, we’re going to ask for a URL instead. You may decide that you are willing to take on actual files, and that’s totally cool.

Let’s work with data

Next, we need to create a combined array of data out of the individual cat files and dog files. This will allow us to loop over them to create site pages and pick random cat and dog submissions for the weekly battles.

Eleventy allows node module.exports within the _data directory. That means we can create a function that finds all cat files and another that finds all dog files and then creates arrays out of each set. It’s like taking each cat file and merging them together to create one data set in a single JavaScript file, then doing the same with dogs.

The filename used in _data becomes the variable that holds that dataset, so we’ll add files for cats and dogs in there:

_data/
  cats.js
  dogs.js

The functions in each file will be nearly identical — we’re merely swapping instances of “cat” for “dog” between the two. Here’s the function for cats: 

const fastglob = require("fast-glob");
const fs = require("fs");

module.exports = async () => {
  // Create a "glob" of all cat json files
  const catFiles = await fastglob("./src/pets/cats/*.json", {
    caseSensitiveMatch: false,
  });

  // Loop through those files and add their content to our `cats` Set
  let cats = new Set();
  for (let cat of catFiles) {
    const catData = JSON.parse(fs.readFileSync(cat));
    cats.add(catData);
  }

  // Return the cats Set of objects within an array
  return [...cats];
};

Does this look scary? Never fear! I do not routinely write node either, and it’s not a required step for less complex Eleventy sites. If we had instead chosen to have contributors add to an ever growing single JSON file with _data, then this combination step wouldn’t be necessary in the first place. Again, the main reason for this step is to reduce merge conflicts by allowing for individual contributor files. It’s also the reason we added fast-glob to the mix.

Let’s output the data

This is a good time to start plugging data into the templates for our UI. In fact, go ahead and drop a few JSON files into the pets/cats and pets/dogs directories that include data for the properties so we have something to work with right out of the gate and test things.

We can go ahead and add our first Eleventy page by adding an index.njk file in the src directory. This will become the home page, and it is in the Nunjucks template file format.

Nunjucks is one option of many for creating templates with Eleventy. See the docs for a full list of templating options.

Let’s start by looping over our data and outputting an unordered list both for cats and dogs:

<ul>
  <!-- Loop through cat data -->
  {% for cat in cats %}
  <li>
    <a href="/cats/{{ cat.name | slug }}/">{{ cat.name }}</a>
  </li>
  {% endfor %}
</ul>

<ul>
  <!-- Loop through dog data -->
  {% for dog in dogs %}
  <li>
    <a href="/dogs/{{ dog.name | slug }}/">{{ dog.name }}</a>
  </li>
  {% endfor %}
</ul>

As a reminder, the reference to cats and dogs matches the filename in _data. Within the loop we can access the JSON keys using dot notation, as seen for cat.name, which is output as a Nunjucks template variable using double curly braces (e.g. {{ cat.name }}).

Let’s create pet profile pages

Besides lists of cats and dogs on the home page (index.njk), we also want to create individual profile pages for each pet. The loop indicated a hint at the structure we’ll use for those, which will be [pet type]/[name-slug].

The recommended way to create pages from data is via the Eleventy concept of pagination which allows chunking out data.

We’re going to create the files responsible for the pagination at the root of the src directory, but you could nest them in a custom directory, as long as it lives within src and can still be discovered by Eleventy.

src/
  cats.njk
  dogs.njk

Then we’ll add our pagination information as front matter, shown for cats:

---
pagination:
  data: cats
  alias: cat
  size: 1
permalink: "/cats/{{ cat.name | slug }}/"
---

The data value is the filename from _data. The alias value is optional, but is used to reference one item from the paginated array. size: 1 indicates that we’re creating one page per item of data.

Finally, in order to successfully create the page output, we need to also indicate the desired permalink structure. That’s where the alias value above comes into play, which accesses the name key from the dataset. Then we are using a built-in filter called slug that transforms a string value into a URL-friendly string (lowercasing and converting spaces to dashes, etc).

Let’s review what we have so far

Now is the time to fire up Eleventy with npm run develop. That will start the local server and show you a URL in the terminal you can use to view the project. It will show build errors in the terminal if there are any.

As long as all was successful, Eleventy will create a public directory, which should contain:

public/
  cats/
    cat1-name/index.html
    cat2-name/index.html
  dogs/
    dog1-name/index.html
    dog2-name/index.html
  index.html

And in the browser, the index page should display one linked list of cat names and another one of linked dog names.

Let’s add data to pet profile pages

Each of the generated pages for cats and dogs is currently blank. We have data we can use to fill them in, so let’s put it to work.

Eleventy expects an _includes directory that contains layout files (“templates”) or template partials that are included in layouts.

We’ll create two layouts:

src/
  _includes/
    base.njk
    pets.njk

The contents of base.njk will be an HTML boilerplate. The <body> element in it will include a special template tag, {{ content | safe }}, where content passed into the template will render, with safe meaning it can render any HTML that is passed in versus encoding it.

Then, we can assign the homepage, index.njk, to use the base.njk layout by adding the following as front matter. This should be the first thing in index.njk, including the dashes:

---
layout: base.njk
---

If you check the compiled HTML in the public directory, you’ll see that the output of the cat and dog loops we created is now within the <body> of the base.njk layout.

Next, we’ll add the same front matter to pets.njk to define that it will also use the base.njk layout to leverage the Eleventy concept of layout chaining. This way, the content we place in pets.njk will be wrapped by the HTML boilerplate in base.njk so we don’t have to write out that HTML each and every time.

In order to use the single pets.njk template to render both cat and dog profile data, we’ll use one of the newest Eleventy features called computed data. This will allow us to assign values from the cats and dogs data to the same template variables, as opposed to using if statements or two separate templates (one for cats and one for dogs). The benefit is, once again, to avoid redundancy.

Here’s the update needed in cats.njk, with the same update needed in dogs.njk (substituting cat with dog):

eleventyComputed:
  title: "{{ cat.name }}"
  petColor: "{{ cat.petColor }}"
  favoriteFood: "{{ cat.favoriteFood }}"
  favoriteToy: "{{ cat.favoriteToy }}"
  photoURL: "{{ cat.photoURL }}"
  ownerName: "{{ cat.ownerName }}"
  ownerTwitter: "{{ cat.ownerTwitter }}"

Notice that eleventyComputed defines this front matter array key and then uses the alias for accessing values in the cats dataset. Now, for example, we can just use {{ title }} to access a cat’s name and a dog’s name since the template variable is now the same.

We can start by dropping the following code into pets.njk to successfully load cat or dog profile data, depending on the page being viewed:

<img src="{{ photoURL }}" />
<ul>
  <li><strong>Name</strong>: {{ title }}</li>
  <li><strong>Color</strong>: {{ petColor }}</li>
  <li><strong>Favorite Food</strong>: {{ favoriteFood if favoriteFood else 'N/A' }}</li>
  <li><strong>Favorite Toy</strong>: {{ favoriteToy if favoriteToy else 'N/A' }}</li>
{% if ownerTwitter %}
  <li><strong>Owner</strong>: <a href="{{ ownerTwitter }}">{{ ownerName }}</a></li>
{% else %}
  <li><strong>Owner</strong>: {{ ownerName }}</li>
{% endif %}
</ul>

The last thing we need to tie this all together is to add layout: pets.njk to the front matter in both cats.njk and dogs.njk.

With Eleventy running, you can now visit an individual pet page and see their profile:

Screenshot of a cat profile page that starts with the cat's name for the heading, followed by the cat's photo, and a list of the cat's details.
Fancy Feast for a fancy cat. 😻

We’re not going into styling in this article, but you can head over to the sample project repo to see how CSS is included.

Let’s deploy this to production!

The site is now in a functional state and can be deployed to a hosting environment! 

As recommended earlier, Netlify is an ideal choice, particularly for a community-driven site, since it can trigger a deployment each time a submission is merged and provide a preview of the submission before sending it for review.

If you choose Netlify, you will want to push your site to a GitHub repo which you can select during the process of adding a site to your Netlify account. We’ll tell Netlify to serve from the public directory and run npm run build when new changes are merged into the main branch.

The sample site includes a netlify.toml file which has the build details and is automatically detected by Netlify in the repo, removing the need to define the details in the new site flow.

Once the initial site is added, visit Settings → Build → Deploy in Netlify. Under Deploy contexts, select “Edit” and update the selection for “Deploy Previews” to “Any pull request against your production branch / branch deploy branches.” Now, for any pull request, a preview URL will be generated with the link being made available directly in the pull request review screen.

Let’s start accepting submissions!

Before we pass Go and collect $100, it’s a good idea to revisit the first post and make sure we’re prepared to start taking user submissions. For example, we ought to add community health files to the project if they haven’t already been added. Perhaps the most important thing is to make sure a branch protection rule is in place for the main branch. This means that your approval is required prior to a pull request being merged.

Contributors will need to have a GitHub account. While this may seem like a barrier, it removes some of the anonymity. Depending on the sensitivity of the content, or the target audience, this can actually help vet (get it?) contributors.

Here’s the submission process:

  1. Fork the website repository.
  2. Clone the fork to a local machine or use the GitHub web interface for the remaining steps.
  3. Create a unique .json file within src/pets/cats or src/pets/dogs that contains required data.
  4. Commit the changes if they’re made on a clone, or save the file if it was edited in the web interface.
  5. Open a pull request back to the main repository.
  6. (Optional) Review the Netlify deploy preview to verify information appears as expected.
  7. Merge the changes.
  8. Netlify deploys the new pet to the live site.

An FAQ section is a great place to inform contributors how to create a pull request. You can check out an example on Style Stage.

Let’s wrap this up…

What we have is a fully functional site that accepts user contributions as submissions to the project repo. It even auto-deploys those contributions for us when they’re merged!

There are many more things we can do with a community-driven site built with Eleventy. For example:

  • Markdown files can be used for the content of an email newsletter sent with Buttondown. Eleventy allows mixing Markdown with Nunjucks or Liquid. So, for example, you can add a Nunjucks for loop to output the latest five pets as links that output in Markdown syntax and get picked up by Buttondown.
  • Auto-generated social media preview images can be made for social network link previews.
  • A commenting system can be added to the mix.
  • Netlify CMS Open Authoring can be used to let folks make submissions with an interface. Check out Chris’ great rundown of how it works.

My Meow vs. BowWow example is available for you to fork on GitHub. You can also view the live preview and, yes, you really can submit your pet to this silly site. 🙂

Best of luck creating a healthy and thriving community!

Article Series:

  1. Preparing for Contributions
  2. Building the Site (You are here!)


Building Custom Data Importers: What Engineers Need to Know

Importing data is a common pain-point for engineering teams. Whether it’s importing CRM data, inventory SKUs, or customer details, importing data into various applications and building a solution for this is a frustrating experience nearly every engineer can relate to. Data import, as a critical product experience, is a huge headache. It reduces the time to value for customers, strains internal resources, and takes valuable development cycles away from developing key, differentiating product features.

Frequent error messages end-users receive when importing data. Why do we expect customers to fix this themselves?

Data importers, specifically CSV importers, haven’t been treated as key product features within the software and customer experience. As a result, engineers tend to dedicate an exorbitant amount of effort to creating less-than-ideal solutions for customers to successfully import their data.

Engineers typically create lengthy, technical documentation for customers to review when an import fails. However, this doesn’t truly solve the issue but instead offsets the burden of a great experience from the product to an end-user.

In this article, we’ll address the current problems with importing data and discuss a few key product features that are necessary to consider if you’re faced with a decision to build an in-house solution.

Importing data is typically frustrating for anyone involved at a data-led company. Simply put, there has never been a standard for importing customer data. Until now, teams have deferred to CSV templates, lengthy documentation, video tutorials, or buggy in-house solutions to allow users the ability to import spreadsheets. Companies trying to import CSV data can run into a variety of issues such as:

  • Fragmented data: With no standard way to import data, we get emails going back and forth with attached spreadsheets that are manually imported. As spreadsheets get passed around, there are obvious version control challenges. Who made this change? Why don’t these numbers add up as they did in the original spreadsheet? Why are we emailing spreadsheets containing sensitive data?
  • Improper formatting: CSV import errors frequently occur when formatting isn’t done correctly. As a result, companies often rely on internal developer resources to properly clean and format data on behalf of the client — a process that can take hours per customer, and may lead to churn anyway. This includes correcting dates or splitting fields that need to be healed prior to importing.
  • Encoding errors: There are plenty of instances where a spreadsheet can’t be imported when it’s not properly encoded. For example, a company may need a file to be saved with UTF-8 encoding (the encoding typically preferred for email and web pages) in order to then be uploaded properly to their platform. Incorrect encoding can result in a lengthy chain of communication where the customer is burdened with correcting and re-importing their data.
  • Data normalization: A lack of data normalization results in data redundancy and a never-ending string of data quality problems that make customer onboarding particularly challenging. One example includes formatting email addresses, which are typically imported into a database, or checking value uniqueness, which can result in a heavy load on engineers to get the validation working correctly.

Remember building your first CSV importer?

When it comes down to creating a custom-built data importer, there are a few critical features that you should include to help improve the user experience. (One caveat: a data importer can be time-consuming not only to create but also to maintain. It’s easy to ensure your company has adequate engineering bandwidth when first creating a solution, but what about maintenance in 3, 6, or 12 months?)

A preview of Flatfile Portal. It integrates in minutes using a few lines of JavaScript.

Data mapping

Mapping or column-matching (they are often used interchangeably) is an essential requirement for a data importer as the file import will typically fail without it. An example is configuring your data model to accept contact-level data. If one of the required fields is “address” and the customer who is trying to import data chooses a spreadsheet where the field is labeled “mailing address,” the import will fail because “mailing address” doesn’t correlate with “address” in a back-end system. This is typically ‘solved’ by providing a pre-built CSV template for customers, who then have to manipulate their data, effectively increasing time-to-value during a product experience. Data mapping needs to be included in the custom-built product as a key feature to retain data quality and improve the customer data onboarding experience.

Auto-column matching CSV data is the bread and butter of Portal, saving massive amounts of time for customers while providing a delightful import experience.
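To make the column-matching idea concrete, here is a generic sketch of the concept, not Flatfile’s actual API; the alias list is hypothetical and would normally be configurable per data model:

// Generic column-matching sketch: map an incoming CSV header to a known model field.
const headerAliases = {
  address: ["address", "mailing address", "street address"],
};

function matchColumn(incomingHeader) {
  const normalized = incomingHeader.trim().toLowerCase();
  const match = Object.entries(headerAliases).find(([, aliases]) =>
    aliases.includes(normalized)
  );
  return match ? match[0] : null; // null means the user has to map it manually
}

matchColumn("Mailing Address"); // "address"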

Data validation

Data validation, which checks if the data matches an expected format or value, is another critical feature to include in a custom data importer. Data validation is all about ensuring the data is accurate and is specific to your required data model. For example, if special characters can’t be used within a certain template, error messages can appear during the import stage. Having spreadsheets with hundreds of rows containing validation errors results in a nightmare for customers, as they’ll have to fix these issues themselves, or your team, which will spend hours on end cleaning data. Automatic data validators allow for streamlining of healing incoming data without the need for a manual review.

We built Data Hooks into Portal to quickly normalize data on import. A perfect use-case would be validating email uniqueness against a database.
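As a generic illustration of a per-value validation rule (not Flatfile’s Data Hooks API), the special-character check mentioned above might look like:

// Generic validation sketch: reject values containing disallowed special characters.
function validateNoSpecialCharacters(value) {
  const hasSpecialChars = /[^a-zA-Z0-9 .,'-]/.test(value);
  return hasSpecialChars
    ? { valid: false, message: "Remove special characters from this field." }
    : { valid: true };
}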

Data parsing

Data parsing is the process of taking an aggregation of information (in a spreadsheet) and breaking it into discrete parts. It’s the separation of data. In a custom-built data importer, a data parsing feature should not only have the ability to go from a file to an array of discrete data but also streamline the process for customers.

Data transformation

Data transformation means making changes to imported data as it’s flowing into your system to meet an expected or desired value. Rather than sending data back to users with an error message for them to fix, data transformation can make small, systematic tweaks so that the users’ data is more usable in your backend. For example, when transferring a task list, prioritization data could be transformed into a different value, such as numbers instead of labels.

Data Hooks normalize imported customer data automatically using validation rules set in the Portal JSON config. These highly adaptable hooks can be worked to auto-validate nearly any incoming customer data.
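A generic sketch of that kind of transformation, with a hypothetical label-to-number mapping:

// Generic transformation sketch: convert priority labels to numeric values on import.
const priorityMap = { low: 1, medium: 2, high: 3 };

function transformPriority(value) {
  const key = String(value).trim().toLowerCase();
  return key in priorityMap ? priorityMap[key] : value; // leave unknown values untouched
}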

We’ve baked all of the above features into Portal, our flagship CSV importer at Flatfile. Now that we’ve reviewed some of the must-have features of a data importer, the next obvious question for an engineer building an in-house importer is typically… should they?

Engineering teams that are taking on this task typically use custom or open source solutions, which may not adhere to specific use-cases. Building a comprehensive data importer also brings UX challenges when building a UI and maintaining application code to handle data parsing, normalization, and mapping. This is prior to considering how customer data requirements may change in future months and the ramifications of maintaining a custom-built solution.

Companies facing data import challenges are now considering integrating a pre-built data importer such as Flatfile Portal. We’ve built Portal to be the elegant import button for web apps. With just a few lines of JavaScript, Portal can be implemented alongside any data model and validation ruleset, typically in a few hours. Engineers no longer need to dedicate hours cleaning up and formatting data, nor do they need to custom build a data importer (unless they want to!). With Flatfile, engineers can focus on creating product-differentiating features, rather than work on solving spreadsheet imports.

Importing data is wrought with challenges and there are several critical features necessary to include when building a data importer. The alternative to a custom-built solution is to look for a pre-built data importer such as Portal.

Flatfile’s mission is to remove barriers between humans and data. With AI-assisted data onboarding, they eliminate repetitive work and make B2B data transactions fast, intuitive, and error-free. Flatfile automatically learns how imported data should be structured and cleaned, enabling customers and teams to spend more time using their data instead of fixing it. Flatfile has transformed over 300 million rows of data for companies like ClickUp, Blackbaud, Benevity, and Toast. To learn more about Flatfile’s products, Portal and Concierge, visit flatfile.io.


The post Building Custom Data Importers: What Engineers Need to Know appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks


Building a Blog with Next.js

In this article, we will use Next.js to build a static blog framework with the design and structure inspired by Jekyll. I’ve always been a big fan of how Jekyll makes it easier for beginners to set up a blog while also providing a great degree of control over every aspect of the blog for advanced users.

With the introduction of Next.js in recent years, combined with the popularity of React, there is a new avenue to explore for static blogs. Next.js makes it super easy to build static websites based on the file system itself with little to no configuration required.

The directory structure of a typical bare-bones Jekyll blog looks like this:

.
├─── _posts/          ...blog posts in markdown
├─── _layouts/        ...layouts for different pages
├─── _includes/       ...re-usable components
├─── index.md         ...homepage
└─── config.yml       ...blog config

The idea is to design our framework around this directory structure as much as possible so that it becomes easier to  migrate a blog from Jekyll by simply reusing the posts and configs defined in the blog.

For those unfamiliar with Jekyll, it is a static site generator that can transform your plain text into static websites and blogs. Refer to the quick start guide to get up and running with Jekyll.

This article also assumes that you have a basic knowledge of React. If not, React’s getting started page is a good place to start.

Installation

Next.js is powered by React and written in Node.js. So we need Node.js (which ships with npm) installed first, before adding next, react and react-dom to the project.

mkdir nextjs-blog && cd $_
npm init -y
npm install next react react-dom --save

To run Next.js scripts on the command line, we have to add the next command to the scripts section of our package.json.

"scripts": {   "dev": "next" }

We can now run npm run dev on the command line for the first time. Let’s see what happens.

$ npm run dev

> nextjs-blog@1.0.0 dev /~user/nextjs-blog
> next

ready - started server on http://localhost:3000
Error: > Couldn't find a `pages` directory. Please create one under the project root

The compiler is complaining about a missing pages directory in the root of the project. We’ll learn about the concept of pages in the next section.

Concept of pages

Next.js is built around the concept of pages. Each page is a React component that can be of type .js or .jsx which is mapped to a route based on the filename. For example:

File                            Route
----                            -----
/pages/about.js                 /about
/pages/projects/work1.js        /projects/work1
/pages/index.js                 /

Let’s create the pages directory in the root of the project and populate our first page, index.js, with a basic React component.

// pages/index.js
export default function Blog() {
  return <div>Welcome to the Next.js blog</div>
}

Run npm run dev once again to start the server and navigate to http://localhost:3000 in the browser to view your blog for the first time.

Screenshot of the homepage in the browser. The content says welcome to the next.js blog.

Out of the box, we get:

  • Hot reloading so we don’t have to refresh the browser for every code change.
  • Static generation of all pages inside the /pages/** directory.
  • Static file serving for assets living in the /public/** directory.
  • 404 error page.

Navigate to a random path on localhost to see the 404 page in action. If you need a custom 404 page, the Next.js docs have great information.

Screenshot of the 404 page. It says 404 This page could not be found.

Dynamic pages

Pages with static routes are useful to build the homepage, about page, etc. However, to dynamically build all our posts, we will use the dynamic route capability of Next.js. For example:

File                        Route
----                        -----
/pages/posts/[slug].js      /posts/1
                            /posts/abc
                            /posts/hello-world

Any route, like /posts/1, /posts/abc, etc., will be matched by /posts/[slug].js and the slug parameter will be sent as a query parameter to the page. This is especially useful for our blog posts because we don’t want to create one file per post; instead we could dynamically pass the slug to render the corresponding post.

Anatomy of a blog

Now that we understand the basic building blocks of Next.js, let’s define the anatomy of our blog.

.
├─ api
│  └─ index.js             # fetch posts, load configs, parse .md files etc.
├─ _includes
│  ├─ footer.js            # footer component
│  └─ header.js            # header component
├─ _layouts
│  ├─ default.js           # default layout for static pages like index, about
│  └─ post.js              # post layout inherits from the default layout
├─ pages
│  ├─ index.js             # homepage
│  └─ posts                # posts will be available on the route /posts/
│     └─ [slug].js         # dynamic page to build posts
└─ _posts
   ├─ welcome-to-nextjs.md
   └─ style-guide-101.md

Blog API

A basic blog framework needs two API functions: 

  • A function to fetch the metadata of all the posts in _posts directory
  • A function to fetch a single post for a given slug with the complete HTML and metadata

Optionally, we would also like all the site’s configuration defined in config.yml to be available across all the components. So we need a function that will parse the YAML config into a native object.

Since we will be dealing with a lot of non-JavaScript files, like Markdown (.md), YAML (.yml), etc., we’ll use the raw-loader library to load such files as strings to make it easier to process them.

npm install raw-loader --save-dev

Next we need to tell Next.js to use raw-loader when we import .md and .yml file formats by creating a next.config.js file in the root of the project (more info on that).

// next.config.js
module.exports = {
  target: 'serverless',
  webpack: function (config) {
    config.module.rules.push({ test: /\.md$/, use: 'raw-loader' })
    config.module.rules.push({ test: /\.yml$/, use: 'raw-loader' })
    return config
  }
}

Next.js 9.4 introduced aliases for relative imports which helps clean up the import statement spaghetti caused by relative paths. To use aliases, create a jsconfig.json file in the project’s root directory specifying the base path and all the module aliases needed for the project.

{   "compilerOptions": {     "baseUrl": "./",     "paths": {       "@includes/*": ["_includes/*"],       "@layouts/*": ["_layouts/*"],       "@posts/*": ["_posts/*"],       "@api": ["api/index"],     }   } }

For example, this allows us to import our layouts by just using:

import DefaultLayout from '@layouts/default'

Fetch all the posts

This function will read all the Markdown files in the _posts directory, parse the front matter defined at the beginning of the post using gray-matter and return the array of metadata for all the posts.

// api/index.js
import matter from 'gray-matter'

export async function getAllPosts() {
  const context = require.context('../_posts', false, /\.md$/)
  const posts = []
  for (const key of context.keys()) {
    const post = key.slice(2);
    const content = await import(`../_posts/${post}`);
    const meta = matter(content.default)
    posts.push({
      slug: post.replace('.md', ''),
      title: meta.data.title
    })
  }
  return posts;
}

A typical Markdown post looks like this:

---
title:  "Welcome to Next.js blog!"
---

**Hello world**, this is my first Next.js blog post and it is written in Markdown. I hope you like it!

The section outlined by --- is called the front matter, which holds the metadata of the post, like title, permalink, tags, etc. Here’s the output:

[
  { slug: 'style-guide-101', title: 'Style Guide 101' },
  { slug: 'welcome-to-nextjs', title: 'Welcome to Next.js blog!' }
]

Make sure you install the gray-matter library from npm first using the command npm install gray-matter --save-dev.

Fetch a single post

For a given slug, this function will locate the file in the _posts directory, parse the Markdown with the marked library and return the output HTML with metadata.

// api/index.js
import matter from 'gray-matter'
import marked from 'marked'

export async function getPostBySlug(slug) {
  const fileContent = await import(`../_posts/${slug}.md`)
  const meta = matter(fileContent.default)
  const content = marked(meta.content)
  return {
    title: meta.data.title,
    content: content
  }
}

Sample output:

{
  title: 'Style Guide 101',
  content: '<p>Incididunt cupidatat eiusmod ...</p>'
}

Make sure you install the marked library from npm first using the command npm install marked --save-dev.

Config

In order to re-use the Jekyll config for our Next.js blog, we’ll parse the YAML file using the js-yaml library and export this config so that it can be used across components.

# config.yml
title: "Next.js blog"
description: "This blog is powered by Next.js"

// api/index.js
import yaml from 'js-yaml'

export async function getConfig() {
  const config = await import(`../config.yml`)
  return yaml.safeLoad(config.default)
}

Make sure you install js-yaml from npm first using the command npm install js-yaml --save-dev.

Includes

Our _includes directory contains two basic React components, <Header> and <Footer>, which will be used in the different layout components defined in the _layouts directory.

// _includes/header.js
export default function Header() {
  return <header><p>Blog | Powered by Next.js</p></header>
}

// _includes/footer.js
export default function Footer() {
  return <footer><p>©2020 | Footer</p></footer>
}

Layouts

We have two layout components in the _layouts directory. One is the <DefaultLayout> which is the base layout on top of which every other layout component will be built.

// _layouts/default.js
import Head from 'next/head'
import Header from '@includes/header'
import Footer from '@includes/footer'

export default function DefaultLayout(props) {
  return (
    <main>
      <Head>
        <title>{props.title}</title>
        <meta name='description' content={props.description}/>
      </Head>
      <Header/>
      {props.children}
      <Footer/>
    </main>
  )
}

The second layout is the <PostLayout> component that will override the title defined in the <DefaultLayout> with the post title and render the HTML of the post. It also includes a link back to the homepage.

// _layouts/post.js
import DefaultLayout from '@layouts/default'
import Head from 'next/head'
import Link from 'next/link'

export default function PostLayout(props) {
  return (
    <DefaultLayout>
      <Head>
        <title>{props.title}</title>
      </Head>
      <article>
        <h1>{props.title}</h1>
        <div dangerouslySetInnerHTML={{__html: props.content}}/>
        <div><Link href='/'><a>Home</a></Link></div>
      </article>
    </DefaultLayout>
  )
}

next/head is a built-in component to append elements to the <head> of the page. next/link is a built-in component that handles client-side transitions between the routes defined in the pages directory.

Homepage

As part of the index page, aka homepage, we will list all the posts inside the _posts directory. The list will contain the post title and the permalink to the individual post page. The index page will use the <DefaultLayout> and we’ll import the config in the homepage to pass the title and description to the layout.

// pages/index.js
import DefaultLayout from '@layouts/default'
import Link from 'next/link'
import { getConfig, getAllPosts } from '@api'

export default function Blog(props) {
  return (
    <DefaultLayout title={props.title} description={props.description}>
      <p>List of posts:</p>
      <ul>
        {props.posts.map(function(post, idx) {
          return (
            <li key={idx}>
              <Link href={'/posts/' + post.slug}>
                <a>{post.title}</a>
              </Link>
            </li>
          )
        })}
      </ul>
    </DefaultLayout>
  )
}

export async function getStaticProps() {
  const config = await getConfig()
  const allPosts = await getAllPosts()
  return {
    props: {
      posts: allPosts,
      title: config.title,
      description: config.description
    }
  }
}

getStaticProps is called at build time to pre-render pages by passing props to the default component of the page. We use this function to fetch the list of all posts at build time and render the posts archive on the homepage.

Screenshot of the homepage showing the page title, a list with two post titles, and the footer.

Post page

This page will render the title and contents of the post for the slug supplied as part of the context. The post page will use the <PostLayout> component.

// pages/posts/[slug].js
import PostLayout from '@layouts/post'
import { getPostBySlug, getAllPosts } from "@api"

export default function Post(props) {
  return <PostLayout title={props.title} content={props.content}/>
}

export async function getStaticProps(context) {
  return {
    props: await getPostBySlug(context.params.slug)
  }
}

export async function getStaticPaths() {
  let paths = await getAllPosts()
  paths = paths.map(post => ({
    params: { slug: post.slug }
  }));
  return {
    paths: paths,
    fallback: false
  }
}

If a page has dynamic routes, Next.js needs to know all the possible paths at build time. getStaticPaths supplies the list of paths that have to be rendered to HTML at build time. Setting the fallback property to false ensures that visiting a route that does not exist in the list of paths returns a 404 page.

Screenshot of the blog page showing a welcome header and a hello world blue above the footer.

Production ready

Add the following commands for build and start in package.json, under the scripts section and then run npm run build followed by npm run start to build the static blog and start the production server.

// package.json
"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start"
}

The entire source code in this article is available on this GitHub repository. Feel free to clone it locally and play around with it. The repository also includes some basic placeholders to apply CSS to your blog.

Improvements

The blog, although functional, is perhaps too basic for most use cases. It would be nice to extend the framework or submit a patch to include some more features like:

  • Pagination
  • Syntax highlighting
  • Categories and Tags for posts
  • Styling

Overall, Next.js seems very promising for building static websites, like a blog. Combined with its ability to export static HTML, we can build a truly standalone app without the need for a server!
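As a taste of how one of those improvements might look, here is a rough sketch of tag support. The tags front matter field and the getPostsByTag() helper are hypothetical additions, not part of the repository linked above:

// api/index.js (hypothetical extension)
// Assumes each post adds `tags: [nextjs, blog]` to its front matter and that
// getAllPosts() is updated to push `tags: meta.data.tags || []` for every post.
export async function getPostsByTag(tag) {
  const posts = await getAllPosts()
  return posts.filter(post => post.tags.includes(tag))
}

A /pages/tags/[tag].js page could then use this helper in its own getStaticProps to render an archive per tag.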

The post Building a Blog with Next.js appeared first on CSS-Tricks.

CSS-Tricks


Building Serverless GraphQL API in Node with Express and Netlify

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.

That’s the basic idea of REST. Now let’s look at the two concepts we’ll be using here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/graphql/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{   "books" : [     {       "id" : 123       "title" : "The Greatest CSS Tricks Vol. I"       "author" : "Chris Coyier"     }, {       // ...     }   ] }

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.
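To make the difference concrete, here’s a hedged sketch of what the two request styles might look like from the browser, using the article’s fictional api.example.com endpoints (the URLs are placeholders, not a real service):

// Illustration only: api.example.com is the fictional service from above
async function compareRequestStyles() {
  // REST: one purpose-built URL per resource; the server decides the response shape
  const restBooks = await fetch("https://api.example.com/rest/books")
    .then(res => res.json());

  // GraphQL: a single endpoint; the client describes exactly which fields it wants
  const gqlBooks = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: "{ books { id title author } }" }),
  }).then(res => res.json());

  return { restBooks, gqlBooks };
}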

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: Node.js and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, which is a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our app in a fully-featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, then running npm init in it. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI as a global dependency so we can use Netlify Dev from the command line:

npm i -g netlify-cli

File structure

There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:

[build]
  # This command builds the site
  command = "npm run build"

  # This is the directory that will be deployed
  publish = "build"

  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js.  Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express only takes a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API— say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object, and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World”, but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
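To give a feel for that, here’s a hedged sketch of a field that resolves from a small in-memory array instead of a hard-coded string. The books data and Book type are invented for this example and aren’t part of the tutorial’s code (GraphQLObjectType and GraphQLString are the ones we already required above):

// Hypothetical data source, for illustration only
const books = [
  { id: 123, title: "The Greatest CSS Tricks Vol. I", author: "Chris Coyier" }
];

const { GraphQLList, GraphQLInt } = require("graphql");

const BookType = new GraphQLObjectType({
  name: "Book",
  fields: () => ({
    id: { type: GraphQLInt },
    title: { type: GraphQLString },
    author: { type: GraphQLString },
  }),
});

// A field that could sit alongside `message`; a real app's resolver would
// query a database here instead of returning the array directly.
const booksField = {
  type: new GraphQLList(BookType),
  resolve: () => books,
};

For our hello world API, though, the hard-coded string is all we need.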

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.” 

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{   "data": {     "message": "Hello World"   } }

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It wasn’t ideal to type or remember it and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).
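Once the redirect is in place, calling the API from front-end code could look something like this (a sketch that assumes the _redirects rule above and a deployed or locally running site):

// Query the GraphQL endpoint through the /api redirect
async function getMessage() {
  const res = await fetch("/api", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: "{ message }" }),
  });
  const { data } = await res.json();
  return data.message; // "Hello World"
}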

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free. 

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or, maybe you’re interested in working with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource to go one step deeper into GraphQL. However, it’s focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.

The post Building Serverless GraphQL API in Node with Express and Netlify appeared first on CSS-Tricks.

CSS-Tricks

, , , , ,
[Top]

Building a hexagonal grid using CSS grid

I think of grids as arrangements of rectangles with vertical and horizontal lines running through. And they are, but that doesn’t mean we can’t still do clever things in how we place things on those grids and what we do with the elements afterwards.

In this demo by Jesse Breneman, a grid of hexagons is created by setting up the grid columns with math such that each block can span three columns and two rows. The blocks overlap in a way that allows a clip-path to be applied around them, carving each block into a hexagon that is evenly spaced with the others. Very clever.

And, ha, that’s a hell of a domain name Jesse. Personally, I don’t know anything about blogging about CSS at a super cheesy domain name.

Direct Link to ArticlePermalink

The post Building a hexagonal grid using CSS grid appeared first on CSS-Tricks.

CSS-Tricks
