Tag: APIs

Boost app engagement with chat, voice, and video APIs

Sendbird is a service for helping you add social features to your app. Wanna add in-app chat? Sendbird does that. Wanna add in-app voice or video calls? Sendbird does that.

Here’s how I always think about stuff like this. Whatever the thing you are building is, you should specialize in the core of it, and not get bogged down in building essentially another product as you’re doing it. Say you want to build an app for dentists to do bookings and customer management. Please do, by the way, my dentist could really use it. You’ve got a lot of work ahead of you, the core of which is building a thing that is perfect for actual dentists and getting the UX right. In-app chat might be an important part of that, but you aren’t specifically building chat software, you’re building dentist software. Lean on Sendbird for the chat (and everything else).

To elaborate on the dentist software example for a bit, I can tell you more about my dentist. They are so eager to text me, email me, call me, even use social media, to remind me about everything constantly. But for me to communicate with them, I have to call. It’s the only way to talk to them about anything—and it’s obnoxious. If there was a dentist in town where I knew I could fire up a quick digital chat with them to book things, I’d literally switch dentists. Even better if I could click a button in a browser to call them or do a video consult. That’s just good business.

You know what else? Your new app for dentists (seriously, you should do this) is going to have to be compliant with a million standards for any dentist to buy it. This means any software you buy will need to be, too. Good thing Sendbird is all set with HIPAA/HITECH, SOC 2, GDPR, and more, not to mention being hugely security-focused.

Sendbird is no spring chicken either; it’s already trusted by Reddit, Virgin UAE, Yahoo, Meetup, and tons more.

Chat is tricky stuff, too. It’s not as simple as shuffling a message off to another user and displaying it. Modern chat offers things like reactions, replies, and notifications. Chat needs to be friendly to poor networks and offline situations. Harder still is moderating chat activity and handling social controls like blocking and reporting. Not only does Sendbird help with all that, their UIKit will help you build the interface as well.

Build it with Sendbird, and it’ll scale forever.

“Sendbird’s client base gave us confidence that they would be able to handle our traffic and projected growth.”

Ben Celibicic, CTO

Modern apps have modern users who expect these sorts of features.



Creating CSS APIs without JavaScript With the datasette-css-properties plugin

Simon Willison has a project called Datasette, an open source multi-tool for exploring and publishing data. I’m not sure I’m qualified to explain it, but it’s like a tool to make handling data easier and to do more — through the web — with the data you have. Like making that data queryable and giving it an API.

I would think, typically, you’d get the results of an API call against your data in something useful, like JSON. But Simon made a plugin that outputs the results as CSS custom properties instead, and blogged it:

It’s very, very weird—it adds a .css output extension to Datasette which outputs the result of a SQL query using CSS custom property format. This means you can display the results of database queries using pure CSS and HTML, no JavaScript required!

Here’s what I said just recently in “Custom Properties as State”:

This makes me think that a CDN-hosted CSS file like this could have other useful stuff, like today’s date for usage in pseudo content, or other special time-sensitive stuff. Maybe the phase of the moon? Sports scores?! Soup of the day?!

And Simon is like, how about roadside attractions?
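To make that concrete, here’s a sketch of how the output of such an endpoint could be consumed; the --roadside-attraction property is made up for illustration:

/* Hypothetical response from a Datasette .css endpoint: */
:root {
  --roadside-attraction: "World's Largest Ball of Twine";
}

/* Consumed in your own stylesheet via pseudo content: */
.attraction::after {
  content: var(--roadside-attraction);
}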

My brain automatically worries about the accessibility of that, but… aren’t pseudo-elements fairly reliably read by screen readers these days? You still can’t select the text, though, or find-on-page, which are both usability and accessibility issues, so don’t treat this as something you’d really do for production work with unknown users.

His blog post demonstrates a slightly more dynamic example where the time of day outputs a different color. That makes me think of @property and declaring types for custom properties. I think this gets a smidge more useful when you can use the values that come back as specific syntaxes.



A Complete Walkthrough of GraphQL APIs with React and FaunaDB

As a web developer, there is an interesting bit of back and forth that always comes along with setting up a new application. Even using a full stack web framework like Ruby on Rails can be non-trivial to set up and deploy, especially if it’s your first time doing so in a while.

Personally I have always enjoyed being able to dig in and write the actual bit of application logic more so than setting up the apps themselves. Lately I have become a big fan of React applications together with a GraphQL API and storing state with the Apollo library.

Setting up a React application has become very easy in the past few years, but setting up a backend with a GraphQL API? Not so much. So when I was working on a project recently, I decided to look for an easier way to integrate a GraphQL API and was delighted to find FaunaDB.

FaunaDB is a NoSQL database as a service that makes provisioning a GraphQL API an incredibly simple process, and even comes with a free tier. Frankly I was surprised and really impressed with how quickly I was able to go from zero to a working API.

The service also touts its production readiness, with a focus on making scalability much easier than if you were managing your own backend. Although I haven’t explored its more advanced features yet, if it’s anything like my first impression then the prospects and implications of using FaunaDB are quite exciting. For now, I can confirm that for many of my projects, it provides an excellent solution for managing state together with a React application.

While working on my project, I did run into a few configuration issues when making all of the frameworks work together which I think could’ve been addressed with a guide that focuses on walking through standing up an application in its entirety. So in this article, I’m going to do a thorough walkthrough of setting up a small to-do React application on Heroku, then persisting data to that application with FaunaDB using the Apollo library. You can find the full source code here.

Our Application

For this walkthrough, we’re building a to-do list where a user can take the following actions:

  • Add a new item
  • Mark an item as complete
  • Remove an item

From a technical perspective, we’re going to accomplish this by doing the following:

  • Creating a React application
  • Deploying the application to Heroku
  • Provisioning a new FaunaDB database
  • Declaring a GraphQL API schema
  • Provisioning a new database key
  • Configuring Apollo in our React application to interact with our API
  • Writing application logic that consumes our API to persist information

Here’s a preview of what the final result will look like:

Creating the React Application

First we’ll create a boilerplate React application and make sure it runs. Assuming you have create-react-app installed, the commands to create a new application are:

create-react-app fauna-todo
cd fauna-todo
yarn start

After which you should be able to head to http://localhost:3000 and see the generated homepage for the application.

Deploying to Heroku

As I mentioned above, deploying React applications has become awesomely easy over the last few years. I’m using Heroku here since it’s been my go-to platform as a service for a while now, but you could just as easily use another service like Netlify (though of course the configuration will be slightly different). Assuming you have a Heroku account and the Heroku CLI installed, then this article shows that you only need a few lines of code to create and deploy a React application.

git init
heroku create -b https://github.com/mars/create-react-app-buildpack.git
git push heroku master

And your app is deployed! To view it you can run:

heroku open

Provisioning a FaunaDB Database

Now that we have a React app up and running, let’s add persistence to the application using FaunaDB. Head to fauna.com to create a free account. After you have an account, click “New Database” on the dashboard, and enter in a name of your choosing:

Creating an API via GraphQL Schema in FaunaDB

In this example, we’re going to declare a GraphQL schema then use that file to automatically generate our API within FaunaDB. As a first stab at the schema for our to-do application, let’s suppose that there is simply a collection of “Items” with “name” as its sole field. Since I plan to build upon this schema later and like being able to see the schema itself at a glance, I’m going to create a schema.graphql file and add it to the top level of my React application. Here is the content for this file:

type Item {
  name: String
}

type Query {
  allItems: [Item!]
}

If you’re unfamiliar with the concept of defining a GraphQL schema, think of it as a manifest for declaring what kinds of objects and queries are available within your API. In this case, we’re saying that there is going to be an Item type with a name string field and that we are going to have an explicit query allItems to look up all item records. You can read more about schemas in this Apollo article and types in this graphql.org article. FaunaDB also provides a reference document for declaring and importing a schema file.

We can now upload this schema.graphql file and use it to generate a GraphQL API. Head to the FaunaDB dashboard again and click on “GraphQL” then upload your newly created schema file here:

Congratulations! You have created a fully functional GraphQL API. This page turns into a “GraphQL Playground” which lets you interact with your API. Click on the “Docs” tab in the sidebar to see the available queries and mutations.

Note that in addition to our allItems query, FaunaDB has generated the following queries/mutations automatically on our behalf:

  • findItemByID
  • createItem
  • updateItem
  • deleteItem

All of these were derived by declaring the Item type in our schema file. Pretty cool right? Let’s give these queries and mutations a spin to familiarize ourselves with them. We can execute queries and mutations directly in the “GraphQL Playground.” Let’s first run a query for items. Enter this query into the left pane of the playground:

query MyItemQuery {
  allItems {
    data {
      name
    }
  }
}

Then click on the play button to run it:

The result is listed on the right pane, and unsurprisingly returns no results since we haven’t created any items yet. Fortunately createItem was one of the mutations that was automatically generated from the schema and we can use that to populate a sample item. Let’s run this mutation:

mutation MyItemCreation {
  createItem(data: { name: "My first todo item" }) {
    name
  }
}

You can see the result of the mutation in the right pane. It seems like our item was created successfully, but just to double check we can re-run our first query and see the result:

You can see that if we add our first query back to the left pane in the playground, the play button gives us a choice as to which operation we’d like to perform. Finally, note in step 3 of the screenshot above that our item was indeed created successfully.

In addition to running the query above, we can also look in the “Collections” tab of FaunaDB to view the collection directly:

Provisioning a New Database Key

Now that we have the database itself configured, we need a way for our React application to access it.

For the sake of simplicity in this application, this will be done with a secret key that we can add as an environment variable to our React application. We aren’t going to have authentication for individual users. Instead we’ll generate an application key which has permission to create, read, update, and delete items.

Authentication and authorization are substantial topics on their own — if you would like to learn more on how FaunaDB handles them as a follow up exercise to this guide, you can read all about the topic here.

The application key we generate has an associated set of permissions that are grouped together in a “role.” Let’s begin by first defining a role that has permission to perform CRUD operations on items, as well as perform the allItems query. Start by going to the “Security” tab, then clicking on “Manage Roles”:

There are two built-in roles, admin and server. We could, in theory, use these roles for our key, but this is a bad idea: those keys would allow whoever holds them to perform database-level operations, such as creating new collections or even destroying the database itself. So instead, let’s create a new role by clicking on the “New Custom Role” button:

You can name the role whatever you’d like; here we’re using the name ItemEditor and giving the role permission to read, write, create, and delete items — as well as permission to read the allItems index.

Save this role, then head to the “Security” tab and create a new key:

When creating a key, make sure to select “ItemEditor” for the role and whatever name you please:

Next you’ll be presented with your secret key which you’ll need to copy:

In order for React to load the key’s value as an environment variable, create a new file called .env.local which lives at the root level of your React application. In this file, add an entry for the generated key:

REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: Since it’s not good practice to store secrets directly in source control in plain text, make sure that you also have a .gitignore file in your project’s root directory that contains .env.local so that your secrets won’t be added to your git repo and shared with others.
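The relevant entry is a single line (recent versions of create-react-app generate a .gitignore that already includes it, but it’s worth double checking):

# .gitignore
# keep local secrets out of the repo
.env.local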

It’s critical that this variable’s name starts with “REACT_APP_”, otherwise it won’t be recognized when the application is started. By adding the value to the .env.local file, it will be loaded for the application when running locally. You’ll have to explicitly stop and restart your application with yarn start in order to see these changes take effect.

If you’re interested in reading more about how environment variables are loaded in apps created via create-react-app, there is a full explanation here. We’ll cover adding this secret as an environment variable in Heroku later on in this article.

Connecting to FaunaDB in React with Apollo

In order for our React application to interact with our GraphQL API, we need some sort of GraphQL client library. Fortunately for us, the Apollo client provides an elegant interface for making API requests as well as caching and interacting with the results.

To install the relevant Apollo packages we’ll need, run:

yarn add @apollo/client graphql @apollo/react-hooks

Now in your src directory of your application, add a new file named client.js with the following content:

import { ApolloClient, InMemoryCache } from "@apollo/client";

export const client = new ApolloClient({
  uri: "https://graphql.fauna.com/graphql",
  headers: {
    authorization: `Bearer ${process.env.REACT_APP_FAUNA_SECRET}`,
  },
  cache: new InMemoryCache(),
});

What we’re doing here is configuring Apollo to make requests to our FaunaDB database. Specifically, the uri makes the request to Fauna itself, then the authorization header indicates that we’re connecting to the specific database instance for the provided key that we generated earlier.

There are two important implications from this snippet of code:

  1. The authorization header contains the key with the “ItemEditor” role, and is currently hard-coded to use the same header regardless of which user is looking at our application. If you were to update this application to show a different to-do list for each user, you would need to log in each user and generate a token that could instead be passed in this header. Again, the FaunaDB documentation covers this concept if you care to learn more about it.
  2. As with any time you add a layer of caching to a system (as we are doing here with Apollo), you introduce the potential to have stale data. FaunaDB’s operations are strongly consistent, and you can configure Apollo’s fetchPolicy to minimize the potential for stale data. In order to prevent stale reads to our cache, we’ll use a combination of refetch queries and specifying response fields in our mutations.
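On that second point, here is a quick sketch of what tuning Apollo’s fetchPolicy looks like on a single query; this is only an illustration, not something the app we’re building needs:

// "network-only" always asks the server, skipping cached results;
// "cache-and-network" renders cached data immediately, then updates
// once the network response arrives.
const { data, loading } = useQuery(ITEMS_QUERY, {
  fetchPolicy: "network-only",
});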

Next we’ll replace the contents of the home page’s component. Head to App.js and replace its content with:

import React from "react";
import { ApolloProvider } from "@apollo/client";
import { client } from "./client";

function App() {
  return (
    <ApolloProvider client={client}>
      <div style={{ padding: "5px" }}>
        <h3>My Todo Items</h3>
        <div>items to get loaded here</div>
      </div>
    </ApolloProvider>
  );
}

export default App;

Note: For this sample application, I’m focusing on functionality over presentation, so you’ll see some inline styles. While I certainly wouldn’t recommend this for a production-grade application, it’s at least the most straightforward way to show the added styling within a small demo.

Visit http://localhost:3000 again and you’ll see:

This contains the hard-coded values we’ve set in our JSX above. What we would really like to see, however, is the to-do item we created in our database. In the src directory, let’s create a component called ItemList that lists out any items in our database:

import React from "react";
import gql from "graphql-tag";
import { useQuery } from "@apollo/react-hooks";

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

export function ItemList() {
  const { data, loading } = useQuery(ITEMS_QUERY);

  if (loading) {
    return "Loading...";
  }

  return (
    <ul>
      {data.allItems.data.map((item) => {
        return <li key={item._id}>{item.name}</li>;
      })}
    </ul>
  );
}

Then update App.js to render this new component — see the full commit in this example’s source code for this step in its entirety. Previewing your app again, you’ll see that your to-do item has loaded:

Now is a good time to commit your progress in git. And since we’re using Heroku, deploying is a snap:

git push heroku master
heroku open

When you run heroku open though, you’ll see that the page is blank. If we inspect the network traffic and request to FaunaDB, we’ll see an error about how the database secret is missing:

Which makes sense since we haven’t configured this value in Heroku yet. Let’s set it by going to the Heroku dashboard, selecting your application, then clicking on the “Settings” tab. In there you should add the REACT_APP_FAUNA_SECRET key and value used in the .env.local file earlier. Reusing this key is done for demonstration purposes. In a “real” application, you would probably have a separate database and separate keys for each environment.

If you would prefer to use the command line rather than Heroku’s web interface, you can use the following command and replace the secret with your key instead:

heroku config:set REACT_APP_FAUNA_SECRET=fnADzT7kXcACAFHdiKG-lIUWq-hfWIVxqFi4OtTv

Important: as noted in the Heroku docs, you need to trigger a deploy in order for this environment variable to apply in your app:

git commit --allow-empty -m 'Add REACT_APP_FAUNA_SECRET env var'
git push heroku master
heroku open

After running this last command, your Heroku-hosted app should appear and load the items from your database.

Adding New To-Do Items

We now have all of the pieces in place for accessing our FaunaDB database both locally and in a hosted Heroku environment. Now adding items is as simple as calling the mutation we used in the GraphQL Playground earlier. Here is the code for an AddItem component, which uses a bare-bones HTML form to call the createItem mutation:

import React from "react";
import gql from "graphql-tag";
import { useMutation } from "@apollo/react-hooks";

const CREATE_ITEM = gql`
  mutation CreateItem($data: ItemInput!) {
    createItem(data: $data) {
      _id
    }
  }
`;

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

export function AddItem() {
  const [showForm, setShowForm] = React.useState(false);
  const [newItemName, setNewItemName] = React.useState("");

  const [createItem, { loading }] = useMutation(CREATE_ITEM, {
    refetchQueries: [{ query: ITEMS_QUERY }],
    onCompleted: () => {
      setNewItemName("");
      setShowForm(false);
    },
  });

  if (showForm) {
    return (
      <form
        onSubmit={(e) => {
          e.preventDefault();
          createItem({ variables: { data: { name: newItemName } } });
        }}
      >
        <label>
          <input
            disabled={loading}
            type="text"
            value={newItemName}
            onChange={(e) => setNewItemName(e.target.value)}
            style={{ marginRight: "5px" }}
          />
        </label>
        <input disabled={loading} type="submit" value="Add" />
      </form>
    );
  }

  return <button onClick={() => setShowForm(true)}>Add Item</button>;
}

After adding a reference to AddItem in our App component, we can verify that adding items works as expected. Again, you can see the full commit in the demo app for a recap of this step.

Deleting To-Do Items

Similar to how we called the automatically generated createItem mutation to add new items, we can call the generated deleteItem mutation to remove items from our list. Here’s what our updated ItemList component looks like after adding this mutation:

import React from "react";
import gql from "graphql-tag";
import { useMutation, useQuery } from "@apollo/react-hooks";

const ITEMS_QUERY = gql`
  {
    allItems {
      data {
        _id
        name
      }
    }
  }
`;

const DELETE_ITEM = gql`
  mutation DeleteItem($id: ID!) {
    deleteItem(id: $id) {
      _id
    }
  }
`;

export function ItemList() {
  const { data, loading } = useQuery(ITEMS_QUERY);

  const [deleteItem, { loading: deleteLoading }] = useMutation(DELETE_ITEM, {
    refetchQueries: [{ query: ITEMS_QUERY }],
  });

  if (loading) {
    return <div>Loading...</div>;
  }

  return (
    <ul>
      {data.allItems.data.map((item) => {
        return (
          <li key={item._id}>
            {item.name}{" "}
            <button
              disabled={deleteLoading}
              onClick={(e) => {
                e.preventDefault();
                deleteItem({ variables: { id: item._id } });
              }}
            >
              Remove
            </button>
          </li>
        );
      })}
    </ul>
  );
}

Reloading our app and adding another item should result in a page that looks like this:

If you click on the “Remove” button for any item, the DELETE_ITEM mutation is fired and the entire list of items is refetched upon completion, as specified by the refetchQueries option.

One thing you may have noticed is that in our ITEMS_QUERY, we’re specifying _id as one of the fields we’d like in the result set from our query. This _id field is automatically generated by FaunaDB as a unique identifier for each document in a collection, and should be used when updating or deleting a record.

Marking Items as Complete

This wouldn’t be a fully functional to-do list without the ability to mark items as complete! So let’s add that now. By the time we’re done, we expect the app to look like this:

The first step we need to take is updating our Item schema within FaunaDB since right now the only information we store about an item is its name. Heading to our schema.graphql file, we can add a new field to track the completion state for an item:

type Item {
  name: String
  isComplete: Boolean
}

type Query {
  allItems: [Item!]
}

Now head to the GraphQL tab in the FaunaDB console and click on the “Update Schema” link to upload the newly updated schema file.

Note: there is also an “Override Schema” option, which can be used to rewrite your database’s schema from scratch if you’d like. One consideration to make when choosing to override the schema completely is that the data is deleted from your database. That may be fine for a test environment, but a staging or production environment would require a proper database migration instead.

Since the changes we’re making here are additive, there won’t be any conflict with the existing schema so we can keep our existing data.

You can view the mutation itself and its expected schema in the GraphQL Playground in FaunaDB:

This tells us that we can call the updateItem mutation with an id and a data parameter of type ItemInput. As with our other requests, we could test this mutation in the playground if we wanted. For now, we can add it directly to the application. In ItemList.js, let’s add this mutation with the code outlined in the example repository; a sketch of the key pieces follows.
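Here is a rough sketch of those pieces, assuming the updateItem mutation generated from our schema (the full component wiring lives in the example repository):

const UPDATE_ITEM = gql`
  mutation UpdateItem($id: ID!, $data: ItemInput!) {
    updateItem(id: $id, data: $data) {
      _id
      name
      isComplete
    }
  }
`;

// Inside the ItemList component:
const [updateItem] = useMutation(UPDATE_ITEM);

// Each rendered item then gets a checkbox that toggles completion,
// passing the existing name along so the update doesn't drop it:
<input
  type="checkbox"
  checked={!!item.isComplete}
  onChange={() =>
    updateItem({
      variables: {
        id: item._id,
        data: { name: item.name, isComplete: !item.isComplete },
      },
    })
  }
/>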

The references to UPDATE_ITEM are the most relevant changes we’ve made. It’s interesting to note as well that we don’t need the refetchQueries parameter for this mutation. When the update mutation returns, Apollo updates the corresponding item in the cache based on its identifier field (_id in this case) so our React component re-renders appropriately as the cache updates.

We now have all of the functionality for an initial version of a to-do application. As a final step, push your branch one more time to Heroku:

git push heroku master
heroku open

Conclusion

Take a moment to pat yourself on the back! You’ve created a brand-new React application, added persistence at a database level with FaunaDB, and can make deployments available to the entire world with the push of a branch to Heroku.

Now that we’ve covered some of the concepts behind provisioning and interacting with FaunaDB, setting up any similar project in the future is an amazingly fast process. Being able to provision a GraphQL-accessible database in minutes is a dream for me when it comes to spinning up a new project. Not only that, but this is a production-grade database that you don’t have to worry about configuring or scaling, so you get to focus on writing the rest of your application instead of playing the role of a database administrator.



Apple declined to implement 16 Web APIs in Safari due to privacy concerns

Why? Fingerprinting. Rather than these APIs being used for what they are meant for, they end up being used for gross ad tech. As in, “hey, we don’t know exactly who you are, but wait, through a script we can tell your phone stopped being idle from 8:00 am to 8:13 am and was near the Bluetooth device JBL BATHROOM, so it’s probably dad taking his morning poop! Let’s show him some ads for nicer speakers and flannel shirts ASAP.”

I’ll pull the complete list here from Catalin Cimpanu’s article:

  • Web Bluetooth – Allows websites to connect to nearby Bluetooth LE devices.
  • Web MIDI API – Allows websites to enumerate, manipulate and access MIDI devices.
  • Magnetometer API – Allows websites to access data about the local magnetic field around a user, as detected by the device’s primary magnetometer sensor.
  • Web NFC API – Allows websites to communicate with NFC tags through a device’s NFC reader.
  • Device Memory API – Allows websites to receive the approximate amount of device memory in gigabytes.
  • Network Information API – Provides information about the connection a device is using to communicate with the network and provides a means for scripts to be notified if the connection type changes.
  • Battery Status API – Allows websites to receive information about the battery status of the hosting device.
  • Web Bluetooth Scanning – Allows websites to scan for nearby Bluetooth LE devices.
  • Ambient Light Sensor – Lets websites get the current light level or illuminance of the ambient light around the hosting device via the device’s native sensors.
  • HDCP Policy Check extension for EME – Allows websites to check for HDCP policies, used in media streaming/playback.
  • Proximity Sensor – Allows websites to retrieve data about the distance between a device and an object, as measured by a proximity sensor.
  • WebHID – Allows websites to retrieve information about locally connected Human Interface Device (HID) devices.
  • Serial API – Allows websites to write and read data from serial interfaces, used by devices such as microcontrollers, 3D printers, and others.
  • Web USB – Lets websites communicate with devices via USB (Universal Serial Bus).
  • Geolocation Sensor (background geolocation) – A more modern version of the older Geolocation API that lets websites access geolocation data.
  • User Idle Detection – Lets websites know when a user is idle.

I’m of mixed feelings. I do like the idea of the web being a competitive platform for building any sort of app, and sometimes fancy APIs like this open those doors.

Not to mention that some of these APIs are designed to do responsible things, like detecting connection speeds through the Network Information API and sending less data if you can, and the same for the Battery Status API.

This is all a similar situation to :visited in CSS. Have you ever noticed how there are some CSS declarations you can’t use on visited links? JavaScript APIs will even literally lie about the current styling of visited links to make links always appear unvisited. Because fingerprinting.
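A quick sketch of that lie in action (assuming the first link on the page is one you’ve actually visited):

// Even when a visited link renders with your :visited styles on screen,
// JavaScript is told the unvisited values, so history can't be sniffed:
const link = document.querySelector("a");
console.log(getComputedStyle(link).color);
// Logs the unvisited link color, regardless of browsing history.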

Direct Link to Article



Tackling Authentication With Vue Using RESTful APIs

Authentication (logging in!) is a crucial part of many websites. Let’s look at how to go about it on a site using Vue, in the same way it can be done with any custom back end. Vue can’t actually do authentication all by itself — we’ll need another service (Firebase) for that, but then we’ll integrate the whole experience in Vue.

Authentication works quite differently on Single Page Applications (SPAs) than it works on sites that reload every page. You don’t have to make an SPA with Vue, but we will in this tutorial. 

Here’s the plan. We’ll build a UI for users to log in and the submitted data will be sent to a server to check if the user exists. If yes, we’ll be sent a token. That’s very useful, because it’s going to be used throughout our site to check if the user is still signed in. If no, the user can always sign up. In other words, the token can be used in lots of conditional contexts. Beyond that, if we need any information from the server that requires being logged in, the token is sent to the server through the URL so that information can only be sent to logged-in users.

The complete demo of this tutorial is posted on GitHub for those who are comfortable reading through the code. The rest of us can follow through with the article. The starter file is also on GitHub so you can follow along as we code together.

After downloading the repo, you’ll run npm install in your terminal. If you’re going to build this application completely on your own, you’ll have to install Vuex, Vue Router, and axios. We’ll also use Firebase for this project, so take a moment to set up a free account and create a new project in there.

After adding the project to Firebase, go to the authentication section and set up a sign-in method using the traditional email/password provider; the credentials will be stored on Firebase’s servers.

After that, we’ll go to the Firebase Auth REST API documentation to get our sign-up and sign-in API endpoints. We’ll need an API key to use those endpoints in our app; it can be found in the Firebase project settings.

Firebase offers authentication over the SDK, but we’re using the Auth API to demonstrate authentication over any custom back end server.

In our starter file, we have the sign-up form below. We’re keeping things pretty simple here since we’re focusing on learning the concepts.

<template>
  <div id="signup">
    <div class="signup-form">
      <form @submit.prevent="onSubmit">
        <div class="input">
          <label for="email">Mail</label>
          <input
            type="email"
            id="email"
            v-model="email">
        </div>
        <div class="input">
          <label for="name">Your Name</label>
          <input
            type="text"
            id="name"
            v-model="name">
        </div>
        <div class="input">
          <label for="password">Password</label>
          <input
            type="password"
            id="password"
            v-model="password">
        </div>
        <div class="submit">
          <button type="submit">Submit</button>
        </div>
      </form>
    </div>
  </div>
</template>

If we weren’t working with an SPA, we would naturally use axios to send our data inside the script tag like this:

axios.post('https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=[API_KEY]', {
  email: authData.email,
  password: authData.password,
  returnSecureToken: true
})
.then(res => {
  console.log(res)
})
.catch(error => console.log(error))

Sign up and log in

Working with an SPA (using Vue in this case) is very different from the above approach. Instead, we’ll be sending our authorization requests using Vuex in our actions in the store.js file. We’re doing it this way because we want the entire app to be aware of any change to the user’s authentication status.

actions: {
  signup ({commit}, authData) {
    axios.post('https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=[API_KEY]', {
      email: authData.email,
      password: authData.password,
      returnSecureToken: true
    })
    .then(res => {
      console.log(res)
      router.push("/dashboard")
    })
    .catch(error => console.log(error))
  },
  login ({commit}, authData) {
    axios.post('https://identitytoolkit.googleapis.com/v1/accounts:signIn?key=[API_KEY]', {
      email: authData.email,
      password: authData.password,
      returnSecureToken: true
    })
    .then(res => {
      console.log(res)
      router.push("/dashboard")
    })
    .catch(error => console.log(error))
  }
}

We can use pretty much the same thing for the sign in method, but using the sign in API endpoint instead. We then dispatch both the sign up and log in from the components, to their respective actions in the store.

methods : {
  onSubmit () {
    const formData = {
      email : this.email,
      name : this.name,
      password : this.password
    }
    this.$store.dispatch('signup', formData)
  }
}

formData contains the user’s data.

methods : {
  onSubmit () {
    const formData = {
      email : this.email,
      password : this.password
    }
    this.$store.dispatch('login', {email: formData.email, password: formData.password})
  }
}

We’re taking the authentication data (i.e., the token and the user’s ID) that was received from the sign-up/login form and storing it as state with Vuex. Each value will initially be null.

state: {
  idToken: null,
  userId: null,
  user: null
}

We now create a new method called authUser in the mutations that’ll store the data collected from the response. We need to import the router into the store, as we’ll need it later.

import router from './router'

mutations : {
  authUser (state, userData) {
    state.idToken = userData.token
    state.userId = userData.userId
  }
}

Inside the .then block in the signup/login methods in our actions, we’ll commit our response to the authUser mutation just created and save to local storage.

actions: {
  signup ({commit}, authData) {
    axios.post('https://identitytoolkit.googleapis.com/v1/accounts:signUp?key=[API_KEY]', {
      email: authData.email,
      password: authData.password,
      returnSecureToken: true
    })
    .then(res => {
      console.log(res)
      commit('authUser', {
        token: res.data.idToken,
        userId: res.data.localId
      })
      localStorage.setItem('token', res.data.idToken)
      localStorage.setItem('userId', res.data.localId)
      router.push("/dashboard")
    })
    .catch(error => console.log(error))
  },
  login ({commit}, authData) {
    axios.post('https://identitytoolkit.googleapis.com/v1/accounts:signIn?key=[API_KEY]', {
      email: authData.email,
      password: authData.password,
      returnSecureToken: true
    })
    .then(res => {
      console.log(res)
      commit('authUser', {
        token: res.data.idToken,
        userId: res.data.localId
      })
      localStorage.setItem('token', res.data.idToken)
      localStorage.setItem('userId', res.data.localId)
      router.push("/dashboard")
    })
    .catch(error => console.log(error))
  }
}

Setting up an Auth guard

Now that we have our token stored within the application, we’re going to use it while setting up our Auth guard. What’s an Auth guard? It protects the dashboard from being accessed by unauthenticated users without tokens.

First, we’ll go into our route file and import the store. The store is imported because the token it holds determines the user’s logged-in state.

import store from './store.js'

Then, within our routes array, go to the dashboard path and add the method beforeEnter, which takes three parameters: to, from, and next. Within this method, we’re simply saying that if a token is stored (which happens automatically when authenticated), we call next(), meaning the user continues to the designated route. Otherwise, we lead the unauthenticated user back to the sign-up page.

{
  path: '/dashboard',
  component: DashboardPage,
  beforeEnter (to, from, next) {
    if (store.state.idToken) {
      next()
    } else {
      next('/signin')
    }
  }
}

Creating the UI state

At this point, we can still see the dashboard in the navigation whether we’re logged in or not, and that’s not what we want. We have to add another method under the getters, called ifAuthenticated, which checks whether the token within our state is null, then update the navigation items accordingly.

getters: {
  user (state) {
    return state.user
  },
  ifAuthenticated (state) {
    return state.idToken !== null
  }
}

Next, let’s open up the header component and create a computed property called auth that returns the ifAuthenticated getter we just created in the store. ifAuthenticated returns false if there’s no token, which automatically means auth is also false, and vice versa. After that, we add a v-if to check whether auth is true, determining whether the dashboard option shows in the navigation.

<template>
  <header id="header">
    <div class="logo">
      <router-link to="/">Vue Authenticate</router-link>
    </div>
    <nav>
      <ul>
        <li v-if='auth'>
          <router-link to="/dashboard">Dashboard</router-link>
        </li>
        <li v-if='!auth'>
          <router-link to="/signup">Register</router-link>
        </li>
        <li v-if='!auth'>
          <router-link to="/signin">Log In</router-link>
        </li>
      </ul>
    </nav>
  </header>
</template>

<script>
  export default {
    computed: {
      auth () {
        return this.$store.getters.ifAuthenticated
      }
    },
  }
</script>

Logging out

What’s an application without a logout button? Let’s create a new mutation called clearAuth, which sets both the token and userId to null.

mutations: {
  authUser (state, userData) {
    state.idToken = userData.token
    state.userId = userData.userId
  },
  clearAuth (state) {
    state.idToken = null
    state.userId = null
  }
}

Then, in our logout action, we commit clearAuth, clear local storage, and add router.replace('/') to properly redirect the user following logout.
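The action itself isn’t shown above, but a minimal sketch based on that description looks like this:

actions: {
  // ...signup and login from earlier...
  logout ({commit}) {
    commit('clearAuth')
    localStorage.removeItem('token')
    localStorage.removeItem('userId')
    router.replace('/')
  }
}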

Back to the header component. We have an onLogout method that dispatches our logout action in the store. We then add a @click to the button, which calls the onLogout method, as we can see below:

<template>
  <header id="header">
    <div class="logo">
      <router-link to="/">Vue Authenticate</router-link>
    </div>
    <nav>
      <ul>
        <li v-if='auth'>
          <router-link to="/dashboard">Dashboard</router-link>
        </li>
        <li v-if='!auth'>
          <router-link to="/signup">Register</router-link>
        </li>
        <li v-if='!auth'>
          <router-link to="/signin">Log In</router-link>
        </li>
        <li v-if='auth'>
          <button @click="onLogout">Log Out</button>
        </li>
      </ul>
    </nav>
  </header>
</template>

<script>
  export default {
    computed: {
      auth () {
        return this.$store.getters.ifAuthenticated
      }
    },
    methods: {
      onLogout() {
        this.$store.dispatch('logout')
      }
    }
  }
</script>

Auto login? Sure!

We’re almost done with our app. We can sign up, log in, and log out with all the UI changes we just made. But when we refresh our app, we lose the data and are signed out, having to start all over again, because we stored our token and ID in Vuex, which is JavaScript. This means everything in the app gets reloaded in the browser when refreshed.

What we’ll do is retrieve the token from local storage. By doing that, we can have the user’s token in the browser regardless of when we refresh the window, and even auto-login the user as long as the token is still valid.

Create a new actions method called AutoLogin, where we’ll get the token and userId from local storage, but only if the user has them. Then we commit our data to the authUser method in the mutations.

actions : {
  AutoLogin ({commit}) {
    const token = localStorage.getItem('token')
    if (!token) {
      return
    }
    const userId = localStorage.getItem('userId')
    commit('authUser', {
      token: token,
      userId: userId
    })
  }
}

We then go to our App.vue and add a created hook where we’ll dispatch AutoLogin from our store when the app is loaded.

created () {
  this.$store.dispatch('AutoLogin')
}

Yay! With that, we’ve successfully implemented authentication within our app and can now deploy using npm run build. Check out the live demo to see it in action.

The example site is purely for demonstration purposes. Please do not share real data, like your real email and password, while testing the demo app.


APIs and Authentication on the Jamstack

The first “A” in the Jamstack stands for “APIs” and is a key contributor to what makes working with static sites so powerful. APIs give developers the freedom to offload complexity and provide avenues for including dynamic functionality to an otherwise static site. Often, accessing an API requires validating the authenticity of a request. This frequently manifests in the form of authentication (auth) and can be done either client side or server side depending on the service used and the task being accomplished. 

Given the vast spectrum of protocols available, APIs differ in their individual auth implementations. These auth protocols and implementation intricacies add an additional challenge when integrating APIs into a Jamstack site. Thankfully, there is a method to this madness. Every protocol can be mapped to a specific use case and implementing auth is a matter of understanding this.

To illustrate this best, let’s dive into the various protocols and the scenarios that they’re best suited for.

Summon the protocols

OAuth 2.0 is the general standard that authentication follows today. OAuth is a fairly flexible authorization framework that constitutes a series of grants defining the relationship between a client and an API endpoint. In an OAuth flow, a client application requests an access token from an authorization endpoint and uses that to sign a request to an API endpoint.

There are four main grant types — authorization code, implicit flow, resource owner credential, and client credentials. We’ll look at each one individually.

Authorization Code Grant 

Of all OAuth grant types, the Authorization Code Grant is likely the most common one. Primarily used to obtain an access token to authorize API requests after a user explicitly grants permission, this grant flow follows a two-step process.

  • First, the user is directed to a consent screen aka the authorization server where they grant the service restricted access to their personal account and data.
  • Once permission has been granted, the next step is to retrieve an access token from the authentication server which can then be used to authenticate the request to the API endpoint.

Compared to other grant types, the Authorization Code Grant has an extra layer of security with the added step of asking a user for explicit authorization. This multi-step code exchange means that the access token is never exposed and is always sent via a secure backchannel between an application and auth server. In this way, attackers can’t easily steal an access token by intercepting a request. Google-owned services, like Gmail and Google Calendar, utilize this authorization code flow to access personal content from a user’s account. If you’d like to dig into this workflow more, check out this blog post to learn more.
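To make that second step concrete, here is a minimal sketch of the code-for-token exchange in JavaScript. The endpoint, redirect URI, and credentials are placeholders rather than any particular provider’s API:

// Exchange the authorization code (returned to our redirect URI in step
// one) for an access token. This runs server side, so the client secret
// is never exposed to the browser.
const response = await fetch("https://auth.example.com/oauth/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "authorization_code",
    code: authorizationCode, // from the consent redirect
    redirect_uri: "https://app.example.com/callback",
    client_id: CLIENT_ID,
    client_secret: CLIENT_SECRET,
  }),
});
const { access_token } = await response.json();
// access_token can now sign requests to the API endpoint.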

Implicit Grant

The Implicit Grant is akin to the Authorization Code Grant with a noticeable difference: instead of having a user grant permission to retrieve an authorization code that is then exchanged for an access token, an access token is returned immediately via the fragment (hash) part of the redirect URL (a.k.a. the front channel).

With the reduced step of an authorization code, the Implicit Grant flow carries the risk of exposing tokens. The token, by virtue of being embedded directly into the URL (and logged to the browser history), is easily accessible if the redirect is ever intercepted.

Despite its vulnerabilities, the Implicit Grant can be useful for user-agent-based clients like Single Page Applications. Since both application code and storage is easily accessed in client-side rendered applications, there is no safe way to keep client secrets secure. The implicit flow is the logical workaround to this by providing applications a quick and easy way to authenticate a user on the client side. It is also a valid means to navigate CORS issues, especially when using a third-party auth server that doesn’t support cross-origin requests. Because of the inherent risks of exposed tokens with this approach, it’s important to note that access tokens in Implicit Flow tend to be short-lived and refresh tokens are never issued. As a result, this flow may require logging in for every request to a privileged resource.
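As a small illustration, here is how a client-side app might read that token from the fragment after the redirect. The parameter names follow the OAuth spec, while the domain is a placeholder:

// After a redirect like:
//   https://app.example.com/callback#access_token=abc123&token_type=bearer
// the token lives only in the browser; URL fragments are never sent to servers.
const params = new URLSearchParams(window.location.hash.slice(1));
const accessToken = params.get("access_token");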

Resource Owner Credential

In the case of the Resource Owner Credential Grant, resource owners send their username and password credentials to the auth server, which then sends back an access token with an optional refresh token. Since resource owner credentials are visible in the auth exchange between client application and authorization server, a trust relationship must exist between resource owner and client application. Though evidently less secure than other grant types, the Resource Owner Credential grant yields an excellent user experience for first party clients. This grant flow is most suitable in cases where the application is highly privileged or when working within a device’s operating system. This authorization flow is often used when other flows are not viable.

Client Credential

The Client Credentials Grant type is used primarily when clients need to obtain an access token outside of the context of a user. This is suitable for machine-to-machine authentication when a user’s explicit permission cannot be guaranteed for every access to a protected resource. CLIs and services running in the back end are instances where this grant type comes in handy. Instead of relying on user login, a Client ID and Secret are passed along to obtain a token which can then be used to authenticate an API request.

Typically, in the Client Credential grant, a service account is established through which the application operates and makes API calls. This way, users are not directly involved and applications can still continue to authenticate requests. This workflow is fairly common in situations where applications want access to their own data, e.g. Analytics, rather than to specific user data.
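A minimal sketch of that machine-to-machine exchange, again with placeholder names:

// A back-end service authenticating as itself; no user login involved.
const response = await fetch("https://auth.example.com/oauth/token", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "client_credentials",
    client_id: process.env.CLIENT_ID,         // service account credentials
    client_secret: process.env.CLIENT_SECRET, // kept out of source control
  }),
});
const { access_token } = await response.json();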

Conclusion

With its reliance on third party services for complex functionality, a well-architected authentication solution is crucial to maintain the security of Jamstack sites. APIs, being the predominant way to exchange data in the Jamstack, are a big part of that. We looked at four different methods for authenticating API requests, each with its benefits and impacts on user experience.

We mentioned at the start that these four are the main forms of authentication that are used to request data from an API. There are plenty of other types as well, which are nicely outlined on oauth.net. The website as a whole is an excellent deep-dive on not only the auth types available, but the OAuth framework as a whole.

Do you prefer one method over another? Do you have an example in use you can point to? Share in the comments!


Get Started Building GraphQL APIs With Node

We all have a number of interests and passions. For example, I’m interested in JavaScript, 90’s indie rock and hip hop, obscure jazz, the city of Pittsburgh, pizza, coffee, and movies starring John Lurie. We also have family members, friends, acquaintances, classmates, and colleagues who also have their own social relationships, interests, and passions. Some of these relationships and interests overlap, like my friend Riley who shares my interest in 90’s hip hop and pizza. Others do not, like my colleague Harrison, who prefers Python to JavaScript, only drinks tea, and prefers current pop music. All together, we each have a connected graph of the people in our lives, and the ways that our relationships and interests overlap.

These types of interconnected data are exactly the challenge that GraphQL initially set out to solve in API development. By writing a GraphQL API we are able to efficiently connect data, which reduces the complexity and number of requests, while allowing us to serve the client precisely the data that it needs. (If you’re into more GraphQL metaphors, check out Meeting GraphQL at a Cocktail Mixer.)

In this article, we’ll build a GraphQL API in Node.js, using the Apollo Server package. To do so, we’ll explore fundamental GraphQL topics, write a GraphQL schema, develop code to resolve our schema functions, and access our API using the GraphQL Playground user interface.

What is GraphQL?

GraphQL is an open source query and data manipulation language for APIs. It was developed with the goal of providing single endpoints for data, allowing applications to request exactly the data that is needed. This has the benefit of not only simplifying our UI code, but also improving performance by limiting the amount of data that needs to be sent over the wire.

What we’re building

To follow along with this tutorial, you’ll need Node v8.x or later and some familiarity with working with the command line. 

We’re going to build an API application for book highlights, allowing us to store memorable passages from the things that we read. Users of the API will be able to perform “CRUD” (create, read, update, delete) operations against their highlights:

  • Create a new highlight
  • Read an individual highlight as well as a list of highlights
  • Update a highlight’s content
  • Delete a highlight

Getting started

To get started, first create a new directory for our project, initialize a new node project, and install the dependencies that we’ll need:

# make the new directory
mkdir highlights-api
# change into the directory
cd highlights-api
# initiate a new node project
npm init -y
# install the project dependencies
npm install apollo-server graphql
# install the development dependencies
npm install nodemon --save-dev

Before moving on, let’s break down our dependencies:

  • apollo-server is a library that enables us to work with GraphQL within our Node application. We’ll be using it as a standalone library, but the team at Apollo has also created middleware for working with existing Node web applications in Express, hapi, Fastify, and Koa.
  • graphql includes the GraphQL language and is a required peer dependency of apollo-server.
  • nodemon is a helpful library that will watch our project for changes and automatically restart our server.

With our packages installed, let’s next create our application’s root file, named index.js. For now, we’ll console.log() a message in this file:

console.log("📚 Hello Highlights");

To make our development process simpler, we’ll update the scripts object within our package.json file to make use of the nodemon package:

"scripts": {   "start": "nodemon index.js" },

Now, we can start our application by typing npm start in the terminal application. If everything is working properly, you will see 📚 Hello Highlights logged to your terminal.

GraphQL schema types

A schema is a written representation of our data and interactions. By requiring a schema, GraphQL enforces a strict plan for our API. This is because the API can only return data and perform interactions that are defined within the schema. The fundamental components of GraphQL schemas are object types. GraphQL contains five built-in scalar types:

  • String: A string with UTF-8 character encoding
  • Boolean: A true or false value
  • Int: A 32-bit integer
  • Float: A floating-point value
  • ID: A unique identifier

We can construct a schema for an API with these basic components. In a file named schema.js, we can import the gql library and prepare the file for our schema syntax:

const { gql } = require('apollo-server');

const typeDefs = gql`
  # The schema will go here
`;

module.exports = typeDefs;

To write our schema, we first define the type. Let’s consider how we might define a schema for our highlights application. To begin, we would create a new type with a name of Highlight:

const typeDefs = gql`
  type Highlight {
  }
`;

Each highlight will have a unique ID,  some content, a title, and an author. The Highlight schema will look something like this:

const typeDefs = gql`
  type Highlight {
    id: ID
    content: String
    title: String
    author: String
  }
`;

We can make some of these fields required by adding an exclamation point:

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
`;

Though we’ve defined an object type for our highlights, we also need to provide a description of how a client will fetch that data. This is called a query. We’ll dive more into queries shortly, but for now let’s describe in our schema the ways in which someone will retrieve highlights. When requesting all of our highlights, the data will be returned as an array (represented as [Highlight]) and when we want to retrieve a single highlight we will need to pass an ID as a parameter.

const typeDefs = gql`
  type Highlight {
    id: ID!
    content: String!
    title: String
    author: String
  }
  type Query {
    highlights: [Highlight]!
    highlight(id: ID!): Highlight
  }
`;

Now, in the index.js file, we can import our type definitions and set up Apollo Server:

const { ApolloServer } = require('apollo-server');
const typeDefs = require('./schema');

const server = new ApolloServer({ typeDefs });

server.listen().then(({ url }) => {
  console.log(`📚 Highlights server ready at ${url}`);
});

If we’ve kept the node process running, the application will have automatically updated and relaunched, but if not, typing npm start  from the project’s directory in the terminal window will start the server. If we look at the terminal, we should see that nodemon is watching our files and the server is running on a local port:

[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node index.js`
📚 Highlights server ready at http://localhost:4000/

Visiting the URL in the browser will launch the GraphQL Playground application, which provides a user interface for interacting with our API.

GraphQL Resolvers

Though we’ve developed our project with an initial schema and Apollo Server setup, we can’t yet interact with our API. To do so, we’ll introduce resolvers. Resolvers perform exactly the action their name implies; they resolve the data that the API user has requested. We will write these resolvers by first defining them in our schema and then implementing the logic within our JavaScript code. Our API will contain two types of resolvers: queries and mutations.

Let’s first add some data to interact with. In an application, this would typically be data that we’re retrieving and writing to from a database, but for our example let’s use an array of objects. In the index.js file add the following:

let highlights = [
  {
    id: '1',
    content: 'One day I will find the right words, and they will be simple.',
    title: 'Dharma Bums',
    author: 'Jack Kerouac'
  },
  {
    id: '2',
    content: 'In the limits of a situation there is humor, there is grace, and everything else.',
    title: 'Arbitrary Stupid Goal',
    author: 'Tamara Shopsin'
  }
]

Queries

A query requests specific data from an API, in its desired format. The query will then return an object containing the data that the API user has requested. A query never modifies the data; it only accesses it. We’ve already written two queries in our schema. The first returns an array of highlights and the second returns a specific highlight. The next step is to write the resolvers that will return the data.

In the index.js file, we can add a resolvers object, which can contain our queries:

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  }
};

The highlights query returns the full array of highlights data. The highlight query accepts two parameters: parent and args. The parent is the first parameter of any GraphQL query in Apollo Server and provides a way of accessing the context of the query. The args parameter allows us to access the user-provided arguments. In this case, users of the API will be supplying an id argument to access a specific highlight.

We can then update our Apollo Server configuration to include the resolvers:

const server = new ApolloServer({ typeDefs, resolvers });

With our query resolvers written and Apollo Server updated, we can now query the API using the GraphQL Playground. To access the GraphQL Playground, visit http://localhost:4000 in your web browser.

A query is formatted like so:

query {
  queryName {
    field
    field
  }
}

With this in mind, we can write a query that requests the ID, content, title, and author for each of our highlights:

query {
  highlights {
    id
    content
    title
    author
  }
}

Let’s say that we had a page in our UI that lists only the titles and authors of our highlighted texts. We wouldn’t need to retrieve the content for each of those highlights. Instead, we could write a query that only requests the data that we need:

query {
  highlights {
    title
    author
  }
}

We’ve also written a resolver to query for an individual highlight by including an ID parameter with our query. We can do so as follows:

query {
  highlight(id: "1") {
    content
  }
}
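The Playground is just a convenience, by the way; under the hood it sends plain HTTP requests. As a rough sketch (assuming Apollo Server’s default settings and a browser context), the same single-highlight query could be issued with fetch:

fetch('http://localhost:4000/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  // Standard GraphQL-over-HTTP: a JSON body containing the query string
  body: JSON.stringify({ query: '{ highlight(id: "1") { content } }' })
})
  .then(response => response.json())
  .then(result => console.log(result.data.highlight.content));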

Mutations

We use a mutation when we want to modify the data in our API. In our highlight example, we will want to write a mutation to create a new highlight, one to update an existing highlight, and a third to delete a highlight. Similar to a query, a mutation is also expected to return a result in the form of an object, typically the end result of the performed action.

The first step to updating anything in GraphQL is to write the schema. We can include mutations by adding a Mutation type to our schema.js file:

type Mutation {
  newHighlight(content: String! title: String author: String): Highlight!
  updateHighlight(id: ID! content: String!): Highlight!
  deleteHighlight(id: ID!): Highlight!
}

Our newHighlight mutation will take the required value of content along with optional title and author values and return a Highlight. The updateHighlight mutation will require that a highlight id and content be passed as argument values and will return the updated Highlight. Finally, the deleteHighlight mutation will accept an ID argument, and will return the deleted Highlight.

With the schema updated to include mutations, we can now update the resolvers in our index.js file to perform these actions. Each mutation will update our highlights array of data.

const resolvers = {
  Query: {
    highlights: () => highlights,
    highlight: (parent, args) => {
      return highlights.find(highlight => highlight.id === args.id);
    }
  },
  Mutation: {
    newHighlight: (parent, args) => {
      const highlight = {
        id: String(highlights.length + 1),
        title: args.title || '',
        author: args.author || '',
        content: args.content
      };
      highlights.push(highlight);
      return highlight;
    },
    updateHighlight: (parent, args) => {
      const index = highlights.findIndex(highlight => highlight.id === args.id);
      const highlight = {
        id: args.id,
        content: args.content,
        author: highlights[index].author,
        title: highlights[index].title
      };
      highlights[index] = highlight;
      return highlight;
    },
    deleteHighlight: (parent, args) => {
      const deletedHighlight = highlights.find(
        highlight => highlight.id === args.id
      );
      highlights = highlights.filter(highlight => highlight.id !== args.id);
      return deletedHighlight;
    }
  }
};

With these mutations written, we can use the GraphQL Playground to practice mutating the data. The structure of a mutation is nearly identical to that of a query, specifying the name of the mutation, passing the argument values, and requesting specific data in return. Let’s start by adding a new highlight:

mutation {
  newHighlight(author: "Adam Scott" title: "JS Everywhere" content: "GraphQL is awesome") {
    id
    author
    title
    content
  }
}

We can then write mutations to update a highlight:

mutation {
  updateHighlight(id: "3" content: "GraphQL is rad") {
    id
    content
  }
}

And to delete a highlight:

mutation {
  deleteHighlight(id: "3") {
    id
  }
}
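One technique not covered above: instead of hard-coding argument values, GraphQL supports passing them separately as variables, which the Playground exposes through its Query Variables pane. Here’s the update mutation rewritten that way, as a sketch:

mutation UpdateHighlight($id: ID!, $content: String!) {
  updateHighlight(id: $id, content: $content) {
    id
    content
  }
}

With the variables supplied as JSON:

{ "id": "3", "content": "GraphQL is rad" }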

Wrapping up

Congratulations! You’ve now successfully built a GraphQL API using Apollo Server, and you can run GraphQL queries and mutations against an in-memory data object. We’ve established a solid foundation for exploring the world of GraphQL API development.

From here, there are plenty of potential next steps to level up, such as swapping the in-memory array for a persistent database or adding user authentication.

The post Get Started Building GraphQL APIs With Node appeared first on CSS-Tricks.


Design APIs: The Evolution of Design Systems

A clever idea from Matthew Ström:

[…] design APIs don’t seem like a stretch of the imagination. An API-driven approach is the natural extension of the work currently being done on design systems, including tokens and standardization projects.

If you buy into the idea of design tokens, that expresses itself as a chunk of JSON with things like colors and spacing values. No reason an API couldn’t deliver that, potentially empowering a build process that pulls the most recent decisions from a central location.
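As a hand-wavy sketch of that build-process idea (the endpoint, token names, and values below are all invented for illustration), a script could fetch the latest tokens and write them out as CSS custom properties:

// Hypothetical build-time script: pull the current design tokens and
// emit them as CSS custom properties. The endpoint and tokens are made up.
const fs = require('fs');
const https = require('https');

https.get('https://design.example.com/api/tokens', (response) => {
  let body = '';
  response.on('data', (chunk) => (body += chunk));
  response.on('end', () => {
    // e.g. { "color-primary": "#0055ff", "space-md": "1rem" }
    const tokens = JSON.parse(body);
    const css = `:root {\n${Object.entries(tokens)
      .map(([name, value]) => `  --${name}: ${value};`)
      .join('\n')}\n}\n`;
    fs.writeFileSync('tokens.css', css);
  });
});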


The post Design APIs: The Evolution of Design Systems appeared first on CSS-Tricks.


Spam Detection APIs

I was trying to research the landscape of these the other day — and by research, I mean light Googling and asking on Twitter. Weirdly, very little comes to mind when thinking about spam detection APIs. I mean some kind of URL endpoint, paid or not, where you can hit it with a block of text and whatever metadata it wants and it’ll tell you if it’s spam or not. Seems like something an absolute buttload of the internet could use and something companies of any size could monetize or offer for free to show off their smart computer machines.

Akismet is the big kid on the block.

You might think of Akismet as a WordPress thing, and it is. It’s an Automattic product and is perhaps primarily used as a WordPress plugin. I run it here on CSS-Tricks and it’s blocked 1,989,326 spam comments so far.

It also has a generic API. There are libraries for Dart, JavaScript, PHP, Python, Ruby, Go, etc, as well as plugins for other CMSs. So if you use a different CMS or have your custom app, you can still use Akismet for spam detection.

After you get an API key, you can POST to a URL endpoint with all the data it needs and it’ll respond true if it’s spam or false if it’s not.

To get better results over time, you can also submit content telling it if it’s spam or ham (ham is the opposite of spam… good content).
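As a rough sketch of what that exchange can look like (the key, site, and comment data below are placeholders, and Akismet’s docs list the full set of parameters):

// Minimal sketch of an Akismet comment-check call. The API key, blog
// URL, and comment fields here are placeholders.
const fetch = require('node-fetch');

const AKISMET_KEY = 'your-api-key';

async function isSpam(comment) {
  const params = new URLSearchParams({
    blog: 'https://example.com',   // the site the comment was posted to
    user_ip: comment.ip,           // the commenter's IP address
    user_agent: comment.userAgent,
    comment_type: 'comment',
    comment_author: comment.author,
    comment_content: comment.content
  });

  const response = await fetch(
    `https://${AKISMET_KEY}.rest.akismet.com/1.1/comment-check`,
    { method: 'POST', body: params }
  );

  // Akismet responds with the literal string "true" (spam) or "false" (ham)
  return (await response.text()) === 'true';
}

The submit-spam and submit-ham endpoints take the same fields, so feeding corrections back is essentially the same request pointed at a different path.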

Plino

Several folks mentioned Plino to me.

There is a lot to like here, such as the fact that it’s free and returns a JSON response like you might be used to in development. The fancy buzzword “Machine Learning” gets thrown around here, too, which makes me think that with lots of people using it, it’ll get smarter and smarter as it goes. But there is no way to submit ham/spam, so I’m not sure that’s really the case.

There is other stuff that makes me nervous. It’s clearly on Heroku, which is kinda expensive at scale, and with no pricing model it seems like it could go away at any time. It sorta feels like a fun-but-abandoned side project. The last commit was two years ago, as I write.

OOPSpam

OOPSpam looks super similar to Plino, but has a pricing model, which is nice. They publish their latency, which is over two seconds. I haven’t compared that to the others so I have no idea if they are all that slow. Two seconds seems like a lot for an API call to me, but maybe it’s not that big of a deal since it’s an async submit?

CleanTalk

CleanTalk has a clear pricing structure and appears to have plenty of customers, which is a plus to me. The website looks a little janky, though, which makes me worry a bit.

(Sorry if that’s a little rude, but it’s just mental math to me. Good design is one of the least expensive investments a company can make to increase trust, so companies that overlook it make me wonder.)

It looks like they have a variety of anti-spam solutions though, which is interesting. For example, you can ask an API to see if an IP, email, or domain is on a blacklist, which is a pretty raw way of blocking bad stuff, but useful for stuff like protecting against spam registrations (rather than just checking blocks of text). They also have a firewall solution, which is interesting for folks trying to block spam before it even touches their servers.
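I haven’t dug into CleanTalk’s actual API shape, so the endpoint and response below are stand-ins for the general concept rather than their real interface; a blacklist lookup is basically just asking for a verdict on an identifier before letting a signup through:

// Hypothetical blacklist-style lookup. The URL and response shape are
// illustrative stand-ins, not CleanTalk's actual API.
async function isBlacklisted(identifier) {
  const response = await fetch(
    `https://blacklist.example.com/check?value=${encodeURIComponent(identifier)}`
  );
  const result = await response.json(); // e.g. { listed: true, reason: 'spam source' }
  return result.listed;
}

// Usage: reject a registration before it ever touches the database
// if (await isBlacklisted('suspicious@example.com')) { /* block it */ }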

Email options…

There are a couple out there that seem rather specific to testing emails. As in, testing your own emails before you send them to make sure they aren’t considered spam by email services. Here are a couple I came across while looking around:

The post Spam Detection APIs appeared first on CSS-Tricks.


Earth day, API’s and sunshine.

Cassie Evans showcases some really nifty web design ideas while exploring the API provided by the company her team over at Clearleft recently hired to cover their building’s roof with solar panels. Cassie outlines her journey designing a webpage that uses the API to populate some light data visualizations about the energy the building uses now that the solar panels are installed.

Here at Clearleft we’ve been taking small steps to reduce our environmental impact. In December 2018 we covered the roof of our home with solar panels.

With the first of the glorious summer sun starting to shine down on us, we started to ponder about what environmental impact they’d had over the last 5 months.

Luckily for us, our solar panels have an API, so we can not only find out that information, we can request it from SolarEdge and display it in our very own interface.

The post is a great practical look into using the Fetch API, which is also something Zell Liew wrote up in thorough detail, covering the history of using APIs with JavaScript, handling errors, and other weird things that might happen when working with it.
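One gotcha worth calling out from that error-handling territory: fetch only rejects on network failure, not on HTTP error statuses, so you have to check response.ok yourself. A quick sketch (the endpoint and renderChart function are placeholders, not SolarEdge’s real API):

// Placeholder URL standing in for a real solar-data endpoint.
fetch('https://api.example.com/solar/overview')
  .then(response => {
    // fetch resolves even for 404s and 500s, so reject those ourselves
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}`);
    }
    return response.json();
  })
  .then(data => renderChart(data)) // renderChart is a hypothetical UI function
  .catch(error => console.error('Could not fetch solar data:', error));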

But, equally interesting and useful is reading through Cassie’s thought process as she sketches a wireframe for the page, researches how to use the Fetch API, and integrates animation into a lovely SVG illustration. It’s an exercise in both design and development that many of us can relate to but also learn from.

Oh, and while we’re on the topic of data visualizations, Dan Englishby posted something just today on the many ways of getting data into charts. Working with real-time APIs is covered there as well, which makes it a nice segue from Cassie’s post.


The post Earth day, API’s and sunshine. appeared first on CSS-Tricks.
