Tag: Data

Caching Data in SvelteKit

My previous post was a broad overview of SvelteKit where we saw what a great tool it is for web development. This post will fork off what we did there and dive into every developer’s favorite topic: caching. So, be sure to give my last post a read if you haven’t already. The code for this post is available on GitHub, as well as a live demo.

This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.

This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query; but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.

Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, there’s a bit of complexity that’ll result in your application code. Hopefully your data store is fast, and your UI is fine allowing SvelteKit to just always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.

Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.

Setting up

Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.

First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.

import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  const todos = await getTodos(search);

  return json(todos);
}

Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and also the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.

export async function load({ fetch, url, setHeaders }) {
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}`);

  const todos = await resp.json();

  return {
    todos,
  };
}

Now let’s add a simple form to our /list page:

<div class="search-form">   <form action="/list">     <label>Search</label>     <input autofocus name="search" />   </form> </div>

Yep, forms can target directly to our normal page loaders. Now we can add a search term in the search box, hit Enter, and a “search” term will be appended to the URL’s query string, which will re-run our loader and search our to-do items.

Search form

Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.

export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 500));
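
For reference, here’s a rough sketch of what getTodos in todoData.js might look like; the real implementation is in the repo, and allTodos here is a hypothetical in-memory array standing in for our imaginary database:

// a minimal sketch, not the repo's actual implementation
export const getTodos = async search => {
  await wait();

  // allTodos is a hypothetical stand-in for our imaginary database
  return allTodos.filter(todo =>
    todo.title.toLowerCase().includes(search.toLowerCase())
  );
};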

Remember, the full code for this post is all on GitHub, if you need to reference it.

Basic caching

Let’s get started by adding some caching to our /api/todos endpoint. We’ll go back to our +server.js file and add our first cache-control header.

setHeaders({
  "cache-control": "max-age=60",
});

…which will leave the whole function looking like this:

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";

  setHeaders({
    "cache-control": "max-age=60",
  });

  const todos = await getTodos(search);

  return json(todos);
}

We’ll look at manual invalidation shortly, but all this function says is to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
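
For instance, if you’re comfortable serving slightly stale results while a fresh copy loads in the background, a header like this (the numbers are arbitrary) would do it:

setHeaders({
  // serve from cache for 60s; for the next 300s, serve stale
  // while re-fetching in the background
  "cache-control": "max-age=60, stale-while-revalidate=300",
});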

And just like that, our queries are caching.

Cache in DevTools.

Note: make sure you un-check the checkbox that disables caching in DevTools.

Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.

What is cached, and where

Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What’s more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).

After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos list. As you search for things you’ve already searched for (within the last 60 seconds), the responses should load immediately since they’re cached.

What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).

Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.

Cache invalidation

Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a query-busting value to the URL we send to our new /todos endpoint.

Let’s store this cache busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.

We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.

export function load({ cookies, isDataRequest }) {
  const initialRequest = !isDataRequest;

  const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");

  if (initialRequest) {
    cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
  }

  return {
    todosCacheBust: cacheValue,
  };
}

You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.

The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.

Reading cache values

Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:

export function getCookieLookup() {
  if (typeof document !== "object") {
    return {};
  }

  return document.cookie.split("; ").reduce((lookup, v) => {
    const parts = v.split("=");
    lookup[parts[0]] = parts[1];

    return lookup;
  }, {});
}

const getCurrentCookieValue = name => {
  const cookies = getCookieLookup();
  return cookies[name] ?? "";
};

Fortunately, we only need it once.

Sending out the cache value

But now we need to send this value to our /todos endpoint.

import { getCurrentCookieValue } from "$lib/util/cookieUtils";

export async function load({ fetch, parent, url, setHeaders }) {
  const parentData = await parent();

  const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
  const search = url.searchParams.get("search") || "";

  const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
  const todos = await resp.json();

  return {
    todos,
  };
}

getCurrentCookieValue('todos-cache') has a check in it to see if we’re on the client (by checking the type of document), and returns nothing if we’re not, at which point we know we’re on the server and use the value from our layout instead.

Busting the cache

But how do we actually update that cache busting value when we need to? Since it’s stored in a cookie, we can call it like this from any server action:

cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });

The implementation

It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.

For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each todo:

import { enhance } from "$app/forms";

<tr>
  <td colspan="4">
    <form use:enhance method="post" action="?/editTodo">
      <input name="id" value={t.id} type="hidden" />
      <input name="title" value={t.title} />
      <button>Save</button>
    </form>
  </td>
</tr>

And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)

import { getTodo, updateTodo, wait } from "$lib/data/todoData";

export const actions = {
  async editTodo({ request, cookies }) {
    const formData = await request.formData();

    const id = formData.get("id");
    const newTitle = formData.get("title");

    await wait(250);
    updateTodo(id, newTitle);

    cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
  },
};

We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.

Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look in the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and it works by default.

Saving data

Immediate updates

What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?

This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!

First, let’s make one little change to our loader. Instead of returning our to-do items, let’s return a writable store containing our to-dos.

import { writable } from "svelte/store";

// ...

return {
  todos: writable(todos),
};

Before, we were accessing our to-dos on the data prop, which we do not own and cannot update. But Svelte lets us return our data in their own store (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list page.

Instead of this:

{#each todos as t}

…we need to do this, since todos is itself now a store:

{#each $todos as t}

Now our data loads as before. But since todos is a writable store, we can update it.

First, let’s provide a function to our use:enhance attribute:

<form
  use:enhance={executeSave}
  method="post"
  action="?/editTodo"
>

This will run before a submit. Let’s write that next:

function executeSave({ data }) {
  const id = data.get("id");
  const title = data.get("title");

  return async () => {
    todos.update(list =>
      list.map(todo => {
        if (todo.id == id) {
          return Object.assign({}, todo, { title });
        } else {
          return todo;
        }
      })
    );
  };
}

This function provides a data object with our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)
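
For reference, here’s roughly what opting back into that default behavior would look like: the callback we return receives an update function that applies SvelteKit’s normal post-submit handling. A minimal sketch:

function executeSave({ data }) {
  return async ({ update }) => {
    // opt back into SvelteKit's default handling,
    // including re-running the page's load function
    await update();
  };
}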

Back in our version, we call update on todos since it’s a store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any to-do items that no longer match after their update.

The code for the immediate updates is available at GitHub.

Digging deeper

We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed, manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function, without any check against isDataRequest. It’ll be set initially, and then any time you run a server action, that page layout will automatically invalidate and re-call your loader, re-setting the cache bust string before your universal loader is called.
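
As a sketch of that idea (the file placement is illustrative):

// e.g. routes/list/+page.server.js (hypothetical placement)
export function load({ cookies }) {
  // no isDataRequest check: every run of this server loader
  // re-sets the cache-busting value before the universal loader reads it
  cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
}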

Writing a reload function

Let’s wrap up by building one last feature: a reload button. Let’s give users a button that will clear the cache and then reload the current query.

We’ll add a dirt simple form action:

async reloadTodos({ cookies }) {
  cookies.set('todos-cache', +new Date(), { path: '/', httpOnly: false });
},

In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.

Now let’s create a form to post to it:

<form method="POST" action="?/reloadTodos" use:enhance>   <button>Reload todos</button> </form>

That works!

UI after reload.

We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.

So, let’s focus things a bit, and only reload our to-dos when we call this function.

First, let’s pass a function to enhance:

<form method="POST" action="?/reloadTodos" use:enhance={reloadTodos}>
import { enhance } from "$ app/forms"; import { invalidate } from "$ app/navigation";  let reloading = false; const reloadTodos = () => {   reloading = true;    return async () => {     invalidate("reload:todos").then(() => {       reloading = false;     });   }; };

We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie).

Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.

Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:

export async function load({ fetch, url, setHeaders, depends }) {
  depends('reload:todos');

  // rest is the same
}

And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t just take the arbitrary values we provide, like we did here. We can also provide a URL, which SvelteKit will track, and it will invalidate any loaders that depend on that URL. To that end, if you’re wondering whether you could skip the call to depends and invalidate our /api/todos endpoint altogether, you can, but you have to provide the exact URL, including the search term (and our cache value). So, you could either put together the URL for the current search, or match on the path name, like this:

invalidate(url => url.pathname == "/api/todos");

Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.

If you’d like to see the reload button in action, the code for it is in this branch of the repo.

Parting thoughts

This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to set the correct cache and cookie values, knowledge of which will serve you in web development in general, beyond just SvelteKit.

Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for this sort of advanced feature when you actually need it. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with needless complexity doing the things we talked about here.

As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!



Rendering External API Data in WordPress Blocks on the Back End

This is a continuation of my last article about “Rendering External API Data in WordPress Blocks on the Front End”. In that last one, we learned how to take an external API and integrate it with a block that renders the fetched data on the front end of a WordPress site.

The thing is, we accomplished this in a way that prevents us from seeing the data in the WordPress Block Editor. In other words, we can insert the block on a page but we get no preview of it. We only get to see the block when it’s published.

Let’s revisit the example block plugin we made in the last article. Only this time, we’re going to make use of the JavaScript and React ecosystem of WordPress to fetch and render that data in the back-end Block Editor as well.

Where we left off

As we kick this off, here’s a demo of where we landed in the last article that you can reference. You may have noticed that I used a render_callback method in the last article so that I can make use of the attributes in the PHP file and render the content.

Well, that may be useful in situations where you might have to use some native WordPress or PHP function to create dynamic blocks. But if you want to make use of just the JavaScript and React (JSX, specifically) ecosystem of WordPress to render the static HTML along with the attributes stored in the database, you only need to focus on the Edit and Save functions of the block plugin.

  • The Edit function renders the content based on what you want to see in the Block Editor. You can have interactive React components here.
  • The Save function renders the content based on what you want to see on the front end. You cannot have the regular React components or the hooks here. It is used to return the static HTML that is saved into your database along with the attributes.

The Save function is where we’re hanging out today. We can create interactive components on the front-end, but for that we need to manually include and access them outside the Save function in a file like we did in the last article.

So, I am going to cover the same ground we did in the last article, but this time you can see the preview in the Block Editor before you publish it to the front end.

The block props

I intentionally left out any explanations about the edit function’s props in the last article because that would have taken the focus off of the main point, the rendering.

If you are coming from a React background, you will likely understand what it is that I am talking about, but if you are new to this, I would recommend checking out components and props in the React documentation.

If we log the props object to the console, it returns a list of WordPress functions and variables related to our block:

Console log of the block properties.

We only need the attributes object and the setAttributes function, which I am going to destructure from the props object in my code. In the last article, I modified RapidAPI’s code so that I can store the API data through setAttributes(). Props are read-only, so we are unable to modify them directly.

Block props are similar to state variables and setState in React, but React works on the client side and setAttributes() is used to store the attributes permanently in the WordPress database after saving the post. So, what we need to do is save them to attributes.data and then call that as the initial value for the useState() variable.

The edit function

I am going to copy-paste the HTML code that we used in football-rankings.php in the last article and edit it a little to shift to the JavaScript background. Remember how we created two additional files in the last article for the front end styling and scripts? With the way we’re approaching things today, there’s no need to create those files. Instead, we can move all of it to the Edit function.

Full code
import { useState } from "@wordpress/element";
import { useBlockProps } from "@wordpress/block-editor";

export default function Edit(props) {
  const { attributes, setAttributes } = props;
  const [apiData, setApiData] = useState(null);

  function fetchData() {
    const options = {
      method: "GET",
      headers: {
        "X-RapidAPI-Key": "Your Rapid API key",
        "X-RapidAPI-Host": "api-football-v1.p.rapidapi.com",
      },
    };
    fetch(
      "https://api-football-v1.p.rapidapi.com/v3/standings?season=2021&league=39",
      options
    )
      .then((response) => response.json())
      .then((response) => {
        let newData = { ...response }; // Clone the response data
        setAttributes({ data: newData }); // Store the data in WordPress attributes
        setApiData(newData); // Update the state with the new data
      })
      .catch((err) => console.error(err));
  }

  return (
    <div {...useBlockProps()}>
      <button className="fetch-data" onClick={() => fetchData()}>Fetch data</button>
      {apiData && (
        <div id="league-standings">
          <div
            className="header"
            style={{
              backgroundImage: `url(${apiData.response[0].league.logo})`,
            }}
          >
            <div className="position">Rank</div>
            <div className="team-logo">Logo</div>
            <div className="team-name">Team name</div>
            <div className="stats">
              <div className="games-played">GP</div>
              <div className="games-won">GW</div>
              <div className="games-drawn">GD</div>
              <div className="games-lost">GL</div>
              <div className="goals-for">GF</div>
              <div className="goals-against">GA</div>
              <div className="points">Pts</div>
            </div>
            <div className="form-history">Form history</div>
          </div>
          <div className="league-table">
            {/* Usage of [0] might be weird but that is how the API structure is. */}
            {apiData.response[0].league.standings[0].map((el) => {
              // Destructure the required data from el.all
              const { played, win, draw, lose, goals } = el.all;
              return (
                <div className="team">
                  <div className="position">{el.rank}</div>
                  <div className="team-logo">
                    <img src={el.team.logo} />
                  </div>
                  <div className="team-name">{el.team.name}</div>
                  <div className="stats">
                    <div className="games-played">{played}</div>
                    <div className="games-won">{win}</div>
                    <div className="games-drawn">{draw}</div>
                    <div className="games-lost">{lose}</div>
                    <div className="goals-for">{goals.for}</div>
                    <div className="goals-against">{goals.against}</div>
                    <div className="points">{el.points}</div>
                  </div>
                  <div className="form-history">
                    {el.form.split("").map((result) => {
                      return (
                        <div className={`result-${result}`}>{result}</div>
                      );
                    })}
                  </div>
                </div>
              );
            })}
          </div>
        </div>
      )}
    </div>
  );
}

I have included the React hook useState() from @wordpress/element rather than using it from the React library. That is because if I were to load it the regular way, it would download React for every block that I am using. But if I am using @wordpress/element it loads from a single source, i.e., the WordPress layer on top of React.

This time, I have also not wrapped the code inside useEffect() but inside a function that is called only when clicking on a button so that we have a live preview of the fetched data. I have used a state variable called apiData to render the league table conditionally. So, once the button is clicked and the data is fetched, I am setting apiData to the new data inside the fetchData() and there is a rerender with the HTML of the football rankings table available.

You will notice that once the post is saved and the page is refreshed, the league table is gone. That is because we are using an empty state (null) for apiData’s initial value. When the post saves, the attributes are saved to the attributes.data object and we call it as the initial value for the useState() variable like this:

const [apiData, setApiData] = useState(attributes.data);

The save function

We are going to do almost the same exact thing with the save function, but modify it a little bit. For example, there’s no need for the “Fetch data” button on the front end, and the apiData state variable is also unnecessary because we are already checking it in the edit function. But we do need a local apiData variable that checks for attributes.data to conditionally render the JSX, or else it will throw undefined errors and the Block Editor UI will go blank.

Full code
import { useBlockProps } from "@wordpress/block-editor";

export default function save(props) {
  const { attributes } = props;
  let apiData = attributes.data;

  return (
    <>
      {/* Only render if apiData is available */}
      {apiData && (
        <div {...useBlockProps.save()}>
          <div id="league-standings">
            <div
              className="header"
              style={{
                backgroundImage: `url(${apiData.response[0].league.logo})`,
              }}
            >
              <div className="position">Rank</div>
              <div className="team-logo">Logo</div>
              <div className="team-name">Team name</div>
              <div className="stats">
                <div className="games-played">GP</div>
                <div className="games-won">GW</div>
                <div className="games-drawn">GD</div>
                <div className="games-lost">GL</div>
                <div className="goals-for">GF</div>
                <div className="goals-against">GA</div>
                <div className="points">Pts</div>
              </div>
              <div className="form-history">Form history</div>
            </div>
            <div className="league-table">
              {/* Usage of [0] might be weird but that is how the API structure is. */}
              {apiData.response[0].league.standings[0].map((el) => {
                // Destructure the required data from el.all
                const { played, win, draw, lose, goals } = el.all;
                return (
                  <div className="team">
                    <div className="position">{el.rank}</div>
                    <div className="team-logo">
                      <img src={el.team.logo} />
                    </div>
                    <div className="team-name">{el.team.name}</div>
                    <div className="stats">
                      <div className="games-played">{played}</div>
                      <div className="games-won">{win}</div>
                      <div className="games-drawn">{draw}</div>
                      <div className="games-lost">{lose}</div>
                      <div className="goals-for">{goals.for}</div>
                      <div className="goals-against">{goals.against}</div>
                      <div className="points">{el.points}</div>
                    </div>
                    <div className="form-history">
                      {el.form.split("").map((result) => {
                        return (
                          <div className={`result-${result}`}>{result}</div>
                        );
                      })}
                    </div>
                  </div>
                );
              })}
            </div>
          </div>
        </div>
      )}
    </>
  );
}

If you are modifying the save function after a block is already present in the Block Editor, it would show an error like this:

The football rankings block in the WordPress block Editor with an error message that the block contains an unexpected error.

That is because the markup in the saved content is different from the markup in our new save function. Since we are in development mode, it is easier to remove the block from the current page and re-insert it as a new block; that way, the updated code is used instead and things are back in sync.

This situation of removing the block and adding it again can be avoided if we use the render_callback method, since the output is dynamic and controlled by PHP instead of the save function. So each method has its own advantages and disadvantages.

Tom Nowell provides a thorough explanation on what not to do in a save function in this Stack Overflow answer.

Styling the block in the editor and the front end

Regarding the styling, it is going to be almost the same thing we looked at in the last article, but with some minor changes which I have explained in the comments. I’m merely providing the full styles here since this is only a proof of concept rather than something you want to copy-paste (unless you really do need a block for showing football rankings styled just like this). And note that I’m still using SCSS that compiles to CSS on build.

Editor styles
/* Target all the blocks with the data-title="Football Rankings" */
.block-editor-block-list__layout
.block-editor-block-list__block.wp-block[data-title="Football Rankings"] {
  /* By default, the blocks are constrained within 650px max-width plus other design specific code */
  max-width: unset;
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  display: grid;
  place-items: center;
  padding: 60px 0;

  /* Button CSS - From: https://getcssscan.com/css-buttons-examples - Some properties really not needed :) */
  button.fetch-data {
    align-items: center;
    background-color: #ffffff;
    border: 1px solid rgb(0 0 0 / 0.1);
    border-radius: 0.25rem;
    box-shadow: rgb(0 0 0 / 0.02) 0 1px 3px 0;
    box-sizing: border-box;
    color: rgb(0 0 0 / 0.85);
    cursor: pointer;
    display: inline-flex;
    font-family: system-ui, -apple-system, system-ui, "Helvetica Neue", Helvetica, Arial, sans-serif;
    font-size: 16px;
    font-weight: 600;
    justify-content: center;
    line-height: 1.25;
    margin: 0;
    min-height: 3rem;
    padding: calc(0.875rem - 1px) calc(1.5rem - 1px);
    position: relative;
    text-decoration: none;
    transition: all 250ms;
    user-select: none;
    -webkit-user-select: none;
    touch-action: manipulation;
    vertical-align: baseline;
    width: auto;

    &:hover,
    &:focus {
      border-color: rgb(0 0 0 / 0.15);
      box-shadow: rgb(0 0 0 / 0.1) 0 4px 12px;
      color: rgb(0 0 0 / 0.65);
    }

    &:hover {
      transform: translateY(-1px);
    }

    &:active {
      background-color: #f0f0f1;
      border-color: rgb(0 0 0 / 0.15);
      box-shadow: rgb(0 0 0 / 0.06) 0 2px 4px;
      color: rgb(0 0 0 / 0.65);
      transform: translateY(0);
    }
  }
}
Front-end styles
/* Front-end block styles */
.wp-block-post-content .wp-block-football-rankings-league-table {
  background: linear-gradient(to right, #8f94fb, #4e54c8);
  max-width: unset;
  display: grid;
  place-items: center;
}

#league-standings {
  width: 900px;
  margin: 60px 0;
  max-width: unset;
  font-size: 16px;

  .header {
    display: grid;
    gap: 1em;
    padding: 10px;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
    color: white;
    font-size: 16px;
    font-weight: 600;
    background-color: transparent;
    background-repeat: no-repeat;
    background-size: contain;
    background-position: right;

    .stats {
      display: flex;
      gap: 15px;

      & > div {
        width: 30px;
      }
    }
  }
}

.league-table {
  background: white;
  box-shadow:
    rgba(50, 50, 93, 0.25) 0px 2px 5px -1px,
    rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
  padding: 1em;

  .position {
    width: 20px;
  }

  .team {
    display: grid;
    gap: 1em;
    padding: 10px 0;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
  }

  .team:not(:last-child) {
    border-bottom: 1px solid lightgray;
  }

  .team-logo img {
    width: 30px;
    top: 3px;
    position: relative;
  }

  .stats {
    display: flex;
    gap: 15px;

    & > div {
      width: 30px;
      text-align: center;
    }
  }

  .last-5-games {
    display: flex;
    gap: 5px;

    & > div {
      width: 25px;
      height: 25px;
      text-align: center;
      border-radius: 3px;
      font-size: 15px;
    }

    & .result-W {
      background: #347d39;
      color: white;
    }

    & .result-D {
      background: gray;
      color: white;
    }

    & .result-L {
      background: lightcoral;
      color: white;
    }
  }
}

We add this to src/style.scss which takes care of the styling in both the editor and the frontend. I will not be able to share the demo URL since it would require editor access but I have a video recorded for you to see the demo:


Pretty neat, right? Now we have a fully functioning block that not only renders on the front end, but also fetches API data and renders right there in the Block Editor — with a refresh button to boot!

But if we want to take full advantage of the WordPress Block Editor, we ought to consider mapping some of the block’s UI elements to block controls for things like setting color, typography, and spacing. That’s a nice next step in the block development learning journey.



Rendering External API Data in WordPress Blocks on the Front End

There’ve been some new tutorials popping up here on CSS-Tricks for working with WordPress blocks. One of them is an introduction to WordPress block development and it’s a good place to learn what blocks are and how to register them in WordPress for use in pages and posts.

While the block basics are nicely covered in that post, I want to take it another step forward. You see, in that article, we learned the difference between rendering blocks in the back-end WordPress Block Editor and rendering them on the front-end theme. The example was a simple Pullquote Block that rendered different content and styles on each end.

Let’s go further and look at using dynamic content in a WordPress block. More specifically, let’s fetch data from an external API and render it on the front end when a particular block is dropped into the Block Editor.

We’re going to build a block that outputs data that shows soccer (er, football) rankings pulled from Api-Football.

An ordered set of football team rankings showing team logos, names, and game results.
This is what we’re working for together.

There’s more than one way to integrate an API with a WordPress block! Since the article on block basics has already walked through the process of making a block from scratch, we’re going to simplify things by using the @wordpress/create-block package to bootstrap our work and structure our project.

Initializing our block plugin

First things first: let’s spin up a new project from the command line:

npx @wordpress/create-block football-rankings

I normally would kick a project like this off by making the files from scratch, but kudos to the WordPress Core team for this handy utility!

Once the project folder has been created by the command, we technically have a fully-functional WordPress block registered as a plugin. So, let’s go ahead and drop the project folder into the wp-content/plugins directory where you have WordPress installed (probably best to be working in a local environment), then log into the WordPress admin and activate it from the Plugins screen.

Now that our block is initialized, installed, and activated, go ahead and open up the project folder at /wp-content/plugins/football-rankings. You’re going to want to cd there from the command line as well to make sure we can continue development.

These are the only files we need to concentrate on at the moment:

  • edit.js
  • index.js
  • football-rankings.php

The other files in the project are important, of course, but are inessential at this point.

Reviewing the API source

We already know that we’re using Api-Football which comes to us courtesy of RapidAPI. Fortunately, RapidAPI has a dashboard that automatically generates the required scripts we need to fetch the API data for the 2021 Premier League Standings.

A dashboard interface with three columns showing code and data from an API source.
The RapidAPI dashboard

If you want to have a look at the JSON structure, you can generate a visual representation with JSONCrack.

Fetching data from the edit.js file

I am going to wrap the RapidAPI code inside a React useEffect() hook with an empty dependency array so that it runs only once when the page is loaded. This way, we prevent WordPress from calling the API each time the Block Editor re-renders. You can check that using wp.data.subscribe() if you care to.
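
If you’d like to try that, here’s a quick debugging aid (not part of the plugin) you can paste into the browser console while the editor is open:

// debugging aid: logs every time the editor's data stores change,
// which helps confirm the fetch in useEffect() only runs once on load
wp.data.subscribe( () => {
  console.log( "Editor state changed" );
} );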

Here’s the code where I am importing useEffect(), then wrapping it around the fetch() code that RapidAPI provided:

/**
 * The edit function describes the structure of your block in the context of the
 * editor. This represents what the editor will render when the block is used.
 *
 * @see https://developer.wordpress.org/block-editor/reference-guides/block-api/block-edit-save/#edit
 *
 * @return {WPElement} Element to render.
 */

import { useEffect } from "@wordpress/element";
import { useBlockProps } from "@wordpress/block-editor";
import { __ } from "@wordpress/i18n";

export default function Edit(props) {
  const { attributes, setAttributes } = props;

  useEffect(() => {
    const options = {
      method: "GET",
      headers: {
        "X-RapidAPI-Key": "Your Rapid API key",
        "X-RapidAPI-Host": "api-football-v1.p.rapidapi.com",
      },
    };

    fetch("https://api-football-v1.p.rapidapi.com/v3/standings?season=2021&league=39", options)
      .then( ( response ) => response.json() )
      .then( ( response ) => {
        let newData = { ...response };
        setAttributes( { data: newData } );
        console.log( "Attributes", attributes );
      })
      .catch((err) => console.error(err));
  }, []);

  return (
    <p { ...useBlockProps() }>
      { __( "Standings loaded on the front end", "external-api-gutenberg" ) }
    </p>
  );
}

Notice that I have left the return function pretty much intact, but have included a note that confirms the football standings are rendered on the front end. Again, we’re only going to focus on the front end in this article — we could render the data in the Block Editor as well, but we’ll leave that for another article to keep things focused.

Storing API data in WordPress

Now that we are fetching data, we need to store it somewhere in WordPress. This is where the attributes.data object comes in handy. We are defining the data.type as an object since the data is fetched and formatted as JSON. Make sure you don’t have any other type, or else WordPress won’t save the data, nor will it throw any error for you to debug.

We define all this in our index.js file:

registerBlockType( metadata.name, {
  edit: Edit,
  attributes: {
    data: {
      type: "object",
    },
  },
  save,
} );

OK, so WordPress now knows that the RapidAPI data we’re fetching is an object. If we open a new post draft in the WordPress Block Editor and save the post, the data is now stored in the database. In fact, we can see it in the wp_posts.post_content field if we open the site’s database in phpMyAdmin, Sequel Pro, Adminer, or whatever tool you use.

Showing a large string of JSON output in a database table.
API output stored in the WordPress database

Outputting JSON data in the front end

There are multiple ways to output the data on the front end. The way I’m going to show you takes the attributes that are stored in the database and passes them as a parameter through the render_callback function in our football-rankings.php file.

I like keeping a separation of concerns, so how I do this is to add two new files to the block plugin’s build folder: frontend.js and frontend.css (you can create a frontend.scss file in the src directory which compiles to CSS in the build directory). This way, the back-end and front-end codes are separate and the football-rankings.php file is a little easier to read.

Referring back to the introduction to WordPress block development, there are editor.css and style.css files for back-end and shared styles between the front and back end, respectively. By adding frontend.scss (which compiles to frontend.css), I can isolate styles that are only intended for the front end.

Before we worry about those new files, here’s how we call them in football-rankings.php:

/**
 * Registers the block using the metadata loaded from the `block.json` file.
 * Behind the scenes, it registers also all assets so they can be enqueued
 * through the block editor in the corresponding context.
 *
 * @see https://developer.wordpress.org/reference/functions/register_block_type/
 */
function create_block_football_rankings_block_init() {
  register_block_type( __DIR__ . '/build', array(
    'render_callback' => 'render_frontend'
  ));
}
add_action( 'init', 'create_block_football_rankings_block_init' );

function render_frontend( $attributes ) {
  if( !is_admin() ) {
    wp_enqueue_script( 'football_rankings', plugin_dir_url( __FILE__ ) . '/build/frontend.js');
    wp_enqueue_style( 'football_rankings', plugin_dir_url( __FILE__ ) . '/build/frontend.css' );
  }

  ob_start(); ?>

  <div class="football-rankings-frontend" id="league-standings">
    <div class="data">
      <pre>
        <?php echo wp_json_encode( $attributes ) ?>
      </pre>
    </div>
    <div class="header">
      <div class="position">Rank</div>
      <div class="team-logo">Logo</div>
      <div class="team-name">Team name</div>
      <div class="stats">
        <div class="games-played">GP</div>
        <div class="games-won">GW</div>
        <div class="games-drawn">GD</div>
        <div class="games-lost">GL</div>
        <div class="goals-for">GF</div>
        <div class="goals-against">GA</div>
        <div class="points">Pts</div>
      </div>
      <div class="form-history">Last 5 games</div>
    </div>
    <div class="league-table"></div>
  </div>

  <?php return ob_get_clean();
}

Since I am using the render_callback() method for the attributes, I am going to handle the enqueue manually just like the Block Editor Handbook suggests. That’s contained in the !is_admin() condition, and is enqueueing the two files so that we avoid enqueuing them while using the editor screen.

Now that we have two new files we’re calling, we’ve gotta make sure we are telling npm to compile them. So, do that in package.json, in the scripts section:

"scripts": {   "build": "wp-scripts build src/index.js src/frontend.js",   "format": "wp-scripts format",   "lint:css": "wp-scripts lint-style",   "lint:js": "wp-scripts lint-js",   "packages-update": "wp-scripts packages-update",   "plugin-zip": "wp-scripts plugin-zip",   "start": "wp-scripts start src/index.js src/frontend.js" },

Another way to include the files is to define them in the block metadata contained in our block.json file, as noted in the introduction to block development:

"viewScript": [ "file:./frontend.js", "example-shared-view-script" ], "style": [ "file:./frontend.css", "example-shared-style" ],

The only reason I’m going with the package.json method is because I am already making use of the render_callback() method.

Rendering the JSON data

In the rendering part, I am concentrating only on a single block. Generally speaking, you would want to target multiple blocks on the front end. In that case, you need to make use of document.querySelectorAll() with the block’s specific ID.
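
Here’s a rough sketch of how that multi-block version might start, using the wrapper class this plugin renders:

// a sketch: handle every football-rankings block on the page,
// instead of just the first match
document.querySelectorAll( ".football-rankings-frontend" ).forEach( ( blockEl ) => {
  const tableEl = blockEl.querySelector( ".league-table" );
  // ...render this block's rankings into its own table element
} );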

I’m basically going to wait for the window to load and grab data for a few key objects from JSON and apply them to some markup that renders them on the front end. I am also going to convert the attributes data to a JSON object so that it is easier to read through the JavaScript and set the details from JSON to HTML for things like the football league logo, team logos, and stats.

The “Last 5 games” column shows the result of a team’s last five matches. I have to manually alter the data for it since the API data is in a string format. Converting it to an array can help make use of it in HTML as a separate element for each of a team’s last five matches.

import "./frontend.scss";  // Wait for the window to load window.addEventListener( "load", () => {   // The code output   const dataEl = document.querySelector( ".data pre" ).innerHTML;   // The parent rankings element   const tableEl = document.querySelector( ".league-table" );   // The table headers   const tableHeaderEl = document.querySelector( "#league-standings .header" );   // Parse JSON for the code output   const dataJSON = JSON.parse( dataEl );   // Print a little note in the console   console.log( "Data from the front end", dataJSON );      // All the teams    let teams = dataJSON.data.response[ 0 ].league.standings[ 0 ];   // The league logo   let leagueLogoURL = dataJSON.data.response[ 0 ].league.logo;   // Apply the league logo as a background image inline style   tableHeaderEl.style.backgroundImage = `url( $ { leagueLogoURL } )`;      // Loop through the teams   teams.forEach( ( team, index ) => {     // Make a div for each team     const teamDiv = document.createElement( "div" );     // Set up the columns for match results     const { played, win, draw, lose, goals } = team.all;      // Add a class to the parent rankings element     teamDiv.classList.add( "team" );     // Insert the following markup and data in the parent element     teamDiv.innerHTML = `       <div class="position">         $ { index + 1 }       </div>       <div class="team-logo">         <img src="$ { team.team.logo }" />       </div>       <div class="team-name">$ { team.team.name }</div>       <div class="stats">         <div class="games-played">$ { played }</div>         <div class="games-won">$ { win }</div>         <div class="games-drawn">$ { draw }</div>         <div class="games-lost">$ { lose }</div>         <div class="goals-for">$ { goals.for }</div>         <div class="goals-against">$ { goals.against }</div>         <div class="points">$ { team.points }</div>       </div>       <div class="form-history"></div>     `;          // Stringify the last five match results for a team     const form = team.form.split( "" );          // Loop through the match results     form.forEach( ( result ) => {       // Make a div for each result       const resultEl = document.createElement( "div" );       // Add a class to the div       resultEl.classList.add( "result" );       // Evaluate the results       resultEl.innerText = result;       // If the result a win       if ( result === "W" ) {         resultEl.classList.add( "win" );       // If the result is a draw       } else if ( result === "D" ) {         resultEl.classList.add( "draw" );       // If the result is a loss       } else {         resultEl.classList.add( "lost" );       }       // Append the results to the column       teamDiv.querySelector( ".form-history" ).append( resultEl );     });      tableEl.append( teamDiv );   }); });

As far as styling goes, you’re free to do whatever you want! If you want something to work with, I have a full set of styles you can use as a starting point.

I styled things in SCSS since the @wordpress/create-block package supports it out of the box. Run npm run start in the command line to watch the SCSS files and compile them to CSS on save. Alternately, you can use npm run build on each save to compile the SCSS and build the rest of the plugin bundle.

View SCSS
body {
  background: linear-gradient(to right, #8f94fb, #4e54c8);
}

.data pre {
  display: none;
}

.header {
  display: grid;
  gap: 1em;
  padding: 10px;
  grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
  align-items: center;
  color: white;
  font-size: 16px;
  font-weight: 600;
  background-repeat: no-repeat;
  background-size: contain;
  background-position: right;
}

.frontend#league-standings {
  width: 900px;
  margin: 60px 0;
  max-width: unset;
  font-size: 16px;

  .header {
    .stats {
      display: flex;
      gap: 15px;

      & > div {
        width: 30px;
      }
    }
  }
}

.league-table {
  background: white;
  box-shadow:
    rgba(50, 50, 93, 0.25) 0px 2px 5px -1px,
    rgba(0, 0, 0, 0.3) 0px 1px 3px -1px;
  padding: 1em;

  .position {
    width: 20px;
  }

  .team {
    display: grid;
    gap: 1em;
    padding: 10px 0;
    grid-template-columns: 1fr 1fr 3fr 4fr 3fr;
    align-items: center;
  }

  .team:not(:last-child) {
    border-bottom: 1px solid lightgray;
  }

  .team-logo img {
    width: 30px;
  }

  .stats {
    display: flex;
    gap: 15px;
  }

  .stats > div {
    width: 30px;
    text-align: center;
  }

  .form-history {
    display: flex;
    gap: 5px;
  }

  .form-history > div {
    width: 25px;
    height: 25px;
    text-align: center;
    border-radius: 3px;
    font-size: 15px;
  }

  .form-history .win {
    background: #347d39;
    color: white;
  }

  .form-history .draw {
    background: gray;
    color: white;
  }

  .form-history .lost {
    background: lightcoral;
    color: white;
  }
}

Here’s the demo!

Check that out — we just made a block plugin that fetches data and renders it on the front end of a WordPress site.

We found an API, fetch()-ed data from it, saved it to the WordPress database, parsed it, and applied it to some HTML markup to display on the front end. Not bad for a single tutorial, right?

Again, we can do the same sort of thing so that the rankings render in the Block Editor in addition to the theme’s front end. But hopefully keeping this focused on the front end shows you how fetching data works in a WordPress block, and how the data can be structured and rendered for display.



I’m confused about Static Site Generators. Are they only for informational sites where I can’t log in or display any dynamic data?

(This is a sponsored post.)

I got this question from a listener the other day. Fair question, I’d say. The word “Static” in “Static Site Generator” is at odds with the word “Dynamic.” It seems to imply that a website created with a Static Site Generator (SSG) is locked in stone, only to be changed when it is run again at some future date. That’s a somewhat unfortunate implication, if an entirely understandable one.

“Static Sites,” in actuality, can be as dynamic as any other site because of one¹ thing baked right in and available to any website: JavaScript. JavaScript can, say, hit an API and update the otherwise statically generated markup of a page. Just think. JavaScript. APIs. Markup… J… A… M… Jamstack!
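
A tiny sketch of that idea, with a hypothetical endpoint and element:

// enhance a statically generated page with live data;
// the endpoint and element ID here are hypothetical
fetch("/api/stock-levels")
  .then((res) => res.json())
  .then((data) => {
    document.querySelector("#stock").textContent = data.count;
  });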

Part of the trick to understanding this Jamstack world (aka Static Sites that do Dynamic Things) is just looking at what Netlify offers. Netlify literally only offers static hosting. No server-side languages (think Ruby, PHP, or Python) serving up individual pages on Netlify. So SSGs and Netlify go hand in hand. But let’s go through it as a list:

  • Netlify runs your build process for you. Which very likely includes an SSG. The point of which is largely that you keep your built site files out of your version control system, which would otherwise be a wasteful mess.
  • Netlify processes your forms. No need to run an always-on server just for this dynamic feature.
  • Netlify offers authentication. That’s right, reader: auth is a first-class citizen of the platform.
  • Netlify runs your server side code in the form of cloud functions. Static hosting doesn’t mean you can’t do server side things. It means you do server side things with modern, cheap, secure, focused, fast, powerful cloud functions (see the sketch after this list).
  • Netlify can build pages on-demand. Meaning you don’t have to pre-build all your pages if you don’t want to.
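
To make that cloud functions point concrete, here’s a minimal sketch; the file path and message are hypothetical, but the handler shape is the standard one:

// netlify/functions/hello.js (hypothetical path)
// Once deployed, this responds at /.netlify/functions/hello
exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from a cloud function!" }),
  };
};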

That’s just some of the feature set. Here’s a fun blog post from a little while ago with some of the staff’s favorite features, many of which aren’t in the list above. Jamstack is literally starting to mean that dynamic things are happening on static sites.

I hope that answers the question for this particular reader and anyone else with the same confusion. SSGs can produce entirely static websites with zero dynamic data and that offer no special logged-in experience. But they can also be every bit as dynamic as any other site, just built from a more solid, static foundation.

  1. Well, let’s say two things. Dynamic things can be done via edge handlers as well, without the need for client-side JavaScript.


React Suspense: Lessons Learned While Loading Data

Suspense is React’s forthcoming feature that helps coordinate asynchronous actions—like data loading—allowing you to easily prevent inconsistent state in your UI. I’ll provide a better explanation of what exactly that means, along with a quick introduction of Suspense, and then go over a somewhat realistic use case, and cover some lessons learned.

The features I’m covering are still in the alpha stage, and should by no means be used in production. This post is for folks who want to take a sneak peek at what’s coming, and see what the future looks like.

A Suspense primer

One of the more challenging parts of application development is coordinating application state and how data loads. It’s common for a state change to trigger new data loads in multiple locations. Typically, each piece of data would have its own loading UI (like a “spinner”), roughly where that data lives in the application. The asynchronous nature of data loading means each of these requests can be returned in any order. As a result, not only will your app have a bunch of different spinners popping in and out, but worse, your application might display inconsistent data. If two out of three of your data loads have completed, you’ll have a loading spinner sitting on top of that third location, still displaying the old, now outdated data.

I know that was a lot. If you find any of that baffling, you might be interested in a prior post I wrote about Suspense. That goes into much more detail on what Suspense is and what it accomplishes. Just note that a few minor pieces of it are now outdated, namely, the useTransition hook no longer takes a timeoutMs value, and waits as long as needed instead.

Now let’s do a quick walkthrough of the details, then get into a specific use case, which has a few lurking gotchas.

How does Suspense work?

Fortunately, the React team was smart enough to not limit these efforts to just loading data. Suspense works via low-level primitives, which you can apply to just about anything. Let’s take a quick look at these primitives.

First up is the <Suspense> boundary, which takes a fallback prop:

<Suspense fallback={<Fallback />}>

Whenever any child under this component suspends, it renders the fallback. No matter how many children are suspending, for whatever reason, the fallback is what shows. This is one way React ensures a consistent UI—it won’t render anything, until everything is ready.

But what about after things have rendered initially, and the user then changes state and loads new data? We certainly don’t want our existing UI to vanish and display our fallback; that would be a poor UX. Instead, we probably want to show one loading spinner, until all data are ready, and then show the new UI.

The useTransition hook accomplishes this. This hook returns a function and a boolean value. We call the function and wrap our state changes. Now things get interesting. React attempts to apply our state change. If anything suspends, React sets that boolean to true, then waits for the suspension to end. When it does, it’ll try to apply the state change again. Maybe it’ll succeed this time, or maybe something else suspends instead. Whatever the case, the boolean flag stays true until everything is ready, and then, and only then, does the state change complete and get reflected in the UI.
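In code, that looks something like this sketch (the Results component and the page state are hypothetical stand-ins for whatever your app renders; the hook shape shown here is the one that remains after the timeoutMs removal mentioned above):

import { useState, useTransition } from "react";

function App() {
  const [page, setPage] = useState(0);
  // isPending is the boolean; startTransition wraps our state changes
  const [isPending, startTransition] = useTransition();

  function loadNextPage() {
    startTransition(() => {
      // If rendering the new page suspends, React keeps the current
      // UI on screen and leaves isPending true until everything is ready
      setPage((p) => p + 1);
    });
  }

  return (
    <>
      {isPending && <span>Loading...</span>}
      <Results page={page} onMore={loadNextPage} />
    </>
  );
}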

Lastly, how do we suspend? We suspend by throwing a promise. If data is requested, and we need to fetch, then we fetch—and throw a promise that’s tied to that fetch. The suspension mechanism being at a low level like this means we can use it with anything. The React.lazy utility for lazy loading components works with Suspense already, and I’ve previously written about using Suspense to wait until images are loaded before displaying a UI in order to prevent content from shifting.

Don’t worry, we’ll get into all this.

What we’re building

We’ll build something slightly different than the examples of many other posts like this. Remember, Suspense is still in alpha, so your favorite data loading utility probably doesn’t have Suspense support just yet. But that doesn’t mean we can’t fake a few things and get an idea of how Suspense works.

Let’s build an infinite loading list that displays some data, combined with some Suspense-based preloaded images. We’ll display our data, along with a button to load more. As data renders, we’ll preload the associated image, and Suspend until it’s ready.

This use case is based on actual work I’ve done on my side project (again, don’t use Suspense in production—but side projects are fair game). I was using my own GraphQL client, and this post is motivated by some of the difficulties I ran into. We’ll just fake the data loading in order to keep things simple and focus on Suspense itself, rather than any individual data loading utility.

Let’s build!

Here’s the sandbox for our initial attempt. We’re going to use it to walk through everything, so don’t feel pressured to understand all the code right now.

Our root App component renders a Suspense boundary like this:

<Suspense fallback={<Fallback />}>

Whenever anything suspends (unless the state change happened in a useTransition call), the fallback is what renders. To make things easier to follow, I made this Fallback component turn the entire UI pink, that way it’s tough to miss; our goal is to understand Suspense, not to build a quality UI.

We’re loading the current chunk of data inside of our DataList component:

const newData = useQuery(param);

Our useQuery hook is hardcoded to return fake data, including a timeout that simulates a network request. It handles caching the results and throws a promise if the data is not yet cached.

We’re keeping (at least for now) state in the master list of data we’re displaying:

const [data, setData] = useState([]);

As new data comes in from our hook, we append it to our master list:

useEffect(() => {
  setData((d) => d.concat(newData));
}, [newData]);

Lastly, when the user wants more data, they click the button, which calls this:

function loadMore() {
  startTransition(() => {
    setParam((x) => x + 1);
  });
}

Finally, note that I’m using a SuspenseImg component to handle preloading the image I’m displaying with each piece of data. There are only five random images being displayed, but I’m adding a query string to ensure a fresh load for each new piece of data we encounter.

Recap

To summarize where we are at this point, we have a hook that loads the current data. The hook obeys Suspense mechanics, and throws a promise while loading is happening. Whenever that data changes, the running total list of items is updated and appended with the new items. This happens in useEffect. Each item renders an image, and we use a SuspenseImg component to preload the image, and suspend until it’s ready. If you’re curious how some of that code works, check out my prior post on preloading images with Suspense.

Let’s test

This would be a pretty boring blog post if everything worked, and don’t worry, it doesn’t. Notice how, on the initial load, the pink fallback screen shows and then quickly hides, but then is redisplayed.

When we click the button that loads more data, we see the inline loading indicator (controlled by the useTransition hook) flip to true. Then we see it flip to false, before our original pink fallback shows. We were expecting to never see that pink screen again after the initial load; the inline loading indicator was supposed to show until everything was ready. What’s going on?

The problem

It’s been hiding right here in plain sight the entire time:

useEffect(() => {
  setData((d) => d.concat(newData));
}, [newData]);

useEffect runs when a state change is complete, i.e., a state change has finished suspending, and has been applied to the DOM. That part, “has finished suspending,” is key here. We can set state in here if we’d like, but if that state change suspends, again, that is a brand new suspension. That’s why we saw the pink flash on the initial load, as well as on subsequent loads when the data finished loading. In both cases, the data loading was finished, and then we set state in an effect, which caused that new data to actually render, and suspend again, because of the image preloads.

So, how do we fix this? On one level, the solution is simple: stop setting state in the effect. But that’s easier said than done. How do we update our running list of entries to append new results as they come in, without using an effect? You might think we could track things with a ref.

Unfortunately, Suspense comes with some new rules about refs, namely, we can’t set refs inside of a render. If you’re wondering why, remember that Suspense is all about React attempting to run a render, seeing that promise get thrown, and then discarding that render midway through. If we mutated a ref before that render was cancelled and discarded, the ref would still hold that changed, but now invalid, value. The render function needs to be pure, without side effects. This has always been a rule with React, but it matters more now.

Re-thinking our data loading

Here’s the solution, which we’ll go over, piece by piece.

First, instead of storing our master list of data in state, let’s do something different: let’s store a list of pages we’re viewing. We can store the most recent page in a ref (we won’t write to it in render, though), and we’ll store an array of all currently-loaded pages in state.

const currentPage = useRef(0);
const [pages, setPages] = useState([currentPage.current]);

In order to load more data, we’ll update accordingly:

function loadMore() {
  startTransition(() => {
    currentPage.current = currentPage.current + 1;
    setPages((pages) => pages.concat(currentPage.current));
  });
}

The tricky part, however, is turning those page numbers into actual data. What we certainly cannot do is loop over those pages and call our useQuery hook; hooks cannot be called in a loop. What we need is a new, non-hook-based data API. Based on a very unofficial convention I’ve seen in past Suspense demos, I’ll name this method read(). It is not going to be a hook. It returns the requested data if it’s cached, or throws a promise otherwise. For our fake data loading hook, no real changes were necessary; I simply copied and pasted the hook, then renamed it. But for an actual data loading utility library, authors will likely need to do some work to expose both options as part of their public API. In my GraphQL client referenced earlier, there is indeed both a useSuspenseQuery hook, and also a read() method on the client object.
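Here’s roughly what such a read() function looks like under the hood (a minimal sketch, with fetchData standing in for whatever actually loads the data):

const cache = new Map();

function read(page) {
  let entry = cache.get(page);
  if (!entry) {
    // Kick off the load, and cache the pending entry
    entry = { status: "pending" };
    entry.promise = fetchData(page).then((result) => {
      entry.status = "done";
      entry.result = result;
    });
    cache.set(page, entry);
  }
  if (entry.status === "pending") {
    // Suspend: React catches the thrown promise and retries
    // the render once it resolves
    throw entry.promise;
  }
  return entry.result;
}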

With this new read() method in place, the final piece of our code is trivial:

const data = pages.flatMap((page) => read(page));

We’re taking each page, and requesting the corresponding data with our read() method. If any of the pages are uncached (which really should only be the last page in the list) then a promise is thrown, and React suspends for us. When the promise resolves, React attempts the prior state change again, and this code runs again.

Don’t let the flatMap call confuse you. That does the exact same thing as map except it takes each result in the new array and, if it itself is an array, “flattens” it.
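For example:

// If read(1) returns ["a", "b"] and read(2) returns ["c", "d"]:
[1, 2].flatMap((page) => read(page)); // ["a", "b", "c", "d"]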

The result

With these changes in place, everything works as we expected it to when we started. Our pink loading screen shows once on the initial load, then, on subsequent loads, the inline loading state shows until everything is ready.

Parting thoughts

Suspense is an exciting update that’s coming to React. It’s still in the alpha stages, so don’t try to use it anywhere that matters. But if you’re the kind of developer who enjoys taking a sneak peek at upcoming things, then I hope this post provided you some good context and info that’s useful when this releases.



Building an Angular Data Grid With Filtering

(This is a sponsored post.)

Kendo UI makes it possible to go from a basic idea to a full-fledged app, thanks to a massive component library. We’re talking well over 100 components that are ready for you to drop into your app at will, whether it’s React, Angular or Vue you’re working in — they just work. That is because Kendo UI is actually a bundle of four JavaScript libraries, each built natively for their respective framework. But more than that, as we’ve covered before, the components are super themeable to the extent that you can make them whatever you want.

But here’s the real kicker with Kendo UI: it takes care of the heavy lifting. The styling is great and all, but what separates Kendo UI from other component frameworks is the functionality it provides right out of the box.

Case in point: data. Rather than spending all your time figuring out the best way to bind data to a component, that’s just a given which ultimately allows you to focus more of your time on theming and getting the UI just right.

Perhaps the best way to see how trivial Kendo UI makes working with data is to see it in action, so…

Let’s look at the Angular Grid component

This is the Kendo UI for Angular Data Grid component. Lots of data in there, right? We’re looking at a list of employees that displays a name, image, and other bits of information about each person.

Like all of Kendo UI’s components, it’s not like there’s one data grid component that they square-pegged to work in multiple frameworks. This data grid was built from scratch and designed specifically to work for Angular, just as their KendoReact Grid component is designed specifically for React.

Now, normally, a simple <table> element might do the trick, right? But Kendo UI for Angular’s data grid is chockfull of extras that make it a much better user experience. Notice right off the bat that it provides interactive functionality around things like exporting the data to Excel or PDF. And there are a bunch of other non-trivial features that would otherwise take up the vast majority of the time and effort of building the component yourself.

Filtering

Here’s one for you: filtering a grid of data. Say you’re looking at a list of employees like the data grid example above, but for a company that employs thousands of folks. It’d be hard to find a specific person without a slew of features, like search, sortable columns, and pagination — all of which Kendo UI’s data grid does.

Users can quickly parse the data bound to the Angular data grid. Filtering can be done through a dedicated filter row, or through a filter menu that pops up when clicking on a filter icon in the header of a column. 

One way to filter the data is to click on a column header, select the Filter option, and set the criteria.

Kendo UI’s documentation is great. Here’s how fast we can get this up and running.

First, import the component

No tricks here — import the data grid as you would any other component:

import { Component, OnInit, ViewChild } from '@angular/core';
import { DataBindingDirective } from '@progress/kendo-angular-grid';
import { process } from '@progress/kendo-data-query';
import { employees } from './employees';
import { images } from './images';

Next, call the component

@Component({
  selector: 'my-app',
  template: `
    <kendo-grid>
      // ...
    </kendo-grid>
  `
})

This is incomplete, of course, because next we have to…

Configure the component

The key feature we want to enable is filtering, but Kendo’s Angular Grid takes all kinds of feature parameters that can be enabled in one fell swoop, from sorting and grouping, to pagination and virtualization, to name a few.

Filtering? It’s a one-liner to bind it to the column headers.

@Component({
  selector: 'my-app',
  template: `
    <kendo-grid
      [kendoGridBinding]="gridView"
      kendoGridSelectBy="id"
      [selectedKeys]="mySelection"
      [pageSize]="20"
      [pageable]="true"
      [sortable]="true"
      [groupable]="true"
      [reorderable]="true"
      [resizable]="true"
      [height]="500"
      [columnMenu]="{ filter: true }"
    >
      // etc.
    </kendo-grid>
  `
})

Then, mark up the rest of the UI

We won’t go deep here. Kendo UI’s documentation has an excellent example of how that looks. This is a good time to work on the styling as well, which is done in a styles parameter. Again, theming a Kendo UI component is a cinch.
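As a rough sketch, the styles property on the component might look like this (the selectors and rules here are just placeholders, not the styles from the docs):

@Component({
  selector: 'my-app',
  template: `...`,
  styles: [
    `
      kendo-grid {
        font-family: 'Helvetica Neue', sans-serif;
      }
    `
  ]
})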

So far, we have a nice-looking data grid before we even plug any actual data into it!

And, finally, bind the data

You may have noticed right away when we imported the component that we imported “employee” data in the process. We need to bind that data to the component. Now, this is where someone like me would go run off in a corner and cry, but Kendo UI makes it a little too easy for that to happen.

// Activate the component on init
export class AppComponent implements OnInit {
  // Bind the employee data to the component
  @ViewChild(DataBindingDirective) dataBinding: DataBindingDirective;
  // Set the grid's data source to the employee data file
  public gridData: any[] = employees;
  // Apply the data source to the Grid component view
  public gridView: any[];

  public mySelection: string[] = [];

  public ngOnInit(): void {
    this.gridView = this.gridData;
  }

  // Start processing the data
  public onFilter(inputValue: string): void {
    this.gridView = process(this.gridData, {
      filter: {
        // Set the type of logic (and/or)
        logic: "or",
        // Defining filters and their operators
        filters: [
          {
            field: 'full_name',
            operator: 'contains',
            value: inputValue
          },
          {
            field: 'job_title',
            operator: 'contains',
            value: inputValue
          },
          {
            field: 'budget',
            operator: 'contains',
            value: inputValue
          },
          {
            field: 'phone',
            operator: 'contains',
            value: inputValue
          },
          {
            field: 'address',
            operator: 'contains',
            value: inputValue
          }
        ],
      }
    }).data;

    this.dataBinding.skip = 0;
  }

  // ...
}

Let’s see that demo again


That’s a heckuva lot of power with a minimal amount of effort. The Kendo UI APIs are extensive and make even the most complex feature dead simple.

And we didn’t even get to all of the other wonderful goodies that we get with Kendo UI components. Take accessibility. Could you imagine all of the consideration that needs to go into making a component like this accessible? Like all of the other powerful features we get, Kendo UI tackles accessibility for us as well, taking on the heavy lifting that goes into making a keyboard-friendly UI that meets WCAG 2.0 AA standards and is compliant with Section 508 and WAI-ARIA standards.



A Themeable React Data Grid With Great UX-Focused Features

(This is a sponsored post.)

KendoReact can save you boatloads of time because it offers pre-built componentry you can use in your app right away. They look nice, but more importantly, they are easily themeable, so they look however you need them to look. And I’d say the looks aren’t even the important part. There are lots of component libraries out there that focus on the visuals. These components tackle the hardest interactivity problems in UI/UX, and do it with grace, speed, and accessibility in mind.

Let’s take a look at their React Data Grid component.

The ol’ <table> element is the right tool for the job for data grids, but a table doesn’t offer most of the features that make for a good data browsing experience. If we use the KendoReact <Grid /> component (and friends), we get an absolute ton of extra features, any one of which is non-trivial to pull off nicely, and all together make for an extremely compelling solution. Let’s go through a list of what you get.

Sortable Columns

You’ll surely pick a default ordering for your data, but if any given row of data has things like ID’s, dates, or names, it’s perfectly likely that a user would want to sort the column by that data. Perhaps they want to view the oldest orders, or the orders of the highest total value. HTML does not help with ordering in tables, so this is table stakes (get it?!) for a JavaScript library for data grids, and it’s perfectly handled here.

Pagination and Limits

When you have any more than, say, a few dozen rows of data, it’s common that you want to paginate it. That way users don’t have to scroll as much, and equally importantly, it keeps the page fast by not making the DOM too enormous. One of the problems with pagination though is it makes things like sorting harder! You can’t just sort the 20 rows you can see, it is expected that the entire data set gets sorted. Of course that’s handled in KendoReact’s Data Grid component.

Or, if pagination isn’t your thing, the data grid offers virtualized scrolling — in both the column and row directions. That’s a nice touch as the data loads quickly for smooth, natural scrolling.

Expandable Rows

A data grid might have a bunch of data visible across the row itself, but there might be even more data that a user might want to dig out of an entry once they find it. Perhaps it is data that doesn’t need to be cross-referenced in the same way column data is. This can be tricky to pull off, because of the way table cells are laid out. The data is still associated with a single row, but you often need more room than the width of one cell offers. With the KendoReact Data Grid component, you can pass in a detail prop with an arbitrary React component to show when a row is expanded. Super flexible!
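Here’s a rough sketch of that pattern (the orders data and the OrderDetails component are made-up; see the KendoReact docs for the full wiring, including toggling the expanded field with onExpandChange):

import { Grid, GridColumn, GridDetailRow } from "@progress/kendo-react-grid";

// An arbitrary React component rendered inside the expanded row
class OrderDetails extends GridDetailRow {
  render() {
    const order = this.props.dataItem;
    return <p>{order.notes}</p>;
  }
}

const App = () => (
  <Grid data={orders} detail={OrderDetails} expandField="expanded">
    <GridColumn field="orderId" title="Order" />
    <GridColumn field="total" title="Total" />
  </Grid>
);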

Notice how the expanded details can have their own <Grid /> inside!

Responsive Design

Perhaps the most notoriously difficult thing to pull off with <table> designs is how to display them on small screens. Zooming out isn’t very good UX, nor is collapsing the table into something non-table-like. The thing about data grids is that they are all different, and you’ll know data is most important to your users best. The KendoReact Data Grid component helps with this by making your data grid scrollable/swipeable, and also being able to lock columns to make sure they continue to be easy to find and cross-reference.

Filtering Data

This is perhaps my favorite feature just because of how UX-focused it is. Imagine you’re looking at a big data grid of orders, and you’re like “Let me see all orders from White Clover Markets.” With a filtering feature, perhaps you quickly type “clover” into the filter input, and voilà, all those orders are right there. That’s extra tricky stuff when you’re also supporting ordering and pagination — so it’s great all these features work together.

Grouping Data

Now this feature actually blows my mind 🤯 a little bit. Filtering and sorting are both very useful, but in some cases, they leave a little bit to be desired. For example, it’s easy to filter too far too quickly, leaving the data you are looking at very limited. And with sorting, you might be trying to look at a subset of data as well, but it’s up to your brain to figure out where that data begins and ends. With grouping, you can tell the data grid to strongly group together things that are the most important to you, but then still leverage filtering and sorting on top of that. It instantly makes your data exploration easier and more useful.

Localization

This is where you can really tell KendoReact went full monty. It would be highly unfortunate to pick some kind of component library and then realize that you need localization and realize it wasn’t made to be a first-class citizen. You avoid all that with KendoReact, which you can see in this Data Grid component. In the demo, you can flip out English for Spanish with a simple dropdown and see all the dates localized. You pull off any sort of translation and localization with the <LocalizationProvider> and <IntlProvider>, both comfortable React concepts.
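In sketch form, assuming the standard KendoReact intl packages and a hypothetical es.json translation file, the wiring looks something like this:

import {
  LocalizationProvider,
  IntlProvider,
  loadMessages,
} from "@progress/kendo-react-intl";
import esMessages from "./es.json"; // hypothetical translation file

// Register the Spanish messages under a language key
loadMessages(esMessages, "es-ES");

const App = () => (
  <LocalizationProvider language="es-ES">
    <IntlProvider locale="es">
      <MyDataGrid />
    </IntlProvider>
  </LocalizationProvider>
);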

Exporting to PDF or Excel

Here’s a live demo of this:

C’mon now! That’s very cool.

That’s not all…

Go check out the docs for the React Data Grid. There are a bunch more features we didn’t even get to here (row pinning! cell editing!). And here’s something to ease your mind: this component, and all the KendoReact components, are keyboard friendly and meet Section 508 accessibility standards. That is no small feat. When components are this complex and involve this much interactivity, getting the accessibility right is tough. So not only are you getting good-looking components that work everywhere, you’re getting richly interactive components that deliver UX beyond what you might even think of, and it’s all done quickly and accessibly. That’s pretty unreal, really.



Accessing Your Data With Netlify Functions and React

(This is a sponsored post.)

Static site generators are popular for their speed, security, and user experience. However, sometimes your application needs data that is not available when the site is built. React is a library for building user interfaces that helps you retrieve and store dynamic data in your client application. 

Fauna is a flexible, serverless database delivered as an API that completely eliminates operational overhead such as capacity planning, data replication, and scheduled maintenance. Fauna allows you to model your data as documents, making it a natural fit for web applications written with React. Although you can access Fauna directly via a JavaScript driver, this requires a custom implementation for each client that connects to your database. By placing your Fauna database behind an API, you can enable any authorized client to connect, regardless of the programming language.

Netlify Functions allow you to build scalable, dynamic applications by deploying server-side code that works as API endpoints. In this tutorial, you build a serverless application using React, Netlify Functions, and Fauna. You learn the basics of storing and retrieving your data with Fauna. You create and deploy Netlify Functions to access your data in Fauna securely. Finally, you deploy your React application to Netlify.

Getting started with Fauna

Fauna is a distributed, strongly consistent OLTP NoSQL serverless database that is ACID-compliant and offers a multi-model interface. Fauna also supports document, relational, graph, and temporal data sets from a single query. First, we will start by creating a database in the Fauna console by selecting the Database tab and clicking on the Create Database button.

Next, you will need to create a Collection. For this, you will need to select a database, and under the Collections tab, click on Create Collection.

Fauna uses a particular structure when it comes to persisting data. The design consists of attributes like the example below.

{   "ref": Ref(Collection("avengers"), "299221087899615749"),   "ts": 1623215668240000,   "data": {     "id": "db7bd11d-29c5-4877-b30d-dfc4dfb2b90e",     "name": "Captain America",     "power": "High Strength",     "description": "Shield"   } }

Notice that Fauna keeps a ref column, which is a unique identifier used to identify a particular document. The ts attribute is a timestamp recording when the record was created, and the data attribute holds the document’s data.

Why creating an index is important

Next, let’s create two indexes for our avengers collection. These will be pretty valuable in the later part of the project. You can create an index from the Index tab or from the Shell tab, which provides a console to execute scripts. Fauna supports two types of querying techniques: FQL (Fauna Query Language) and GraphQL. FQL operates based on the schema of Fauna, which includes documents, collections, indexes, sets, and databases.

Let’s create the indexes from the shell.
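For example, an index keyed on the id field inside data can be created with FQL like this (a sketch; adjust the names to match your collection):

CreateIndex({
  name: "avenger_by_id",
  source: Collection("avengers"),
  terms: [{ field: ["data", "id"] }]
})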

This command creates an index on the collection, keyed by the id field inside the data object. The index returns the ref of the matching document. Next, let’s create another index for the name attribute and name it avenger_by_name.

Creating a server key

To create a server key, we need to navigate to the Security tab and click on the New Key button. This section will prompt you to create a key for a selected database and the user’s role.

Getting started with Netlify functions and React

In this section, we’ll see how we create Netlify functions with React. We will be using create-react-app to create the React app.

npx create-react-app avengers-faunadb

After creating the react app, let’s install some dependencies, including Fauna and Netlify dependencies.

yarn add axios bootstrap node-sass uuid faunadb react-netlify-identity react-netlify-identity-widget

Now let’s create our first Netlify function. To make the functions, first, we need to install the Netlify CLI globally.

npm install netlify-cli -g

Now that the CLI is installed, let’s create a .env file on our project root with the following fields.

FAUNADB_SERVER_SECRET=<FaunaDB secret key>
REACT_APP_NETLIFY=<Netlify app url>

Next, let’s see how we can start creating Netlify functions. For this, we will need to create a directory in our project root called functions and a file called netlify.toml, which will be responsible for maintaining configurations for our Netlify project. This file defines our function’s directory, build directory, and commands to execute.

[build]
command = "npm run build"
functions = "functions/"
publish = "build"

[[redirects]]
  from = "/api/*"
  to = "/.netlify/functions/:splat"
  status = 200
  force = true

We will do some additional configuration for the Netlify configuration file, like in the redirection section in this example. Notice that we are changing the default path of the Netlify functions from /.netlify/** to /api/. This configuration is mainly to improve the look and feel of the API URL. So to trigger or call our function, we can use the path:

https://domain.com/api/getPokemons

 …instead of:

https://domain.com/.netlify/functions/getPokemons

Next, let’s create our Netlify function in the functions directory. But, first, let’s make a connection file for Fauna called util/connection.js, returning a Fauna connection object.

const faunadb = require('faunadb');
const q = faunadb.query

const clientQuery = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET,
});

module.exports = { clientQuery, q };

Next, let’s create a few helper functions for building responses and parsing requests, since we will need to parse data on several occasions throughout the application. This file will be util/helper.js.

const responseObj = (statusCode, data) => {
  return {
    statusCode: statusCode,
    headers: {
      /* Required for CORS support to work */
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Headers": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    },
    body: JSON.stringify(data)
  };
};

const requestObj = (data) => {
  return JSON.parse(data);
}

module.exports = { responseObj: responseObj, requestObj: requestObj }

Notice that the above helper functions handle the CORS issues, as well as the stringifying and parsing of JSON data. Let’s create our first function, getAvengers, which will return all the data.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  try {
    let avengers = await clientQuery.query(
      q.Map(
        q.Paginate(q.Documents(q.Collection('avengers'))),
        q.Lambda(x => q.Get(x))
      )
    )
    return responseObj(200, avengers)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

In the above code example, you can see that we have used several FQL commands like Map, Paginate, and Lambda. The Map key is used to iterate through the array, and it takes two arguments: an array and a Lambda. We have passed Paginate as the first parameter, which returns a page of results (an array of refs). Next, we used a Lambda statement, an anonymous function that is quite similar to an anonymous arrow function in ES6.

Next, let’s create our function AddAvenger, responsible for creating/inserting data into the collection.

const { requestObj, responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  let data = requestObj(event.body);

  try {
    let avenger = await clientQuery.query(
      q.Create(
        q.Collection('avengers'),
        {
          data: {
            id: data.id,
            name: data.name,
            power: data.power,
            description: data.description
          }
        }
      )
    );

    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

To save data to a particular collection, we have to pass our data to the data: {} object, like in the above code example. Then we need to pass it to the Create function and point it to the collection we want. So, let’s run our code and see how it works through the netlify dev command.

Let’s trigger the GetAvengers function through the browser through the URL http://localhost:8888/api/GetAvengers.
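Next we want to search avengers by name. For that, the avenger_by_name index needs an FQL user-defined function wrapped around it. A sketch of that function, created from the Fauna shell, might look like this (the name matches what we call below):

CreateFunction({
  name: "GetAvengerByName",
  body: Query(
    Lambda(
      ["name"],
      Get(Match(Index("avenger_by_name"), Var("name")))
    )
  )
})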

The above function will fetch the avenger object by the name property, searching the avenger_by_name index. Now, let’s invoke GetAvengerByName through a Netlify function. For that, let’s create a function called SearchAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
  const {
    queryStringParameters: { name },
  } = event;

  try {
    let avenger = await clientQuery.query(
      q.Call(q.Function("GetAvengerByName"), [name])
    );
    return responseObj(200, avenger)
  } catch (error) {
    console.log(error)
    return responseObj(500, error);
  }
};

Notice that the Call function takes two arguments, where the first parameter is the reference to the FQL function that we created, and the second is the data that we need to pass to the function.

Calling the Netlify function through React

Now that several functions are available let’s consume those functions through React. Since the functions are REST APIs, let’s consume them via Axios, and for state management, let’s use React’s Context API. Let’s start with the Application context called AppContext.js.

import { createContext, useReducer } from "react";
import AppReducer from "./AppReducer"

const initialState = {
    isEditing: false,
    avenger: { name: '', description: '', power: '' },
    avengers: [],
    user: null,
    isLoggedIn: false
};

export const AppContext = createContext(initialState);

export const AppContextProvider = ({ children }) => {
    const [state, dispatch] = useReducer(AppReducer, initialState);

    const login = (data) => { dispatch({ type: 'LOGIN', payload: data }) }
    const logout = (data) => { dispatch({ type: 'LOGOUT', payload: data }) }
    const getAvenger = (data) => { dispatch({ type: 'GET_AVENGER', payload: data }) }
    const updateAvenger = (data) => { dispatch({ type: 'UPDATE_AVENGER', payload: data }) }
    const clearAvenger = (data) => { dispatch({ type: 'CLEAR_AVENGER', payload: data }) }
    const selectAvenger = (data) => { dispatch({ type: 'SELECT_AVENGER', payload: data }) }
    const getAvengers = (data) => { dispatch({ type: 'GET_AVENGERS', payload: data }) }
    const createAvenger = (data) => { dispatch({ type: 'CREATE_AVENGER', payload: data }) }
    const deleteAvengers = (data) => { dispatch({ type: 'DELETE_AVENGER', payload: data }) }

    return <AppContext.Provider value={{
        ...state,
        login,
        logout,
        selectAvenger,
        updateAvenger,
        clearAvenger,
        getAvenger,
        getAvengers,
        createAvenger,
        deleteAvengers
    }}>{children}</AppContext.Provider>
}

export default AppContextProvider;

Let’s create the reducers for this context in the AppReducer.js file, which will consist of a reducer case for each operation in the application context.

const updateItem = (avengers, data) => {
    let avenger = avengers.find((avenger) => avenger.id === data.id);
    let updatedAvenger = { ...avenger, ...data };
    let avengerIndex = avengers.findIndex((avenger) => avenger.id === data.id);
    return [
        ...avengers.slice(0, avengerIndex),
        updatedAvenger,
        ...avengers.slice(++avengerIndex),
    ];
}

const deleteItem = (avengers, id) => {
    return avengers.filter((avenger) => avenger.data.id !== id)
}

const AppReducer = (state, action) => {
    switch (action.type) {
        case 'SELECT_AVENGER':
            return {
                ...state,
                isEditing: true,
                avenger: action.payload
            }
        case 'CLEAR_AVENGER':
            return {
                ...state,
                isEditing: false,
                avenger: { name: '', description: '', power: '' }
            }
        case 'UPDATE_AVENGER':
            return {
                ...state,
                isEditing: false,
                avengers: updateItem(state.avengers, action.payload)
            }
        case 'GET_AVENGER':
            return {
                ...state,
                avenger: action.payload.data
            }
        case 'GET_AVENGERS':
            return {
                ...state,
                avengers: Array.isArray(action.payload && action.payload.data) ? action.payload.data : [{ ...action.payload }]
            };
        case 'CREATE_AVENGER':
            return {
                ...state,
                avengers: [{ data: action.payload }, ...state.avengers]
            };
        case 'DELETE_AVENGER':
            return {
                ...state,
                avengers: deleteItem(state.avengers, action.payload)
            };
        case 'LOGIN':
            return {
                ...state,
                user: action.payload,
                isLoggedIn: true
            };
        case 'LOGOUT':
            return {
                ...state,
                user: null,
                isLoggedIn: false
            };
        default:
            return state
    }
}

export default AppReducer;

Since the application context is now available, we can fetch data from the Netlify functions that we have created and persist them in our application context. So let’s see how to call one of these functions.

const { avengers, getAvengers } = useContext(AppContext);

const GetAvengers = async () => {
  let { data } = await axios.get('/api/GetAvengers');
  getAvengers(data)
}

To get the data into the application context, we import the getAvengers function from our application context and pass it the data fetched by the get call. This function will call the reducer function, which will keep the data in the context. To access the context, we can use the attribute called avengers. Next, let’s see how we could save data on the avengers collection.

const { createAvenger } = useContext(AppContext);

const CreateAvenger = async (e) => {
  e.preventDefault();
  let new_avenger = { id: uuid(), ...newAvenger }
  await axios.post('/api/AddAvenger', new_avenger);
  clear();
  createAvenger(new_avenger)
}

The above newAvenger object is the state object which will keep the form data. Notice that we pass a new id of type uuid to each of our documents. Once the data is saved in Fauna, we use the createAvenger function in the application context to save the data in our context as well. Similarly, we can invoke all the Netlify functions for CRUD operations like this via Axios.

How to deploy the application to Netlify

Now that we have a working application, we can deploy this app to Netlify. There are several ways that we can deploy this application:

  1. Connecting and deploying the application through GitHub
  2. Deploying the application through the Netlify CLI

Using the CLI will prompt you to enter specific details and selections, and the CLI will handle the rest. But in this example, we will be deploying the application through GitHub. So first, let’s log in to the Netlify dashboard and click on the New Site from Git button. Next, it will prompt you to select the repo you need to deploy and the configuration for your site, like the build command, build folder, etc.

How to authenticate and authorize functions by Netlify Identity

Netlify Identity provides a full suite of authentication functionality for your application, which will help us manage authenticated users throughout it. Netlify Identity can be integrated easily into the application without any other third-party services or libraries. To enable Netlify Identity, we need to log in to our Netlify dashboard and, under our deployed site, move to the Identity tab and enable the identity feature.

Enabling Identity will provide a link to your Netlify Identity instance. You will have to copy that URL and add it to the .env file of your application as REACT_APP_NETLIFY. Next, we need to add Netlify Identity to our React application through the react-netlify-identity-widget package and the Netlify functions. But, first, let’s add the REACT_APP_NETLIFY property for the Identity Context Provider component in the index.js file.

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import "react-netlify-identity-widget/styles.css"
import 'bootstrap/dist/css/bootstrap.css';
import App from './App';
import { IdentityContextProvider } from "react-netlify-identity-widget"

const url = process.env.REACT_APP_NETLIFY;

ReactDOM.render(
  <IdentityContextProvider url={url}>
    <App />
  </IdentityContextProvider>,
  document.getElementById('root')
);

Next is the navigation bar that we use in this application. Since this component sits on top of all the other components, it is the ideal place to handle authentication. The react-netlify-identity-widget adds another component that handles user sign-in and sign-up.

Next, let’s use the Identity in our Netlify functions. Identity will introduce some minor modifications to our functions, like the below function GetAvenger.

const { responseObj } = require('./util/helper');
const { q, clientQuery } = require('./util/connection');

exports.handler = async (event, context) => {
    if (context.clientContext.user) {
        const {
            queryStringParameters: { id },
        } = event;
        try {
            const avenger = await clientQuery.query(
                q.Get(
                    q.Match(q.Index('avenger_by_id'), id)
                )
            );
            return responseObj(200, avenger)
        } catch (error) {
            console.log(error)
            return responseObj(500, error);
        }
    } else {
        return responseObj(401, 'Unauthorized');
    }
};

The context of each request will contain a property called clientContext, which holds the authenticated user’s details. In the above example, we use a simple if condition to check the user context.

To get the clientContext in each of our requests, we need to pass the user token through the Authorization Headers. 

const { user } = useIdentityContext();

const GetAvenger = async (id) => {
  let { data } = await axios.get('/api/GetAvenger/?id=' + id, user && {
    headers: {
      Authorization: `Bearer ${user.token.access_token}`
    }
  });
  getAvenger(data)
}

This user token will be available in the user context once logged in to the application through the Netlify Identity widget.

As you can see, Netlify functions and Fauna look to be a promising duo for building serverless applications. You can follow this GitHub repo for the complete code and refer to this URL for the working demo.

Conclusion

In conclusion, Fauna and Netlify look to be a promising duo for building serverless applications. Netlify also provides the flexibility to extend its functionality through plugins to enhance the experience. The pay-as-you-go pricing plan makes it easy for developers to get started with Fauna. Fauna is extremely fast, and it auto-scales, so that developers will have more time to focus on their development than ever. Fauna can handle the complex database operations that you would find in relational, document, graph, and temporal databases. Fauna drivers support all the major languages, such as Android, C#, Go, Java, JavaScript, Python, Ruby, Scala, and Swift. With all these excellent features, Fauna looks to be one of the best serverless databases. For more information, go through the Fauna documentation.



Implementing a single GraphQL across multiple data sources

(This is a sponsored post.)

In this article, we will discuss how we can apply schema stitching across multiple Fauna instances. We will also discuss how to combine other GraphQL services and data sources with Fauna in one graph.

What is Schema Stitching?

Schema stitching is the process of creating a single GraphQL API from multiple underlying GraphQL APIs.

Where is it useful?

While building large-scale applications, we often break down various functionalities and business logic into microservices. It ensures the separation of concerns. However, there will be times when our client applications need to query data from multiple sources. The best practice is to expose one unified graph to all your client applications. However, this could be challenging, as we do not want to end up with a tightly coupled, monolithic GraphQL server. If you are using Fauna, each database has its own native GraphQL. Ideally, we would want to leverage Fauna’s native GraphQL as much as possible and avoid writing application layer code. However, if we are using multiple databases, our front-end application will have to connect to multiple GraphQL instances. Such an arrangement creates tight coupling. We want to avoid this in favor of one unified GraphQL server.

To remedy these problems, we can use schema stitching. Schema stitching will allow us to combine multiple GraphQL services into one unified schema. In this article, we will discuss

  1. Combining multiple Fauna instances into one GraphQL service
  2. Combining Fauna with other GraphQL APIs and data sources
  3. How to build a serverless GraphQL gateway with AWS Lambda?

Combining multiple Fauna instances into one GraphQL service

First, let’s take a look at how we can combine multiple Fauna instances into one GraphQL service. Imagine we have three Fauna database instances: Product, Inventory, and Review. Each is independent of the others. Each has its own graph (we will refer to them as subgraphs). We want to create a unified graph interface and expose it to the client applications. Clients will be able to query any combination of the downstream data sources.

We will call the unified graph interface our gateway service. Let’s go ahead and write this service.

We’ll start with a fresh node project. We will create a new folder. Then navigate inside it and initiate a new node app with the following commands.

mkdir my-gateway
cd my-gateway
npm init --yes

Next, we will create a simple express GraphQL server. So let’s go ahead and install the express, express-graphql, and graphql packages with the following command.

npm i express express-graphql graphql --save

Creating the gateway server

We will create a file called gateway.js. This is our main entry point to the application. We will start by creating a very simple GraphQL server.

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
    hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP((req) => ({
    schema,
    rootValue,
    graphiql: true,
  })),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

In the code above we created a bare-bones express-graphql server with a sample query and a resolver. Let’s test our app by running the following command.

node gateway.js

Navigate to http://localhost:4000/graphql and you will be able to interact with the GraphQL playground.

Creating Fauna instances

Next, we will create three Fauna databases. Each of them will act as a GraphQL service. Let’s head over to fauna.com and create our databases. I will name them Product, Inventory, and Review.

Once the databases are created we will generate admin keys for them. These keys are required to connect to our GraphQL APIs.

Let’s create three distinct GraphQL schemas and upload them to the respective databases. Here’s how our schemas will look.

# Schema for Inventory database
type Inventory {
  name: String
  description: String
  sku: Float
  availableLocation: [String]
}

# Schema for Product database
type Product {
  name: String
  description: String
  price: Float
}

# Schema for Review database
type Review {
  email: String
  comment: String
  rating: Float
}

Head over to the respective databases, select GraphQL from the sidebar, and import the schemas for each database.

Now we have three GraphQL services running on Fauna. We can go ahead and interact with these services through the GraphQL playground inside Fauna. Feel free to enter some dummy data if you are following along. It will come in handy later while querying multiple data sources.

Setting up the gateway service

Next, we will combine these into one graph with schema stitching. To do so, we return to our gateway server in gateway.js. We will be using a couple of libraries from GraphQL Tools to stitch the graphs.

Let’s go ahead and install these dependencies on our gateway server.

npm i @graphql-tools/schema @graphql-tools/stitch @graphql-tools/wrap cross-fetch --save 

In our gateway, we are going to create a new generic function called makeRemoteExecutor. This function is a factory function that returns another function. The returned asynchronous function will make the GraphQL query API call.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  }
}

// Construct a schema, using GraphQL schema language
const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

// The root provides a resolver function for each API endpoint
const rootValue = {
    hello: () => 'Hello world!',
};

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    return {
      schema,
      rootValue,
      graphiql: true,
    }
  }),
);

app.listen(4000);
console.log('Running a GraphQL API server at http://localhost:4000/graphql');

As you can see above, makeRemoteExecutor takes two arguments. The url argument specifies the remote GraphQL URL, and the token argument specifies the authorization token.

We will create another function called makeGatewaySchema. In this function, we will make the proxy calls to the remote GraphQL APIs using the previously created makeRemoteExecutor function.

// gateway.js

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { introspectSchema } = require('@graphql-tools/wrap');
const { stitchSchemas } = require('@graphql-tools/stitch');
const { fetch } = require('cross-fetch');
const { print } = require('graphql');

function makeRemoteExecutor(url, token) {
  return async ({ document, variables }) => {
    const query = print(document);
    const fetchResult = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token },
      body: JSON.stringify({ query, variables }),
    });
    return fetchResult.json();
  }
}

async function makeGatewaySchema() {

    const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQZPUejACQ2xuvfi50APAJ397hlGrTjhdXVta');
    const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');
    const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp');

    return stitchSchemas({
        subschemas: [
          {
            schema: await introspectSchema(reviewExecutor),
            executor: reviewExecutor,
          },
          {
            schema: await introspectSchema(productExecutor),
            executor: productExecutor
          },
          {
            schema: await introspectSchema(inventoryExecutor),
            executor: inventoryExecutor
          }
        ],
        typeDefs: 'type Query { heartbeat: String! }',
        resolvers: {
          Query: {
            heartbeat: () => 'OK'
          }
        }
    });
}

// ...

We are using the makeRemoteExecutor function to make our remote GraphQL executors. We have three remote executors here, one each pointing to the Product, Inventory, and Review services. As this is a demo application, I have hardcoded the admin API key from Fauna directly in the code. Avoid doing this in a real application. These secrets should not be exposed in code at any time. Please use environment variables or secret managers to pull these values at runtime.

As you can see from the code above, we are returning the output of the stitchSchemas function from @graphql-tools. The function has an argument property called subschemas. In this property, we can pass in an array of all the subgraphs we want to fetch and combine. We are also using a function called introspectSchema from graphql-tools. This function is responsible for transforming the request from the gateway and making the proxy API request to the downstream services.

You can learn more about these functions on the graphql-tools documentation site.

Finally, we need to call the makeGatewaySchema. We can remove the previously hardcoded schema from our code and replace it with the stitched schema.

// gateway.js

// ...

const app = express();

app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return {
      schema,
      context: { authHeader: req.headers.authorization },
      graphiql: true,
    }
  }),
);

// ...

When we restart our server and go back to localhost we will see that queries and mutations from all Fauna instances are available in our GraphQL playground.

Let’s write a simple query that will fetch data from all Fauna instances simultaneously.
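For example, something like this sketch (the exact query names depend on what Fauna generated from your schemas, and the IDs are placeholders):

query {
  findProductByID(id: "some-product-id") {
    name
    price
  }
  findReviewByID(id: "some-review-id") {
    comment
    rating
  }
  heartbeat
}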

Stitch third party GraphQL APIs

We can stitch third-party GraphQL APIs into our gateway as well. For this demo, we are going to stitch the SpaceX open GraphQL API with our services.

The process is the same as above. We create a new executor and add it to our sub-graph array.

// ...

async function makeGatewaySchema() {

  const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdRZVpACRMEEM1GKKYQxH2Qa4TzLKusTW2gN');
  const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdSdXiACRGmgJgAEgmF_ZfO7iobiXGVP2NzT');
  const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdR0kYACRWKJJUUwWIYoZuD6cJDTvXI0_Y70');

  const spacexExecutor = await makeRemoteExecutor('https://api.spacex.land/graphql/')

  return stitchSchemas({
    subschemas: [
      {
        schema: await introspectSchema(reviewExecutor),
        executor: reviewExecutor,
      },
      {
        schema: await introspectSchema(productExecutor),
        executor: productExecutor
      },
      {
        schema: await introspectSchema(inventoryExecutor),
        executor: inventoryExecutor
      },
      {
        schema: await introspectSchema(spacexExecutor),
        executor: spacexExecutor
      }
    ],
    typeDefs: 'type Query { heartbeat: String! }',
    resolvers: {
      Query: {
        heartbeat: () => 'OK'
      }
    }
  });
}

// ...

Deploying the gateway

To make this a true serverless solution, we should deploy our gateway to a serverless function. For this demo, I am going to deploy the gateway to an AWS Lambda function. Netlify Functions and Vercel are two alternatives to AWS Lambda.

I am going to use the serverless framework to deploy the code to AWS. Let’s install the dependencies for it.

npm i -g serverless # if you don't have the serverless framework installed already
npm i serverless-http body-parser --save

Next, we need to make a configuration file called serverless.yaml.

# serverless.yaml

service: my-graphql-gateway

provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1

functions:
  app:
    handler: gateway.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

Inside serverless.yaml we define information such as the cloud provider, runtime, and the handler for our Lambda function. Feel free to take a look at the official documentation for the Serverless Framework for more in-depth information.

We will need to make some minor changes to our code before we can deploy it to AWS.

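Here is a minimal sketch of those changes, assuming the Express setup from earlier; the exported handler matches the gateway.handler entry in serverless.yaml:

// gateway.js

const express = require('express');
const bodyParser = require('body-parser');
const serverless = require('serverless-http');
const { graphqlHTTP } = require('express-graphql');

// ... makeRemoteExecutor and makeGatewaySchema as before ...

const app = express();

// Parse incoming JSON bodies before they reach the GraphQL middleware
app.use(bodyParser.json());

app.use(
  '/graphql',
  graphqlHTTP(async (req) => ({
    schema: await makeGatewaySchema(),
    context: { authHeader: req.headers.authorization },
    graphiql: true,
  })),
);

// Wrap the Express app so Lambda can invoke it; this is what the
// "gateway.handler" reference in serverless.yaml points to
module.exports.handler = serverless(app);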

Notice the code above. We add the body-parser library to parse JSON request bodies, and the serverless-http library. Wrapping the Express app instance with serverless-http takes care of all the underlying Lambda configuration.

We can run the following command to deploy this to AWS Lambda.

serverless deploy

This will take a minute or two to deploy. Once the deployment is complete, we will see the API URL in our terminal.

Make sure you append /graphql to the end of the generated URL (e.g., https://gy06ffhe00.execute-api.us-east-1.amazonaws.com/dev/graphql).
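A quick way to verify the deployment is to query the heartbeat field we stitched into the gateway earlier (substituting your own generated URL):

curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ heartbeat }"}' \
  https://gy06ffhe00.execute-api.us-east-1.amazonaws.com/dev/graphql
# → {"data":{"heartbeat":"OK"}}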

There you have it. We have achieved complete serverless nirvana 😉. We are now running three Fauna instances, independent of each other, stitched together with a GraphQL gateway.

Feel free to check out the code for this article here.

Conclusion

Schema stitching is one of the most popular solutions for breaking down monoliths and achieving separation of concerns between data sources. However, there are other solutions, such as Apollo Federation, which works in much the same way. If you would like to see an article like this about Apollo Federation, please let us know in the comment section. That’s it for today, see you next time.



How to Cancel Pending API Requests to Show Correct Data

I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.

One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.

We can debounce our UI interactions, but that fundamentally does not solve the problem: outdated network requests will still resolve and update our UI with wrong data right up until the final request finishes and settles the UI into its correct state. The problem is more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.

Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.

The solution

Turns out there is a way to abort pending asynchronous DOM requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.

The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.

Mozilla Developer Network

The AbortController API is simple: it exposes an AbortSignal that we pass into our fetch() calls, like so:

const abortController = new AbortController()
const signal = abortController.signal

fetch(url, { signal })

From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
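One detail worth knowing: an aborted fetch() rejects with a DOMException named AbortError, so it’s worth catching that case separately so a deliberate cancellation isn’t reported as a real failure. A minimal sketch:

const abortController = new AbortController()

fetch(url, { signal: abortController.signal })
  .then(res => res.json())
  .then(data => {
    // Only runs if the request wasn't aborted
    console.log(data)
  })
  .catch(error => {
    if (error.name === 'AbortError') {
      // The request was cancelled on purpose; safe to ignore
      return
    }
    throw error // A real network or parsing failure
  })

// Sometime later, cancel the pending request
abortController.abort()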

Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:

The code is mostly the same, with a few key distinctions:

  1. It creates a new cached variable, abortController, in a useRef in the <App /> component.
  2. For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
  3. It passes the obtained AbortSignal to the fetch() call.
  4. On each new fetch, it aborts the previously pending request, if there is one.
const App = () => {
  // Same as before, local variable and state declaration
  // ...

  // Create a new cached variable abortController in a useRef() hook
  const abortController = React.useRef()

  React.useEffect(() => {
    // If there is a pending fetch request with an associated AbortController, abort it
    if (abortController.current) {
      abortController.current.abort()
    }

    // Assign a new AbortController for the latest fetch to our useRef variable
    abortController.current = new AbortController()
    const { signal } = abortController.current

    // Same as before
    fetch(url, { signal }).then(res => {
      // Rest of our fetching logic, same as before
    })
  }, [
    // Note: the ref object itself is stable across renders,
    // so it doesn't need to be listed as a dependency
    sortByString,
    upperPrice,
    lowerPrice,
  ])
}
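An alternative sketch, not from the original demo but using the same hypothetical url and state variables, is to abort from the effect’s cleanup function. React runs the cleanup before re-running the effect and on unmount, so the previous request is always cancelled first, and you also avoid updating state on an unmounted component:

React.useEffect(() => {
  const abortController = new AbortController()

  fetch(url, { signal: abortController.signal })
    .then(res => res.json())
    .then(data => {
      // Update state with the latest data here
    })
    .catch(error => {
      if (error.name === 'AbortError') return // Aborted on purpose
      throw error
    })

  // Cleanup: abort the in-flight request before the next run (and on unmount)
  return () => abortController.abort()
}, [sortByString, upperPrice, lowerPrice])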

Conclusion

That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network requests. This way, we are sure that our UI is updated only with the latest data from our API.

