Month: September 2021


Systems for z-index

Say you have a z-index bug. Something is being covered up by something else. In my experience, a typical solution is to put position: relative on the thing so z-index works in the first place, and maybe rejigger the z-index value until the right thing is on top.

The danger here is that this sets off a little z-index war. Raising a z-index value over here fixes one bug, and then causes another by covering up some other element somewhere else when you didn’t want to. Hopefully, you can reason about the situation and fix the bugs, even if it means refactoring more z-index values than you thought you might need to.

If the “stack” of z-index values is complicated enough, you might consider an abstraction. One of the problems is that your z-index values are probably sprinkled rather randomly throughout your CSS. So, instead of simply letting that be, you can give yourself a central location for z-index values to reference later.

I’ve covered doing that as a Sass map in the past:

$zindex: (
  modal:    9000,
  overlay:  8000,
  dropdown: 7000,
  header:   6000,
  footer:   5000
);

.header {
  z-index: map-get($zindex, header);
}

Now you’ve got a central place to manage those values.
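The same central-map idea isn't limited to Sass. As a sketch (not from the article), a plain JavaScript object can serve as the single source of truth and generate CSS custom properties from it:

```javascript
// A central z-index scale as a plain object (values are assumptions).
const zIndexes = {
  modal: 9000,
  overlay: 8000,
  dropdown: 7000,
  header: 6000,
  footer: 5000,
};

// Serialize the map into a :root block of custom properties,
// e.g. `--z-modal: 9000;`, usable later as z-index: var(--z-modal).
function toCustomProperties(map) {
  const lines = Object.entries(map).map(
    ([name, value]) => `  --z-${name}: ${value};`
  );
  return `:root {\n${lines.join('\n')}\n}`;
}

console.log(toCustomProperties(zIndexes));
```

Either way, the point is the same: one place to edit, everywhere else just references a name.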

Rafi Strauss has taken that a step further with OZMap. It’s also a Sass map situation, but it’s configured like this:

$globalZIndexes: (
  a: null,
  b: null,
  c: 2000,
  d: null,
);

The idea here is that most values are auto-generated by the tool (the null values), but you can specify them if you want. That way, if you have a third-party component with a z-index value you can’t change, you plug that into the map, and then the auto-generated numbers will factor that in when you make layers on top. It also means it’s very easy to slip layers in between others.
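To make the auto-generation idea concrete, here's a toy JavaScript re-implementation of the concept (not OZMap's actual algorithm): layers are listed bottom-to-top, pinned values are kept as-is, and null values are spaced evenly between their nearest known neighbors.

```javascript
// Toy sketch of the OZMap idea (not its real algorithm).
// `layers` maps names to a pinned z-index or null, ordered bottom-to-top.
function fillZIndexes(layers, floor = 0, ceiling = 10000) {
  const names = Object.keys(layers);
  const values = names.map((n) => layers[n]);
  const result = {};
  let i = 0;
  while (i < values.length) {
    if (values[i] !== null) {
      result[names[i]] = values[i];
      i++;
      continue;
    }
    // Find the run of nulls starting at i and its bounding known values.
    let j = i;
    while (j < values.length && values[j] === null) j++;
    const lo = i === 0 ? floor : result[names[i - 1]];
    const hi = j === values.length ? ceiling : values[j];
    const step = (hi - lo) / (j - i + 1);
    for (let k = i; k < j; k++) {
      result[names[k]] = Math.round(lo + step * (k - i + 1));
    }
    i = j;
  }
  return result;
}

// The map from the article: only `c` is pinned (say, a third-party widget).
console.log(fillZIndexes({ a: null, b: null, c: 2000, d: null }));
// → { a: 667, b: 1333, c: 2000, d: 6000 }
```

Because the nulls are recomputed every time, slipping a new layer between two existing ones is just a matter of adding a key to the map.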

I think all that is clever and useful stuff — but I also think it doesn’t help with another common z-index bug: stacking contexts. It’s always the stacking context, as I’ve written. If some element is in a stacking context that is under some other stacking context, there is no z-index value possible that will bring it on top. That’s a much harder bug to fix.

Andy Blum wrote about this recently.

One of the great frustrations of front-end development is the unexpected interaction and overlapping of those same elements. Struggling to arrange elements along the z-axis, which extends perpendicularly through the computer screen towards and away from the viewer, is such a shared front-end experience that an element’s z-index can sometimes be used as a frustrate-o-meter gauging the developer’s mood.

The key to maintainable z-index values is understanding that z-index values can’t always be directly compared. They’re not an absolute measurement along an imaginary ruler extending out of the viewport; rather, they are a relative order between elements within the same stacking context.

Turns out there is a nice little debugging tool for stacking contexts in the form of a browser extension (Chrome and Firefox). Andy gets into a very tricky situation where an RTL version of a website has a rule that uses a transform to move a video on the page. That transform triggers a new stacking context, and hence a bug with an unclickable button. Gnarly.
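As a sketch of what such a tool checks for, here's a simplified predicate over a snapshot of computed styles (this covers only some common triggers, not the full list in the CSS spec; in a browser you'd feed it getComputedStyle(el) while walking up parentElement):

```javascript
// Simplified check: does this computed-style snapshot create a stacking
// context? Partial list of triggers only; a real tool checks many more.
function createsStackingContext(style) {
  if (style.position && style.position !== 'static' &&
      style.zIndex != null && style.zIndex !== 'auto') return true;
  if (style.opacity !== undefined && Number(style.opacity) < 1) return true;
  if (style.transform && style.transform !== 'none') return true;
  if (style.filter && style.filter !== 'none') return true;
  if (style.isolation === 'isolate') return true;
  if (style.mixBlendMode && style.mixBlendMode !== 'normal') return true;
  return false;
}

// The RTL bug from the article: a transform silently creates a context.
console.log(createsStackingContext({ transform: 'translateX(-10px)' })); // true
```

Walking the ancestor chain with a check like this is often enough to explain why no z-index value will ever lift an element above some sibling subtree.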

Kinda makes me think that there could be some kinda built-in DevTools way of seeing/understanding stacking contexts. I don’t know if this is the right answer, but I’ve been seeing a leaning into “badges” in DevTools more and more. These things:

In fact, Chrome 94 recently dropped a new one for container queries, so they are being used more and more.

Maybe there could be a badge for stacking contexts? Typically what happens with badges is that you click it on and it shows a special UI. For example, for flexbox or grid it will show the overlay for all the grid lines. The special UI for stacking contexts could be color/texture coded and labelled areas showing all the stacking contexts and how they are layered.


The post Systems for z-index appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks


The Self Provisioning Runtime

Big thoughts on where the industry is headed from Shawn Wang:

Advancements in two fields — programming languages and cloud infrastructure — will converge in a single paradigm: where all resources required by a program will be automatically provisioned, and optimized, by the environment that runs it.

I can’t articulate it like Shawn, but this all feels right.

I think of how front-end development has exploded over time with JavaScript being everywhere (see: “ooops, I guess we’re full-stack developers now”). Services have also exploded to help. Oh hiiii front-end developers! I see you can write a little JavaScript! Come over here and we’ll give you a complete database with GraphQL endpoints! And we’ll run your cloud functions! But that means there are a lot more people doing this who, in some sense, have no business doing it (points at self). I just have to trust that these services are going to protect me from myself as best they can.

Follow this trend line, and it will get easier and easier to run everything you need to run. Maybe I’ll write code like:

/*
  - Be a cloud function
  - Run at the edge
  - Get data from my data store I named "locations", require JWT auth
  - Return no slower than 250ms
  - I'm willing to pay $8/month for this, alert me if we're on target to exceed that
*/
exports.hello = (message) => {
  const name = message.data;
  const location = locations.get("location").where(message.id);

  return `Hello, ${name} from ${location}`;
};

That’s just some fake code, but you can see what I mean. Right by your code, you explain what infrastructure you need to have it work and it just does it. I saw a demo of cloudcompiler.run the other day and it was essentially like this. Even the conventions Netlify offers point highly in this direction, e.g. put your .js files in a functions folder, and we’ll take care of the rest. You just hit it with a local relative URL.

I’d actually bet the future is even more magical than this, guessing what you need and making it happen.




Our Learning Partner: Frontend Masters

Frontend Masters has been our learning partner for a couple of years now. I love it. If you need structured learning to up your web development skills, Frontend Masters is the place. It works so well because we don’t offer that kind of structured learning ourselves — I’d rather recommend a first-rate learning joint. CSS-Tricks is more of a place you subscribe to for our special blend of industry news, web development advice, and reference material. Frontend Masters has learning paths that they invest in heavily, taking you through a guided curriculum for particular technologies and skill levels.

I spoke with Frontend Masters founder and CEO Marc and he says:

I couldn’t be more proud of how the learning paths have shaped up. Everyone else in this industry is so focused on the latest and greatest new shiny thing, when instead our platform focuses on deeply learning the fundamentals. And that has really paid off with so many amazing, nearly evergreen courses – rather than having to play the quantity game creating a million courses we can keep focused on refreshing the core curriculum. 

I’m kind of a fundamentals guy myself, so that really resonates with me.

I also asked him what specifically is new and he sent me nearly 20 new and updated courses. He also said this specifically:

We just updated all 15 core learning paths so those are all 🔥 absolute fire. The thing that I’m most excited about is our new ones… specifically the typescript and functional JavaScript paths. Also our fullstack path updated with new courses is epic.

He also hinted at some stuff I’m excited about coming up this fall and winter, but I’ll let you be surprised by them once they come.



Comparing Methods for Appending and Inserting With JavaScript

Let’s say we want to add something to a webpage after the initial load. JavaScript gives us a variety of tools. Perhaps you’ve used some of them, like append, appendChild, insertAdjacentHTML, or innerHTML.

The difficult thing about appending and inserting things with JavaScript isn’t so much about the tools it offers, but which one to use, when to use them, and understanding how each one works.

Let’s try to clear things up.

Super quick context

It might be helpful to discuss a little background before jumping in. At the simplest level, a website is an HTML file downloaded from a server to a browser.

Your browser converts the HTML tags inside your HTML file into a bunch of objects that can be manipulated with JavaScript. These objects construct a Document Object Model (DOM) tree. This tree is a series of objects that are structured as parent-child relationships.

In DOM parlance, these objects are called nodes, or more specifically, HTML elements.

<!-- I'm the parent element -->
<div>
  <!-- I'm a child element -->
  <span>Hello</span>
</div>

In this example, the HTML span element is the child of the div element, which is the parent.

And I know that some of these terms are weird and possibly confusing. We say “node”, but other times we may say “element” or “object” instead. And, in some cases, they refer to the same thing, just depending on how specific we want to be.

For example, an “element” is a specific type of “node”, just like an apple is a specific type of fruit.

We can organize these terms from most general to most specific: Object → Node → Element → HTML Element

Understanding these DOM items is important, as we’ll interact with them to add and append things with JavaScript after an initial page load. In fact, let’s start working on that.

Setup

These append and insert methods mostly follow this pattern:

Element.append_method_choice(stuff_to_append)

Again, an element is merely an object in the DOM Tree that represents some HTML. Earlier, we had mentioned that the purpose of the DOM tree is to give us a convenient way to interact with HTML using JavaScript.

So, how do we use JavaScript to grab an HTML element?

Querying the DOM

Let’s say we have the following tiny bit of HTML:

<div id="example" class="group">
  Hello World
</div>

There are a few common ways to query the DOM:

// Query a specific selector (could be class, ID, element type, or attribute):
const my_element1 = document.querySelector('#example')

// Query an element by its ID:
const my_element2 = document.getElementById('example')

// Query elements by class, taking the first match:
const my_element3 = document.getElementsByClassName('group')[0]

In this example, all three lines query the same thing, but look for it in different ways. One looks at any of the item’s CSS selectors; one looks at the item’s ID; and one looks at the item’s class.

Note that the getElementsByClassName method returns an HTMLCollection, an array-like list. That’s because it’s capable of matching multiple elements in the DOM, and storing those matches in an indexed collection makes sure all of them are accounted for.
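Outside a browser, we can mimic that array-like shape with a plain object; Array.from converts any array-like into a true array with the full set of array methods:

```javascript
// An HTMLCollection is array-like: indexed entries plus a length property.
// A plain object with the same shape stands in for it in this sketch.
const arrayLike = { 0: 'first', 1: 'second', length: 2 };

const asArray = Array.from(arrayLike);
console.log(Array.isArray(asArray)); // true
console.log(asArray.map((s) => s.toUpperCase())); // [ 'FIRST', 'SECOND' ]
```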

What we can append and insert

// Append Something
const my_element1 = document.querySelector('#example')
my_element1.append(something)

In this example, something is a parameter that represents stuff we want to tack on to the end of (i.e. append to) the matched element.

We can’t just append any old thing to any old object. The append method only allows us to append either a node or plain text to an element in the DOM. But some other methods can append HTML to DOM elements as well.

  1. Nodes are either created with document.createElement() in JavaScript, or they are selected with one of the query methods we looked at in the last section.
  2. Plain text is, well, text. It’s plain text in that it does not carry any HTML tags or formatting with it. (e.g. Hello).
  3. HTML is also text but, unlike plain text, it does indeed get parsed as markup when it’s added to the DOM (e.g. <div>Hello</div>).

It might help to map out exactly which parameters are supported by which methods:

Method             | Node | HTML Text | Text
-------------------|------|-----------|------
append             | Yes  | No        | Yes
appendChild        | Yes  | No        | No
insertAdjacentHTML | No   | Yes       | Yes1
innerHTML2         | No   | Yes       | Yes
1 This works, but insertAdjacentText is recommended.
2 Instead of taking traditional parameters, innerHTML is used like: element.innerHTML = 'HTML String'

How to choose which method to use

Well, it really depends on what you’re looking to append, not to mention certain browser quirks to work around.

  • If you have existing HTML that gets sent to your JavaScript, it’s probably easiest to work with methods that support HTML.
  • If you’re building some new HTML in JavaScript, creating a node with heavy markup can be cumbersome, whereas HTML is less verbose.
  • If you want to attach event listeners right away, you’ll want to work with nodes because we call addEventListener on nodes, not HTML.
  • If all you need is text, any method supporting plain text parameters is fine.
  • If your HTML is potentially untrustworthy (i.e. it comes from user input, say a comment on a blog post), then you’ll want to be careful when using HTML, unless it has been sanitized (i.e. the harmful code has been removed).
  • If you need to support Internet Explorer, then using append is out of the question.
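On the sanitization point above, here's a minimal sketch of escaping untrusted text so the browser renders it as text rather than markup (a real app should reach for a vetted sanitizer such as DOMPurify rather than rolling its own):

```javascript
// Escape the characters HTML treats specially so user input is inert.
// Minimal sketch only, not a substitute for a real sanitizer.
function escapeHTML(str) {
  return str
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

console.log(escapeHTML('<img src=x onerror=alert(1)>'));
// → &lt;img src=x onerror=alert(1)&gt;
```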

Example

Let’s say we have a chat application, and we want to append a user, Dale, to a buddy list when they log in.

<!-- HTML Buddy List -->
<ul id="buddies">
  <li><a>Alex</a></li>
  <li><a>Barry</a></li>
  <li><a>Clive</a></li>
  <!-- Append next user here -->
</ul>

Here’s how we’d accomplish this using each of the methods above.

append

We need to create a node object that translates to <li><a>Dale</a></li>.

const new_buddy = document.createElement('li')
const new_link = document.createElement('a')

const buddy_name = "Dale"

new_link.append(buddy_name) // Text param
new_buddy.append(new_link) // Node param

const list = document.querySelector('#buddies')
list.append(new_buddy) // Node param

Our final append places the new user at the end of the buddy list, just before the closing </ul> tag. If we’d prefer to place the user at the front of the list, we could use the prepend method instead.

You may have noticed that we were also able to use append to fill our <a> tag with text like this:

const buddy_name = "Dale"
new_link.append(buddy_name) // Text param

This highlights the versatility of append.

And just to call it out once more, append is unsupported in Internet Explorer.

appendChild

appendChild is another JavaScript method we have for appending stuff to DOM elements. It’s a little limited in that it only works with node objects, so we’ll need some help from textContent (or innerText) for our plain text needs.

Note that appendChild, unlike append, is supported in Internet Explorer.

const new_buddy = document.createElement('li')
const new_link = document.createElement('a')

const buddy_name = "Dale"

new_link.textContent = buddy_name
new_buddy.appendChild(new_link) // Node param

const list = document.querySelector('#buddies')
list.appendChild(new_buddy) // Node param

Before moving on, let’s consider a similar example, but with heavier markup.

Let’s say the HTML we wanted to append didn’t look like <li><a>Dale</a></li>, but rather:

<li class="abc" data-tooltip="Click for Dale">   <a id="user_123" class="def" data-user="dale">     <img src="images/dale.jpg" alt="Profile Picture"/>     <span>Dale</span>   </a> </li>

Our JavaScript would look something like:

const buddy_name = "Dale"

const new_buddy = document.createElement('li')
new_buddy.className = 'abc'
new_buddy.setAttribute('data-tooltip', `Click for ${buddy_name}`)

const new_link = document.createElement('a')
new_link.id = 'user_123'
new_link.className = 'def'
new_link.setAttribute('data-user', buddy_name)

const new_profile_img = document.createElement('img')
new_profile_img.src = 'images/dale.jpg'
new_profile_img.alt = 'Profile Picture'

const new_buddy_span = document.createElement('span')
new_buddy_span.textContent = buddy_name

new_link.appendChild(new_profile_img) // Node param
new_link.appendChild(new_buddy_span) // Node param
new_buddy.appendChild(new_link) // Node param

const list = document.querySelector('#buddies')
list.appendChild(new_buddy) // Node param

There’s no need to follow every line of the above JavaScript. The point is that creating large amounts of HTML in JavaScript can become quite cumbersome, and there’s no getting around that if we use append or appendChild.

In this heavy markup scenario, it might be nice to just write our HTML as a string, rather than using a bunch of JavaScript methods…

insertAdjacentHTML

insertAdjacentHTML is like append in that it’s also capable of adding stuff to DOM elements. One difference, though, is that insertAdjacentHTML inserts that stuff at a specific position relative to the matched element.

And it just so happens to work with HTML. That means we can insert actual HTML to a DOM element, and pinpoint exactly where we want it with four different positions:

<!-- beforebegin -->
<div id="example" class="group">
  <!-- afterbegin -->
  Hello World
  <!-- beforeend -->
</div>
<!-- afterend -->

So, we can sorta replicate the same idea of “appending” our HTML by inserting it at the beforeend position of the #buddies selector:

const buddy_name = "Dale"

const new_buddy = `<li><a>${buddy_name}</a></li>`
const list = document.querySelector('#buddies')
list.insertAdjacentHTML('beforeend', new_buddy)

Remember the security concerns we mentioned earlier. We never want to insert HTML that’s been submitted by an end user, as we’d open ourselves up to cross-site scripting vulnerabilities.

innerHTML

innerHTML is another method for inserting stuff. That said, it’s not recommended for inserting, as we’ll see.

Here’s our query and the HTML we want to insert:

const buddy_name = "Dale"
const new_buddy = `<li><a>${buddy_name}</a></li>`
const list = document.querySelector('#buddies')

list.innerHTML += new_buddy

Initially, this seems to work. Our updated buddy list looks like this in the DOM:

<ul id="buddies">
  <li><a>Alex</a></li>
  <li><a>Barry</a></li>
  <li><a>Clive</a></li>
  <li><a>Dale</a></li>
</ul>

That’s what we want! But there’s a constraint with using innerHTML that prevents us from using event listeners on any elements inside of #buddies because of the nature of += in list.innerHTML += new_buddy.

You see, A += B behaves the same as A = A + B. In this case, A is our existing HTML and B is what we’re inserting into it. The problem is that this re-parses the existing HTML into a fresh copy alongside the inserted HTML, and event listeners attached to the original elements don’t carry over to the copies. That means if we want to listen for a click event on any of the <a> tags in the buddy list, we’re going to lose that ability with innerHTML.
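The reassignment behavior is the same one you see with plain strings: the old value isn't mutated in place, a brand-new one replaces it. With innerHTML, that new value then gets re-parsed into fresh DOM nodes:

```javascript
// A += B is shorthand for A = A + B: the result is a brand-new value.
let markup = '<li>Alex</li>';
const original = markup;

markup += '<li>Dale</li>';

console.log(markup);   // <li>Alex</li><li>Dale</li>
console.log(original); // <li>Alex</li> (unchanged)
```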

So, just a word of caution there.

Demo

Here’s a demo that pulls together all of the methods we’ve covered. Clicking the button of each method inserts “Dale” as an item in the buddies list.

Go ahead and open up DevTools while you’re at it and see how the new list item is added to the DOM.

Recap

Here’s a general overview of where we stand when we’re appending and inserting stuff into the DOM. Consider it a cheatsheet for when you need help figuring out which method to use.

Method             | Node | HTML Text | Text | Internet Explorer? | Event Listeners | Secure? | HTML Templating
-------------------|------|-----------|------|--------------------|-----------------|---------|----------------
append             | Yes  | No        | Yes  | No                 | Preserves       | Yes     | Medium
appendChild        | Yes  | No        | No   | Yes                | Preserves       | Yes     | Medium
insertAdjacentHTML | No   | Yes       | Yes1 | Yes                | Preserves       | Careful | Easy
innerHTML2         | No   | Yes       | Yes  | Yes                | Loses           | Careful | Easy
1 This works, but insertAdjacentText is recommended.
2 Instead of taking traditional parameters, innerHTML is used like: element.innerHTML = 'HTML String'

If I had to condense all of that into a few recommendations:

  • Using innerHTML for appending is not recommended as it removes event listeners.
  • append works well if you like the flexibility of working with node elements or plain text, and don’t need to support Internet Explorer.
  • appendChild works well if you like (or need) to work with node elements, and want full browser coverage.
  • insertAdjacentHTML is nice if you need to generate HTML, and want more specific control over where it is placed in the DOM.

Last thought and a quick plug 🙂

This post was inspired by real issues I recently ran into when building a chat application. As you’d imagine, a chat application relies on a lot of appending/inserting — people coming online, new messages, notifications, etc.

That chat application is called Bounce. It’s a peer-to-peer learning chat. Assuming you’re a JavaScript developer (among other things), you probably have something to teach! And you can earn some extra cash.

If you’re curious, here’s a link to the homepage, or my profile on Bounce. Cheers!



What is Your Page Title on a Google Search Engine Results Page?

Whatever Google wants it to be. I always thought it was exactly what your <title> element was. Perhaps in lieu of that, what the first <h1> on the page is. But recently I noticed some pages on this site that were showing a title on SERPs that was a string that appeared nowhere at all in the source of the page.

When I first noticed it, I tweeted my basic findings…

The thread has more info than what unfurled here.

This is a known thing. Apparently, they’ve been doing this for a long time (~10 years), but it’s the first I’ve noticed it. And it’s undergone a recent change:

The article is pretty clear about things:

[…] we think our new system is producing titles that work better for documents overall, to describe what they are about, regardless of the particular query.

Also, while we’ve gone beyond HTML text to create titles for over a decade, our new system is making even more use of such text. In particular, we are making use of text that humans can visually see when they arrive at a web page. We consider the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.

Other text contained in the page might be considered, as might be text within links that point at pages.

The change is in response to people having sucky <title> text. Like it’s too long, too jacked up with SEO garbage (irony!), or are just plain non-descriptive.

I’m not entirely sure how much I care just yet.

Part of me thinks, well, google.com isn’t the web. As important as it is, it’s a proprietary product by a private company and they can do whatever they want within the bounds of the law. In this case, it’s clear the intention is to help: to provide titles that are more clear than what the original page has.

Part of me thinks, well, that sucks that, as site owners, we have no control. If Google wanted to change the SERP title for every result on this website to “CSS-Tricks is a stupid website, never visit it,” they could, and that’s that.

Part of me connects this kind of work to AMP. AMP was basically saying, “Y’all are absolutely horrible at building performant mobile websites, so we’re going to build a strict set of rules such that you can’t screw it up anymore, and dangling a carrot of better SERP placement if you buy into the rules.” This way of creating page titles is basically saying, “Y’all are absolutely horrible at providing good titles, so we’re going to title your pages for you so you can’t screw it up anymore and we can improve our SERPs.”

Except with AMP, you had to put in the development hours to make it happen. It was opt-in, even if the carrot was un-ignorable by content companies. This doesn’t carry the risk of burning development hours, but it’s also not something we get to opt into.



Learn How to Build True Edge Apps With Cloudflare Workers and Fauna

(This is a sponsored post.)

There is a lot of buzz around apps running on the edge instead of on a centralized server in web development. Running your app on the edge allows your code to be closer to your users, which makes it faster. However, there is a spectrum of edge apps. Many apps only have some parts, usually static content, on the edge. But you can move even more to the edge, like computing and databases. This article describes how to do that.

Intro to the edge

First, let’s look at what the edge really is.

The “edge” refers to server locations designed to be close to users instead of being at one centralized place. Edge servers are smaller servers put on the edge. Traditionally, servers have been centralized, with only one location available, which made websites slower and less reliable: the server is often far away from the user. Say you have two users, one in Singapore and one in the U.S., and your server is in the U.S. For the customer in the U.S., the server is close, but for the person in Singapore, the signal has to travel across the entire Pacific. This adds latency, which makes your app slower and less responsive for that user. Placing your servers on the edge mitigates this latency problem.
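To put rough numbers on that example (assumed figures, back-of-envelope only: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum):

```javascript
// Estimate round-trip latency for a Singapore to U.S. hop (assumed distance).
const distanceKm = 13700;     // rough trans-Pacific fiber path length
const fiberKmPerSec = 200000; // ~2/3 the speed of light, in glass
const oneWayMs = (distanceKm / fiberKmPerSec) * 1000;
const roundTripMs = 2 * oneWayMs;

console.log(Math.round(roundTripMs)); // ~137 ms before any server processing
```

That floor exists before a single byte of application work happens, which is why moving the server closer is the only way to shrink it.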

Normal server architecture

With an edge server design, your servers have lighter-weight versions in multiple different areas, so a user in Singapore would be able to access a server in Singapore, and a user in the U.S. would also be able to access a close server. Multiple servers on the edge also make an app more reliable because if the server in Singapore went offline, the user in Singapore would still be able to access the U.S. server.

Edge architecture

Many apps have more than 100 different server locations on the edge. However, multiple server locations can add significant cost. To make it cheaper and easier for developers to harness the power of the edge, many services offer the ability to easily deploy to the edge without having to spend a lot of money or time managing multiple servers. There are many different types of these. The most basic and widely used is an edge Content Delivery Network (CDN), which allows static content to be served from the edge. However, CDNs cannot do anything more complicated than serving content. If you need databases or custom code on the edge, you will need a more advanced service.

Introducing edge functions and edge databases

Luckily, there are solutions to this. The first, for running code on the edge, is edge functions. These are small pieces of code, automatically provisioned when needed, that are designed to respond to HTTP requests. They are also commonly called serverless functions, though not all serverless functions run on the edge. Some edge function providers are Lambda@Edge, Cloudflare Workers, and Deno Deploy. In this article, we will focus on Cloudflare Workers.

We can also take databases to the edge to ensure that our serverless functions run fast even when querying a database. There are solutions for databases too, the easiest of which is Fauna. With traditional databases, it is very hard or almost impossible to scale to multiple regions: you have to manage different servers and how database updates are replicated between them. Fauna, however, abstracts all of that away, allowing you to use a cross-region database with the click of a button. It also provides an easy-to-use GraphQL interface and its own query language if you need more. By using Cloudflare Workers and Fauna, we can build a true edge app where everything runs on the edge.

Using Cloudflare Workers and Fauna to build a URL shortener

Setting up Cloudflare Workers and the code

URL Shorteners need to be fast, which makes Cloudflare Workers and Fauna perfect for this. To get started, clone the repository at github.com/AsyncBanana/url-shortener and set your directory to the folder generated.

git clone https://github.com/AsyncBanana/url-shortener.git
cd url-shortener

Then, install wrangler, the CLI needed for Cloudflare Workers. After that, install all npm dependencies.

npm install -g @cloudflare/wrangler
npm install

Then, sign up for Cloudflare Workers at https://dash.cloudflare.com/sign-up/workers and run wrangler login. Finally, to finish the Cloudflare Workers setup, run wrangler whoami, take the account id from the output, and put it inside wrangler.toml in the URL shortener’s directory.

Setting up Fauna

Good job, now we need to set up Fauna, which will provide the edge database for our URL shortener.

First, register for a Fauna account. Once you have finished that, create a new database by clicking “create database” on the dashboard. Enter URL-Shortener for the name, click classic for the region, and uncheck use demo data.

This is what it should look like

Once you create the database, click Collections on the dashboard sidebar and click “create new collection.” Name the collection URLs and click save.

Next, click the Security tab on the sidebar and click “New key.” Next, click Save on the modal and copy the resulting API key. You can also name the key, but it is not required. Finally, copy the key into the file named .env in the code under FAUNA_KEY.

Black code editor with code in it.
This is what the .env file should look like, except with API_KEY_HERE replaced with your key

Good job! Now we can start coding.

Create the URL shortener

There are two main folders in the code, public and src. The public folder is where all of the user-facing files are stored. src is the folder where the server code is. You can look through and edit the HTML, CSS, and client-side JavaScript if you want, but we will be focusing on the server-side code right now. If you look in src, you should see a file called urlManager.js. This is where our URL Shortening code will go.

This is the URL manager

First, we need to write the code that creates shortened URLs. In the function createUrl, create a database query by running FaunaClient.query(). Now, we will use Fauna Query Language (FQL) to structure the query. Fauna Query Language is structured using functions, which are all available under q in this case. When you execute a query, you put all of the functions as arguments in FaunaClient.query(). Inside FaunaClient.query(), add:

q.Create(q.Collection("urls"), {
  data: {
    url: url
  }
})

This creates a new document in the collection urls and puts in an object containing the URL to redirect to. Now, we need to get the id of the document so we can return it as a redirection point. To get the document id in the Fauna query, put q.Create in the second argument of q.Select, with the first argument being ["ref", "id"]. This will get the id of the new document. Then, return the value returned by awaiting FaunaClient.query(). The function should now look like this:

return await FaunaClient.query(
  q.Select(
    ["ref", "id"],
    q.Create(q.Collection("urls"), {
      data: {
        url: url,
      },
    })
  )
);

Now, if you run wrangler dev and go to localhost:8787, you should see the URL shortener page. You can enter in a URL and click submit, and you should see another URL generated. However, if you go to that URL it will not do anything. Now we need to add the second part of this, the URL redirect.

Look back in urlManager.js. You should see a function called processUrl. In that function, put:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));

This executes a Fauna query that gets the document in the urls collection with the specified id, which lets us look up the redirect URL for the id in the request path. Then, return res.data.url.url:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));
return res.data.url.url;

Now you should be all set! Just run wrangler publish, go to your workers.dev domain, and try it out!

Conclusion

You now have a URL shortener that runs entirely on the edge! If you want to add more features, or to learn more about Fauna or Cloudflare Workers, check out the links below. I hope you learned something from this, and thank you for reading.

Next steps

  • Further improve the speed of your URL shortener by adding caching
  • Add analytics
  • Read more about Fauna

  • Read more about Cloudflare Workers


The post Learn How to Build True Edge Apps With Cloudflare Workers and Fauna appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.


Working With GraphQL Caching

If you’ve recently started working with GraphQL, or reviewed its pros and cons, you’ve no doubt heard things like “GraphQL doesn’t support caching” or “GraphQL doesn’t care about caching.” And for most, that is a big deal.

The official GraphQL documentation refers to caching techniques, so clearly the folks behind it do care about caching and its performance benefits.

The perception that GraphQL is at odds with caching is something I want to address in this post. I want to walk through different caching techniques and how to leverage cache for GraphQL queries.

Getting GraphQL to cache automatically

Consider the following query to fetch a post and the post author:

query getPost {
  post(slug: "working-with-graphql-caching") {
    id
    title
    author {
      id
      name
      avatar
    }
  }
}

The one “magic trick” that makes automatic GraphQL caching work is the __typename meta field that all GraphQL APIs expose.

As the name suggests, __typename returns the name of the object's type. This field can even be added manually to existing queries, and most of the time a GraphQL client or CDN will do it for you. urql is one such GraphQL client. The server might receive a query like this:

query getPost {
  post(slug: "working-with-graphql-caching") {
    __typename
    id
    title
    author {
      __typename
      id
      name
    }
  }
}

The response with __typename might look a little something like this:

{
  data: {
    __typename: "Post",
    id: 5,
    title: "Working with GraphQL Caching",
    author: {
      __typename: "User",
      id: 1,
      name: "Jamie Barton"
    }
  }
}

The __typename is a critical piece of the GraphQL caching puzzle because we can now cache this result, and know it contains the Post ID 5 and User ID 1.
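That __typename-plus-id pairing is enough to derive cache keys mechanically from any response. Here is a minimal sketch of the idea (the function name and key format are mine, not from any particular client):

```javascript
// Walk a GraphQL response and collect "Typename:id" keys for every
// object that carries both __typename and id.
function collectEntityKeys(value, keys = new Set()) {
  if (Array.isArray(value)) {
    value.forEach((item) => collectEntityKeys(item, keys));
  } else if (value && typeof value === "object") {
    if (value.__typename && value.id != null) {
      keys.add(`${value.__typename}:${value.id}`);
    }
    Object.values(value).forEach((v) => collectEntityKeys(v, keys));
  }
  return keys;
}

const response = {
  data: {
    __typename: "Post",
    id: 5,
    title: "Working with GraphQL Caching",
    author: { __typename: "User", id: 1, name: "Jamie Barton" },
  },
};

console.log([...collectEntityKeys(response)]); // → ["Post:5", "User:1"]
```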

Then there are libraries like Apollo and Relay, that also have some level of built-in caching we can use for automatic caching. Since they already know what’s in the cache, they can defer to the cache instead of remote APIs to fetch what the client asks for in a query.

Automatically invalidate the cache when there are changes

Imagine the post author edits the post’s title with the editPost mutation:

mutation {
  editPost(input: { id: 5, title: "Working with GraphQL Caching" }) {
    id
    title
  }
}

Since the GraphQL client automatically adds the __typename field, the result of this mutation immediately tells the cache that Post ID 5 has changed, and any cached query result containing that post needs to be invalidated:

{
  data: {
    __typename: "Post",
    id: 5,
    title: "Working with GraphQL Caching"
  }
}

The next time a user sends the same query, the query fetches the new data from the origin rather than serving the stale result from the cache. Magic!
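That invalidation step can be sketched as follows, assuming each cached query result remembers which entity keys it contained (all names here are illustrative, not any real client's API):

```javascript
// A toy document cache: query name → { result, keys }, where keys are
// the "Typename:id" pairs found in the cached result.
const cache = new Map();

function cacheResult(query, result, keys) {
  cache.set(query, { result, keys: new Set(keys) });
}

// When a mutation result comes back, drop every cached query whose
// result contained any of the mutated entities.
function invalidate(mutatedKeys) {
  for (const [query, entry] of cache) {
    if (mutatedKeys.some((key) => entry.keys.has(key))) {
      cache.delete(query);
    }
  }
}

cacheResult("getPost", { title: "Working with GraphQL Caching" }, ["Post:5", "User:1"]);
cacheResult("listUsers", { users: [] }, ["User:1"]);

// editPost returned __typename "Post" with id 5, so invalidate "Post:5".
invalidate(["Post:5"]);

console.log(cache.has("getPost"));   // → false (it contained Post:5)
console.log(cache.has("listUsers")); // → true (untouched)
```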

Normalized GraphQL caching

Many GraphQL clients won’t cache entire query results.

Instead, they normalize the cached data into two data structures: one that associates each object with its data (e.g. Post #5: { … }, User #1: { … }), and one that associates each query with the objects it contains (e.g. getPost: { Post #5, User #1 }).
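A rough sketch of those two structures, using the getPost response from earlier (heavily simplified; real clients like urql and Apollo handle far more edge cases):

```javascript
// Normalize a response into one map of entities by key, plus one map
// recording which entities each query touched.
const entities = {};   // e.g. "Post:5" → { ...fields }
const queryIndex = {}; // e.g. "getPost" → ["Post:5", "User:1"]

function normalize(queryName, data) {
  const touched = [];
  (function walk(value) {
    if (Array.isArray(value)) return value.forEach(walk);
    if (!value || typeof value !== "object") return;
    if (value.__typename && value.id != null) {
      const key = `${value.__typename}:${value.id}`;
      entities[key] = { ...(entities[key] || {}), ...value };
      touched.push(key);
    }
    Object.values(value).forEach(walk);
  })(data);
  queryIndex[queryName] = touched;
}

normalize("getPost", {
  __typename: "Post",
  id: 5,
  title: "Working with GraphQL Caching",
  author: { __typename: "User", id: 1, name: "Jamie Barton" },
});

console.log(queryIndex.getPost); // → ["Post:5", "User:1"]
```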

See urql’s documentation on normalized caching or Apollo’s “Demystifying Cache Normalization” for specific examples and use cases.

Caching edge cases with GraphQL

The one major edge case that GraphQL caches are unable to handle automatically is adding items to a list. So, if a createPost mutation passes through the cache, it doesn’t know which specific list to add that item to.

The easiest “fix” for this is to query the parent type in the mutation if it exists. For example, in the query below, we query the community relation on post:

query getPost {
  post(slug: "working-with-graphql-caching") {
    id
    title
    author {
      id
      name
      avatar
    }

    # Also query the community of the post
    community {
      id
      name
    }
  }
}

Then we can also query that community from the createPost mutation, and invalidate any cached query results that contain that community:

mutation createPost {
  createPost(input: { ... }) {
    id
    title

    # Also query and thus invalidate the community of the post
    community {
      id
      name
    }
  }
}

While it’s imperfect, the typed schema and __typename meta field are the keys that make GraphQL APIs ideal for caching.

You might be thinking by now that all of this is a hack and that GraphQL still doesn’t support traditional caching. And you wouldn’t be wrong. Since GraphQL operates via a POST request, you’ll need to offload to the application client and use the “tricks” above to take advantage of modern browser caching with GraphQL.

That said, this sort of thing isn’t always possible to do, nor does it make a lot of sense for managing cache on the client. GraphQL clients make you manually update the cache for even trickier cases, but a service like GraphCDN provides a “server-like” caching experience, which also exposes a manual purging API that lets you do things like this for greater cache control:

# Purge all occurrences of a specific object
mutation {
  purgeUser(id: [5])
}

# Purge by query name
mutation {
  _purgeQuery(queries: [listUsers, listPosts])
}

# Purge all occurrences of a type
mutation {
  purgeUser
}

Now, no matter where you consume your GraphCDN endpoint, there’s no longer a need to re-implement cache strategies in all of your client-side logic on mobile, web, etc. Edge caching makes your API feel super fast, and reduces load by sharing the cache amongst your users, and keeping it away from each of their clients.

Having recently used GraphCDN on a project, I found it takes care of configuring a cache on the client or server for me, letting me get on with the project itself. For instance, I can swap out my endpoint with GraphCDN and get query complexity analysis (which is coming soon), analytics, and more for free.


So, does GraphQL care about caching? It most certainly does! Not only does it provide some automatic caching methods baked right into it, but a number of GraphQL libraries offer additional ways to do it and even manage it.

Hopefully this article has given you some insight into the GraphQL caching story, and how to go about implementing it on the client, as well as taking advantage of CDNs to do it all for you. My goal here isn’t to sell you on using GraphQL on all your projects or anything, but if you are choosing between query languages and caching is a big concern, know that GraphQL is more than up for the task.



Art With Tea Bags By Ruby Silvious


Container Units Should Be Pretty Handy

Container queries are going to solve this long-standing issue in web design where we want to make design choices based on the size of an element (the container) rather than the size of the entire page. So, if a container is 600px wide, perhaps it has a row-like design, but any narrower than that it has a column-like design, and we’ll have that kind of control. That’s much different than transitioning between layouts based on screen size.

We can already size some things based on the size of an element, thanks to the % unit. For example, all these containers are 50% as wide as their parent container.

The % here is 1-to-1 with the property in use, so width is a % of width. Likewise, I could use % for font-size, but it will be a % of the parent container’s font-size. There is nothing that lets me cross properties and set the font-size as a % of a container’s width.
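Concretely (a small illustration; the class name is made up):

```css
.child {
  width: 50%;     /* 50% of the parent's width */
  font-size: 50%; /* 50% of the parent's font-size, not its width */
}
```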

That is, unless we get container units! Here’s the table of units per the draft spec:

unit relative to
qw 1% of a query container’s width
qh 1% of a query container’s height
qi 1% of a query container’s inline size
qb 1% of a query container’s block size
qmin The smaller value of qi or qb
qmax The larger value of qi or qb

With these, I could easily set the font-size to a percentage of the parent container’s width. Or line-height! Or gap! Or margin! Or whatever!
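For instance, using the draft-spec unit names above (a sketch; the selector is made up, and both the unit names and the way you establish a query container may change before this ships):

```css
/* Make the card a query container so its descendants can use
   container-relative units. */
.card {
  container-type: inline-size;
}

/* Type and spacing now scale with the card's width, not the viewport. */
.card h2 {
  font-size: max(1.25rem, 4qw); /* 4% of the query container's width */
  margin-bottom: 2qw;
}
```

Because qw resolves against the nearest query container rather than the viewport, the same component scales sensibly wherever it's dropped.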

Miriam notes that we can actually play with these units right now in Chrome Canary, as long as the container queries flag is on.

I had a quick play too. I’ll just put a video here as that’ll be easier to see in these super early days.

And some great exploratory work from Scott here as well:

Ahmad Shadeed is also all over this!

Query units can save us effort and time when dealing with things like font-size, padding, and margin within a component. Instead of manually increasing the font size, we can use query units instead.

Ahmad Shadeed, “CSS Container Query Units”

Maybe container queries and container units will drop for real at the same time. 🤷

