HONK: Cover Art by Tobias Hall for the Rolling Stones’ Latest Album

People Digging into Grid Sizing and Layout Possibilities

Jen Simmons has coined the term intrinsic design, referring to a new era in web layout where the sizing of content goes beyond fluid columns and media query breakpoints and into, I dunno, something a bit more exotic. For example, columns that are sized more by content and guidelines than percentages. And not always columns, but more like appropriate placement, however that needs to be done.

One thing is for sure, people are playing with the possibilities a lot right now. In the span of 10 days I’ve gathered these links:


See No Evil: Hidden Content and Accessibility

There is no one true way to hide something on the web. Nor should there be, because hiding is too vague. Are you hiding visually or temporarily (like a user menu), but the content should still be accessible? Are you hiding it from assistive tech on purpose? Are you showing it to assistive tech only? Are you hiding it at certain screen sizes or other scenarios? Or are you just plain hiding it from everyone all the time?

Paul Hebert digs into these scenarios. We’ve done a video on this subject as well.

Feels like many CSS properties play some role in hiding or revealing content: display, position, overflow, opacity, visibility, clip-path…
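For instance, here’s a rough sketch of the difference between hiding something from everyone and hiding it only visually (the class names are just for illustration):

/* Hidden from everyone, including assistive tech */
.hidden {
  display: none;
}

/* Hidden visually, but still announced by screen readers */
.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip-path: inset(50%);
  white-space: nowrap;
}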



Design Systems and Portfolios

In my experience working with design systems, I’ve found that I have to sacrifice my portfolio to do it well. Unlike a lot of other design work where it’s relatively easy to present Dribbble-worthy interfaces and designs, I fear that systems are quite a bit trickier than that.

You could make things beautiful, but the best work that happens on a design systems team often isn’t beautiful. In fact, a lot of the best work isn’t even visible.

For example, most days I’m pairing up with folks on my team to help them understand how our system works: from the CSS architecture, to the font stack, to the UI Kit, to how a component can be manipulated to solve a specific problem, to many things in between. I’m trying as best as I can to help other designers understand what would be hard to build and what would be easy, as well as when to change their designs based on technical or other design constraints.

Further, there’s a lot of hard and diligent work that goes into projects that have no visible impact on the system at all. Last week, I noticed a weird thing with our checkboxes. Our Checkbox React component would output HTML like this:

<div class="checkbox">   <label for="ch-1">     <input id="ch-1" type="checkbox" class="checkbox" />   </label> </div>

We needed to wrap the checkbox with a <div> for styling purposes and, from a quick glance, there’s nothing wrong with this markup. However, the <div> and the <input> both have a class of .checkbox and there were confusing styles in the CSS file that styled the <div> first and then un-did those styles to fix the <input> itself.
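The CSS looked something like this (a loose reconstruction for illustration, not the actual code):

/* Styles meant for the wrapper <div> */
.checkbox {
  margin: 10px 0;
  border: 1px solid #ccc;
}

/* ...which then had to be undone on the <input> */
input.checkbox {
  margin: 0;
  border: none;
}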

The fix for this is a pretty simple one: all we need to do is make sure that the class names are specific so that we can safely refactor any confusing CSS:

<div class="checkbox-wrapper">   <label for="ch-1">     <input id="ch-1" type="checkbox" class="checkbox" />   </label> </div>

The thing is that this work took more than a week to ship because we had to refactor a ton of checkboxes in our app to behave in the same way and make sure that they were all using the same component. These checkboxes are one of those things that are now significantly better and less confusing, but it’s difficult to make it look sexy in a portfolio. I can’t simply drop them into a big iPhone mockup and rotate it as part of a fancy portfolio post if I wanted to write about my work or show it to someone else.

Take another example: I spent an entire day making an audit of our illustrations to help our team get an understanding of how we use them in our application. I opened up Figma and took dozens of screenshots:

It’s sort of hard to take credit for this work because the heavy lifting is really moderating a discussion and helping the team plan. It’s important work! But I feel like it’s hard to show that this work is valuable and to show the effects of it in a large org. “Things are now less confusing,” isn’t exactly a great accomplishment – but it really should be. These boring, methodical changes are vital for the health of a good design system.

Also… it’s kind of weird to put “I wrote documentation” in a portfolio, as much as it is to say, “I paired with designers and engineers for three years.” It’s certainly less satisfying than a big, glossy JPEG of a cool interface you designed. And I’m not sure if this is the same everywhere, but only about 10% of the work I do is visual and worthy of showing off.

My point is that building new components like this RadioCard I designed a while back is extraordinarily rare and accounts for a tiny amount of the useful work that I do:

See the Pen
Gusto App – RadioCard Prototype
by Robin Rendle (@robinrendle)
on CodePen.

I’d love to see how you’re dealing with this problem though. How do you show off your front-end and design systems work? How do you make it visible and valuable in your organization? Let me know in the comments!


Web Standards Meet User-Land: Using CSS-in-JS to Style Custom Elements

The popularity of CSS-in-JS has mostly come from the React community, and indeed many CSS-in-JS libraries are React-specific. However, Emotion, the most popular library in terms of npm downloads, is framework agnostic.

Using the shadow DOM is common when creating custom elements, but there’s no requirement to do so. Not all use cases require that level of encapsulation. While it’s also possible to style custom elements with CSS in a regular stylesheet, we’re going to look at using Emotion.

We start with an install:

npm i emotion

Emotion offers the css function:

import {css} from 'emotion';

css is a tagged template literal function. It accepts standard CSS syntax and adds support for Sass-style nesting.

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;

  &:hover {
    background-color: purple;
  }
`

Once some styles have been defined, they need to be applied. Working with custom elements can be somewhat cumbersome. Libraries — like Stencil and LitElement — compile to web components, but offer a friendlier API than what we’d get right out of the box.

So, we’re going to define styles with Emotion and take advantage of both Stencil and LitElement to make working with web components a little easier.

Applying styles for Stencil

Stencil makes use of the bleeding-edge JavaScript decorators feature. An @Component decorator is used to provide metadata about the component. By default, Stencil won’t use shadow DOM, but I like to be explicit by setting shadow: false inside the @Component decorator:

@Component({
  tag: 'fancy-button',
  shadow: false
})

Stencil uses JSX, so the styles are applied with a curly bracket ({}) syntax:

export class Button {
  render() {
    return <div><button class={buttonStyles}><slot/></button></div>
  }
}

Here’s how a simple example component would look in Stencil:

import { css, injectGlobal } from 'emotion';
import { Component } from '@stencil/core';

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;

  &:hover {
    background-color: purple;
  }
`

@Component({
  tag: 'fancy-button',
  shadow: false
})
export class Button {
  render() {
    return <div><button class={buttonStyles}><slot/></button></div>
  }
}

Applying styles for LitElement

LitElement, on the other hand, uses shadow DOM by default. When creating a custom element with LitElement, we extend the LitElement class. LitElement has a createRenderRoot() method, which creates and attaches an open shadow root:

createRenderRoot() {
  return this.attachShadow({mode: 'open'});
}

Don’t want to make use of shadow DOM? That requires re-implementing this method inside the component class:

class Button extends LitElement {
  createRenderRoot() {
    return this;
  }
}

Inside the render function, we can reference the styles we defined using a template literal:

render() {
  return html`<button class=${buttonStyles}>hello world!</button>`
}

It’s worth noting that when using LitElement, we can only use a slot element when also using shadow DOM (Stencil does not have this problem).

Put together, we end up with:

import {LitElement, html} from 'lit-element';
import {css, injectGlobal} from 'emotion';

const buttonStyles = css`
  color: white;
  font-size: 16px;
  background-color: blue;

  &:hover {
    background-color: purple;
  }
`

class Button extends LitElement {
  createRenderRoot() {
    return this;
  }
  render() {
    return html`<button class=${buttonStyles}>hello world!</button>`
  }
}

customElements.define('fancy-button', Button);

Understanding Emotion

We don’t have to stress over naming our button — a random class name will be generated by Emotion.

We could make use of CSS nesting and attach a class only to a parent element. Alternatively, we can define styles as separate tagged template literals:

const styles = {
  heading: css`
    font-size: 24px;
  `,
  para: css`
    color: pink;
  `
}

And then apply them separately to different HTML elements (this example uses JSX):

render() {
  return <div>
    <h2 class={styles.heading}>lorem ipsum</h2>
    <p class={styles.para}>lorem ipsum</p>
  </div>
}

Styling the container

So far, we’ve styled the inner contents of the custom element. To style the container itself, we need another import from Emotion.

import {css, injectGlobal} from 'emotion';

injectGlobal injects styles into the “global scope” (like writing regular CSS in a traditional stylesheet, rather than generating a random class name). Custom elements are display: inline by default (a somewhat odd decision from the spec authors). In almost all cases, I change this default with a style applied to all instances of the component. Here’s how we can change that up for our fancy-button, making use of injectGlobal:

injectGlobal`
  fancy-button {
    display: block;
  }
`

Why not just use shadow DOM?

If a component could end up in any codebase, then shadow DOM may well be a good option. It’s ideal for third-party widgets — any CSS that’s applied to the page simply won’t break the component, thanks to the isolated nature of shadow DOM. That’s why it’s used by Twitter embeds, to take one example. However, the vast majority of us make components for a particular site or app and nowhere else. In that situation, shadow DOM can arguably add complexity with limited benefit.


The Stunning Crochet Sculptures of Caitlin McCormack


Little Things That Tickled My Brain from An Event Apart Seattle

I had so much fun at An Event Apart Seattle! There is something nice about sitting back and basking in the messages from a variety of such super smart people.

I didn’t take comprehensive notes of each talk, but I did jot down little moments that tickled my brain. I’ll post them here! Blogging is fun! Again, note that these moments weren’t necessarily the main point of the speaker’s presentation or reflective of the whole journey of the topic — they are little micro-memorable moments that stuck out to me.


Jeffrey Zeldman brought up the reading apps Instapaper (still around!) and Readability (not around… but its technology is what seeped into native browser tech). He called them a vanguard (cool word!), meaning they were warning us that our practices were pushing users away. This turned out to be rather true, as the same problems are still being fought today by newer technologies, like AMP and native reader mode.


Margot Bloomstein made a point about inconsistency eroding our ability to evaluate and build trust. Certainly applicable to websites, but also to a certain President of the United States.

President Flip Flops

Sarah Parmenter shared a powerful moment where she, through the power of email, reached out to Bloom and Wild, a flower mail delivery service, to tell them that a certain type of email they were sending was, however unintentionally, very insensitive. Sarah was going to use this as an example anyway, but the day before, Bloom and Wild actually took her advice and implemented a specialized opt-out system.

This not only made Sarah happy that a company could actually change their systems to be more sensitive to their customers, but it made a whole ton of people happy, as evidenced by an outpouring of positive tweets after it happened. Turns out your customers like it when you, ya know, think of them.


Eric Meyer covered one of the more inexplicable things about pseudo-elements: if you add an image with content: url(/icons/icon.png);, you literally can’t control its width and height. There are ways around it, notably by using a background image instead, but it is a bit baffling that there is a way to add an image to a page with no possible way to resize it.
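The usual workaround looks something like this (a sketch):

/* The inserted image can't be resized... */
.icon::before {
  content: url(/icons/icon.png);
}

/* ...but a background image on a sized pseudo-element can be */
.icon::before {
  content: "";
  display: inline-block;
  width: 16px;
  height: 16px;
  background: url(/icons/icon.png) no-repeat center / contain;
}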

Literally, the entire talk was about pseudo-elements, which I found kinda awesome as I did that same thing eight years ago. If you’re looking for some nostalgia (and are OK with some cringe-y moments), here’s the PDF.

Eric also showed a demo that included a really neat effect that looks like a border going from thick to thin to thick again, which isn’t really something easily done on the web. He used a pseudo, but here it is as an <hr> element:

See the Pen
CSS Thick-Thin-Thick Line
by Chris Coyier (@chriscoyier)
on CodePen.


Rachel Andrew had an interesting way of talking about flexbox. To paraphrase:

Flexbox isn’t the layout method you think it is. Flexbox looks at some disparate things and returns some kind of reasonable layout. Now that grid is here, it’s a lot more common to use that to be much more explicit about what we are doing with layout. Not that flexbox isn’t extremely useful.

Rachel regularly pointed out that we don’t know how tall things are in web design, which is just so, so true. It’s always been true. The earlier we embrace that, the better off we’ll be. So much of our job is dealing with overflow.

Rachel brought up a concept that was new to me, in the sense that it has an official name. The concept is “data loss” through CSS. For example, aligning something a certain way might cause some content to become visually hidden and totally unreachable. Imagine some boxes like this, set in flexbox, with center alignment:

No “data loss” there because we can read everything. But let’s say we have more content in some of them. We can never know heights!

If that element was along the top of a page, for example, no scrollbar will be triggered because it’s opposite the scroll direction. We’d have “data loss” of that text:

We now have alignment keywords that help with this. Like, we can still attempt to center, but we can save ourselves by using safe center (unsafe center being the default):
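In code, that looks something like this (a sketch):

.container {
  display: flex;
  flex-direction: column;
  /* falls back to start alignment when centering would push
     content past an unreachable edge */
  align-items: safe center;
}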

Rachel also mentioned overlapping as a thing that grid does better. Here’s a kinda bad recreation of what she showed:

See the Pen
Overlapping Figure with CSS Grid
by Chris Coyier (@chriscoyier)
on CodePen.

I was kinda hoping to be able to do that without being as explicit as I am being there, but that’s as close as I came.


Jen Simmons showed us a ton of different scenarios involving both grid and flexbox. She made a very clear point that a grid item can be a flexbox container (and vice versa).
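For example, something along these lines (an illustrative sketch, not from the talk):

.page {
  display: grid;
  grid-template-columns: 1fr 3fr;
}

/* a grid item that is itself a flexbox container */
.page > nav {
  display: flex;
  flex-direction: column;
  justify-content: space-between;
}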

Perhaps the most memorable part is how honest Jen was about how we arrive at the layouts we’re shooting for. It’s a ton of playing with the possible values and employing a little trial and error. Happy accidents abound! But there is a lot to know about the different sizing values and placement possibilities of grid, so the more you know, the more you can play. While playing, the layout stuff in Firefox DevTools is your best bet.

Flexbox with gap is gonna be sweet.

There was a funny moment in Una Kravets’ talk about brainstorming the worst possible ideas.

The idea is that even though brainstorm sessions are supposed to be judgment-free, they never are. Bad ideas are meant to be bad, so the worst you can do is have a good idea. Even better, starting with good ideas is problematic in that it’s easy to get attached to an idea too early, whereas bad ideas allow more freedom to jump through ideation space and land on better ideas.


Scott Jehl mentioned a fascinating idea where you can get the benefits of inlining code and caching files at the same time. That’s useful for stuff we’ve gotten used to seeing inlined, like critical CSS. But you know what else is awesome to inline? SVG icon systems. Scott covered the idea in his talk, but I wanted to see if I could give it a crack myself.

The idea is that a fresh page visit inlines the icons, but also tosses them in cache. Then other pages can <svg><use> them out of the cache.

Here’s my demo page. It’s not really production-ready. For example, you’d probably want to do another pass where you Ajax for the icons and inject them by replacing the <use> so that everywhere is actually using inline <svg> the same way. Plus, a server-side system would be ideal to display them either way depending on whether the cache is present or not.
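Here’s a rough sketch of the caching half of that idea (the names are hypothetical, and a service worker would still need to answer requests for /icons.svg from this cache):

// On a fresh visit, the page ships with an inline <svg id="icon-sprite">...</svg>.
// Stash a copy in the Cache API so later pages can <use> it as a file.
if ('caches' in window) {
  const sprite = document.getElementById('icon-sprite');
  if (sprite) {
    caches.open('icon-cache').then((cache) => {
      cache.put('/icons.svg', new Response(sprite.outerHTML, {
        headers: { 'Content-Type': 'image/svg+xml' }
      }));
    });
  }
}

Subsequent pages would then reference <svg><use href="/icons.svg#icon-name"/></svg> and have the request served from the cache.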


Jeremy Keith mentioned the incredible prime number shitting bear, which is, as you might suspect, computationally expensive. He mentioned it in the context of web workers, which is essentially JavaScript that runs in a separate thread, so it won’t/can’t slow down the operation of the current page. I see that same idea has crossed other people’s minds.
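The mechanics are simple enough that a minimal sketch fits in a few lines (file names are made up):

// main.js — the heavy math happens off the main thread
const worker = new Worker('primes.js');
worker.postMessage(50000);
worker.onmessage = (event) => {
  console.log('The 50,000th prime is', event.data);
};

// primes.js — naive trial division, deliberately expensive
onmessage = (event) => {
  const target = event.data;
  const primes = [];
  for (let n = 2; primes.length < target; n++) {
    if (primes.every((p) => n % p !== 0)) primes.push(n);
  }
  postMessage(primes[primes.length - 1]);
};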


I’m sad that I didn’t get to see every single talk because I know they were all amazing. There are plenty of upcoming shows with some of the same folks!


Perfect Image Optimization for Mobile with Optimole

(This is a sponsored post.)

In 2015 there were 24,000 different Android devices, and each of them was capable of downloading images. And that was just the beginning. The mobile era is gathering pace, with mobile visitors starting to eclipse desktop. One thing is certain: building and running a website in the mobile era requires an entirely different approach to images. You need to consider users on desktops, laptops, tablets, watches and smartphones, then viewport sizes and resolutions, before finally getting to browser-supported formats.

So, what’s a good image optimization option to help you juggle all of these demands without sacrificing image quality and on-page SEO strategy?

Say hello to Optimole.

If you’ve ever wondered how many breakpoints a folding screen will have, then you’ll probably like Optimole. It integrates with both the WordPress block editor and page builders to solve a bunch of image optimization problems. It’s an all-in-one service, so you can optimize images and also take advantage of features built to help you improve your page speed.

What’s different about Optimole?

The next step in image optimization is device specificity, so the traditional method of catch and replace image optimization will no longer be the best option for your website. Most image optimizers work in the same way:

  1. Select images for optimization.
  2. Replace images on the page.
  3. Delete the original.

They provide multiple static images for each screen size according to the breakpoints defined in your code. Essentially, you are providing multiple images to try and cover every possible case. Optimized? Sure. Optimal? Hardly.

Optimole works like this:

  1. Your images get sucked into the cloud and optimized.
  2. Optimole replaces the standard image URLs with new ones – tailored to the user’s screen.
  3. The images go through a fast CDN to make them load lightning fast.

Here’s why this is better: You will be serving perfectly sized images to every device, faster, in the best format, and without an unnecessary load on your servers.

A quick case study: CodeinWP

Optimole’s first real test was on CodeinWP as part of a full site redesign. The blog has been around for a while, during which time it has emerged as one of the leaders in the WordPress space. Performance-wise? Not so much. With over 150,000 monthly visitors, the site needed to provide a better experience for Google and mobile.

Optimole was used as one part of a mobile-first strategy. In the early stages, Optimole helped provide a ~0.4s boost in load time for most pages where it was enabled. Meanwhile, on desktop, Optimole is compressing images by ~50% and improving PageSpeed grades by 23%. Fully loaded time and total page size are reduced by ~19%.

Improved PageSpeed and quicker delivery

Along with a device-specific approach, Optimole provides an image by image optimization to ensure each image fits perfectly into the targeted container. Google will love it. These savings in bandwidth are going to help you improve your PageSpeed scores.

It’s not always about the numbers; your website needs to conform to expected behavior even when rendering on mobile. You can avoid content jumping and shifting with any number of tweaks but a container based lazy loading option provides the best user experience. Optimole sends a blurred image at the exact container size, so your visitors never lose their place on the page.

We offer CDNs for both free and premium users. If you’re already using a CDN, then we can integrate it from our end. The extra costs involved will be balanced out with a monthly discount.

Picking the perfect image for every device

Everyone loves srcset and sizes, but it’s time for an automated solution that doesn’t leak bandwidth. With Client Hints, Optimole provides dynamic image resizing that serves a perfectly sized image to each and every device.
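For context, this is the kind of manual markup that approach automates (the URLs are illustrative):

<img src="taco-800.jpg"
     srcset="taco-400.jpg 400w,
             taco-800.jpg 800w,
             taco-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="A taco" />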

Acting as a proxy service allows Optimole to deliver unique images based on supported formats. Rather than replacing an image on the page with one chosen for broad appeal, Optimole provides the best image based on the information provided by the browser. This dynamism means WebP and Retina displays are supported for lucky users without needing to make any changes.

Optimole can be set to detect slower connections, and deliver an image with a high compression rate to keep the page load time low.

Industrial strength optimization with a simple UI

The clean and simple UI gives you the options you need to make a site; no muss no fuss. You can set the parameters, introduce lazy loading, and test the quality without touching up any of the URLs.

You can reduce the extra CSS you need to make page builder images responsive and compressed. It currently takes time and a few CSS tricks to get all of your Elementor images responsive. For example, the extra thumbnails from galleries and carousels can throw a few curve balls. With Optimole all of these images are picked up from the page, and treated like any other. Automatically.

One of the reasons to avoid changing image optimization strategies is the frightening prospect of going through the optimization process again. Optimole is the set-and-forget optimization option that optimizes your images without making changes to the originals. Just install, activate, and let Optimole handle the rest.

Set. Forget. Work on important things.

Try it today

Get some insight into how Optimole can help your site with the speed test here.

If you like what you see then you can get a fully functional free plan with all the features. It includes 1GB of image optimization and 5GB of viewing bandwidth. The premium plans start with 10GB of image optimization, and 50GB of viewing bandwidth, plus access to an enhanced CDN including over 130 locations.

If you’re worried about making a big change, then rest assured Optimole can be uninstalled cleanly to leave your site exactly as before.



7 things you should know when getting started with Serverless APIs

I want you to take a second and think about Twitter, and think about it in terms of scale. Twitter has 326 million users. Collectively, we create ~6,000 tweets every second. Every minute, that’s 360,000 tweets created. That sums up to nearly 200 billion tweets a year. Now, what if the creators of Twitter had been so paralyzed by the question of scale that they never even began?

That’s me on every single startup idea I’ve ever had, which is why I love serverless so much: it handles the issues of scaling, leaving me free to build the next Twitter!

Live metrics with Application Insights

As you can see above, we scaled from one to seven servers in a matter of seconds as more user requests came in. You can scale that easily, too.

So let’s build an API that will scale instantly as more and more users come in and our workload increases. We’re going to do that by answering the following questions:


How do I create a new serverless project?

With every new technology, we need to figure out what tools are available for us and how we can integrate them into our existing tool set. When getting started with serverless, we have a few options to consider.

First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.

Serverless in the browser

Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things:

  • Writing code in your editor of choice
  • Tools that do the heavy lifting and generate the boilerplate code for you
  • Run and debug code locally
  • Support for quickly deploying your code

Microsoft is my employer and I’ve mostly built serverless applications using Azure Functions, so for the rest of this article I’ll continue using them as an example. With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools, which you can install from npm:

npm install -g azure-functions-core-tools

Next, we can initialize a new project and create new functions using the interactive CLI:

func CLI

If your editor of choice happens to be VS Code, then you can use it to write serverless code too. There’s actually a great extension for it.

Once installed, a new icon will be added to the left-hand sidebar — this is where we can access all our Azure-related extensions! All related functions are grouped under the same project (also known as a function app). This is like a folder for grouping functions that should scale together and that we want to manage and monitor at the same time. To initialize a new project using VS Code, click on the Azure icon and then the folder icon.

Create new Azure Functions project

This will generate a few files that help us with global settings. Let’s go over those now.

host.json

We can configure global options for all functions in the project directly in the host.json file.

In it, our function app is configured to use the latest version of the serverless runtime (currently 2.0). We also configure functions to timeout after ten minutes by setting the functionTimeout property to 00:10:00 — the default value for that is currently five minutes (00:05:00).

In some cases, we might want to control the route prefix for our URLs or even tweak settings, like the number of concurrent requests. Azure Functions even allows us to customize other features like logging, healthMonitor and different types of extensions.

Here’s an example of how I’ve configured the file:

// host.json
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "http": {
      "routePrefix": "tacos",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}

Application settings

Application settings are global settings for managing runtime, language and version, connection strings, read/write access, and ZIP deployment, among others. Some are settings that are required by the platform, like FUNCTIONS_WORKER_RUNTIME, but we can also define custom settings that we’ll use in our application code, like DB_CONN which we can use to connect to a database instance.

While developing locally, we define these settings in a file named local.settings.json and we access them like any other environment variable.

Again, here’s an example snippet that connects these points:

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "your_key_here",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WEBSITE_NODE_DEFAULT_VERSION": "8.11.1",
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "your_key_here",
    "DB_CONN": "your_key_here"
  }
}

Azure Functions Proxies

Azure Functions Proxies are implemented in the proxies.json file, and they enable us to expose multiple function apps under the same API, as well as modify requests and responses. In the code below we’re publishing two different endpoints under the same URL.

// proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "read-recipes": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/recipes"
      },
      "backendUri": "https://tacofancy.azurewebsites.net/api/recipes"
    },
    "subscribe": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/subscribe"
      },
      "backendUri": "https://tacofancy-users.azurewebsites.net/api/subscriptions"
    }
  }
}

Create a new function by clicking the thunder icon in the extension.

Create a new Azure Function

The extension will use predefined templates to generate code, based on the selections we made — language, function type, and authorization level.

We use function.json to configure what type of events our function listens to and, optionally, to bind to specific data sources. Our code runs in response to specific triggers, which can be of type HTTP when we react to HTTP requests, or blob when we run code in response to a file being uploaded to a storage account. Other commonly used triggers are of type queue, to process a message uploaded to a queue, or timer, to run code at specified time intervals. Function bindings are used to read and write data to data sources or services, like databases, or to send emails.

Here, we can see that our function is listening to HTTP requests and we get access to the actual request through the object named req.

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

index.js is where we implement the code for our function. We have access to the context object, which we use to communicate with the serverless runtime. We can do things like log information, set the response for our function, and read and write data from the bindings object. Sometimes our function app will have multiple functions that depend on the same code (i.e. database connections), and it’s good practice to extract that code into a separate file to reduce duplication.

// index.js
module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (req.query.name || (req.body && req.body.name)) {
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  } else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
};

Who’s excited to give this a run?

How do I run and debug Serverless functions locally?

When using VS Code, the Azure Functions extension gives us a lot of the setup that we need to run and debug serverless functions locally. When we created a new project using it, a .vscode folder was automatically created for us, and this is where all the debugging configuration is contained. To debug our new function, we can use the Command Palette (Ctrl+Shift+P) by filtering on Debug: Select and Start Debugging, or typing debug.

Debugging Serverless Functions

One of the reasons why this is possible is that the Azure Functions runtime is open source and installed locally on our machine when installing the azure-functions-core-tools package.

How do I install dependencies?

Chances are you already know the answer to this if you’ve worked with Node.js. Like in any other Node.js project, we first need to create a package.json file in the root folder of the project. That can be done by running npm init -y (the -y flag initializes the file with a default configuration).

Then we install dependencies using npm as we would normally do in any other project. For this project, let’s go ahead and install the MongoDB package from npm by running:

npm i mongodb

The package will now be available to import in all the functions in the function app.

How do I connect to third-party services?

Serverless functions are quite powerful, enabling us to write custom code that reacts to events. But code on its own doesn’t help much when building complex applications. The real power comes from easy integration with third-party services and tools.

So, how do we connect and read data from a database? Using the MongoDB client, we’ll read data from an Azure Cosmos DB instance I have created in Azure, but you can do this with any other MongoDB database.

// index.js
const MongoClient = require('mongodb').MongoClient;

// Initialize authentication details required for database connection
const auth = {
  user: process.env.user,
  password: process.env.password
};

// Initialize global variable to store database connection for reuse in future calls
let db = null;
const loadDB = async () => {
  // If database client exists, reuse it
  if (db) {
    return db;
  }
  // Otherwise, create new connection
  const client = await MongoClient.connect(
    process.env.url,
    {
      auth: auth
    }
  );
  // Select tacos database
  db = client.db('tacos');
  return db;
};

module.exports = async function(context, req) {
  try {
    // Get database connection
    const database = await loadDB();
    // Retrieve all items in the Recipes collection
    let recipes = await database
      .collection('Recipes')
      .find()
      .toArray();
    // Return a JSON object with the array of recipes
    context.res = {
      body: { items: recipes }
    };
  } catch (error) {
    context.log(`Error code: ${error.code} message: ${error.message}`);
    // Return an error message and Internal Server Error status code
    context.res = {
      status: 500,
      body: { message: 'An error has occurred, please try again later.' }
    };
  }
};

One thing to note here is that we’re reusing our database connection rather than creating a new one for each subsequent call to our function. This shaves off ~300ms of every subsequent function call. I call that a win!
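And since it’s good practice to share code like this between functions, loadDB could live in its own file that every function requires (the shared/db.js path here is just an example):

// shared/db.js — one cached connection for the whole function app
const MongoClient = require('mongodb').MongoClient;

let db = null;
module.exports.loadDB = async () => {
  if (db) return db;
  const client = await MongoClient.connect(process.env.url, {
    auth: { user: process.env.user, password: process.env.password }
  });
  db = client.db('tacos');
  return db;
};

// recipes/index.js — any function can now reuse the same connection
const { loadDB } = require('../shared/db');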

Where can I save connection strings?

When developing locally, we can store our environment variables, connection strings, and really anything that’s secret into the local.settings.json file, then access it all in the usual manner, using process.env.yourVariableName.

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "user": "your-db-user",
    "password": "your-db-password",
    "url": "mongodb://your-db-user.documents.azure.com:10255/?ssl=true"
  }
}

In production, we can configure the application settings on the function’s page in the Azure portal.

However, another neat way to do this is through the VS Code extension. Without leaving the IDE, we can add new settings, delete existing ones, or upload/download them to the cloud.

Debugging Serverless Functions

How do I customize the URL path?

With the REST API, there are a couple of best practices around the format of the URL itself. The one I settled on for our Recipes API is:

  • GET /recipes: Retrieves a list of recipes
  • GET /recipes/1: Retrieves a specific recipe
  • POST /recipes: Creates a new recipe
  • PUT /recipes/1: Updates recipe with ID 1
  • DELETE /recipes/1: Deletes recipe with ID 1

The URL that is made available by default when creating a new function is of the form http://host:port/api/function-name. To customize the URL path and the method that we listen to, we need to configure them in our function.json file:

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Moreover, we can add parameters to our function’s route by using curly braces: route: recipes/{id}. We can then read the ID parameter in our code from the req object:

const recipeId = req.params.id;
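Put together, the trigger binding for the single-recipe endpoints might look like this (a sketch based on the binding above):

// function.json (excerpt)
{
  "authLevel": "anonymous",
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": ["get", "put", "delete"],
  "route": "recipes/{id}"
}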

How can I deploy to the cloud?

Congratulations, you’ve made it to the last step! 🎉 Time to push this goodness to the cloud. As always, the VS Code extension has your back. All it really takes is a single right-click and we’re pretty much done.

Deployment using VS Code

The extension will ZIP up the code with the Node modules and push them all to the cloud.

While this option is great when testing our own code or maybe when working on a small project, it’s easy to overwrite someone else’s changes by accident — or even worse, your own.

Don’t let friends right-click deploy!
— every DevOps engineer out there

A much healthier option is setting up GitHub deployment, which can be done in a couple of steps in the Azure portal, via the Deployment Center tab.

GitHub deployment

Are you ready to make Serverless APIs?

This has been a thorough introduction to the world of serverless APIs. However, there’s much, much more than what we’ve covered here. Serverless enables us to solve problems creatively and at a fraction of the cost we usually pay when using traditional platforms.

Chris has mentioned it in other posts here on CSS-Tricks, but he created this excellent website where you can learn more about serverless and find both ideas and resources for things you can build with it. Definitely check it out and let me know if you have other tips or advice scaling with serverless.


The Benefits of Structuring CSS Around Appearance and Layout

I like this point that Jonathan Snook made on Twitter and I’ve been thinking about it non-stop because it describes something that’s really hard about writing CSS:

In fact, I reckon this is the hardest thing about writing maintainable CSS in a large codebase. It’s an enormous problem in my day-to-day work and I reckon it’s what most technical debt in CSS eventually boils down to.

Let’s imagine we’re styling a checkbox, for example – that checkbox probably has a position on the page, some margins, and maybe other positioning styles, too. And that checkbox might be green but turns blue when you click it.

I think we can distinguish between these two types of styles as layout and appearance.

But writing good CSS requires keeping those two types of styles separated. That way, the checkbox styles can be reused time and time again without having to worry about how those positioning styles (like margin, padding or width) might come back to bite you.

At Gusto, we use Bootstrap’s grid system, which is great because we can write HTML that explicitly separates these concerns, like so:

<div class="row">   <div class="col-6">     <!-- Checkbox goes here -->   </div>   <div class="col-6">     <!-- Another element can be placed here -->		   </div> </div>

Otherwise, you might end up writing styles like this, which will end up with a ton of issues if those checkbox styles are reused in the future:

.checkbox {
  width: 40%;
  margin-bottom: 60px;
  /* Other checkbox styles */
}

When I see code like this, my first thought is, “Why is the width 40% – and 40% of what?” All of a sudden, I can see that this checkbox class is now dependent on some other bit of code that I don’t understand.

So I’ve begun to think about all CSS as fitting into one of those two buckets: appearance and layout. That’s important because I believe those two types of styles should almost never be baked into one set of styles or one class. Layout on the page should be one set of styles, and appearance (like what the checkbox actually looks like) should be in another. And whether you do that with HTML or a separate CSS class is up for debate.
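As an illustration (these class names are made up, not from our codebase), the split might look like this:

/* Appearance: safe to reuse anywhere */
.checkbox {
  border: 1px solid green;
  border-radius: 2px;
}

/* Layout: owned by the context the checkbox sits in */
.signup-form .checkbox-field {
  width: 40%;
  margin-bottom: 60px;
}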

The reason why this is an issue is that folks will constantly be trying to overwrite layout styles and that’s when we eventually wind up with a codebase that resembles a spaghetti monster. I think this distinction is super important to writing great CSS that scales properly. What do you think? Add a comment below!
