Tag: Should

Consistent Backends and UX: Why Should You Care?

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)

More than ever, new products aim to make an impact on a global scale, and user experience is rapidly becoming the determining factor for whether they are successful or not. These properties of your application can significantly influence the user experience:

  1. Performance & low latency
  2. The application does what you expect
  3. Security
  4. Features and UI

Let’s begin our quest toward the perfect user experience!

1) Performance & Low Latency

Others have said it before; performance is user experience (1, 2). When you have caught the attention of potential visitors, a slight increase in latency can make you lose that attention again. 

2) The application does what you expect

What does ‘does what you expect’ even mean? It means that if I change my name in my application to ‘Robert’ and reload the application, my name will be Robert and not Brecht. It seems important that an application delivers these guarantees, right? 

Whether the application can deliver on these guarantees depends on the database. When pursuing low latency and performance, we end up in the realm of distributed databases where only a few of the more recent databases deliver these guarantees. In the realm of distributed databases, there might be dragons, unless we choose a strongly (vs. eventually) consistent database. In this series, we’ll go into detail on what this means, which databases provide this feature called strong consistency, and how it can help you build awesomely fast apps with minimal effort.  

3) Security

Security does not always seem to impact user experience at first. However, as soon as users notice security flaws, relationships can be damaged beyond repair. 

4) Features and UI 

Impressive features and a great UI have a strong impact on the conscious and unconscious mind. Often, people only desire a specific product after they have experienced how it looks and feels. 

If a database saves time in setup and configuration, then the rest of our efforts can be focused on delivering impressive features and a great UI. There is good news for you; nowadays, there are databases that deliver on all of the above, do not require configuration or server provisioning, and provide easy-to-use APIs such as GraphQL out-of-the-box.

What is so different about this new breed of databases? Let’s take a step back and show how the constant search for lower latency and better UX, in combination with advances in database research, eventually led to a new breed of databases that are the ideal building blocks for modern applications. 

The Quest for Distribution

I. Content delivery networks

As we mentioned before, performance has a significant impact on UX. There are several ways to improve latency; the most obvious is to optimize your application code. Once your application code is reasonably optimized, network latency and the write/read performance of the database often remain the bottleneck. To achieve our low-latency requirement, we need to make sure that our data is as close to the client as possible by distributing the data globally. We can deliver the second requirement (write/read performance) by making multiple machines work together, or in other words, by replicating data.

Distribution leads to better performance and, consequently, to good user experience. We’ve already seen extensive use of a distribution solution that speeds up the delivery of static data; it’s called a Content Delivery Network (CDN). CDNs are highly valued by the Jamstack community to reduce the latency of their applications. They typically use frameworks and tools such as Next.js/Now, Gatsby, and Netlify to preassemble front-end React/Angular/Vue code into static websites so that they can serve them from a CDN.

Unfortunately, CDNs aren’t sufficient for every use case, because we can’t rely on statically generated HTML pages for all applications. There are many types of highly dynamic applications where you can’t statically generate everything. For example:

  1. Applications that require real-time updates for instantaneous communication between users (e.g., chat applications, collaborative drawing or writing, games).
  2. Applications that present data in many different forms by filtering, aggregating, sorting, and otherwise manipulating data in so many ways that you can’t generate everything in advance. 

II. Distributed databases

In general, a highly dynamic application will require a distributed database to improve performance. Just like a CDN, a distributed database also aims to become a global network instead of a single node. In essence, we want to go from a scenario with a single database node… 

…to a scenario where the database becomes a network. When a user connects from a specific continent, he will automatically be redirected to the closest database. This results in lower latencies and happier end users. 

If databases were employees waiting by a phone, the employee you called would inform you that there is an employee closer by and forward your call. Luckily, distributed databases automatically route us to the closest database employee, so we never have to bother the one on the other continent.

Distributed databases are multi-region, and you always get redirected to the closest node.

Besides latency, distributed databases also provide a second and a third advantage. The second is redundancy, which means that if one of the database locations in the network were completely obliterated by a Godzilla attack, your data would not be lost since other nodes still have duplicates of your data. 

Distributed databases provide redundancy, which can save your application when things go wrong. 
Distributed databases divide the load by scaling up automatically when the workload increases. 

Last but not least, the third advantage of using a distributed database is scaling. A database that runs on one server can quickly become the bottleneck of your application. In contrast, distributed databases replicate data over multiple servers and can scale up and down automatically according to the demands of the application. In some advanced distributed databases, this aspect is completely taken care of for you. These databases are known as “serverless”, meaning you don’t even have to configure when the database should scale up and down, and you only pay for the usage of your application, nothing more.

Distributing dynamic data brings us to the realm of distributed databases. As mentioned before, there might be dragons. In contrast to CDNs, the data is highly dynamic; the data can change rapidly and can be filtered and sorted, which brings additional complexities. The database world examined different approaches to achieve this. Early approaches had to make sacrifices to achieve the desired performance and scalability. Let’s see how the quest for distribution evolved. 

Traditional databases’ approach to distribution

One logical choice was to build upon traditional databases (MySQL, PostgreSQL, SQL Server) since so much effort has already been invested in them. However, traditional databases were not built to be distributed and therefore took a rather simple approach to distribution. The typical approach to scale reads was to use read replicas. A read replica is just a copy of your data from which you can read but not write. Such a copy (or replica) offloads queries from the node that contains the original data. This mechanism is very simple in that the data is incrementally copied over to the replicas as it comes in.

Due to this relatively simple approach, a replica’s data is always older than the original data. If you read the data from a replica node at a specific point in time, you might get an older value than if you read from the primary node. This is called a “stale read”. Programmers using traditional databases have to be aware of this possibility and program with this limitation in mind. Remember the example we gave at the beginning where we write a value and reread it? When working with traditional database replicas, you can’t expect to read what you write. 

You could improve the user experience slightly by optimistically applying the results of writes on the front end before all replicas are aware of the writes. However, a reload of the webpage might return the UI to a previous state if the update did not reach the replica yet. The user would then think that his changes were never saved. 

The first generation of distributed databases

In the replication approach of traditional databases, the obvious bottleneck is that writes all go to the same node. The machine can be scaled up, but will inevitably bump into a ceiling. As your app gains popularity and writes increase, the database will no longer be fast enough to accept new data. To scale horizontally for both reads and writes, distributed databases were invented. A distributed database also holds multiple copies of the data, but you can write to each of these copies. Since you update data via each node, all nodes have to communicate with each other and inform others about new data. In other words, it is no longer a one-way direction such as in the traditional system. 

However, these kinds of databases can still suffer from the aforementioned stale reads and introduce many other potential issues related to writes. Whether they suffer from these issues depends on the decisions they made in terms of availability and consistency.

This first generation of distributed databases was often called the “NoSQL movement”, a name influenced by databases such as MongoDB and Neo4j, which also provided alternative languages to SQL and different modeling strategies (documents or graphs instead of tables). NoSQL databases often did not have typical traditional database features such as constraints and joins. As time passed, this proved to be a terrible name, since many databases that were considered NoSQL did provide a form of SQL. Multiple interpretations arose, claiming that NoSQL databases:

  • do not provide SQL as a query language. 
  • do not only provide SQL (NoSQL = Not Only SQL)
  • do not provide typical traditional features such as joins, constraints, and ACID guarantees. 
  • model their data differently (graph, document, or temporal model)

Some of the newer databases that were non-relational yet offered SQL were then called “NewSQL” to avoid confusion. 

Wrong interpretations of the CAP theorem

The first generation of databases was strongly inspired by the CAP theorem, which dictates that you can’t have both Consistency and Availability during a network Partition. A network partition is essentially when something happens so that two nodes can no longer talk to each other about new data, and it can arise for many reasons (e.g., apparently sharks sometimes munch on Google’s cables). Consistency means that the data in your database is always correct, but not necessarily available to your application. Availability means that your database is always online and that your application is always able to access that data, but does not guarantee that the data is correct or the same across nodes. We generally speak of high availability since there is no such thing as 100% availability; it is expressed in nines (e.g., 99.9999% availability) since there is always a possibility that a series of events causes downtime.

Visualization of the CAP theorem: a balance between consistency and availability in the event of a network partition. 

But what happens if there is no network partition? Database vendors took the CAP theorem a bit too generally and either chose to accept potential data loss or to be available, whether there is a network partition or not. While the CAP theorem was a good start, it did not emphasize that it is possible to be highly available and consistent when there is no network partition. Most of the time, there are no network partitions, so it made sense to describe this case by expanding the CAP theorem into the PACELC theorem. The key difference is the last three letters (ELC), which stand for “Else, Latency or Consistency”. This theorem dictates that if there is no network partition, the database has to balance latency and consistency.

According to the PACELC theorem, increased consistency results in higher latencies (during normal operation).

In simple terms: when there is no network partition, latency goes up when the consistency guarantees go up. However, we’ll see that reality is even more subtle than this.

How is this related to User Experience?

Let’s look at an example of how giving up consistency can impact user experience. Consider an application that provides you with a friendly interface to compose teams of people; you drag and drop people into different teams. 

Once you drag a person into a team, an update is triggered to update that team. If the database does not guarantee that your application can read the result of this update immediately, then the UI has to apply those changes optimistically. In that case, bad things can happen: 

  • The user refreshes the page and does not see his update anymore and thinks that his update is gone. When he refreshes again, it is suddenly back. 
  • The database did not store the update successfully due to a conflict with another update. In this case, the update might be canceled, and the user will never know. He might only notice that his changes are gone the next time he reloads. 

This trade-off between consistency and latency has sparked many heated discussions between front-end and back-end developers. The first group wanted a great UX where users receive feedback when they perform actions and can be 100% sure that once they receive this feedback and respond to it, the results of their actions are consistently saved. The second group wanted to build a scalable and performant back end and saw no other way than to sacrifice the aforementioned UX requirements to deliver that. 

Both groups had valid points, but there was no silver bullet to satisfy both. When transaction volume increased and the database became the bottleneck, their only option was to go for either traditional database replication or a distributed database that sacrificed strong consistency for something called “eventual consistency”. In eventual consistency, an update to the database will eventually be applied on all machines, but there is no guarantee that the next transaction will be able to read the updated value. In other words, if I update my name to “Robert”, there is no guarantee that I will actually receive “Robert” if I query my name immediately after the update.
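
To make this concrete, here is a minimal sketch in JavaScript of what a read-after-write can look like against an eventually consistent store. The db client and its API here are entirely hypothetical; only the behavior it illustrates matters:

// 'db' is a hypothetical client for an eventually consistent database.
// The write is accepted by one replica; the read may be served by another.
async function renameUser(db) {
  await db.update('users', { id: 42, name: 'Robert' });

  // This read can hit a replica that has not received the update yet,
  // so it may still return the old name.
  const user = await db.get('users', 42);
  console.log(user.name);    // → 'Robert', or a stale 'Brecht'
}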

Consistency Tax

To deal with eventual consistency, developers need to be aware of the limitations of such a database and do a lot of extra work. Programmers often resort to user experience hacks to hide the database limitations, and back ends have to write lots of additional layers of code to accommodate for various failure scenarios. Finding and building creative solutions around these limitations has profoundly impacted the way both front- and back-end developers have done their jobs, significantly increasing technical complexity while still not delivering an ideal user experience. 

We can think of this extra work required to ensure data correctness as a “tax” an application developer must pay to deliver good user experiences. That is the tax of using a software system that doesn’t offer consistency guarantees that hold up in today’s web-scale concurrent environments. We call this the Consistency Tax.

Thankfully, a new generation of databases has evolved that does not require you to pay the Consistency Tax and can scale without sacrificing consistency!

The second generation of distributed databases

A second generation of distributed databases has emerged to provide strong (rather than eventual) consistency. These databases scale well, won’t lose data, and won’t return stale data. In other words, they do what you expect, and it’s no longer required to learn about the limitations or pay the Consistency Tax. If you update a value, the next time you read that value, it always reflects the updated value, and different updates are applied in the same temporal order as they were written. FaunaDB, Spanner, and FoundationDB are the only databases at the time of writing that offer strong consistency without limitations (also called strict serializability).

The PACELC theorem revisited

The second generation of distributed databases has achieved something that was previously considered impossible; they favor consistency and still deliver low latencies. This became possible due to intelligent synchronization mechanisms such as Calvin, Spanner, and Percolator, which we will discuss in detail in article 4 of this series. While older databases still struggle to deliver high consistency guarantees at lower latencies, databases built on these new intelligent algorithms suffer no such limitations.

Database design greatly influences the attainable latency at high consistency. 

Since these new algorithms allow databases to provide both strong consistency and low latencies, there is usually no good reason to give up consistency (at least in the absence of a network partition). The only time you would do this is if extremely low write latency is the only thing that truly matters, and you are willing to lose data to achieve it. 

Intelligent algorithms result in strong consistency and relatively low latencies.

Are these databases still NoSQL?

It’s no longer trivial to categorize this new generation of distributed databases. Many efforts are still made (1, 2) to explain what NoSQL means, but none of them make perfect sense anymore since NoSQL and SQL databases are growing toward each other. New distributed databases borrow from different data models (Document, Graph, Relational, Temporal), and some of them provide ACID guarantees or even support SQL. They still have one thing in common with NoSQL: they are built to solve the limitations of traditional databases. One word will never be able to describe how a database behaves. In the future, it would make more sense to describe distributed databases by answering these questions:

  • Is it strongly consistent? 
  • Does the distribution rely on read-replicas, or is it truly distributed?
  • What data models does it borrow from?
  • How expressive is the query language, and what are its limitations? 

Conclusion

We explained how applications can now benefit from a new generation of globally distributed databases that can serve dynamic data from the closest location in a CDN-like fashion. We briefly went over the history of distributed databases and saw that it was not a smooth ride. Many first-generation databases were developed, and their consistency choices (which were mainly driven by the CAP theorem) required us to write more code while still diminishing the user experience. Only recently has the database community developed algorithms that allow distributed databases to combine low latency with strong consistency. A new era is upon us, a time when we no longer have to make trade-offs between latency and consistency!

At this point, you probably want to see concrete examples of the potential pitfalls of eventually consistent databases. In the next article of this series, we will cover exactly that. Stay tuned for these upcoming articles:

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)


How Many Websites Should We Build?

Someone emailed me:

What approach to building a site should I take?

  1. Build a single responsive website
  2. Build a site on a single domain, but detect mobile, and render a separate mobile site
  3. Build a separate mobile site on a subdomain

It’s funny how quickly huge industry-defining conversations fade from view. This was probably the biggest question in web design and development this past decade, and we came up with an answer: it’s #1, you should build a single responsive website. Any other answer and you’re building multiple websites, and the pain of that comes from essentially doubling the workload: splitting teams, communication problems across those teams, inconsistencies across the sites, and an iceberg of other pain points this industry has struggled with for ages.

But, the web is a big place.

This emailer specifically mentioned imdb.com as their example. IMDB is an absolutely massive site with a large team (they are owned by Amazon) and lots of money flying around. If the IMDB team decides they would be better off building multiple websites, well that’s their business. They’ve got the resources to do whatever the hell they want.


Mina Markham Should Make Beyoncé’s Site Accessible


If you can build a site with WordPress.com, you should build your site on WordPress.com.

That’s what I like to tell people. I’ve seen too many websites die off, often damaging the company along the way, because the technical debt of hosting and maintaining the website is too much in the long term. For a few examples, there is the domain name itself to handle and the tricky DNS settings to go along with it. There is choosing and setting up web hosting, which often requires more long-term maintenance than many folks would like. There are SSL certificates that need to be handled and renewed. You’d better make sure security and backups are handled well, lest you risk the entire site.

Lots of stuff to think about!

Building and working on a website is hard enough without all this stress. These things are hard work even for seasoned web people, and often entirely too much for people, projects, and companies that just want a dang website.

You know what? Do it on WordPress.com and worry about nothing. Just build your website and know that a great company has got your back on everything else.

To be clear, I’ve been working on websites for decades. I know a lot about what it takes to run them and what can break them. To anyone that wants to learn those things too, that’s great. I would never try to take that away from anybody. And there are plenty of projects out there that need to do things that WordPress.com can’t do. But there are also a lot more projects out there suffering from forgotten web chores and abandoned responsibilities that would be and could be happily chugging along on WordPress.com.

Signing up for a WordPress.com site is not just easy, but even includes a free tier. It’s pretty incredible how quick it is to get a site online. And, if you’re at all familiar with publishing content on WordPress, then you know how simple it is to start cranking out content — and if you’re new to WordPress, well, you’re in for a treat because the editing experience is just plain delightful, especially with the new Gutenberg interface.

So, yes, regardless of your skill level, type of business, team, or whatever, WordPress.com is an excellent resource and is the right call. Does it fit all use cases? No, but nothing does. I like the idea of choosing the right tool for the job, and WordPress.com can certainly be the right choice for a good number of projects.

Oh and hey, as chance would have it, the awesome WordPress Jetpack plugin happens to be having a 20% promotion. That’s pretty awesome. If you’ve been following us for some time, then you know that we love Jetpack and use it right here on CSS-Tricks — from comment moderation to image optimization to social integrations and many things in between. Use coupon code JPSAVE20 at checkout.


7 things you should know when getting started with Serverless APIs

I want you to take a second and think about Twitter, and think about it in terms of scale. Twitter has 326 million users. Collectively, we create ~6,000 tweets every second. Every minute, that’s 360,000 tweets created. That sums up to nearly 200 billion tweets a year. Now, what if the creators of Twitter had been so paralyzed by the question of how to scale that they never even began?

That’s me on every single startup idea I’ve ever had, which is why I love serverless so much: it handles the issues of scaling, leaving me free to build the next Twitter!

Live metrics with Application Insights

As you can see above, we scaled from one to seven servers in a matter of seconds as more user requests came in. You can scale that easily, too.

So let’s build an API that will scale instantly as more and more users come in and our workload increases. We’re going to do that by answering the following questions:


How do I create a new serverless project?

With every new technology, we need to figure out what tools are available for us and how we can integrate them into our existing tool set. When getting started with serverless, we have a few options to consider.

First, we can use the good old browser to create, write and test functions. It’s powerful, and it enables us to code wherever we are; all we need is a computer and a browser running. The browser is a good starting point for writing our very first serverless function.

Serverless in the browser

Next, as you get more accustomed to the new concepts and become more productive, you might want to use your local environment to continue with your development. Typically you’ll want support for a few things:

  • Writing code in your editor of choice
  • Tools that do the heavy lifting and generate the boilerplate code for you
  • Running and debugging code locally
  • Support for quickly deploying your code

Microsoft is my employer, and I’ve mostly built serverless applications using Azure Functions, so for the rest of this article, I’ll continue using them as an example. With Azure Functions, you’ll have support for all these features when working with the Azure Functions Core Tools, which you can install from npm.

npm install -g azure-functions-core-tools

Next, we can initialize a new project and create new functions using the interactive CLI:

func CLI
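
For reference, scaffolding a project and a first function with the Core Tools boils down to a couple of commands (the project name below is just a placeholder):

func init my-function-app    # scaffold a new function app
cd my-function-app
func new                     # pick a template and a name for the function, interactively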

If your editor of choice happens to be VS Code, then you can use it to write serverless code too. There’s actually a great extension for it.

Once installed, a new icon will be added to the left-hand sidebar — this is where we can access all our Azure-related extensions! All related functions can be grouped under the same project (also known as a function app). This is like a folder for grouping functions that should scale together and that we want to manage and monitor at the same time. To initialize a new project using VS Code, click on the Azure icon and then the folder icon.

Create new Azure Functions project

This will generate a few files that help us with global settings. Let’s go over those now.

host.json

We can configure global options for all functions in the project directly in the host.json file.

In it, our function app is configured to use the latest version of the serverless runtime (currently 2.0). We also configure functions to timeout after ten minutes by setting the functionTimeout property to 00:10:00 — the default value for that is currently five minutes (00:05:00).

In some cases, we might want to control the route prefix for our URLs or even tweak settings, like the number of concurrent requests. Azure Functions even allows us to customize other features like logging, healthMonitor and different types of extensions.

Here’s an example of how I’ve configured the file:

// host.json
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "http": {
      "routePrefix": "tacos",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}

Application settings

Application settings are global settings for managing runtime, language and version, connection strings, read/write access, and ZIP deployment, among others. Some are settings that are required by the platform, like FUNCTIONS_WORKER_RUNTIME, but we can also define custom settings that we’ll use in our application code, like DB_CONN which we can use to connect to a database instance.

While developing locally, we define these settings in a file named local.settings.json and we access them like any other environment variable.

Again, here’s an example snippet that connects these points:

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "your_key_here",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WEBSITE_NODE_DEFAULT_VERSION": "8.11.1",
    "FUNCTIONS_EXTENSION_VERSION": "~2",
    "APPINSIGHTS_INSTRUMENTATIONKEY": "your_key_here",
    "DB_CONN": "your_key_here"
  }
}

Azure Functions Proxies

Azure Functions Proxies are implemented in the proxies.json file, and they enable us to expose multiple function apps under the same API, as well as modify requests and responses. In the code below we’re publishing two different endpoints under the same URL.

// proxies.json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "read-recipes": {
      "matchCondition": {
        "methods": ["GET"],
        "route": "/api/recipes"
      },
      "backendUri": "https://tacofancy.azurewebsites.net/api/recipes"
    },
    "subscribe": {
      "matchCondition": {
        "methods": ["POST"],
        "route": "/api/subscribe"
      },
      "backendUri": "https://tacofancy-users.azurewebsites.net/api/subscriptions"
    }
  }
}

Create a new function by clicking the thunder icon in the extension.

Create a new Azure Function

The extension will use predefined templates to generate code, based on the selections we made — language, function type, and authorization level.

We use function.json to configure what type of events our function listens to and, optionally, to bind to specific data sources. Our code runs in response to specific triggers, which can be of type HTTP when we react to HTTP requests, or blob when we run code in response to a file being uploaded to a storage account. Other commonly used triggers are of type queue, to process a message uploaded to a queue, or timer, to run code at specified time intervals. Function bindings are used to read and write data to data sources or services like databases or send emails.

Here, we can see that our function is listening to HTTP requests and we get access to the actual request through the object named req.

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

index.js is where we implement the code for our function. We have access to the context object, which we use to communicate with the serverless runtime. We can do things like log information and set the response for our function, as well as read and write data from the bindings object. Sometimes, our function app will have multiple functions that depend on the same code (e.g., database connections), and it’s good practice to extract that code into a separate file to reduce code duplication.

// index.js
module.exports = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.');

  if (req.query.name || (req.body && req.body.name)) {
    context.res = {
      // status: 200, /* Defaults to 200 */
      body: "Hello " + (req.query.name || req.body.name)
    };
  }
  else {
    context.res = {
      status: 400,
      body: "Please pass a name on the query string or in the request body"
    };
  }
};

Who’s excited to give this a run?

How do I run and debug Serverless functions locally?

When using VS Code, the Azure Functions extension gives us a lot of the setup that we need to run and debug serverless functions locally. When we created a new project using it, a .vscode folder was automatically created for us, and this is where all the debugging configuration is contained. To debug our new function, we can use the Command Palette (Ctrl+Shift+P) by filtering on Debug: Select and Start Debugging, or typing debug.

Debugging Serverless Functions

One of the reasons why this is possible is because the Azure Functions runtime is open-source and installed locally on our machine when installing the azure-functions-core-tools package.

How do I install dependencies?

Chances are you already know the answer to this if you’ve worked with Node.js. Like in any other Node.js project, we first need to create a package.json file in the root folder of the project. That can be done by running npm init -y — the -y will initialize the file with default configuration.

Then we install dependencies using npm as we would normally do in any other project. For this project, let’s go ahead and install the MongoDB package from npm by running:

npm i mongodb

The package will now be available to import in all the functions in the function app.

How do I connect to third-party services?

Serverless functions are quite powerful, enabling us to write custom code that reacts to events. But code on its own doesn’t help much when building complex applications. The real power comes from easy integration with third-party services and tools.

So, how do we connect and read data from a database? Using the MongoDB client, we’ll read data from an Azure Cosmos DB instance I have created in Azure, but you can do this with any other MongoDB database.

// index.js
const MongoClient = require('mongodb').MongoClient;

// Initialize authentication details required for database connection
const auth = {
  user: process.env.user,
  password: process.env.password
};

// Initialize global variable to store database connection for reuse in future calls
let db = null;
const loadDB = async () => {
  // If database client exists, reuse it
  if (db) {
    return db;
  }
  // Otherwise, create new connection
  const client = await MongoClient.connect(
    process.env.url,
    {
      auth: auth
    }
  );
  // Select tacos database
  db = client.db('tacos');
  return db;
};

module.exports = async function(context, req) {
  try {
    // Get database connection
    const database = await loadDB();
    // Retrieve all items in the Recipes collection
    let recipes = await database
      .collection('Recipes')
      .find()
      .toArray();
    // Return a JSON object with the array of recipes
    context.res = {
      body: { items: recipes }
    };
  } catch (error) {
    context.log(`Error code: ${error.code} message: ${error.message}`);
    // Return an error message and Internal Server Error status code
    context.res = {
      status: 500,
      body: { message: 'An error has occurred, please try again later.' }
    };
  }
};

One thing to note here is that we’re reusing our database connection rather than creating a new one for each subsequent call to our function. This shaves off ~300ms of every subsequent function call. I call that a win!
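
Following the earlier advice about extracting shared code, the connection logic above is a natural candidate to pull into its own module. Here is a possible sketch; the shared/db.js file name and layout are just one convention, not something Azure prescribes:

// shared/db.js (hypothetical location)
const MongoClient = require('mongodb').MongoClient;

let db = null;

// Reuse a single database connection across invocations and functions
module.exports.loadDB = async () => {
  if (db) {
    return db;
  }
  const client = await MongoClient.connect(process.env.url, {
    auth: { user: process.env.user, password: process.env.password }
  });
  db = client.db('tacos');
  return db;
};

Each function in the app can then simply require it with const { loadDB } = require('../shared/db'); instead of duplicating the connection code.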

Where can I save connection strings?

When developing locally, we can store our environment variables, connection strings, and really anything that’s secret into the local.settings.json file, then access it all in the usual manner, using process.env.yourVariableName.

// local.settings.json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "user": "your-db-user",
    "password": "your-db-password",
    "url": "mongodb://your-db-user.documents.azure.com:10255/?ssl=true"
  }
}

In production, we can configure the application settings on the function’s page in the Azure portal.

However, another neat way to do this is through the VS Code extension. Without leaving the IDE, we can add new settings, delete existing ones, or upload/download them to the cloud.

Debugging Serverless Functions

How do I customize the URL path?

With the REST API, there are a couple of best practices around the format of the URL itself. The one I settled on for our Recipes API is:

  • GET /recipes: Retrieves a list of recipes
  • GET /recipes/1: Retrieves a specific recipe
  • POST /recipes: Creates a new recipe
  • PUT /recipes/1: Updates recipe with ID 1
  • DELETE /recipes/1: Deletes recipe with ID 1

The URL that is made available by default when creating a new function is of the form http://host:port/api/function-name. To customize the URL path and the method that we listen to, we need to configure them in our function.json file:

// function.json
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get"],
      "route": "recipes"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Moreover, we can add parameters to our function’s route by using curly braces: route: recipes/{id}. We can then read the ID parameter in our code from the req object:

const recipeId = req.params.id;
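
Putting the two together, a sketch of the relevant pieces might look like this (building on the function.json and host.json examples shown earlier, so the route prefix is tacos):

// function.json (excerpt): the route template declares the parameter
{
  "authLevel": "anonymous",
  "type": "httpTrigger",
  "direction": "in",
  "name": "req",
  "methods": ["get"],
  "route": "recipes/{id}"
}

// index.js: req.params exposes the value from the URL
module.exports = async function (context, req) {
  const recipeId = req.params.id;    // "1" for GET /tacos/recipes/1
  context.res = {
    body: { id: recipeId }
  };
};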

How can I deploy to the cloud?

Congratulations, you’ve made it to the last step! 🎉 Time to push this goodness to the cloud. As always, the VS Code extension has your back. All it really takes is a single right-click and we’re pretty much done.

Deployment using VS Code

The extension will ZIP up the code with the Node modules and push them all to the cloud.

While this option is great when testing our own code or maybe when working on a small project, it’s easy to overwrite someone else’s changes by accident — or even worse, your own.

Don’t let friends right-click deploy!
— every DevOps engineer out there

A much healthier option is setting up GitHub deployment, which can be done in a couple of steps in the Azure portal, via the Deployment Center tab.

GitHub deployment

Are you ready to make Serverless APIs?

This has been a thorough introduction to the world of Serverless APIs. However, there’s much, much more than what we’ve covered here. Serverless enables us to solve problems creatively and at a fraction of the cost we usually pay for using traditional platforms.

Chris has mentioned it in other posts here on CSS-Tricks, but he created this excellent website where you can learn more about serverless and find both ideas and resources for things you can build with it. Definitely check it out and let me know if you have other tips or advice for scaling with serverless.


Should I Use Source Maps in Production?

It’s a valid question. A “source map” is a special file that connects a minified/uglified version of an asset (CSS or JavaScript) to the original authored version. Say you’ve got a file called _header.scss that gets imported into global.scss, which is compiled to global.css. That final CSS file is what gets loaded in the browser, so for example, when you inspect an element in DevTools, it might tell you that the <nav> is display: flex; because it says so on line 387 in global.css.

On line 528 of page.css, we can find out that .meta has position: relative;

But because that final CSS file is probably minified (all whitespace removed), DevTools is likely to tell us that we’ll find the declaration we’re looking for on line 1! Unfortunate, and not helpful for development.

That’s where source maps come in. Like I said up top, source maps are special files that connect that final output file the browser is actually using with the authored files that you actually work with and write code in on your file system.

Typically, source maps are a configuration option from the preprocessor. Here are Babel’s options. I believe that with Sass, you don’t even have to pass a flag for it in the command or anything because it produces source maps by default.
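
As a rough illustration (assuming a webpack build here; other bundlers and preprocessors have equivalent options), emitting maps is typically a one-line setting, and the compiled file ends with a comment that points DevTools at the map:

// webpack.config.js (minimal sketch)
module.exports = {
  mode: 'production',
  devtool: 'source-map'    // emit a .map file next to each output bundle
};

// the emitted bundle then ends with a pointer like:
//# sourceMappingURL=main.js.map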

So, these source maps are for developers. They are particularly useful for you and your team because they help tremendously for debugging issues as well as day-to-day work. I’m sure I make use of them just about every day. I’d say in general, they are used for local development. You might even .gitignore them or skip them in a deployment process in order to serve and store fewer assets to production. But there’s been some recent chatter about making sure they go to production as well.

David Heinemeier Hansson:

But source maps have long been seen merely as a local development tool. Not something you ship to production, although people have also been doing that, such that live debugging would be easier. That in itself is a great reason to ship source maps. […]

Additionally, Rails 6 just committed to shipping source maps by default in production, also thanks to Webpack. You’ll be able to turn that feature off, but I hope you won’t. The web is a better place when we allow others to learn from our work.

Check out that issue thread for more interesting conversation about shipping source maps to production. The benefits boil down to these two things:

  1. It might help you track down bugs in production more easily
  2. It helps other people learn from your website more easily

Both are cool. Personally, I’d be opposed to shipping performance-optimized code for learning purposes alone. I wrote about that last year:

I don’t want my source to be human-readable, not for protective reasons, but because I care about web performance more. I want my website to arrive at light speed on a tiny spec of magical network packet dust and blossom into a complete website. Or do whatever computer science deems is the absolute fastest way to send website data between computers. I’m much more worried about the state of web performance than I am about web education. But even if I was very worried about web education, I don’t think it’s the network’s job to deliver teachability.

Shipping source maps to production is a nice middle ground. There’s no hit on performance (source maps don’t get loaded unless you have DevTools open, which is, IMO, irrelevant to a real performance discussion) with the benefit of delivering debugging and learning benefits.

The downsides brought up in recent discussion boil down to:

  1. Sourcemaps require compilation time
  2. It allows people to, I dunno, steal your code or something

I don’t care about #2 (sorry), and #1 seems generally negligible for a small or what we think of as the average site, though I’m afraid I can’t speak for mega sites.

One thing I should add though is that source maps can even be generated for CSS-in-JS tooling, so for those that literally inject styles into the DOM for you, those source maps are injected as well. I’ve seen major slowdowns in those situations, so I would say definitely do not ship source maps to production if you can’t split them out of your main bundles. Otherwise, I’d vote strongly that you do.


New ES2018 Features Every JavaScript Developer Should Know

The ninth edition of the ECMAScript standard, officially known as ECMAScript 2018 (or ES2018 for short), was released in June 2018. Starting with ES2016, new versions of ECMAScript specifications are released yearly rather than every several years and add fewer features than major editions used to. The newest edition of the standard continues the yearly release cycle by adding four new RegExp features, rest/spread properties, asynchronous iteration, and Promise.prototype.finally. Additionally, ES2018 drops the syntax restriction of escape sequences from tagged templates.

These new changes are explained in the subsections that follow.

The Rest/Spread Properties

One of the most interesting features added to ES2015 was the spread operator. This operator makes copying and merging arrays a lot simpler. Rather than calling the concat() or slice() method, you could use the ... operator:

const arr1 = [10, 20, 30];

// make a copy of arr1
const copy = [...arr1];

console.log(copy);    // → [10, 20, 30]

const arr2 = [40, 50];

// merge arr2 with arr1
const merge = [...arr1, ...arr2];

console.log(merge);    // → [10, 20, 30, 40, 50]

The spread operator also comes in handy in situations where an array must be passed in as separate arguments to a function. For example:

const arr = [10, 20, 30];

// equivalent to
// console.log(Math.max(10, 20, 30));
console.log(Math.max(...arr));    // → 30

ES2018 further expands this syntax by adding spread properties to object literals. With spread properties, you can copy the own enumerable properties of an object onto a new object. Consider the following example:

const obj1 = {
  a: 10,
  b: 20
};

const obj2 = {
  ...obj1,
  c: 30
};

console.log(obj2);    // → {a: 10, b: 20, c: 30}

In this code, the ... operator is used to retrieve the properties of obj1 and assign them to obj2. Prior to ES2018, attempting to do so would throw an error. If there are multiple properties with the same name, the property that comes last will be used:

const obj1 = {
  a: 10,
  b: 20
};

const obj2 = {
  ...obj1,
  a: 30
};

console.log(obj2);    // → {a: 30, b: 20}

Spread properties also provide a new way to merge two or more objects, which can be used as an alternative to the Object.assign() method:

const obj1 = {a: 10};
const obj2 = {b: 20};
const obj3 = {c: 30};

// ES2018
console.log({...obj1, ...obj2, ...obj3});    // → {a: 10, b: 20, c: 30}

// ES2015
console.log(Object.assign({}, obj1, obj2, obj3));    // → {a: 10, b: 20, c: 30}

Note, however, that spread properties do not always produce the same result as Object.assign(). Consider the following code:

Object.defineProperty(Object.prototype, 'a', {
  set(value) {
    console.log('set called!');
  }
});

const obj = {a: 10};

console.log({...obj});    // → {a: 10}

console.log(Object.assign({}, obj));
// → set called!
// → {}

In this code, the Object.assign() method executes the inherited setter property. Conversely, the spread properties simply ignore the setter.

It’s important to remember that spread properties only copy enumerable properties. In the following example, the type property won’t show up in the copied object because its enumerable attribute is set to false:

const car = {
  color: 'blue'
};

Object.defineProperty(car, 'type', {
  value: 'coupe',
  enumerable: false
});

console.log({...car});    // → {color: "blue"}

Inherited properties are ignored even if they are enumerable:

const car = {
  color: 'blue'
};

const car2 = Object.create(car, {
  type: {
    value: 'coupe',
    enumerable: true,
  }
});

console.log(car2.color);                      // → blue
console.log(car2.hasOwnProperty('color'));    // → false

console.log(car2.type);                       // → coupe
console.log(car2.hasOwnProperty('type'));     // → true

console.log({...car2});                       // → {type: "coupe"}

In this code, car2 inherits the color property from car. Because spread properties only copy the own properties of an object, color is not included in the return value.

Keep in mind that spread properties can only make a shallow copy of an object. If a property holds an object, only the reference to the object will be copied:

const obj = {x: {y: 10}};
const copy1 = {...obj};
const copy2 = {...obj};

console.log(copy1.x === copy2.x);    // → true

The x property in copy1 refers to the same object in memory that x in copy2 refers to, so the strict equality operator returns true.

Another useful feature added to ES2015 was rest parameters, which enabled JavaScript programmers to use ... to represent values as an array. For example:

const arr = [10, 20, 30];
const [x, ...rest] = arr;

console.log(x);       // → 10
console.log(rest);    // → [20, 30]

Here, the first item in arr is assigned to x, and the remaining elements are assigned to the rest variable. This pattern, called array destructuring, became so popular that the Ecma Technical Committee decided to bring similar functionality to objects:

const obj = {
  a: 10,
  b: 20,
  c: 30
};

const {a, ...rest} = obj;

console.log(a);       // → 10
console.log(rest);    // → {b: 20, c: 30}

This code uses the rest properties in a destructuring assignment to copy the remaining own enumerable properties into a new object. Note that rest properties must always appear at the end of the object, otherwise an error is thrown:

const obj = {
  a: 10,
  b: 20,
  c: 30
};

const {...rest, a} = obj;    // → SyntaxError: Rest element must be last element

Also keep in mind that using multiple rest syntaxes in an object causes an error, unless they are nested:

const obj = {
  a: 10,
  b: {
    x: 20,
    y: 30,
    z: 40
  }
};

const {b: {x, ...rest1}, ...rest2} = obj;    // no error

const {...rest, ...rest2} = obj;    // → SyntaxError: Rest element must be last element

Support for Rest/Spread Properties

Chrome           Firefox           Safari      Edge
60               55                11.1        No

Chrome Android   Firefox Android   iOS Safari  Edge Mobile   Samsung Internet   Android Webview
60               55                11.3        No            8.2                60

Node.js:

  • 8.0.0 (requires the --harmony runtime flag)
  • 8.3.0 (full support)

Asynchronous Iteration

Iterating over a collection of data is an important part of programming. Prior to ES2015, JavaScript provided statements such as for, for...in, and while, and methods such as map(), filter(), and forEach() for this purpose. To enable programmers to process the elements in a collection one at a time, ES2015 introduced the iterator interface.

An object is iterable if it has a Symbol.iterator property. In ES2015, strings and collection objects such as Set, Map, and Array come with a Symbol.iterator property and thus are iterable. The following code gives an example of how to access the elements of an iterable one at a time:

const arr = [10, 20, 30];
const iterator = arr[Symbol.iterator]();

console.log(iterator.next());    // → {value: 10, done: false}
console.log(iterator.next());    // → {value: 20, done: false}
console.log(iterator.next());    // → {value: 30, done: false}
console.log(iterator.next());    // → {value: undefined, done: true}

Symbol.iterator is a well-known symbol specifying a function that returns an iterator. The primary way to interact with an iterator is the next() method. This method returns an object with two properties: value and done. The value property contains the value of the next element in the collection. The done property contains either true or false, denoting whether or not the end of the collection has been reached.

By default, a plain object is not iterable, but it can become iterable if you define a Symbol.iterator property on it, as in this example:

const collection = {
  a: 10,
  b: 20,
  c: 30,
  [Symbol.iterator]() {
    const values = Object.keys(this);
    let i = 0;
    return {
      next: () => {
        return {
          value: this[values[i++]],
          done: i > values.length
        }
      }
    };
  }
};

const iterator = collection[Symbol.iterator]();

console.log(iterator.next());    // → {value: 10, done: false}
console.log(iterator.next());    // → {value: 20, done: false}
console.log(iterator.next());    // → {value: 30, done: false}
console.log(iterator.next());    // → {value: undefined, done: true}

This object is iterable because it defines a Symbol.iterator property. The iterator uses the Object.keys() method to get an array of the object’s property names and then assigns it to the values constant. It also defines a counter variable, i, and gives it an initial value of 0. When the iterator is executed, it returns an object that contains a next() method. Each time the next() method is called, it returns a {value, done} pair, with value holding the next element in the collection and done holding a Boolean indicating whether the iterator has reached the end of the collection.

While this code works perfectly, it’s unnecessarily complicated. Fortunately, using a generator function can considerably simplify the process:

const collection = {
  a: 10,
  b: 20,
  c: 30,
  [Symbol.iterator]: function * () {
    for (let key in this) {
      yield this[key];
    }
  }
};

const iterator = collection[Symbol.iterator]();

console.log(iterator.next());    // → {value: 10, done: false}
console.log(iterator.next());    // → {value: 20, done: false}
console.log(iterator.next());    // → {value: 30, done: false}
console.log(iterator.next());    // → {value: undefined, done: true}

Inside this generator, a for...in loop is used to enumerate over the collection and yield the value of each property. The result is exactly the same as the previous example, but it’s much shorter.

A downside of iterators is that they are not suitable for representing asynchronous data sources. ES2018’s solution to remedy that is asynchronous iterators and asynchronous iterables. An asynchronous iterator differs from a conventional iterator in that, instead of returning a plain object in the form of {value, done}, it returns a promise that fulfills to {value, done}. An asynchronous iterable defines a Symbol.asyncIterator method (instead of Symbol.iterator) that returns an asynchronous iterator.

An example should make this clearer:

const collection = {
  a: 10,
  b: 20,
  c: 30,
  [Symbol.asyncIterator]() {
    const values = Object.keys(this);
    let i = 0;
    return {
      next: () => {
        return Promise.resolve({
          value: this[values[i++]],
          done: i > values.length
        });
      }
    };
  }
};

const iterator = collection[Symbol.asyncIterator]();

iterator.next().then(result => {
  console.log(result);    // → {value: 10, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: 20, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: 30, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: undefined, done: true}
});

Note that it’s not possible to use an iterator of promises to achieve the same result. Although a normal, synchronous iterator can asynchronously determine the values, it still needs to determine the state of “done” synchronously.

Again, you can simplify the process by using a generator function, as shown below:

const collection = {
  a: 10,
  b: 20,
  c: 30,
  [Symbol.asyncIterator]: async function * () {
    for (let key in this) {
      yield this[key];
    }
  }
};

const iterator = collection[Symbol.asyncIterator]();

iterator.next().then(result => {
  console.log(result);    // → {value: 10, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: 20, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: 30, done: false}
});

iterator.next().then(result => {
  console.log(result);    // → {value: undefined, done: true}
});

Normally, a generator function returns a generator object with a next() method. When next() is called it returns a {value, done} pair whose value property holds the yielded value. An async generator does the same thing except that it returns a promise that fulfills to {value, done}.

An easy way to iterate over an iterable object is to use the for...of statement, but for...of doesn’t work with async iterables as value and done are not determined synchronously. For this reason, ES2018 provides the for...await...of statement. Let’s look at an example:

const collection = {
  a: 10,
  b: 20,
  c: 30,
  [Symbol.asyncIterator]: async function * () {
    for (let key in this) {
      yield this[key];
    }
  }
};

(async function () {
  for await (const x of collection) {
    console.log(x);
  }
})();

// logs:
// → 10
// → 20
// → 30

In this code, the for...await...of statement implicitly calls the Symbol.asyncIterator method on the collection object to get an async iterator. Each time through the loop, the next() method of the iterator is called, which returns a promise. Once the promise is resolved, the value property of the resulting object is read into the x variable. The loop continues until the done property of the returned object has a value of true.

Keep in mind that the for...await...of statement is only valid within async generators and async functions. Violating this rule results in a SyntaxError.

The next() method may return a promise that rejects. To gracefully handle a rejected promise, you can wrap the for await...of statement in a try...catch statement, like this:

const collection = {
  [Symbol.asyncIterator]() {
    return {
      next: () => {
        return Promise.reject(new Error('Something went wrong.'));
      }
    };
  }
};

(async function () {
  try {
    for await (const value of collection) {}
  } catch (error) {
    console.log('Caught: ' + error.message);
  }
})();

// logs:
// → Caught: Something went wrong.

Support for Asynchronous Iterators

Chrome   Firefox   Safari   Edge
63       57        12       No

Chrome Android   Firefox Android   iOS Safari   Edge Mobile   Samsung Internet   Android Webview
63               57                12           No            8.2                63

Node.js:

  • 8.10.0 (requires the --harmony_async_iteration flag)
  • 10.0.0 (full support)

Promise.prototype.finally

Another exciting addition in ES2018 is the finally() method. Several JavaScript libraries had previously implemented a similar method, which proved useful in many situations, and that encouraged the Ecma Technical Committee to officially add finally() to the specification. With this method, you can execute a block of code regardless of the promise’s fate. Let’s look at a simple example:

fetch('https://www.google.com')
  .then((response) => {
    console.log(response.status);
  })
  .catch((error) => {
    console.log(error);
  })
  .finally(() => {
    document.querySelector('#spinner').style.display = 'none';
  });

The finally() method comes in handy when you need to do some cleanup after an operation has finished, regardless of whether or not it succeeded. In this code, finally() simply hides the loading spinner once the data has been fetched and processed. Instead of duplicating the final logic in the then() and catch() handlers, the code registers a single function to be executed once the promise is either fulfilled or rejected.

You could achieve the same result by using promise.then(func, func) rather than promise.finally(func), but you would have to repeat the same code in both the fulfillment and rejection handlers, or declare a variable for it:

fetch('https://www.google.com')
  .then((response) => {
    console.log(response.status);
  })
  .catch((error) => {
    console.log(error);
  })
  .then(final, final);

function final() {
  document.querySelector('#spinner').style.display = 'none';
}

As with then() and catch(), the finally() method always returns a promise, so you can chain more methods. Normally, you’ll want to use finally() as the last link in the chain, but in certain situations, such as when making an HTTP request, it’s good practice to chain another catch() to deal with errors that may occur in finally().
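Here’s a sketch of that pattern, reusing the spinner example from above; the trailing catch() picks up both a failed fetch and anything thrown inside finally():

fetch('https://www.google.com')
  .then((response) => {
    console.log(response.status);
  })
  .finally(() => {
    // If this callback throws or returns a rejected promise,
    // the rejection propagates to the next handler in the chain
    document.querySelector('#spinner').style.display = 'none';
  })
  .catch((error) => {
    // Handles a failed fetch as well as errors thrown in finally()
    console.log('Caught: ' + error.message);
  });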

Support for Promise.prototype.finally

Chrome   Firefox   Safari   Edge
63       58        11.1     18

Chrome Android   Firefox Android   iOS Safari   Edge Mobile   Samsung Internet   Android Webview
63               58                11.1         No            8.2                63

Node.js:

  • 10.0.0 (full support)

New RegExp Features

ES2018 adds four new features to the RegExp object, which further improve JavaScript’s string processing capabilities. These features are as follows:

  • s (dotAll) flag
  • Named capture groups
  • Lookbehind assertions
  • Unicode property escapes

s (dotAll) Flag

The dot (.) is a special character in a regular expression pattern that matches any character except line break characters such as line feed (\n) or carriage return (\r). A workaround to match all characters including line breaks is to use a character class with two opposite shorthands such as [\d\D]. This character class tells the regular expression engine to find a character that’s either a digit (\d) or a non-digit (\D). As a result, it matches any character:

console.log(/one[\d\D]two/.test('one\ntwo'));    // → true

ES2018 introduces a mode in which the dot can be used to achieve the same result. This mode can be activated on a per-regex basis by using the s flag:

console.log(/one.two/.test('one\ntwo'));     // → false
console.log(/one.two/s.test('one\ntwo'));    // → true

The benefit of using a flag to opt in to the new behavior is backwards compatibility: existing regular expression patterns that use the dot character are not affected.
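ES2018 also adds a read-only dotAll accessor to regular expression objects, so you can check whether the flag is set. A quick sketch:

const re = /one.two/s;

console.log(re.dotAll);    // → true
console.log(re.flags);     // → "s"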

Named Capture Groups

In some regular expression patterns, using a number to reference a capture group can be confusing. For example, take the regular expression /(\d{4})-(\d{2})-(\d{2})/, which matches a date. Because date notation in American English differs from date notation in British English, it’s hard to know which group refers to the day and which group refers to the month:

const re = /(\d{4})-(\d{2})-(\d{2})/;
const match = re.exec('2019-01-10');

console.log(match[0]);    // → 2019-01-10
console.log(match[1]);    // → 2019
console.log(match[2]);    // → 01
console.log(match[3]);    // → 10

ES2018 introduces named capture groups, which use the (?<name>...) syntax. With them, the pattern to match a date can be written in a less ambiguous manner:

const re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/;
const match = re.exec('2019-01-10');

console.log(match.groups);          // → {year: "2019", month: "01", day: "10"}
console.log(match.groups.year);     // → 2019
console.log(match.groups.month);    // → 01
console.log(match.groups.day);      // → 10

You can recall a named capture group later in the pattern by using the \k<name> syntax. For example, to find consecutive duplicate words in a sentence, you can use /\b(?<dup>\w+)\s+\k<dup>\b/:

const re = /\b(?<dup>\w+)\s+\k<dup>\b/;
const match = re.exec('Get that that cat off the table!');

console.log(match.index);    // → 4
console.log(match[0]);       // → that that

To insert a named capture group into the replacement string of the replace() method, use the $<name> construct. For example:

const str = 'red & blue';

console.log(str.replace(/(red) & (blue)/, '$2 & $1'));
// → blue & red

console.log(str.replace(/(?<red>red) & (?<blue>blue)/, '$<blue> & $<red>'));
// → blue & red
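Named groups are also available to a replacement function: when the pattern contains named capture groups, the replacer receives the groups object as its last argument. A small sketch:

const str = 'red & blue';

const result = str.replace(/(?<red>red) & (?<blue>blue)/, (...args) => {
  // With named groups present, the last argument passed to the
  // replacer function is the groups object
  const groups = args[args.length - 1];
  return groups.blue + ' & ' + groups.red;
});

console.log(result);    // → blue & red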

Lookbehind Assertions

ES2018 brings lookbehind assertions to JavaScript, which have been available in other regex implementations for years. Previously, JavaScript only supported lookahead assertions. A lookbehind assertion is denoted by (?<=...) and enables you to match a pattern based on the substring that precedes it. For example, if you want to match the price of a product in dollars, pounds, or euros without capturing the currency symbol, you can use /(?<=\$|£|€)\d+(\.\d*)?/:

const re = /(?<=\$|£|€)\d+(\.\d*)?/;

console.log(re.exec('199'));
// → null

console.log(re.exec('$199'));
// → ["199", undefined, index: 1, input: "$199", groups: undefined]

console.log(re.exec('€50'));
// → ["50", undefined, index: 1, input: "€50", groups: undefined]

There is also a negative version of lookbehind, denoted by (?<!...). A negative lookbehind allows you to match a pattern only if it is not preceded by the lookbehind’s pattern. For example, /(?<!un)available/ matches the word available only when it does not have an “un” prefix:

const re = /(?<!un)available/;

console.log(re.exec('We regret this service is currently unavailable'));
// → null

console.log(re.exec('The service is available'));
// → ["available", index: 15, input: "The service is available", groups: undefined]

Unicode Property Escapes

ES2018 provides a new type of escape sequence known as a Unicode property escape, which brings full Unicode support to regular expressions. Suppose you want to match the Unicode character ㉛ in a string. Although ㉛ is considered a number, you can’t match it with the \d shorthand character class because \d only matches the ASCII digits [0-9]. Unicode property escapes, on the other hand, can be used to match any decimal number in Unicode:

const str = '㉛';

console.log(/\d/u.test(str));             // → false
console.log(/\p{Number}/u.test(str));     // → true

Similarly, if you want to match any alphabetic Unicode character, you can use \p{Alphabetic}:

const str = 'ض';

console.log(/\p{Alphabetic}/u.test(str));    // → true

// the \w shorthand cannot match ض
console.log(/\w/u.test(str));                // → false

There is also a negated version of \p{...}, which is denoted by \P{...}:

console.log(/\P{Number}/u.test('㉛'));        // → false
console.log(/\P{Number}/u.test('ض'));        // → true

console.log(/\P{Alphabetic}/u.test('㉛'));    // → true
console.log(/\P{Alphabetic}/u.test('ض'));    // → false

In addition to Alphabetic and Number, there are several more properties that can be used in Unicode property escapes. You can find a list of supported Unicode properties in the current specification proposal.
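For instance, you can match characters by script or by other binary properties. A couple of quick sketches:

console.log(/\p{Script=Greek}/u.test('π'));         // → true
console.log(/\p{White_Space}/u.test('\u2028'));     // → true
console.log(/\p{White_Space}/u.test('x'));          // → false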

Support for New RegExp Features

                            Chrome   Firefox   Safari   Edge
s (dotAll) Flag             62       No        11.1     No
Named Capture Groups        64       No        11.1     No
Lookbehind Assertions       62       No        No       No
Unicode Property Escapes    64       No        11.1     No

                            Chrome Android   Firefox Android   iOS Safari   Edge Mobile   Samsung Internet   Android Webview
s (dotAll) Flag             62               No                11.3         No            8.2                62
Named Capture Groups        64               No                11.3         No            No                 64
Lookbehind Assertions       62               No                No           No            8.2                62
Unicode Property Escapes    64               No                11.3         No            No                 64

Node.js:

  • 8.3.0 (requires the --harmony runtime flag)
  • 8.10.0 (support for s (dotAll) flag and lookbehind assertions)
  • 10.0.0 (full support)

Template Literal Revision

When a template literal is immediately preceded by an expression, it is called a tagged template literal. A tagged template comes in handy when you want to parse a template literal with a function. Consider the following example:

function fn(string, substitute) {
  if (substitute === 'ES6') {
    substitute = 'ES2015';
  }
  return substitute + string[1];
}

const version = 'ES6';
const result = fn`${version} was a major update`;

console.log(result);    // → ES2015 was a major update

In this code, the tag expression (a regular function) is invoked and passed the template literal. The function simply modifies the dynamic part of the string and returns the result.

Prior to ES2018, tagged template literals had syntactic restrictions related to escape sequences. A backslash followed by certain characters was treated as the start of a special escape sequence: \x was interpreted as a hex escape, \u as a Unicode escape, and \ followed by a digit as an octal escape. As a result, strings such as "C:\xxx\uuu" or "\ubuntu" were considered invalid escape sequences by the interpreter and would throw a SyntaxError.

ES2018 removes these restrictions from tagged templates; instead of throwing an error, it represents invalid escape sequences as undefined:

function fn(string, substitute) {
  console.log(substitute);    // → escape sequences:
  console.log(string[1]);     // → undefined
}

const str = 'escape sequences:';
const result = fn`${str} \ubuntu C:\xxx\uuu`;
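The raw text of each string segment is still available through the strings array’s raw property, so a tag function can process would-be escape sequences itself. A small sketch:

function tag(strings) {
  console.log(strings[0]);        // → undefined
  // The raw property still contains the original, unprocessed text
  console.log(strings.raw[0]);    // → \ubuntu
}

tag`\ubuntu`;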

Keep in mind that using an illegal escape sequence in a regular (untagged) template literal still causes an error:

const result = `\ubuntu`;
// → SyntaxError: Invalid Unicode escape sequence

Support for Template Literal Revision

Chrome   Firefox   Safari   Edge
62       56        11       No

Chrome Android   Firefox Android   iOS Safari   Edge Mobile   Samsung Internet   Android Webview
62               56                11           No            8.2                62

Node.js:

  • 8.3.0 (requires the --harmony runtime flag)
  • 8.10.0 (full support)

Wrapping up

We’ve taken a good look at several key features introduced in ES2018, including asynchronous iteration, rest/spread properties, Promise.prototype.finally(), and additions to the RegExp object. Although some of these features are not yet fully implemented by all browser vendors, they can still be used today thanks to JavaScript transpilers such as Babel.

ECMAScript is rapidly evolving and new features are being introduced every so often, so check out the list of finished proposals for the full scope of what’s new. Are there any new features you’re particularly excited about? Share them in the comments!
