
A Deep Dive into Native Lazy-Loading for Images and Frames

Today’s websites are packed with heavy media assets like images and videos. Images make up around 50% of an average website’s traffic. Many of them, however, are never shown to a user because they’re placed way below the fold.

What’s this thing about images being lazy, you ask? Lazy-loading is something that’s been covered quite a bit here on CSS-Tricks, including a thorough guide with documentation for different approaches using JavaScript. In short, we’re talking about a mechanism that defers the network traffic necessary to load content until it’s needed, triggering the load when the content enters (or nears) the viewport.

The benefit? A smaller initial page that loads faster and saves network requests for items that may not be needed if the user never gets there.

If you read through other lazy-loading guides on this or other sites, you’ll see that we’ve had to resort to different tactics to make lazy-loading work. Well, that’s about to change: lazy-loading will be available natively in HTML as a new loading attribute… at least in Chrome, which will hopefully lead to wider adoption. Chrome has already merged the code for native lazy-loading and is expected to ship it in Chrome 75, which is slated to release June 4, 2019.

Eager cat loaded lazily (but still immediately because it's above the fold)

The pre-native approach

Until now, developers like ourselves have had to use JavaScript (whether it’s a library or something written from scratch) in order to achieve lazy-loading. Most libraries work like this:

  • The initial, server-side HTML response includes an img element without the src attribute so the browser does not load any data. Instead, the image’s URL is set as another attribute in the element’s data set, e.g. data-src.
  • <img data-src="https://tiny.pictures/example1.jpg" alt="...">
  • Then, a lazy-loading library is loaded and executed.
  • <script src="LazyLoadingLibrary.js"></script> <script>LazyLoadingLibrary.run()</script>
  • That keeps track of the user’s scrolling behavior and tells the browser to load the image when it is about to be scrolled into view. It does that by copying the data-src attribute’s value to the previously empty src attribute.
  • <img src="https://tiny.pictures/example1.jpg" data-src="https://tiny.pictures/example1.jpg" alt="...">
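The steps above boil down to some scroll math. Here’s a minimal, simplified sketch of the decision such a library makes (plain objects stand in for DOM elements here, and the 256px preload margin is an arbitrary example; modern libraries typically use IntersectionObserver rather than scroll handlers):

```javascript
// Decide whether an element should start loading, given its distance
// from the viewport. elementTop is the element's offset from the top of
// the document; margin is how far ahead of the viewport we preload.
function shouldLoad(elementTop, scrollY, viewportHeight, margin) {
  return elementTop < scrollY + viewportHeight + margin;
}

// On each scroll event, a library would promote data-src to src for
// every not-yet-loaded element that qualifies. Returns the promoted ones.
function promoteLazyImages(images, scrollY, viewportHeight, margin) {
  return images
    .filter(function (img) {
      return !img.src && shouldLoad(img.top, scrollY, viewportHeight, margin);
    })
    .map(function (img) {
      img.src = img.dataSrc; // copy data-src to src, triggering the load
      return img;
    });
}
```

With a viewport 800px tall and a 256px margin, an image 100px from the top loads immediately, while one 5000px down stays deferred until the user scrolls closer.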

This has worked for a pretty long time now and gets the job done. But it’s not ideal, for several reasons.

The obvious problem with this approach is the length of the critical path for displaying the website. It consists of three steps, which have to be carried out in sequence:

  1. Load the initial HTML response
  2. Load the lazy-loading library
  3. Load the image file

If this technique is used for images above the fold, the website will flicker during loading because it is first painted without the image (after step 1 or 2, depending on whether the script uses defer or async) and only includes the image after it has loaded. The page will also be perceived as loading slowly.

In addition, the lazy-loading library itself puts extra weight on the website’s bandwidth and CPU requirements. And let’s not forget that a JavaScript approach won’t work for people who have JavaScript disabled (although we shouldn’t really care about them in 2019, should we?).

Oh, and what about sites that rely on RSS to distribute content, like CSS-Tricks? The initial image-less render means there are no images in the RSS version of the content, either.

And so on.

Native lazy-loading to the rescue!

Lazy cat loaded lazily

As we noted at the start, Chromium and Google Chrome will ship a native lazy-loading mechanism in the form of a new loading attribute, starting in Chrome 75. We’ll go over the attribute and its values in just a bit, but let’s first get it working in our browsers so we can check it out together.

Enable native lazy-loading

Until Chrome 75 is officially released, we have Chrome Canary and can enable lazy-loading manually by switching two flags.

  1. Open chrome://flags in Chromium or Chrome Canary.
  2. Search for lazy.
  3. Enable both the “Enable lazy image loading” and “Enable lazy frame loading” flags.
  4. Restart the browser with the button in the lower right corner of the screen.
Native lazy-loading flags in Google Chrome

You can check if the feature is properly enabled by opening your JavaScript console (F12). You should see the following warning:

[Intervention] Images loaded lazily and replaced with placeholders. Load events are deferred.

All set? Now we get to dig into the loading attribute.

The loading attribute

Both the img and the iframe elements will accept the loading attribute. It’s important to note that its values will not be taken as a strict order by the browser but rather as a hint to help the browser make its own decision whether or not to load the image or frame lazily.

The attribute can have three values, which are explained below. Next to the images, you’ll find tables listing your individual resource loading timings for this page load. Range response refers to a kind of partial pre-flight request made to determine the image’s dimensions (see How it works for details). If this column is filled, the browser made a successful range request.

Please note the startTime column, which states the time image loading was deferred after the DOM had been parsed. You might have to perform a hard reload (CTRL + Shift + R) to re-trigger range requests.

The auto (or unset) value

<img src="auto-cat.jpg" loading="auto" alt="...">
<img src="auto-cat.jpg" alt="...">
<iframe src="https://css-tricks.com/" loading="auto"></iframe>
<iframe src="https://css-tricks.com/"></iframe>
Auto cat loaded automatically
Auto cat loaded automatically

Setting the loading attribute to auto (or simply leaving the value blank, as in loading="") lets the browser decide whether or not to lazy-load an image. It takes many things into consideration to make that decision, like the platform, whether Data Saver mode is enabled, network conditions, image size, image vs. iframe, the CSS display property, among others. (See How it works for info about why all this is important.)

The eager value

<img src="auto-cat.jpg" loading="eager" alt="...">
<iframe src="https://css-tricks.com/" loading="eager"></iframe>
Eager cat loaded eagerly
Eager cat loaded eagerly

The eager value provides a hint to the browser that an image should be loaded immediately. If loading was already deferred (e.g. because it had been set to lazy and was then changed to eager by JavaScript), the browser should start loading the image immediately.

The lazy value

<img src="auto-cat.jpg" loading="lazy" alt="...">
<iframe src="https://css-tricks.com/" loading="lazy"></iframe>
Lazy cat loaded lazily
Lazy cat loaded lazily

The lazy value provides a hint to the browser that an image should be lazy-loaded. It’s up to the browser to interpret what exactly this means, but the explainer document states that it should start loading when the user scrolls “near” the image such that it is probably loaded once it actually comes into view.

How the loading attribute works

In contrast to JavaScript lazy-loading libraries, native lazy-loading uses a kind of pre-flight request to get the first 2048 bytes of the image file. Using these, the browser tries to determine the image’s dimensions in order to insert an invisible placeholder for the full image and prevent content from jumping during loading.

The image’s load event is fired as soon as the full image is loaded, be it after the first request (for images smaller than 2 KB) or after the second one. Please note that the load event may never be fired for certain images because the second request is never made.

Note that under this proposal, browsers will make up to twice as many image requests as before: first the range request, then the full request. Make sure your servers support the HTTP Range request header (e.g. Range: bytes=0-2047) and respond with status code 206 (Partial Content) to prevent them from delivering the full image twice.
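A quick way to sanity-check that support on the server side is a tiny parser for the incoming header. Note that the header’s on-the-wire form is Range: bytes=0-2047; this sketch handles only that simple single-range case:

```javascript
// Parse a simple single-range "Range" header value like "bytes=0-2047"
// into start/end byte offsets. Returns null for anything it can't handle
// (multi-range or suffix forms are out of scope for this sketch).
function parseByteRange(headerValue) {
  const match = /^bytes=(\d+)-(\d+)$/.exec(headerValue || '');
  if (!match) return null;
  const start = Number(match[1]);
  const end = Number(match[2]);
  if (end < start) return null;
  return { start: start, end: end, length: end - start + 1 };
}
```

A server that recognizes the header would then slice the file to `length` bytes and answer with status 206; anything the parser rejects should fall back to a normal 200 response with the full body.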

Due to the higher number of subsequent requests made by the same user, web server support for the HTTP/2 protocol will become more important.

Let’s talk about deferred content. Chrome’s rendering engine Blink uses heuristics to determine which content should be deferred and for how long to defer it. You can find a comprehensive list of requirements in Scott Little’s design documentation. This is a short breakdown of what will be deferred:

  • Images and frames on all platforms which have loading="lazy" set
  • Images on Chrome for Android with Data Saver enabled that satisfy all of the following:
    • loading="auto" or unset
    • width and height attributes (if present) not smaller than 10px
    • not created programmatically in JavaScript
  • Frames that satisfy all of the following:
    • loading="auto" or unset
    • from a third party (a different domain or protocol than the embedding page)
    • larger than 4 pixels in height and width (to prevent deferring tiny tracking frames)
    • not marked as display: none or visibility: hidden (again, to prevent deferring tracking frames)
    • not positioned off-screen using negative x or y coordinates
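As a rough mental model, the frame heuristics above can be expressed as a predicate. This is a sketch only; Blink’s actual implementation lives in C++ and checks more conditions than this:

```javascript
// Rough sketch of the frame-deferral heuristics listed above.
// frame is a plain object describing the iframe; page describes the
// embedding document.
function isDeferrableFrame(frame, page) {
  const loadingOk = frame.loading === 'auto' || frame.loading === undefined;
  const thirdParty = frame.origin !== page.origin; // different domain/protocol
  const bigEnough = frame.width > 4 && frame.height > 4; // not a tracking pixel
  const visible = frame.display !== 'none' && frame.visibility !== 'hidden';
  const onScreen = frame.x >= 0 && frame.y >= 0; // not shoved off-screen
  return loadingOk && thirdParty && bigEnough && visible && onScreen;
}
```

A hidden 1×1 third-party frame fails the size and visibility checks (so it loads eagerly and tracking still works), while a visible 560×315 third-party embed below the fold qualifies for deferral.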

Responsive images with srcset

Native lazy-loading also works with responsive img elements using the srcset attribute. This attribute offers a list of image file candidates to the browser. Based on the user’s screen size, display pixel ratio, network conditions, etc., the browser will choose the optimal image candidate for the occasion. Image optimization CDNs like tiny.pictures are able to provide all image candidates in real-time without any back end development necessary.

<img
  src="https://demo.tiny.pictures/native-lazy-loading/lazy-cat.jpg"
  srcset="https://demo.tiny.pictures/native-lazy-loading/lazy-cat.jpg?width=400 400w, https://demo.tiny.pictures/native-lazy-loading/lazy-cat.jpg?width=800 800w"
  loading="lazy"
  alt="...">

Browser support

At the time of this writing, no browser supports native lazy-loading by default. However, Chrome will enable the feature, as we’ve covered, starting in Chrome 75. No other browser vendor has announced support so far. (Edge is a kind of exception because it will soon make the switch to Chromium.)

You can detect the feature with a few lines of JavaScript:

if ("loading" in HTMLImageElement.prototype) {
  // Support.
} else {
  // No support. You might want to dynamically import
  // a lazy-loading library here (see below).
}

See the Pen
Native lazy-loading browser support
by Erk Struwe (@erkstruwe)
on CodePen.

Automatic fallback to JavaScript solution with low-quality image placeholder

One very cool feature of most JavaScript-based lazy-loading libraries is the low-quality image placeholder (LQIP). Basically, it leverages the idea that browsers load (or perhaps I should say used to load) the src of an img element immediately, even if it gets later replaced by another URL. This way, it’s possible to load a tiny file size, low-quality image file on page load and later replace it with a full-sized version.

We can now use this to mimic the native lazy-loading’s 2 KB range requests in browsers that do not support the feature, achieving the same result: a placeholder with the actual image dimensions and a tiny file size.
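One way to produce such a placeholder is to request a tiny, low-quality rendition of the same image from an image CDN. The width and quality query parameters below are hypothetical examples; real CDNs use their own parameter names, so check your provider’s documentation:

```javascript
// Build a low-quality image placeholder (LQIP) URL from a full image URL.
// The "width" and "quality" query parameters are hypothetical examples of
// the resizing parameters an image CDN might accept.
function lqipUrl(src, width, quality) {
  const joiner = src.indexOf('?') === -1 ? '?' : '&';
  return src + joiner + 'width=' + width + '&quality=' + quality;
}
```

The placeholder URL goes into src on initial render, while the full URL waits in data-src for the fallback library to swap in once the image nears the viewport.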

See the Pen
Native lazy-loading with JavaScript library fallback and low-quality image placeholder
by Erk Struwe (@erkstruwe)
on CodePen.

Conclusion

I’m really excited about this feature. And frankly, I’m still wondering why it hasn’t gotten more attention until now, given that its release is imminent and the impact on global internet traffic will be remarkable, even if only small parts of the heuristics change.

Think about it: After a gradual roll-out for the different Chrome platforms and with auto being the default setting, the world’s most popular browser will soon lazy-load below-the-fold images and frames by default. Not only will the traffic of many poorly optimized websites drop significantly, but web servers will be hammered with tiny requests for image dimension detection.

And then there’s tracking: Assuming many unsuspecting tracking pixels and frames will be prevented from being loaded, the analytics and affiliate industry will have to act. We can only hope they don’t panic and add loading="eager" to every single image, rendering this great feature useless for their users. They should rather change their code to be recognized as tracking pixels by the heuristics described above.

Web developers, analytics and operations managers should check their website’s behavior with this feature and their servers’ support for Range requests and HTTP/2 immediately.

Image optimization CDNs could help out in case there are any issues to be expected or if you’d like to take image delivery optimization to the max (including automatic WebP support, low-quality image placeholders, and much more). Read more about tiny.pictures!


The post A Deep Dive into Native Lazy-Loading for Images and Frames appeared first on CSS-Tricks.


Easily Turn Your Photos into Vectors with Photo Vectorizer

(This is a sponsored post.)

Photo Vectorizer is a simple-to-use Photoshop action that can convert any photo into a vector. With just a few clicks of your mouse, you can save tons of time and frustration by turning your photos into vectors. With super sharp results, these vectors are great for any project either online or in print.

Highlights:

  • Turn your photos into professional looking vectors.
  • Easy to use — just a few clicks.
  • Save tons of time.
  • Sharp, clean results.
  • Great for backgrounds, spot illustrations and presentations as well as printed projects like T-shirts, mugs, totes and more.

Visit Site

Direct Link to ArticlePermalink

The post Easily Turn Your Photos into Vectors with Photo Vectorizer appeared first on CSS-Tricks.


The Many Ways of Getting Data Into Charts

Data is available everywhere nowadays, whether it’s in a plain text file, a REST API, an online Google sheet… you name it! It’s that variety of context that makes building graphs more than simply having a database in your local project — where there is data, there is a way.

That’s pretty much what we’re going to look at in this post: making JavaScript data visualizations using data from a variety of sources.

If you want to follow along and make any of the visualizations we’re covering here, we’ll be using Chart.js so grab that and include it in your own environment:

<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.7.2/Chart.js"></script>

Source 1: HTML data table

Lots and lots of websites have data tables, and why not? They are a great way to show data and that’s what they were made to do. But, if you could have a visualization powered by the data in it — and for not much more effort — wouldn’t that be even better?

With a little JavaScript, the data in an HTML table can be isolated from the HTML and prepped for a chart. Look at the following data table:

Year    Items Sold    Turnover ($)    Profit ($)
2016    10            200             89
2017    25            550             225
2018    55            1200            600
2019    120           2450            1100

It contains sales data for a business. Now, if we could graph this out, it would be both visually compelling and help users glean insights. So let’s do it!

First of all, let’s define a function for our chart. This comes straight out of the Chart.js library, so it’s worth cross-referencing this with the documentation if something seems unclear. I’ve added some comments to call out key parts.

function BuildChart(labels, values, chartTitle) {
  var ctx = document.getElementById("myChart").getContext('2d');
  var myChart = new Chart(ctx, {
    type: 'bar',
    data: {
      labels: labels, // Our labels
      datasets: [{
        label: chartTitle, // Name the series
        data: values, // Our values
        backgroundColor: [ // Specify custom colors
          'rgba(255, 99, 132, 0.2)',
          'rgba(54, 162, 235, 0.2)',
          'rgba(255, 206, 86, 0.2)',
          'rgba(75, 192, 192, 0.2)',
          'rgba(153, 102, 255, 0.2)',
          'rgba(255, 159, 64, 0.2)'
        ],
        borderColor: [ // Add custom color borders
          'rgba(255, 99, 132, 1)',
          'rgba(54, 162, 235, 1)',
          'rgba(255, 206, 86, 1)',
          'rgba(75, 192, 192, 1)',
          'rgba(153, 102, 255, 1)',
          'rgba(255, 159, 64, 1)'
        ],
        borderWidth: 1 // Specify bar border width
      }]
    },
    options: {
      responsive: true, // Instruct Chart.js to respond nicely.
      maintainAspectRatio: false // Add to prevent default behavior of full-width/height
    }
  });
  return myChart;
}

Now that we have a function to create a chart for the table data, let’s write out the HTML for the table and add a canvas element that Chart.js can use to inject its charting magic. This is exactly the same data from the table example above.

<table class="table" id="dataTable">
  <thead>
    <th>Year</th>
    <th>Items Sold</th>
    <th>Turnover ($)</th>
    <th>Profit ($)</th>
  </thead>
  <tbody>
    <tr>
      <td>2016</td>
      <td>10</td>
      <td>200</td>
      <td>89</td>
    </tr>
    <tr>
      <td>2017</td>
      <td>25</td>
      <td>550</td>
      <td>225</td>
    </tr>
    <tr>
      <td>2018</td>
      <td>55</td>
      <td>1200</td>
      <td>600</td>
    </tr>
    <tr>
      <td>2019</td>
      <td>120</td>
      <td>2450</td>
      <td>1100</td>
    </tr>
  </tbody>
</table>

<div class="chart">
  <canvas id="myChart"></canvas>
</div>

Then, we need to parse the table into JSON with vanilla JavaScript. This is what Chart.js will use to populate the charts and graphs.

var table = document.getElementById('dataTable');
var json = [];

// First row needs to be headers
var headers = [];
for (var i = 0; i < table.rows[0].cells.length; i++) {
  headers[i] = table.rows[0].cells[i].innerHTML.toLowerCase().replace(/ /gi, '');
}

// Go through cells
for (var i = 1; i < table.rows.length; i++) {
  var tableRow = table.rows[i];
  var rowData = {};
  for (var j = 0; j < tableRow.cells.length; j++) {
    rowData[headers[j]] = tableRow.cells[j].innerHTML;
  }
  json.push(rowData);
}

console.log(json);

We threw in that last line so we can check the output in the DevTools console. Here’s what is logged into the console:

Perfect! Our table headers become variables and are mapped to the content in the table cells.

All that’s left to do is map the year labels and items sold values to arrays (which Chart.js requires for its data object) and then pass the data into the chart function.

This script maps the JSON values to label and value arrays. We can add it directly after the previous snippet.

// Map JSON values back to labels array
var labels = json.map(function (e) {
  return e.year;
});
console.log(labels); // ["2016", "2017", "2018", "2019"]

// Map JSON values back to values array
var values = json.map(function (e) {
  return e.itemssold;
});
console.log(values); // ["10", "25", "55", "120"]

Call the BuildChart function from our first snippet (including a name for the chart) and we get a beautiful visualization of the data from the HTML table.

var chart = BuildChart(labels, values, "Items Sold Over Time");

And that is it! The chart’s data has now been isolated, passed into the JavaScript visualization, and rendered.

See the Pen
BarChart Loaded From HTML Table
by Danny Englishby (@DanEnglishby)
on CodePen.

Source 2: A real-time API

There are many, many public APIs available in the world, many of which are rich with neat and useful data. Let’s use one of them to demonstrate how real-time data can be used to populate a chart. I’m going to use the Forbes 400 Rich List API to graph out the top 10 richest people in the world. I know, cool, eh?! I always find wealth rankings interesting, and there is so much more data this API provides. But for now, we will request data to display a chart with names and real time net worth using pure JavaScript!

First off, we want to define a chart function again, this time with a few more options.

function BuildChart(labels, values, chartTitle) {
  var data = {
    labels: labels,
    datasets: [{
      label: chartTitle, // Name the series
      data: values,
      backgroundColor: [
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
      ],
    }],
  };
  var ctx = document.getElementById("myChart").getContext('2d');
  var myChart = new Chart(ctx, {
    type: 'horizontalBar',
    data: data,
    options: {
      responsive: true, // Instruct Chart.js to respond nicely.
      maintainAspectRatio: false, // Add to prevent default behavior of full-width/height
      scales: {
        xAxes: [{
          scaleLabel: {
            display: true,
            labelString: '$ Billion'
          }
        }],
        yAxes: [{
          scaleLabel: {
            display: true,
            labelString: 'Name'
          }
        }]
      },
    }
  });
  return myChart;
}

Next, we’ll need that same HTML canvas element for where the chart will be rendered:

<div class="chart" style="position: relative; height:80vh; width:100vw">
  <canvas id="myChart"></canvas>
</div>

Now to grab some real time data. Let’s see what is returned in the console from the Forbes API call:

As you can see from the JSON returned, there is a lot of rich information that can be injected into a chart. So, let’s make our rankings!

With some light JavaScript, we can request the data from the API, pick out and map the values we want to array variables, and finally pass our data in and render the chart.

var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function () {
  if (this.readyState == 4 && this.status == 200) {
    var json = JSON.parse(this.response);
    // Map JSON labels back to labels array
    var labels = json.map(function (e) {
      return e.name;
    });
    // Map JSON values back to values array
    var values = json.map(function (e) {
      return (e.realTimeWorth / 1000); // Divide to get billions
    });
    BuildChart(labels, values, "Real Time Net Worth"); // Pass in data and call the chart
  }
};
xhttp.open("GET", "https://forbes400.herokuapp.com/api/forbes400?limit=10", false);
xhttp.send();

See the Pen
Chart Loaded From RichListAPI
by Danny Englishby (@DanEnglishby)
on CodePen.

Source 3: A Google Sheet

So far, we’ve looked at data in standard HTML tables and APIs to populate charts. But what if we already have an existing Google Sheet that has a bunch of data in it? Well, we can also use that to make a chart!

First, there are some rules for accessing a Google Sheet:

  • The Google Sheet must be published
  • The Google Sheet must be public (i.e. not set to private viewing)

As long as these two points are true, we can access a Google Sheet in the form of JSON, which means, of course, we can graph it out!

Here’s a Google Sheet with some made up data that I’ve published publicly. It consists of three fields of data: MachineId, Date, and ProductsProduced.

Now, if we look at the sheet’s URL, we’ll want to note everything after the last slash:

https://docs.google.com/spreadsheets/d/1ySHjH6IMN0aKImYcuVHozQ_TvS6Npt4mDxpKDtsFVFg

This is the sheet identity and what we need for the GET request we send to Google. To make a GET request for JSON, we insert that string into another URL:

https://spreadsheets.google.com/feeds/list/[ID GOES HERE]/od6/public/full?alt=json

That gives us the following URL:

https://spreadsheets.google.com/feeds/list/1ySHjH6IMN0aKImYcuVHozQ_TvS6Npt4mDxpKDtsFVFg/od6/public/full?alt=json
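Building that URL from a sheet ID is easy to wrap in a helper. This follows the legacy list-feed pattern shown above, which Google has been phasing out in favor of the Sheets API, so treat it as a sketch of the URL construction rather than a future-proof integration:

```javascript
// Build the public JSON feed URL for a published Google Sheet,
// following the legacy list-feed URL pattern described above.
function sheetFeedUrl(sheetId) {
  return 'https://spreadsheets.google.com/feeds/list/' +
    encodeURIComponent(sheetId) +
    '/od6/public/full?alt=json';
}
```

Calling it with the ID noted earlier reproduces the request URL exactly.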

And here’s the response we get in the console:

The important piece is the feed.entry object array. This holds vital sheet data, which looks like this when we drill into it:

Notice the items underlined in red. The Google Sheets API has preceded each of the column names with gsx$ (e.g. gsx$date). These are exactly how we will dissect the data from the object, using these uniquely generated names. So, knowing this, it’s time to insert the data into some isolated arrays and pass them into our chart function.

function BuildChart(labels, values, chartTitle) {
  var data = {
    labels: labels,
    datasets: [{
      label: chartTitle, // Name the series
      data: values,
      backgroundColor: [
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
        'rgb(54, 162, 235)',
      ],
    }],
  };

  var ctx = document.getElementById("myChart").getContext('2d');
  var myChart = new Chart(ctx, {
    type: 'bar',
    data: data,
    options: {
      responsive: true, // Instruct Chart.js to respond nicely.
      maintainAspectRatio: false, // Add to prevent default behavior of full-width/height
      scales: {
        xAxes: [{
          scaleLabel: {
            display: true,
            labelString: 'Date'
          }
        }],
        yAxes: [{
          scaleLabel: {
            display: true,
            labelString: 'Produced Count'
          }
        }]
      },
    }
  });

  return myChart;
}

And, you guessed it, next is the Canvas element:

<div class="chart" style="position: relative; height:80vh; width:100vw">
  <canvas id="myChart"></canvas>
</div>

…then the GET request, mapping our labels and values arrays, and finally calling the chart function, passing in the data.

var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function () {
  if (this.readyState == 4 && this.status == 200) {
    var json = JSON.parse(this.response);
    console.log(json);

    // Map JSON labels back to labels array
    var labels = json.feed.entry.map(function (e) {
      return e.gsx$date.$t;
    });

    // Map JSON values back to values array
    var values = json.feed.entry.map(function (e) {
      return e.gsx$productsproduced.$t;
    });

    BuildChart(labels, values, "Production Data");
  }
};
xhttp.open("GET", "https://spreadsheets.google.com/feeds/list/1ySHjH6IMN0aKImYcuVHozQ_TvS6Npt4mDxpKDtsFVFg/od6/public/full?alt=json", false);
xhttp.send();

And, boom!

See the Pen
Chart Loaded From a Google Sheet
by Danny Englishby (@DanEnglishby)
on CodePen.

What will you make with data?

You probably get the point that there is a variety of ways we can get data to populate beautiful looking charts and graphs. As long as we have some formatted numbers and a data visualization library, then we have a lot of power at our fingertips.

Hopefully now you’re thinking of where you might have data and how to plug it into a chart! We didn’t even cover all of the possibilities here. Here are a few more resources you can use now that you’ve got chart-making chops.

A few more charting libraries

A few more places to store data

The post The Many Ways of Getting Data Into Charts appeared first on CSS-Tricks.


How to Get a Progressive Web App into the Google Play Store

PWAs (Progressive Web Apps) have been with us for some time now. Yet, each time I try explaining them to clients, the same question pops up: “Will my users be able to install the app using app stores?” The answer has traditionally been no, but this changed with Chrome 72, which shipped a new feature called TWA (Trusted Web Activities).

Trusted Web Activities are a new way to integrate your web app content, such as your PWA, with your Android app using a protocol based on Custom Tabs.

In this article, I will use Netguru’s existing PWA (Wordguru) and explain step-by-step what needs to be done to make the application available and ready to be installed straight from the Google Play app store.

Some of the things we cover here may sound silly to any Android Developers out there, but this article is written from the perspective of a front-end developer, particularly one who has never used Android Studio or created an Android Application. Also, please do note that a lot of what we’re covering here is still extremely experimental since it’s limited to Chrome 72.

Step 1: Set up a Trusted Web Activity

Setting up a TWA doesn’t require you to write any Java code, but you will need to have Android Studio. If you’ve developed iOS or Mac software before, this is a lot like Xcode in that it provides a nice development environment designed to streamline Android development. So, grab that and meet me back here.

Create a new TWA project in Android Studio

Did you get Android Studio? Well, I can’t actually hear or see you, so I’ll assume you did. Go ahead and crack it open, then click on “Start a new Android Studio project.” From there, let’s choose the “Add No Activity” option, which allows us to configure the project.

The configuration is fairly straightforward, but it’s always good to know what is what:

  • Name: The name of the application (but I bet you knew that).
  • Package name: An identifier for Android applications on the Play Store. It must be unique, so I suggest using the URL of the PWA in reverse order (e.g. com.netguru.wordguru).
  • Save location: Where the project will exist locally.
  • Language: This allows us to select a specific code language, but there’s no need for that since our app is already, you know, written. We can leave this at Java, which is the default selection.
  • Minimum API level: This is the version of the Android API we’re working with and is required by the support library (which we’ll cover next). Let’s use API 19.

There are a few checkboxes below these options. They’re irrelevant for us here, so leave them all unchecked, then move on to Finish.
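The reverse-order package name suggested above can be derived mechanically from the PWA’s host name. This is just a convenience sketch; make sure the result also satisfies the Play Store’s package name rules (each segment may only contain letters, digits, and underscores, and must start with a letter):

```javascript
// Turn a host name like "wordguru.netguru.com" into a reverse-domain
// package name like "com.netguru.wordguru".
function toPackageName(hostName) {
  return hostName.toLowerCase().split('.').reverse().join('.');
}
```

For Wordguru, toPackageName('wordguru.netguru.com') yields the com.netguru.wordguru identifier used throughout the rest of this setup.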

Add TWA Support Library

A support library is required for TWAs. The good news is that we only need to modify two files to fill that requirement, and they both live in the same project directory: Gradle Scripts. Both are named build.gradle, but we can distinguish which is which by the description in parentheses.

There’s a Git package manager called JitPack that’s made specifically for Android apps. It’s pretty robust, but the bottom line is that it makes publishing our web app a breeze. It is a paid service, but I’d say it’s worth the cost if this is your first time getting something into the Google Play store.

Editor Note: This isn’t a sponsored plug for JitPack. It’s worth calling out because this post is assuming little-to-no familiarity with Android Apps or submitting apps to Google Play and it has less friction for managing an Android App repo that connects directly to the store. That said, it’s totally not a requirement.

Once you’re in JitPack, let’s connect our project to it. Open up that build.gradle (Project: Wordguru) file we just looked at and tell it to look at JitPack for the app repository:

allprojects {
  repositories {
    ...
    maven { url 'https://jitpack.io' }
    ...
  }
}

OK, now let’s open up that other build.gradle file. This is where we can add any required dependencies for the project and we do indeed have one:

// build.gradle (Module: app)

dependencies {
  ...
  implementation 'com.github.GoogleChrome:custom-tabs-client:a0f7418972'
  ...
}

The TWA library uses Java 8 features, so we’re going to need to enable Java 8. To do that, we add compileOptions to the same file:

// build.gradle (Module: app)

android {
  ...
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
  ...
}

There are also variables called manifestPlaceholders that we’ll cover in the next section. For now, let’s add the following to define where the app is hosted, the default URL and the app name:

// build.gradle (Module: app)

android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      hostName: "wordguru.netguru.com",
      defaultUrl: "https://wordguru.netguru.com",
      launcherName: "Wordguru"
    ]
    ...
  }
  ...
}

Provide app details in the Android App Manifest

Every Android app has an Android App Manifest (AndroidManifest.xml) which provides essential details about the app, like the operating system it’s tied to, package information, device compatibility, and many other things that help Google Play display the app’s requirements.

The thing we’re really concerned with here is Activity (<activity>). This is what implements the user interface and is required for the “Activities” in “Trusted Web Activities.”

Funny enough, we selected the “Add No Activity” option when setting up our project in Android Studio and that’s because our manifest is empty and contains only the application tag.

Let’s start by opening up the manifest file. We’ll replace the existing package name with our own application ID and the label with the value from the manifestPlaceholders variables we defined in the previous section.

Then, we’re going to actually add the TWA activity by adding an <activity> tag inside the <application> tag.

<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="com.netguru.wordguru">

  <application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="${launcherName}"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">

    <activity
      android:name="android.support.customtabs.trusted.LauncherActivity"
      android:label="${launcherName}">

      <meta-data
        android:name="android.support.customtabs.trusted.DEFAULT_URL"
        android:value="${defaultUrl}" />

      <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
      </intent-filter>

      <intent-filter android:autoVerify="true">
        <action android:name="android.intent.action.VIEW"/>
        <category android:name="android.intent.category.DEFAULT" />
        <category android:name="android.intent.category.BROWSABLE"/>
        <data
          android:scheme="https"
          android:host="${hostName}"/>
      </intent-filter>
    </activity>
  </application>
</manifest>

And that, my friends, is Step 1. Let’s move on to Step 2.

Step 2: Verify the relationship between the website and the app

TWAs require a connection between the Android application and the PWA. To do that, we use Digital Asset Links.

The connection must be set on both ends, where TWA is the application and PWA is the website.

To establish that connection we need to modify our manifestPlaceholders again. This time, we need to add an extra element called assetStatements that keeps the information about our PWA.

// build.gradle (Module: app)

android {
  ...
  defaultConfig {
    ...
    manifestPlaceholders = [
      ...
      assetStatements: '[{ "relation": ["delegate_permission/common.handle_all_urls"], ' +
        '"target": {"namespace": "web", "site": "https://wordguru.netguru.com"}}]'
      ...
    ]
    ...
  }
  ...
}

Now, we need to add a new meta-data tag to our application tag. This will inform the Android application that we want to establish the connection with the application specified in the manifestPlaceholders.

<manifest
  xmlns:android="http://schemas.android.com/apk/res/android"
  package="${packageId}">

  <application>
    ...
    <meta-data
      android:name="asset_statements"
      android:value="${assetStatements}" />
    ...
  </application>
</manifest>

That’s it! We’ve just established the application-to-website relationship. Now let’s wire up the website-to-application side.

To establish the connection in the opposite direction, we need to create a .json file that will be served from the PWA at the /.well-known/assetlinks.json path. The file can be created using a generator that’s built into Android Studio. See, I told you Android Studio helps streamline Android development!

We need three values to generate the file:

  • Hosting site domain: This is our PWA URL (e.g. https://wordguru.netguru.com/).
  • App package name: This is our TWA package name (e.g. com.netguru.wordguru).
  • App package fingerprint (SHA256): This is a unique cryptographic hash that is generated based on Google Play Store keystore.

We already have the first and second values. We can get the last one using Android Studio.

First, we need to generate a signed APK. In Android Studio, go to: Build → Generate Signed Bundle or APK → APK.

Next, use the existing keystore, if you already have one. If you need one, go to “Create new…” first.

Then let’s fill out the form. Be sure to remember the credentials as those are what the application will be signed with and they confirm your ownership of the application.

This will create a keystore file that is required to generate the app package fingerprint (SHA256). This file is extremely important, as it serves as proof that you are the owner of the application. If this file is lost, you will not be able to publish any further updates to your application in the store.

Next up, let’s select the type of bundle. In this case, we’re choosing “release” because it gives us a production bundle. We also need to check the signature versions.

This will generate the APK that will be used later to create a release in the Google Play store. After creating our keystore, we can use it to generate the required app package fingerprint (the SHA256).

Let’s head back to Android Studio, and go to Tools → App Links Assistant. This will open a sidebar that shows the steps that are required to create a relationship between the application and website. We want to go to Step 3, “Declare Website Association” and fill in required data: Site domain and Application ID. Then, select the keystore file generated in the previous step.

After filling out the form, press “Generate Digital Asset Links file,” which will generate our assetlinks.json file. If we open that up, it should look something like this:

[{
  "relation": ["delegate_permission/common.handle_all_urls"],
  "target": {
    "namespace": "android_app",
    "package_name": "com.netguru.wordguru",
    "sha256_cert_fingerprints": ["8A:F4:....:29:28"]
  }
}]

This is the file we need to make available at our PWA’s /.well-known/assetlinks.json path. I won’t describe how to serve it at that path, as it is too project-specific and outside the scope of this article.
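If you ever need to regenerate or template this file outside of Android Studio, its structure is simple enough to build by hand. Here’s a JavaScript sketch; the package name and fingerprint passed in below are placeholders, not values from this project:

```javascript
// Sketch: building the assetlinks.json content by hand.
// The arguments are placeholders; substitute your own app's
// package name and SHA256 certificate fingerprint.
function buildAssetLinks(packageName, sha256Fingerprint) {
  return JSON.stringify(
    [
      {
        relation: ['delegate_permission/common.handle_all_urls'],
        target: {
          namespace: 'android_app',
          package_name: packageName,
          sha256_cert_fingerprints: [sha256Fingerprint]
        }
      }
    ],
    null,
    2
  )
}

console.log(buildAssetLinks('com.example.myapp', 'AA:BB:CC:DD'))
```

Serving the returned string at /.well-known/assetlinks.json with a Content-Type of application/json is what the verification step looks for.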

We can test the relationship by clicking on the “Link and Verify” button. If all goes well, we get a confirmation with “Success!”

Yay! We’ve established a two-way relationship between our Android application and our PWA. It’s all downhill from here, so let’s drive it home.

Step 3: Get required assets

Google Play requires a few assets to make sure the app is presented nicely in the store. Specifically, here’s what we need:

  • App Icons: We need a variety of sizes, including 48×48, 72×72, 96×96, 144×144, 192×192… or we can use an adaptive icon.
  • High-res Icon: This is a 512×512 PNG image that is used throughout the store.
  • Feature Graphic: This is a 1024×500 JPG or 24-bit PNG (no alpha) banner that Google Play uses on the app details view.
  • Screenshots: Google Play will use these to show off different views of the app that users can check out prior to downloading it.

With all of those in hand, we can proceed to the Google Play developer console and publish the application!

Step 4: Publish to Google Play!

Let’s go to the last step and finally push our app to the store.

Using the APK that we generated earlier (which is located in the AndroidStudioProjects directory), we need to go to the Google Play console to publish our application. I will not describe the process of publishing an application in the store as the wizard makes it pretty straightforward and we are provided step-by-step guidance throughout the process.

It may take a few hours for the application to be reviewed and approved, but when it is, it will finally appear in the store.

If you can’t find the APK, you can create a new one by going to Build → Generate signed bundle / APK → Build APK, providing our existing keystore file and the alias and password that we used when we generated the keystore. After the APK is generated, a notice should appear and you can get to the file by clicking the “Locate” link.

Congrats, your app is in Google Play!

That’s it! We just pushed our PWA to the Google Play store. The process is not as intuitive as we would like it to be, but with a bit of effort it is definitely doable, and believe me, it gives you a great feeling at the end when you see your app displayed in the wild.

It is worth pointing out that this feature is still in a very early phase and I would consider it experimental for some time. I would not recommend a production release of your application for now, because it only works with Chrome 72 and above. Any earlier version will still be able to install the app, but the app itself will crash instantly, which is not the best user experience.

Also, the official release of custom-tabs-client does not support TWA yet. If you were wondering why we used a raw GitHub commit instead of an official library release, well, that’s why.

The post How to Get a Progressive Web App into the Google Play Store appeared first on CSS-Tricks.


People Digging into Grid Sizing and Layout Possibilities

Jen Simmons has been coining the term intrinsic design, referring to a new era in web layout where the sizing of content has gone beyond fluid columns and media query breakpoints and into, I dunno, something a bit more exotic. For example, columns that are sized more by content and guidelines than percentages. And not always columns, but more like appropriate placement, however that needs to be done.

One thing is for sure, people are playing with the possibilities a lot right now. In the span of 10 days I’ve gathered these links:

The post People Digging into Grid Sizing and Layout Possibilities appeared first on CSS-Tricks.


Getting into GraphQL with AWS AppSync

GraphQL is becoming increasingly popular. The problem is that if you are a front-end developer, you are only half of the way there. GraphQL is not just a client technology. The server also has to be implemented according to the specification. This means that in order to implement GraphQL into your application, you need to learn not only GraphQL on the front end, but also GraphQL best practices, server-side development, and everything that goes along with it on the back end.

There will come a time when you will also have to deal with issues like scaling your server, complex authorization scenarios, malicious queries, and other issues that require more expertise and even deeper knowledge around what is traditionally categorized as back-end development.

Thankfully, we have an array of managed back-end service providers today that allow front-end developers to only worry about implementing features on the front end without having to deal with all of the traditional back-end work.

Services like Firebase (API) / AWS AppSync (database), Cloudinary (media), Algolia (search) and Auth0 (authentication) allow us to offload our complex infrastructure to a third-party provider and instead focus on delivering value to end users in the form of new features instead.

In this tutorial, we’ll learn how to take advantage of AWS AppSync, a managed GraphQL service, to build a full-stack application without writing a single line of back-end code.

While the framework we’re working in is React, the concepts and API calls we will be using are framework-agnostic and will work the same in Angular, Vue, React Native, Ionic or any other JavaScript framework or application.

We will be building a restaurant review app. In this app, we will be able to create a restaurant, view restaurants, create a review for a restaurant, and view reviews for a restaurant.

An image showing four different screenshots of the restaurant app in mobile view.

The tools and frameworks that we will be using are React, AWS Amplify, and AWS AppSync.

AWS Amplify is a framework that allows us to create and connect to cloud services, like authentication, GraphQL APIs, and Lambda functions, among other things. AWS AppSync is a managed GraphQL service.

We’ll use Amplify to create and connect to an AppSync API, then write the client side React code to interact with the API.

View Repo

Getting started

The first thing we’ll do is create a React project and move into the new directory:

npx create-react-app ReactRestaurants

cd ReactRestaurants

Next, we’ll install the dependencies we’ll be using for this project. AWS Amplify is the JavaScript library we’ll be using to connect to the API, and we’ll use Glamor for styling.

yarn add aws-amplify glamor

The next thing we need to do is install and configure the Amplify CLI:

npm install -g @aws-amplify/cli

amplify configure

Amplify’s configure will walk you through the steps needed to begin creating AWS services in your account. For a walkthrough of how to do this, check out this video.

Now that the app has been created and Amplify is ready to go, we can initialize a new Amplify project.

amplify init

Amplify init will walk you through the steps to initialize a new Amplify project. It will prompt you for your desired project name, environment name, and text editor of choice. The CLI will auto-detect your React environment and select smart defaults for the rest of the options.

Creating the GraphQL API

Once we’ve initialized a new Amplify project, we can add the Restaurant Review GraphQL API. To add a new service, we run the amplify add command.

amplify add api

This will walk us through the following steps to help us set up the API:

? Please select from one of the below mentioned services GraphQL
? Provide API name bigeats
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? N
? Do you want a guided schema creation? Y
? What best describes your project: Single object with fields
? Do you want to edit the schema now? Y

The CLI should now open a basic schema in the text editor. This is going to be the schema for our GraphQL API.

Paste the following schema and save it.

# amplify/backend/api/bigeats/schema.graphql

type Restaurant @model {
  id: ID!
  city: String!
  name: String!
  numRatings: Int
  photo: String!
  reviews: [Review] @connection(name: "RestaurantReview")
}

type Review @model {
  rating: Int!
  text: String!
  createdAt: String
  restaurant: Restaurant! @connection(name: "RestaurantReview")
}

In this schema, we’re creating two main types: Restaurant and Review. Notice that we have @model and @connection directives in our schema.

These directives are part of the GraphQL Transform tool built into the Amplify CLI. GraphQL Transform will take a base schema decorated with directives and transform our code into a fully functional API that implements the base data model.

If we were spinning up our own GraphQL API, then we’d have to do all of this manually:

  1. Define the schema
  2. Define the operations against the schema (queries, mutations, and subscriptions)
  3. Create the data sources
  4. Write resolvers that map between the schema operations and the data sources.

With the @model directive, the GraphQL Transform tool will scaffold out all schema operations, resolvers, and data sources so all we have to do is define the base schema (step 1). The @connection directive will let us model relationships between the models and scaffold out the appropriate resolvers for the relationships.

In our schema, we use @connection to define a relationship between Restaurant and Review. In the final generated schema, this creates a field on the review that holds the ID of the restaurant it belongs to.
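To make that concrete, here’s a small JavaScript sketch of the input object a client would send when creating a review. The reviewRestaurantId field is the generated linking field (it shows up again later in the CreateReview component); the helper function itself is just for illustration:

```javascript
// Illustrative helper: the shape of a createReview input once
// @connection has added the linking field to the generated schema.
function buildReviewInput(restaurantId, text, rating) {
  return {
    text,
    rating,
    // generated field tying the review back to its restaurant
    reviewRestaurantId: restaurantId
  }
}

const input = buildReviewInput('abc-123', 'Great tacos', 5)
```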

Now that we’ve created our base schema, we can create the API in our account.

amplify push

? Are you sure you want to continue? Yes
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes

Because we’re creating a GraphQL application, we would typically need to write all of our local GraphQL queries, mutations and subscriptions from scratch. Instead, the CLI inspects our GraphQL schema, generates all of the definitions for us, and saves them locally for us to use.
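The exact text the CLI writes into src/graphql/queries.js depends on your schema and CLI version, but a generated query definition looks roughly like this hand-written sketch:

```javascript
// Hand-written approximation of a generated query definition;
// the real file is produced by `amplify push` and may select
// more fields than shown here.
const listRestaurants = `
  query ListRestaurants {
    listRestaurants {
      items {
        id
        name
        city
        photo
      }
    }
  }
`

module.exports = { listRestaurants }
```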

After this is complete, the back end has been created and we can begin accessing it from our React application.

If you’d like to view your AppSync API in the AWS dashboard, visit https://console.aws.amazon.com/appsync and click on your API. From the dashboard you can view the schema, data sources, and resolvers. You can also perform queries and mutations using the built-in GraphQL editor.

Building the React client

Now that the API is created, we can begin querying for and creating data in our API. There are three operations we will use to interact with it:

  1. Creating a new restaurant
  2. Querying for restaurants and their reviews
  3. Creating a review for a restaurant

Before we start building the app, let’s take a look at how these operations will look and work.

Interacting with the AppSync GraphQL API

When working with a GraphQL API, there are many GraphQL clients available.

We can use any GraphQL client we’d like to interact with an AppSync GraphQL API, but there are two that are configured specifically to work with it most easily. These are the Amplify client (what we will use) and the AWS AppSync JS SDK (which has a similar API to Apollo Client).

The Amplify client is similar to the fetch API in that it is promise-based and easy to reason about. The Amplify client does not support offline out of the box. The AppSync SDK is more complex but does support offline out of the box.

To call the AppSync API with Amplify, we use the API category. Here’s an example of how to call a query:

import { API, graphqlOperation } from 'aws-amplify'
import * as queries from './graphql/queries'

const data = await API.graphql(graphqlOperation(queries.listRestaurants))

For a mutation, it is very similar. The only difference is that we need to pass in a second argument for the data we are sending in the mutation:

import { API, graphqlOperation } from 'aws-amplify'
import * as mutations from './graphql/mutations'

const restaurant = { name: "Babalu", city: "Jackson" }
const data = await API.graphql(graphqlOperation(
  mutations.createRestaurant,
  { input: restaurant }
))

We use the graphql method from the API category to call the operation, wrapping it in graphqlOperation, which parses GraphQL query strings into the standard GraphQL AST.
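Amplify’s actual implementation does more work than this, but conceptually graphqlOperation pairs a query document with its variables into a single operation object. A minimal sketch of that shape (the function name here is made up to avoid confusion with the real one):

```javascript
// Conceptual sketch only -- not Amplify's implementation.
// Pairs a GraphQL document with its variables, the shape
// API.graphql expects to receive.
function fakeGraphqlOperation(query, variables = {}) {
  return { query, variables }
}

const op = fakeGraphqlOperation(
  'query ListRestaurants { listRestaurants { items { id } } }',
  { limit: 10 }
)
```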

We’ll be using this API category for all of our GraphQL operations in the app.

Here is the repo containing the final code for this project.

Configuring the React app with Amplify

The first thing we need to do in our app is configure it to recognize our Amplify credentials. When we created our API, the CLI created a new file called aws-exports.js in our src folder.

This file is created and updated for us by the CLI as we create, update and delete services. This file is what we’ll be using to configure the React application to know about our services.
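For reference, the file looks something like the sketch below. The key names are the ones Amplify commonly writes for an AppSync API; every value here is a placeholder, so don’t edit your real aws-exports.js by hand:

```javascript
// Representative sketch of src/aws-exports.js -- placeholder values only.
const awsmobile = {
  aws_project_region: 'us-east-1',
  aws_appsync_graphqlEndpoint: 'https://example.appsync-api.us-east-1.amazonaws.com/graphql',
  aws_appsync_region: 'us-east-1',
  aws_appsync_authenticationType: 'API_KEY',
  aws_appsync_apiKey: 'da2-xxxxxxxxxxxxxxxxxxxxxxxxxx'
}

module.exports = awsmobile
```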

To configure the app, open up src/index.js and add the following code:

import Amplify from 'aws-amplify'
import config from './aws-exports'

Amplify.configure(config)

Next, we will create the files we will need for our components. In the src directory, create the following files:

  • Header.js
  • Restaurant.js
  • Review.js
  • CreateRestaurant.js
  • CreateReview.js

Creating the components

While the styles are referenced in the code snippets below, the style definitions have been omitted to make the snippets less verbose. For style definitions, see the final project repo.

Next, we’ll create the Header component by updating src/Header.js.

// src/Header.js

import React from 'react'
import { css } from 'glamor'

const Header = ({ showCreateRestaurant }) => (
  <div {...css(styles.header)}>
    <p {...css(styles.title)}>BigEats</p>
    <div {...css(styles.iconContainer)}>
      <p {...css(styles.icon)} onClick={showCreateRestaurant}>+</p>
    </div>
  </div>
)

export default Header

Now that our Header is created, we’ll update src/App.js. This file will hold all of the interactions with the API, so it is pretty large. We’ll define the methods and pass them down as props to the components that will call them.

// src/App.js

import React, { Component } from 'react'
import { API, graphqlOperation } from 'aws-amplify'

import Header from './Header'
import Restaurants from './Restaurants'
import CreateRestaurant from './CreateRestaurant'
import CreateReview from './CreateReview'
import Reviews from './Reviews'
import * as queries from './graphql/queries'
import * as mutations from './graphql/mutations'

class App extends Component {
  state = {
    restaurants: [],
    selectedRestaurant: {},
    showCreateRestaurant: false,
    showCreateReview: false,
    showReviews: false
  }
  async componentDidMount() {
    try {
      const rdata = await API.graphql(graphqlOperation(queries.listRestaurants))
      const { data: { listRestaurants: { items }}} = rdata
      this.setState({ restaurants: items })
    } catch(err) {
      console.log('error: ', err)
    }
  }
  viewReviews = (r) => {
    this.setState({ showReviews: true, selectedRestaurant: r })
  }
  createRestaurant = async(restaurant) => {
    this.setState({
      restaurants: [...this.state.restaurants, restaurant]
    })
    try {
      await API.graphql(graphqlOperation(
        mutations.createRestaurant,
        {input: restaurant}
      ))
    } catch(err) {
      console.log('error creating restaurant: ', err)
    }
  }
  createReview = async(id, input) => {
    const restaurants = this.state.restaurants
    const index = restaurants.findIndex(r => r.id === id)
    restaurants[index].reviews.items.push(input)
    this.setState({ restaurants })
    await API.graphql(graphqlOperation(mutations.createReview, {input}))
  }
  closeModal = () => {
    this.setState({
      showCreateRestaurant: false,
      showCreateReview: false,
      showReviews: false,
      selectedRestaurant: {}
    })
  }
  showCreateRestaurant = () => {
    this.setState({ showCreateRestaurant: true })
  }
  showCreateReview = r => {
    this.setState({ selectedRestaurant: r, showCreateReview: true })
  }
  render() {
    return (
      <div>
        <Header showCreateRestaurant={this.showCreateRestaurant} />
        <Restaurants
          restaurants={this.state.restaurants}
          showCreateReview={this.showCreateReview}
          viewReviews={this.viewReviews}
        />
        {
          this.state.showCreateRestaurant && (
            <CreateRestaurant
              createRestaurant={this.createRestaurant}
              closeModal={this.closeModal}
            />
          )
        }
        {
          this.state.showCreateReview && (
            <CreateReview
              createReview={this.createReview}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
        {
          this.state.showReviews && (
            <Reviews
              selectedRestaurant={this.state.selectedRestaurant}
              closeModal={this.closeModal}
              restaurant={this.state.selectedRestaurant}
            />
          )
        }
      </div>
    );
  }
}

export default App

We first create some initial state to hold the restaurants array that we will be fetching from our API. We also create Booleans to control our UI and a selectedRestaurant object.

In componentDidMount, we query for the restaurants and update the state to hold the restaurants retrieved from the API.

In createRestaurant and createReview, we send mutations to the API. Also notice that we provide an optimistic update by updating the state immediately so that the UI gets updated before the response comes back in order to make our UI snappy.
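Stripped of React, the optimistic-update pattern boils down to a few lines. This sketch adds a rollback step on failure, which the article’s version (it just logs the error) doesn’t do; sendMutation stands in for the API.graphql call:

```javascript
// Framework-free sketch of an optimistic update with rollback.
// sendMutation is a stand-in for the real network call.
async function createRestaurantOptimistic(state, restaurant, sendMutation) {
  const previous = state.restaurants
  state.restaurants = [...previous, restaurant] // UI updates immediately
  try {
    await sendMutation(restaurant)
  } catch (err) {
    state.restaurants = previous // undo the optimistic change on failure
  }
  return state
}
```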

Next, we’ll create the Restaurants component (src/Restaurants.js).

// src/Restaurants.js

import React, { Component } from 'react';
import { css } from 'glamor'

class Restaurants extends Component {
  render() {
    const { restaurants, viewReviews } = this.props
    return (
      <div {...css(styles.container)}>
        {
          restaurants.length === 0 && (
            <h1
              {...css(styles.h1)}
            >Create your first restaurant by clicking +</h1>
          )
        }
        {
          restaurants.map((r, i) => (
            <div key={i}>
              <img
                alt={r.name}
                src={r.photo}
                {...css(styles.image)}
              />
              <p {...css(styles.title)}>{r.name}</p>
              <p {...css(styles.subtitle)}>{r.city}</p>
              <p
                onClick={() => viewReviews(r)}
                {...css(styles.viewReviews)}
              >View Reviews</p>
              <p
                onClick={() => this.props.showCreateReview(r)}
                {...css(styles.createReview)}
              >Create Review</p>
            </div>
          ))
        }
      </div>
    );
  }
}

export default Restaurants

This component is the main view of the app. We map over the list of restaurants and show the restaurant image, its name and location, and links that will open overlays to show reviews and create a new review.

Next, we’ll look at the Reviews component (src/Reviews.js). In this component, we map over the list of reviews for the chosen restaurant.

// src/Reviews.js

import React from 'react'
import { css } from 'glamor'

class Reviews extends React.Component {
  render() {
    const { closeModal, restaurant } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.container)}>
          <h1>{restaurant.name}</h1>
          {
            restaurant.reviews.items.map((r, i) => (
              <div {...css(styles.review)} key={i}>
                <p {...css(styles.text)}>{r.text}</p>
                <p {...css(styles.rating)}>Stars: {r.rating}</p>
              </div>
            ))
          }
          <p onClick={closeModal}>Close</p>
        </div>
      </div>
    )
  }
}

export default Reviews

Next, we’ll take a look at the CreateRestaurant component (src/CreateRestaurant.js). This component holds a form that keeps up with the form state. The createRestaurant class method will call this.props.createRestaurant, passing in the form state.

// src/CreateRestaurant.js

import React from 'react'
import { css } from 'glamor';

class CreateRestaurant extends React.Component {
  state = {
    name: '', city: '', photo: ''
  }
  createRestaurant = () => {
    if (
      this.state.city === '' || this.state.name === '' || this.state.photo === ''
    ) return
    this.props.createRestaurant(this.state)
    this.props.closeModal()
  }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  render() {
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <input
            placeholder='Restaurant name'
            {...css(styles.input)}
            name='name'
            onChange={this.onChange}
          />
          <input
            placeholder='City'
            {...css(styles.input)}
            name='city'
            onChange={this.onChange}
          />
          <input
            placeholder='Photo'
            {...css(styles.input)}
            name='photo'
            onChange={this.onChange}
          />
          <div
            onClick={this.createRestaurant}
            {...css(styles.button)}
          >
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div
            {...css([styles.button, { backgroundColor: '#555'}])}
            onClick={closeModal}
          >
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateRestaurant

Next, we’ll take a look at the CreateReview component (src/CreateReview.js). This component holds a form that keeps up with the form state. The createReview class method will call this.props.createReview, passing in the restaurant ID and the form state.

// src/CreateReview.js

import React from 'react'
import { css } from 'glamor';

const stars = [1, 2, 3, 4, 5]

class CreateReview extends React.Component {
  state = {
    review: '', selectedIndex: null
  }
  onChange = ({ target }) => {
    this.setState({ [target.name]: target.value })
  }
  createReview = async() => {
    const { restaurant } = this.props
    const input = {
      text: this.state.review,
      rating: this.state.selectedIndex + 1,
      reviewRestaurantId: restaurant.id
    }
    try {
      this.props.createReview(restaurant.id, input)
      this.props.closeModal()
    } catch(err) {
      console.log('error creating review: ', err)
    }
  }
  render() {
    const { selectedIndex } = this.state
    const { closeModal } = this.props
    return (
      <div {...css(styles.overlay)}>
        <div {...css(styles.form)}>
          <div {...css(styles.stars)}>
            {
              stars.map((s, i) => (
                <p
                  key={i}
                  onClick={() => this.setState({ selectedIndex: i })}
                  {...css([styles.star, selectedIndex === i && { backgroundColor: 'gold' }])}
                >{s} star</p>
              ))
            }
          </div>
          <input
            placeholder='Review'
            {...css(styles.input)}
            name='review'
            onChange={this.onChange}
          />
          <div
            onClick={this.createReview}
            {...css(styles.button)}
          >
            <p {...css(styles.buttonText)}>Submit</p>
          </div>
          <div
            {...css([styles.button, { backgroundColor: '#555'}])}
            onClick={closeModal}
          >
            <p {...css(styles.buttonText)}>Cancel</p>
          </div>
        </div>
      </div>
    )
  }
}

export default CreateReview

Running the app

Now that we have built our back-end, configured the app and created our components, we’re ready to test it out:

npm start

Now, navigate to http://localhost:3000. Congratulations, you’ve just built a full-stack serverless GraphQL application!

Conclusion

The next logical step for many applications is to apply additional security features, like authentication, authorization and fine-grained access control. All of these things are baked into the service. To learn more about AWS AppSync security, check out the documentation.

If you’d like to add hosting and a Continuous Integration/Continuous Deployment pipeline for your app, check out the Amplify Console.

I also maintain a couple of repositories with additional resources around Amplify and AppSync: Awesome AWS Amplify and Awesome AWS AppSync.

If you’d like to learn more about this philosophy of building apps using managed services, check out my post titled “Full-stack Development in the Era of Serverless Computing.”

The post Getting into GraphQL with AWS AppSync appeared first on CSS-Tricks.


Angular, Autoprefixer, IE11, and CSS Grid Walk into a Bar…

I am attracted to the idea that you shouldn’t care how the code you author ends up in the browser. It’s already minified. It’s already gzipped. It’s already transmogrified (real word!) by things that polyfill it, things that convert it into code that older browsers understand, things that make it run faster, things that strip away unused bits, and things that break it into chunks by technology far above my head.

The trend is that the code we author is farther and farther away from the code we write, and like I said, I’m attracted to that idea because generally, the purpose of that is to make websites faster for users.

But as Dave notes, when something goes wrong…

As toolchains grow and become more complex, unless you are expertly familiar with them, it’s very unclear what transformations are happening in our code. Tracking the differences between the input and output and the processes that code underwent can be overwhelming. When there’s a problem, it’s increasingly difficult to hop into the assembly line and diagnose the issue, and often there’s no precise fix.

Direct Link to Article

The post Angular, Autoprefixer, IE11, and CSS Grid Walk into a Bar… appeared first on CSS-Tricks.


When’s the last time you SFTP’d (or the like) into a server and changed a file directly?

In the grand tradition of every single poll question I’ve ever posted, the poll below has a fundamental flaw. In this case, there is no option between “In the last month” and “Never” but, alas, the results are interesting:

What I was trying to get at with this poll is how many people do and don’t do any sort of editing of production files directly and instead work locally. I don’t think I need to launch a major investigation to know that it’s most people and more than in the past.

Most workflows these days have us working locally and pushing new and updated files through a version control system, and even through systems beyond that, perhaps continuous integration processes, testing processes, actions, deployment — it’s a big world of DevOps out there! Rarely do we skip the line and dip our fingers into production servers and make live manipulations. Cowboy coding, they sometimes call it.

But just because you don’t generally code that way doesn’t mean you never do it, which is another flaw of this poll. I know DevOps nerds who are constantly SSH-ing into servers to manipulate configuration files. I personally still hop into Coda and will SFTP directly into servers sometimes to edit .htaccess files, files that live on both my production and dev sites and that I generally .gitignore.
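For what it’s worth, keeping a live-edited server file like .htaccess out of version control is a one-line ignore rule (a sketch; your repo layout may call for a path-specific pattern instead):

```
# .gitignore — keep live-edited server config out of the repo
.htaccess
```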

Anyways, I think this is a fun thing to talk about, so feel free to have at it in the comments. I’d love to hear to what degree you cowboy code. Do you still do it all the time and love it? Do you do it because you haven’t learned another way, or because that’s what your workplace demands? Do you have a personal philosophy about it? Let the rodeo begin. 🤠

The post When’s the last time you SFTP’d (or the like) into a server and changed a file directly? appeared first on CSS-Tricks.
