Tag: Sites

A Few Background Patterns Sites

If I need a quick background pattern to spruce something up, I often think of the CSS3 Patterns Gallery. Some of those are pretty intense, but remember they are easily editable because they are just CSS. That means you could take these bold zags and chill them out.

My usual go-to, though, is Hero Patterns. They are also editable, but they already start from a pretty chill place, which is usually what I’m looking for in a pattern. They also happen to provide the ones we’ve baked into the Assets Panel on CodePen for extra-easy access.

If you’re into SVG-based patterns (and who isn’t?), SVG Backgrounds has some extra clever ones. Looks like it’s gotten a nice design refresh lately, too, where the editable options are intuitive and the code is easy to copy. If you are a DIY type, remember SVG literally has a <pattern> element you can harness.
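To sketch that last idea: a tiny hand-rolled SVG pattern might look something like this (the dot size and color here are made-up values, just for illustration):

<svg width="200" height="200" xmlns="http://www.w3.org/2000/svg">
  <defs>
    <!-- a 20x20 tile; anything that references it repeats this tile -->
    <pattern id="polka" width="20" height="20" patternUnits="userSpaceOnUse">
      <circle cx="10" cy="10" r="3" fill="#d52358" />
    </pattern>
  </defs>
  <!-- fill a rectangle with the repeating tile -->
  <rect width="100%" height="100%" fill="url(#polka)" />
</svg>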

I’ve seen some fun new pattern sites lately, though! One is the exceptionally deep Tartanify, which has over 5,000 Scottish tartan patterns. Paulina Hetman even wrote about it for us.

Beautiful Dingbats has a very nice pattern generator as well that seems pretty newish. It’s got very fun controls to play with and easy output.

One that is really mind-blowing is Mazeletter. It’s a collection of nine fonts that are made to be infinitely tiling, so you essentially have unlimited pattern possibilities you can make from characters.

Just to end with a classic here… you can’t go wrong with a little noise.


Adding Dynamic And Async Functionality To JAMstack Sites

Jason Lengstorf:

Here’s an incomplete list of things that I’ve repeatedly heard people claim the JAMstack can’t handle that it definitely can:

  • Load data asynchronously
  • Handle processing files, such as manipulating images
  • Read from and write to a database
  • Handle user authentication and protect content behind a login

There is still a misconception that JAMstack = use a static site generator and that’s it, despite the fact that almost every article I’ve ever read about JAMstack gets into how it’s about pre-rendering what you can, and using client-side JavaScript and APIs to do the rest.
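That “do the rest” part is often mundane in practice. For example, loading data asynchronously into a pre-rendered page is a few lines of client-side JavaScript. Here’s a minimal sketch (the endpoint and element are hypothetical):

// Hypothetical endpoint and element: pre-render the page shell,
// then fetch the dynamic bits on the client
fetch('https://api.example.com/comments?post=my-post')
  .then((response) => response.json())
  .then((comments) => {
    const list = document.querySelector('#comments');
    comments.forEach((comment) => {
      const item = document.createElement('li');
      item.textContent = comment.text;
      list.appendChild(item);
    });
  });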

Phil laid that out very nicely for us recently.

This misconception seems very real to me. I hear it regularly. As I was writing this, I saw this question posted on Reddit.

Beginner question. Is JAM useful for applications or only for websites?

I’ll spare you the speech about the uselessness of trying to distinguish between “apps” and “sites,” but I think this helps make the point that there is plenty of confusion out there.


If you’re in a festive mood…

Tim Chase got creative and wrote this tongue-in-cheek poem. It’s obviously a joke, but its assumption comes from the exact opposite angle, that JAMstack requires client-side JavaScript to do anything:

I do not like that stack that’s JAM
I do not like it, Sam-I-am.
I will not run it for a spell,
I will not use your GraphQL.
I will not run it over QUIC
No, Sam-I-am, it makes me sick.
Listen how it makes me cough
It does not work with JS off.

And Phil responded:

These thoughts make sense, I must agree
Except you really don’t need all three
It’s up to you. For you to choose.
JavaScript’s just an option you might use.
And if you do, success might be
From enhancing things progressively.

A JAMstack site might seem reliant
On doing everything in the client
In fact though, it depends on what
Requirements and use-cases you have got
The biggest key though, to remember
Is to serve things statically, and pre-render.


The Rising Complexity of JAMstack Sites and How to Manage Them

When you add anything with user-generated content or dynamic data to a static site, the complexity of the build process can become comparable to launching a monolithic CMS. How can we add rich content to static sites without stitching together multiple third-party services?

For people in the development community, static site generators are a popular choice over traditional content management systems (CMS) like WordPress. By comparison, static sites are usually lightweight, highly configurable, fast, easy to use, and can be deployed almost anywhere.

With static websites, no code is generated on the server; we’ve replaced databases and server-side code with APIs and build processes.

This has become known as the JAMstack, which stands for JavaScript, APIs and Markup. I have a strong preference for JAMstack sites because I feel more in control of the output than I do when working with the often large and monolithic CMSs I’ve sometimes had to use on client projects.

Despite my enthusiasm, I’m often disheartened by the steep complexity curve I typically encounter about halfway through a JAMstack project. Normally, the first few weeks are incredibly liberating: it’s easy to get started, there is good visible progress, and everything feels lean and fast. Over time, as more features are added, the build steps become more complex, multiple APIs are added, and suddenly everything feels slow. In other words, the development experience begins to suffer.

It usually looks something like this:

A hand-drawn chart showing the complexity of a project over time, with a curve that rises steeply at the end.

One of the reasons for this steep rise in complexity is there are limits to the type of data that markdown can easily represent. Relationships are one example where static sites struggle. Relationships between pages or collections of assets (such as an image gallery) can only be represented by markdown in inefficient ways. It requires significant preprocessing to resolve anything more complicated than a simple set of tags or categories. If you’ve ever had to do it, you will also know the authoring experience of managing relationships in markdown isn’t ideal.

User-generated content is another area that can cause a steep rise in the complexity of static sites. Adding features like comments, ratings, likes or any other kind of dynamic content will require third-party services — each has its own account that needs to be managed, not to mention that adding third-party scripts can have a negative impact on page performance.

If a service doesn’t match your specific requirements, sometimes it’s possible to cobble solutions together using generic platforms like Google Forms or AirTable.

The end result is we’ve outsourced the database, fragmented the content management experience and stitched together a bundle of compromises. That’s a stark contrast from the initial ease of setting up and deploying a JAMstack site.

Although this complexity curve is not unique to JAMstack projects, adding rich features to markdown-driven sites is far more difficult than we care to admit. What happened!? A lack of complexity was one of the reasons we favored JAMstack in the first place.

We did that thing that web developers do. We moved the complexity from one space into another and congratulated ourselves 😂. Not having complexity on the server-side is good for front-end performance, but there is little incentive to optimize any further once we do this. Ridiculous build times and complicated tool-chains have become an acceptable reality for modern front-end web development.

JAMstack Plus

Before I come across as sounding too critical, I should make it clear that I absolutely love static site generators. I think they are a perfect solution for many simple sites and you should still use them. However, I feel like a simple content management layer that I own and can configure is preferable to:

  • poor content management experiences,
  • complicated integration of third-party services, and
  • inefficient build processes.

I want to combine the benefits of a CMS with static site generators.

And it seems I’m not the only one who has reached this conclusion.

The solution doesn’t need to be another third-party service or require abandoning static sites entirely. You can use a personalized content management layer and unified API to enrich a static site. It might not be as hard as you think.

The first step is to create an API for your site. You can use any headless CMS, but the challenge I’ve had with many options is they make a lot of assumptions about the type of content you want. You might not want the CMS to manage pages and posts, but rather use it to store comments or images. I find this particularly difficult with WordPress. I often feel like I’m forcing a blogging platform to be just the service I need.

The new version of KeystoneJS (Keystone 5) is an excellent alternative to more opinionated content management systems. It’s made up of tiny independent components, so you only add the parts you need. This means it doesn’t feel like modifying a blogging platform. Instead it’s like creating a personalized mini-CMS and API to work specifically with your site.

I call this approach JAMstack Plus.

To help you get started with this idea I’ve created two projects:

  1. Supermaya, a starter kit for the static site generator Eleventy.
  2. Keystone JAMstack Plus, a blog enrichment platform.

Introducing Supermaya

The first project I want to share with you is Supermaya, an Eleventy starter kit designed to help add rich features to a blog or website without a complicated build process.

It comes with all the “blog standard” features, including:

  • Posts and Pages
  • Pagination
  • Tags
  • RSS feed
  • Service worker
  • Lazy loading images
  • Critical CSS (if enabled)

It also has considerate and accessible markup. If deployed correctly, it should get full scores on a Lighthouse audit out of the box:

Supermaya scores 100% on Lighthouse tests.

I didn’t build Supermaya specifically as a platform to add user-generated content to static sites. Instead, I started it because I was not satisfied with the way existing static site generators integrate with other build tools. That’s why all the pre-processing steps in Supermaya are built into Eleventy itself. This includes the compilation of SCSS and JavaScript. Unifying the compilation steps eliminates the need for build tools like Grunt, Gulp or Webpack running in parallel.
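I haven’t reproduced Supermaya’s actual code here, but as a rough sketch of the idea, wiring SCSS compilation into Eleventy’s own build lifecycle might look something like this (assuming the sass npm package; the paths are made up):

// .eleventy.js - a rough sketch of the idea, not Supermaya's actual code
const sass = require('sass');
const fs = require('fs');

module.exports = function (eleventyConfig) {
  // recompile styles as part of each Eleventy build,
  // so no Gulp/Grunt/Webpack needs to run in parallel
  eleventyConfig.on('eleventy.before', () => {
    const result = sass.compile('src/scss/main.scss');
    fs.mkdirSync('_site/css', { recursive: true });
    fs.writeFileSync('_site/css/main.css', result.css);
  });

  // rebuild when styles change during --serve
  eleventyConfig.addWatchTarget('src/scss/');
};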

After this, I realized the other reason for increasing complexity on JAMstack sites was integration with third-party services, usually for user-generated content. To solve this, Supermaya has optional tie-ins with a Keystone JAMstack Plus starter-kit, which makes it easier to add user-generated content and other rich features.

You can deploy both Keystone and Supermaya together and connect them at the same time by following the instructions during installation. This will deploy Keystone to Heroku and Supermaya to Netlify, as well as configure your admin user and API URL.

Rich features are added with progressive enhancement, so if the API cannot be reached or there is a server error, the site will continue to function without noticeable degradation or delays for users.

JAMstack Plus starter kit

The Keystone JAMstack Plus starter kit allows you to add rich features to a blog including:

  • Comments
  • Claps
  • Reading list, and
  • Logins

Just like Supermaya, it can be used on its own. After it’s deployed, you’ll get access to an admin interface that allows you to create and manage content. You’ll also get a GraphQL endpoint that can be connected to Supermaya.

It’s configured with the intention of being a headless CMS for user-generated content. It expects pages and posts to be managed by a static site generator. However, with a little work — and following the examples in Supermaya — you can connect any front-end to the GraphQL API.
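For a taste of what that might look like, here is a hedged sketch of querying the GraphQL endpoint from any front-end (the endpoint path follows Keystone 5 conventions, but the list and field names are guesses; check the starter-kit’s schema for the real ones):

// Hypothetical list/field names; Keystone 5 typically serves GraphQL
// from a path like /admin/api
fetch('https://your-keystone-app.herokuapp.com/admin/api', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: '{ allComments { author body } }'
  })
})
  .then((response) => response.json())
  .then(({ data }) => console.log(data.allComments));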

I’d encourage you to modify the starter-kit: add additional features or provide content for pages directly from Keystone. If you add features that could be used by the rest of the community, contribute back to the starter-kit and we can make it easier for everyone to add rich features to their sites without the need for third-party services.

Note: The automatic deploy will deploy to a free instance of Heroku. This will sleep periodically if not used, which can result in slow API response times after periods of inactivity. You can upgrade to a hobby instance to avoid this.

Consider owning your own data

JAMstack and servers are not incompatible. There’s always a server (usually multiple) — it’s just a question of who owns it. If you are using any kind of third-party service, the chances are they own your account information and your content, and they collect user data.

Sometimes this might be an acceptable compromise compared with the overhead of deploying and managing a back-end service, but when the complexity of stitching together several APIs becomes comparable to a CMS, I believe managing a tiny configurable service that you own can provide a better experience for users, developers and content managers. It also provides a solid platform for websites to grow beyond purely static content into more complicated and varied types of applications.

I don’t think JAMstack should be defined by pushing all the complexity into the front-end build process or by compromising on developer and user experience. Instead, I think JAMstack should focus on providing lean, configurable static front-ends. These can be connected to APIs to provide user-generated data and content management services. There is no reason not to own and manage these services if it provides the best outcome.


scrapestack: An API for Scraping Sites

(This is a sponsored post.)

Not every site has an API to access its data. Most don’t, in fact. If you need to pull that data, one approach is to “scrape” it. That is, load the page in a web browser (one that you automate), find what you are looking for in the DOM, and take it.

You can do this yourself if you want to deal with the cost, maintenance, and technical debt. For example, this is one of the big use-cases for “headless” browsers, like how Puppeteer can spin up and control headless Chrome.
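To give a sense of the DIY route, a bare-bones Puppeteer scrape looks something like this (the URL and selector are placeholders):

// npm install puppeteer
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/');
  // pull whatever we're after out of the rendered DOM
  const heading = await page.$eval('h1', (el) => el.textContent);
  console.log(heading);
  await browser.close();
})();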

Or, you can use a tool like scrapestack that is a ready-to-use API that not only does the scraping for you, but likely does it better, faster, and with more options than trying to do it yourself.

Say my goal is to pull the latest completed meetup from a Meetup.com page. Meetup.com has an API, but it’s pricy and requires OAuth and stuff. All we need is the name and link of a past meetup here, so let’s just yank it off the page.

We can see what we need in the DOM.

To have a play, let’s scrape it client-side with the scrapestack API and jQuery:

$.get('https://api.scrapestack.com/scrape',
  {
    access_key: 'MY_API_KEY',
    url: 'https://www.meetup.com/BendJS/'
  },
  function(websiteContent) {
    // we have the entire site's HTML here!
  }
);

Within that callback, I can now also use jQuery to traverse the DOM, snagging the pieces I want, and constructing what I need on our site:

// Get what we want
let event = $(websiteContent)
  .find(".groupHome-eventsList-pastEvents .eventCard")
  .first();
let eventTitle = event
  .find(".eventCard--link")[0].innerText;
let eventLink =
  `https://www.meetup.com/` +
  event.find(".eventCard--link").attr("href");

// Use it on page
$("#event").append(`
  <a href="${eventLink}">${eventTitle}</a>
`);

In real usage, if we were doing it client-side like this, we’d make use of some rudimentary storage so we wouldn’t have to hit the API on every page load, like sticking the result in localStorage and invalidating after a few days or something.
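A sketch of that rudimentary cache might look like this (renderEvent is a hypothetical stand-in for the DOM work above):

const CACHE_KEY = 'meetup-scrape';
const MAX_AGE = 1000 * 60 * 60 * 24 * 3; // three days

const cached = JSON.parse(localStorage.getItem(CACHE_KEY) || 'null');
if (cached && Date.now() - cached.time < MAX_AGE) {
  // fresh enough: skip the API entirely
  renderEvent(cached.html);
} else {
  $.get('https://api.scrapestack.com/scrape', {
    access_key: 'MY_API_KEY',
    url: 'https://www.meetup.com/BendJS/'
  }, (websiteContent) => {
    localStorage.setItem(CACHE_KEY, JSON.stringify({ time: Date.now(), html: websiteContent }));
    renderEvent(websiteContent);
  });
}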

It works!

It’s actually much more likely that we do our scraping server-side. For one thing, that’s the way to protect your API keys, which is your responsibility, and not really possible on a public-facing site if you’re using the API directly client-side.

Myself, I’d probably make a cloud function to do it, so I can stay in JavaScript (Node.js), and have the opportunity to tuck the data in storage somewhere.

I’d say go check out the documentation and see if this isn’t the right answer next time you need to do some scraping. You get 10,000 requests on the free plan to try it out anyway, and it jumps up a ton on any of the paid plans with more features.


Static First: Pre-Generated JAMstack Sites with Serverless Rendering as a Fallback

You might be seeing the term JAMstack popping up more and more frequently. I’ve been a fan of it as an approach for some time.

One of the principles of JAMstack is that of pre-rendering. In other words, it generates your site into a collection of static assets in advance, so that it can be served to your visitors with maximum speed and minimum overhead from a CDN or other optimized static hosting environment.

But if we are going to pre-generate our sites ahead of time, how do we make them feel dynamic? How do we build sites that need to change often? How do we work with things like user generated content?

As it happens, this can be a great use case for serverless functions. JAMstack and serverless are the best of friends. They complement each other wonderfully.

In this article, we’ll look at a pattern of using serverless functions as a fallback for pre-generated pages in a site composed almost entirely of user generated content. We’ll use a technique of optimistic URL routing, where the 404 page is a serverless function that adds serverless rendering on the fly.

Buzzwordy? Perhaps. Effective? Most certainly!

You can go and have a play with the demo site to help you imagine this use case. But only if you promise to come back.

https://vlolly.net

Is that you? You came back? Great. Let’s dig in.

The idea behind this little example site is that it lets you create a nice, happy message and virtual pick-me-up to send to a friend. You can write a message, customize a lollipop (or a popsicle, for my American friends) and get a URL to share with your intended recipient. And just like that, you’ve brightened up their day. What’s not to love?

Traditionally, we’d build this site using some server-side scripting to handle the form submissions, add new lollies (our user generated content) to a database and generate a unique URL. Then we’d use some more server-side logic to parse requests for these pages, query the database to get the data needed to populate a page view, render it with a suitable template, and return it to the user.

That all seems logical.

But how much will it cost to scale?

Technical architects and tech leads often get this question when scoping a project. They need to plan, pay for, and provision enough horsepower in case of success.

This virtual lollipop site is no mere trinket. This thing is going to make me a gazillionaire due to all the positive messages we all want to send each other! Traffic levels are going to spike as the word gets out. I had better have a good strategy of ensuring that the servers can handle the hefty load. I might add some caching layers, some load balancers, and I’ll design my database and database servers to be able to share the load without groaning from the demand to make and serve all these lollies.

Except… I don’t know how to do that stuff.

And I don’t know how much it would cost to add that infrastructure and keep it all humming. It’s complicated.

This is why I love to simplify my hosting by pre-rendering as much as I can.

Serving static pages is significantly simpler and cheaper than serving pages dynamically from a web server which needs to perform some logic to generate views on demand for every visitor.

Since we are working with lots of user generated content, it still makes sense to use a database, but I’m not going to manage that myself. Instead, I’ll choose one of the many database options available as a service. And I’ll talk to it via its APIs.

I might choose Firebase, or MongoDB, or any number of others. Chris compiled a few of these on an excellent site about serverless resources which is well worth exploring.

In this case, I selected Fauna to use as my data store. Fauna has a nice API for stashing and querying data. It is a NoSQL-flavored data store and gives me just what I need.

https://fauna.com

Critically, Fauna have made an entire business out of providing database services. They have the deep domain knowledge that I’ll never have. By using a database-as-a-service provider, I just inherited an expert data service team for my project, complete with high availability infrastructure, capacity and compliance peace of mind, skilled support engineers, and rich documentation.

Such are the advantages of using a third-party service like this rather than rolling your own.

Architecture TL;DR

I often find myself doodling the logical flow of things when I’m working on a proof of concept. Here’s my doodle for this site:

And a little explanation:

  1. A user creates a new lollipop by completing a regular old HTML form.
  2. The new content is saved in a database, and its submission triggers a new site generation and deployment.
  3. Once the site deployment is complete, the new lollipop will be available on a unique URL. It will be a static page served very rapidly from the CDN with no dependency on a database query or a server.
  4. Until the site generation is complete, any new lollipops will not be available as static pages. Unsuccessful requests for lollipop pages fall back to a page which dynamically generates the lollipop page by querying the database API on the fly.

This kind of approach, which first assumes static/pre-generated assets and only falls back to a dynamic render when a static view is not available, was usefully described by Markus Schork of Unilever as “Static First”, which I rather like.

In a little more detail

You could just dive into the code for this site, which is open source and available for you to explore, or we could talk some more.

You want to dig in a little further and explore the implementation of this example? OK, I’ll explain in some more detail:

  • Getting data from the database to generate each page
  • Posting data to a database API with a serverless function
  • Triggering a full site re-generation
  • Rendering on demand when pages are yet to be generated

Generating pages from a database

In a moment, we’ll talk about how we post data into the database, but first, let’s assume that there are some entries in the database already. We are going to want to generate a site which includes a page for each and every one of those.

Static site generators are great at this. They chomp through data, apply it to templates, and output HTML files ready to be served. We could use any generator for this example. I chose Eleventy due to its relative simplicity and the speed of its site generation.

To feed Eleventy some data, we have a number of options. One is to give it some JavaScript which returns structured data. This is perfect for querying a database API.

Our Eleventy data file will look something like this:

// Set up a connection with the Fauna database.
// Use an environment variable to authenticate
// and get access to the database.
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

module.exports = () => {
  return new Promise((resolve, reject) => {
    // get the most recent 100,000 entries (for the sake of our example)
    client.query(
      q.Paginate(q.Match(q.Ref("indexes/all_lollies")), { size: 100000 })
    ).then((response) => {
      // get all data for each entry
      const lollies = response.data;
      const getAllDataQuery = lollies.map((ref) => {
        return q.Get(ref);
      });
      return client.query(getAllDataQuery).then((ret) => {
        // send the data back to Eleventy for use in the site build
        resolve(ret);
      });
    }).catch((error) => {
      console.log("error", error);
      reject(error);
    });
  });
}

I named this file lollies.js, which will make all the data it returns available to Eleventy in a collection called lollies.

We can now use that data in our templates. If you’d like to see the code which takes that and generates a page for each item, you can see it in the code repository.

Submitting and storing data without a server

When we create a new lolly page we need to capture user content in the database so that it can be used to populate a page at a given URL in the future. For this, we are using a traditional HTML form which posts data to a suitable form handler.

The form looks something like this (or see the full code in the repo):

<form name="new-lolly" action="/new" method="POST">    <!-- Default "flavors": 3 bands of colors with color pickers -->   <input type="color" id="flavourTop" name="flavourTop" value="#d52358" />   <input type="color" id="flavourMiddle" name="flavourMiddle" value="#e95946" />   <input type="color" id="flavourBottom" name="flavourBottom" value="#deaa43" />    <!-- Message fields -->   <label for="recipientName">To</label>   <input type="text" id="recipientName" name="recipientName" />    <label for="message">Say something nice</label>   <textarea name="message" id="message" cols="30" rows="10"></textarea>    <label for="sendersName">From</label>   <input type="text" id="sendersName" name="sendersName" />    <!-- A descriptive submit button -->   <input type="submit" value="Freeze this lolly and get a link">  </form>

We have no web servers in our hosting scenario, so we will need to devise somewhere to handle the HTTP posts being submitted from this form. This is a perfect use case for a serverless function. I’m using Netlify Functions for this. You could use AWS Lambda, Google Cloud, or Azure Functions if you prefer, but I like the simplicity of the workflow with Netlify Functions, and the fact that this will keep my serverless API and my UI all together in one code repository.

It is good practice to avoid leaking back-end implementation details into your front-end. A clear separation helps to keep things more portable and tidy. Take a look at the action attribute of the form element above. It posts data to a path on my site called /new which doesn’t really hint at what service this will be talking to.

We can use redirects to route that to any service we like. I’ll send it to a serverless function which I’ll be provisioning as part of this project, but it could easily be customized to send the data elsewhere if we wished. Netlify gives us a simple and highly optimized redirects engine which directs our traffic out at the CDN level, so users are very quickly routed to the correct place.

The redirect rule below (which lives in my project’s netlify.toml file) will proxy requests to /new through to a serverless function hosted by Netlify Functions called newLolly.js.

# resolve the "new" URL to a function [[redirects]]   from = "/new"   to = "/.netlify/functions/newLolly"   status = 200

Let’s look at that serverless function which:

  • stores the new data in the database,
  • creates a new URL for the new page and
  • redirects the user to the newly created page so that they can see the result.

First, we’ll require the various utilities we’ll need to parse the form data, connect to the Fauna database and create readably short unique IDs for new lollies.

const faunadb = require('faunadb');          // For accessing FaunaDB
const shortid = require('shortid');          // Generate short unique URLs
const querystring = require('querystring');  // Help us parse the form data

// First we set up a new connection with our database.
// An environment variable helps us connect securely
// to the correct database.
const q = faunadb.query
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
})

Now we’ll add some code to handle requests to the serverless function. The handler function will parse the request to get the data we need from the form submission, then generate a unique ID for the new lolly, and create it as a new record in the database.

// Handle requests to our serverless function
exports.handler = (event, context, callback) => {

  // get the form data
  const data = querystring.parse(event.body);
  // add a unique path id. And make a note of it - we'll send the user to it later
  const uniquePath = shortid.generate();
  data.lollyPath = uniquePath;

  // assemble the data ready to send to our database
  const lolly = {
    data: data
  };

  // Create the lolly entry in the fauna db
  client.query(q.Create(q.Ref('classes/lollies'), lolly))
    .then((response) => {
      // Success! Redirect the user to the unique URL for this new lolly page
      return callback(null, {
        statusCode: 302,
        headers: {
          Location: `/lolly/${uniquePath}`,
        }
      });
    }).catch((error) => {
      console.log('error', error);
      // Error! Return the error with statusCode 400
      return callback(null, {
        statusCode: 400,
        body: JSON.stringify(error)
      });
    });

}

Let’s check our progress. We have a way to create new lolly pages in the database. And we’ve got an automated build which generates a page for every one of our lollies.

To ensure that there is a complete set of pre-generated pages for every lolly, we should trigger a rebuild whenever a new one is successfully added to the database. That is delightfully simple to do. Our build is already automated thanks to our static site generator. We just need a way to trigger it. With Netlify, we can define as many build hooks as we like. They are webhooks which will rebuild and deploy our site if they receive an HTTP POST request. Here’s the one I created in the site’s admin console in Netlify:

Netlify build hook

To regenerate the site, including a page for each lolly recorded in the database, we can make an HTTP POST request to this build hook as soon as we have saved our new data to the database.

This is the code to do that:

const axios = require('axios'); // Simplify making HTTP POST requests

// Trigger a new build to freeze this lolly forever
axios.post('https://api.netlify.com/build_hooks/5d46fa20da4a1b70XXXXXXXXX')
  .then(function (response) {
    // Report back in the serverless function's logs
    console.log(response);
  })
  .catch(function (error) {
    // Describe any errors in the serverless function's logs
    console.log(error);
  });

You can see it in context, added to the success handler for the database insertion in the full code.

This is all great if we are happy to wait for the build and deployment to complete before we share the URL of our new lolly with its intended recipient. But we are not a patient lot, and when we get that nice new URL for the lolly we just created, we’ll want to share it right away.

Sadly, if we hit that URL before the site has finished regenerating to include the new page, we’ll get a 404. But happily, we can use that 404 to our advantage.

Optimistic URL routing and serverless fallbacks

With custom 404 routing, we can choose to send every failed request for a lolly page to a page which can look for the lolly data directly in the database. We could do that with client-side JavaScript if we wanted, but even better would be to generate a ready-to-view page dynamically from a serverless function.

Here’s how:

Firstly, we need to tell all those hopeful requests for a lolly page that come back empty to go instead to our serverless function. We do that with another rule in our Netlify redirects configuration:

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

This rule will only be applied if the request for a lolly page did not find a static page ready to be served. It creates a temporary redirect (HTTP 302) to our serverless function, which looks something like this:

const faunadb = require('faunadb');                  // For accessing FaunaDB
const pageTemplate = require('./lollyTemplate.js');  // A JS template literal

// setup and auth the Fauna DB client
const q = faunadb.query;
const client = new faunadb.Client({
  secret: process.env.FAUNADB_SERVER_SECRET
});

exports.handler = (event, context, callback) => {

  // get the lolly ID from the request
  const path = event.queryStringParameters.id.replace("/", "");

  // find the lolly data in the DB
  client.query(
    q.Get(q.Match(q.Index("lolly_by_path"), path))
  ).then((response) => {
    // if found return a view
    return callback(null, {
      statusCode: 200,
      body: pageTemplate(response.data)
    });
  }).catch((error) => {
    // not found or an error, send the sad user to the generic error page
    console.log('Error:', error);
    return callback(null, {
      body: JSON.stringify(error),
      statusCode: 301,
      headers: {
        Location: `/melted/index.html`,
      }
    });
  });
}
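The article doesn’t show lollyTemplate.js itself, but the idea is a function that interpolates the lolly data into a complete HTML document. A minimal sketch (field names borrowed from the form above; real code should also escape user input):

// lollyTemplate.js - a minimal sketch of the "JS template literal" module
module.exports = (data) => `<!DOCTYPE html>
<html>
  <head><title>A lolly for ${data.recipientName}</title></head>
  <body>
    <h1>To ${data.recipientName}, from ${data.sendersName}</h1>
    <p>${data.message}</p>
  </body>
</html>`;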

If a request for any other page (not within the /lolly/ path of the site) should 404, we won’t send that request to our serverless function to check for a lolly. We can just send the user directly to a 404 page. Our netlify.toml config lets us define as many levels of 404 routing as we’d like, by adding fallback rules further down in the file. The first successful match in the file will be honored.

# unfound lollies should proxy to the API directly
[[redirects]]
  from = "/lolly/*"
  to = "/.netlify/functions/showLolly?id=:splat"
  status = 302

# Real 404s can just go directly here:
[[redirects]]
  from = "/*"
  to = "/melted/index.html"
  status = 404

And we’re done! We’ve now got a site which is static first, and which will try to render content on the fly with a serverless function if a URL has not yet been generated as a static file.

Pretty snappy!

Supporting larger scale

Our technique of triggering a build to regenerate the lollipop pages every single time a new entry is created might not be optimal forever. While it’s true that the automation of the build means it is trivial to redeploy the site, we might want to start throttling and optimizing things when we start to get very popular. (Which can only be a matter of time, right?)

That’s fine. Here are a couple of things to consider when we have very many pages to create, and more frequent additions to the database:

  • Instead of triggering a rebuild for each new entry, we could rebuild the site as a scheduled job, perhaps once an hour or once a day (see the sketch after this list).
  • If building once per day, we might decide to only generate the pages for new lollies submitted in the last day, and cache the pages generated each day for future use. This kind of logic in the build would help us support massive numbers of lolly pages without the build getting prohibitively long. But I’ll not go into intra-build caching here. If you are curious, you could ask about it over in the Netlify Community forum.
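As a sketch of the first option, any cron-capable environment could poke the build hook on a schedule. For example, with the node-cron package in a small always-on process (the hook ID is a placeholder):

const cron = require('node-cron');
const axios = require('axios');

// rebuild the site once an hour, on the hour
cron.schedule('0 * * * *', () => {
  axios.post('https://api.netlify.com/build_hooks/YOUR_BUILD_HOOK_ID')
    .then(() => console.log('Hourly rebuild triggered'))
    .catch((error) => console.log('Build hook failed', error));
});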

By combining both static, pre-generated assets, with serverless fallbacks which give dynamic rendering, we can satisfy a surprisingly broad set of use cases — all while avoiding the need to provision and maintain lots of dynamic infrastructure.

What other use cases might you be able to satisfy with this “static first” approach?


Decaying Sites

Websites have a tendency to decay all by themselves. Link rot, they call it. Unpaid domain name registrations. Companies that have gone out of business. Site owners that have lost interest. What’s sadder than a 404? Landing on a holding page of a URL that used to exist, but now has fallen into the hands of some domain hoarder after it expired, hoping someone will pay a premium to get it back.

That stuff is no fun. But what about sites that are totally still around, just old? What kind of fun things could we do to indicate oldness that’s, like, on purpose?

On the CodePen blog, we call out blog posts that haven’t been updated in at least a couple of years. We update documentation, sure, but we tend to leave blog posts alone as a historical record. So, we’re pretty clear about that:

<?php if (get_the_modified_date("Y") < 2017) { ?>
  <p class="callout"><strong>Heads up!</strong> This blog post hasn't been updated in over 2 years. CodePen is an ever changing place, so if this post references features, you're probably better off checking the <a href="/documentation/">docs</a>. Get in touch with <a href="https://codepen.io/support/">support</a> if you have further questions.</p>
<?php } ?>

We style it up like a little warning:

But what if it was less obvious? What if the text just kinda started going all to crap? Words falling off their lines and going out of focus? The older the content, the more decay:
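Just to sketch the idea before the demo (this is not the code behind the Pen below; the selector and age math are made up), you could scale blur and slump by the content’s age:

// hypothetical markup: <div class="post-content" data-published="2014-06-01">
const post = document.querySelector('.post-content');
const published = new Date(post.dataset.published);
const yearsOld = (Date.now() - published) / (1000 * 60 * 60 * 24 * 365);

post.querySelectorAll('p').forEach((p) => {
  // the older the content, the blurrier and more slumped it gets
  p.style.filter = `blur(${Math.min(yearsOld * 0.2, 2)}px)`;
  p.style.transform = `rotate(${(Math.random() - 0.5) * yearsOld}deg)`;
});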

See the Pen “Decayed Text” by Chris Coyier (@chriscoyier) on CodePen.

What if you let a site decay on purpose? Say, perhaps, you’re holding onto client work and the client hasn’t paid their bill. Dragoi Ciprian has a little idea (repo) for that. You set the due date and deadline:

var due_date = new Date('2017-02-27');
var days_deadline = 60;

Here’s a demo of that. As I write, I’m 30 days into a 90-day deadline. If the demo looks blank to you, well, I guess I should have paid my bill so this code could have been removed 😉

See the Pen “not-paid Demo” by Chris Coyier (@chriscoyier) on CodePen.

Or maybe the screen could kinda flash red, like you’re getting hit in a video game.

Dave once mentioned this would be a cool browser extension: the browser window could flash red when certain bad things are happening, like layout jank.

Or you could get all glitchy! (This demo is click-to-load, fast colors and motion warning.)

See the Pen “Glitchy loader” by Nathaniel Inman (@NathanielInman) on CodePen.

See the Pen “CSS Glitched Text” by Lucas Bebber (@lbebber) on CodePen.

Perhaps rather than basing things off a payment due date or the age of the content, these effects come into play based on how long it’s been since the site’s dependencies have been updated. Or at least had some kind of deployment push.


This is only sorta tangentially related, but it reminds me of the very, very scary game Lose/Lose:

Lose/Lose is a video-game with real life consequences. Each alien in the game is created based on a random file on the players [sic] computer. If the player kills the alien, the file it is based on is deleted. If the players [sic] ship is destroyed, the application itself is deleted.

Although touching aliens will cause the player to lose the game, and killing aliens awards points, the aliens will never actually fire at the player. This calls into question the player’s mission, which is never explicitly stated, only hinted at through classic game mechanics. Is the player supposed to be an aggressor? Or merely an observer, traversing through a dangerous land?
