Tag: WebGL

Creating WebGL Effects with CurtainsJS

This article focuses on adding WebGL effects to <img> and <video> elements of an already “completed” web page. While there are a few helpful resources out there on this subject (like these two), I hope to help simplify it by distilling the process into a few steps: 

  • Create a web page as you normally would.
  • Render pieces that you want to add WebGL effects to with WebGL.
  • Create (or find) the WebGL effects to use.
  • Add event listeners to connect your page with the WebGL effects.

Specifically, we’ll focus on the connection between regular web pages and WebGL. What are we going to make? How about a draggable image slider with an interactive mouse hover!

We won’t cover the core functionality of the slider or go very far into the technical details of WebGL or GLSL shaders. However, there are plenty of comments in the demo code and links to outside resources if you’d like to learn more.

We’re using the latest version of WebGL (WebGL2) and GLSL (GLSL 300) which currently do not work in Safari or in Internet Explorer. So, use Firefox or Chrome to view the demos. If you’re planning to use any of what we’re covering in production, you should load both the GLSL 100 and 300 versions of the shaders and use the GLSL 300 version only if curtains.renderer._isWebGL2 is true. I cover this in the demo above.
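One lightweight way to structure that fallback is to keep both shader sources around and pick at load time. This is just a sketch (the object shape and helper name are mine, not part of CurtainsJS; the shader bodies are placeholders):

```javascript
// Hypothetical helper: keep GLSL 100 and GLSL 300 sources side by side
// and pick a pair based on WebGL2 support (e.g. curtains.renderer._isWebGL2).
const shaderSources = {
  glsl300: { vertex: "/* #version 300 es ... */", fragment: "/* ... */" },
  glsl100: { vertex: "/* GLSL 100 source ... */", fragment: "/* ... */" }
};

function selectShaders(isWebGL2, sources) {
  // Use the GLSL 300 pair only when a WebGL2 context is available
  return isWebGL2 ? sources.glsl300 : sources.glsl100;
}
```

Then something like `const { vertex, fragment } = selectShaders(curtains.renderer._isWebGL2, shaderSources)` picks the right pair before creating the planes.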

First, create a web page as you normally would

You know, HTML and CSS and whatnot. In this case, we’re making an image slider but that’s just for demonstration. We’re not going to go full-depth on how to make a slider (Robin has a nice post on that). But here’s what I put together:

  1. Each slide is equal to the full width of the page.
  2. After a slide has been dragged, the slider continues to slide in the direction of the drag and gradually slows down with momentum.
  3. The momentum snaps the slider to the nearest slide at the end point. 
  4. Each slide has an exit animation that’s fired when the drag starts and an enter animation that’s fired when the dragging stops.
  5. When hovering the slider, a hover effect is applied similar to this video.
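The snap in point 3 boils down to rounding the momentum end point to the nearest multiple of the slide width. A sketch of that math (the function name is mine; GSAP’s InertiaPlugin can do this for you via its snap option):

```javascript
// Snap a projected end position to the nearest slide boundary.
// slideWidth is the full page width here, since each slide spans the page.
function snapToSlide(endX, slideWidth) {
  return Math.round(endX / slideWidth) * slideWidth;
}
```

For example, with 500px-wide slides, a momentum end point of -1340 snaps to -1500, landing cleanly on the third slide.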

I’m a huge fan of the GreenSock Animation Platform (GSAP). It’s especially useful for us here because it provides a plugin for dragging, one that enables momentum on drag, and one for splitting text by line. If you’re uncomfortable creating sliders with GSAP, I recommend spending some time getting familiar with the code in the demo above.

Again, this is just for demonstration, but I wanted to at least describe the component a bit. These are the DOM elements that we will keep our WebGL synced with. 

Next, use WebGL to render the pieces that will contain WebGL effects

Now we need to render our images in WebGL. To do that we need to:

  1. Load the image as a texture into a GLSL shader.
  2. Create a WebGL plane for the image and correctly apply the image texture to the plane.
  3. Position the plane where the DOM version of the image is and scale it correctly.

The third step is particularly non-trivial using pure WebGL because we need to track the position of the DOM elements we want to port into the WebGL world while keeping the DOM and WebGL parts in sync during scroll and user interactions.

There’s actually a library that helps us do all of this with ease: CurtainsJS! It’s the only library I’ve found that easily creates WebGL versions of DOM images and videos and syncs them without too many other features (but I’d love to be proven wrong on that point, so please leave a comment if you know of others that do this well).

With Curtains, this is all the JavaScript we need to add:

// Create a new curtains instance
const curtains = new Curtains({ container: "canvas", autoRender: false });

// Use a single rAF for both GSAP and Curtains
function renderScene() {
  curtains.render();
}
gsap.ticker.add(renderScene);

// Params passed to the curtains instance
const params = {
  vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
  fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use

  // Include any variables to update the WebGL state here
  uniforms: {
    // ...
  }
};

// Create a curtains plane for each slide
const planeElements = document.querySelectorAll(".slide");
planeElements.forEach((planeEl, i) => {
  const plane = curtains.addPlane(planeEl, params);
  // const plane = new Plane(curtains, planeEl, params); // v7 version

  // If our plane has been successfully created
  if (plane) {
    // onReady is called once our plane is ready and all its textures have been created
    plane.onReady(function() {
      // Add a "loaded" class to display the image container
      plane.htmlElement.closest(".slide").classList.add("loaded");
    });
  }
});

We also need to update our updateProgress function so that it updates our WebGL planes.

function updateProgress() {
  // Update the actual slider
  animation.progress(wrapVal(this.x) / wrapWidth);

  // Update the WebGL slider planes
  planes.forEach(plane => plane.updatePosition());
}

We also need to add a very basic vertex and fragment shader to display the texture that we’re loading. We can do that by loading them via <script> tags, like I do in the demo, or by using backticks as I show in the final demo.  
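For reference, the template-literal version just swaps the `*ShaderID` params for string params (Curtains accepts `vertexShader`/`fragmentShader` source strings; the shader bodies below are placeholders, not the demo’s real shaders):

```javascript
// Shader sources as template literals instead of <script> tags
const vertexShaderSource = `#version 300 es
  /* ...vertex shader body... */`;
const fragmentShaderSource = `#version 300 es
  /* ...fragment shader body... */`;

const params = {
  vertexShader: vertexShaderSource,     // string source instead of vertexShaderID
  fragmentShader: fragmentShaderSource, // string source instead of fragmentShaderID
  uniforms: {}
};
```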

Again, this article will not go into a lot of detail on the technical aspects of these GLSL shaders. I recommend reading The Book of Shaders and the WebGL topic on Codrops as starting points.

If you don’t know much about shaders, it’s sufficient to say that the vertex shader positions the planes and the fragment shader processes the texture’s pixels. There are also three variable prefixes that I want to point out:

  • ins are passed in from a data buffer. In vertex shaders, they come from the CPU (our program). In fragment shaders, they come from the vertex shader.
  • uniforms are passed in from the CPU (our program).
  • outs are outputs from our shaders. In vertex shaders, they are passed into our fragment shader. In fragment shaders, they are passed to the frame buffer (what is drawn to the screen).
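For illustration, here is a stripped-down GLSL 300 vertex shader showing all three prefixes in context (this is just the anatomy, not the demo’s actual shader):

```glsl
#version 300 es

in vec3 aVertexPosition;  // from a data buffer filled by our program (the CPU)
uniform mat4 uMVMatrix;   // passed in from JavaScript each frame
out vec2 vTextureCoord;   // handed off to the fragment shader

void main() {
  vTextureCoord = aVertexPosition.xy * 0.5 + 0.5;
  gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0);
}
```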

Once we’ve added all of that to our project, we have the same exact thing as before, but our slider is now being displayed via WebGL! Neat.

CurtainsJS easily converts images and videos to WebGL. As far as adding WebGL effects to text, there are several different methods but perhaps the most common is to draw the text to a <canvas> and then use it as a texture in the shader (e.g. 1, 2). It’s possible to render most other HTML using html2canvas (or similar) and use that canvas as a texture in the shader; however, this is not very performant.

Create (or find) the WebGL effects to use

Now we can add WebGL effects since we have our slider rendering with WebGL. Let’s break down the effects seen in our inspiration video:

  1. The image colors are inverted.
  2. There is a radius around the mouse position that shows the normal color and creates a fisheye effect.
  3. The radius around the mouse animates from 0 when the slider is hovered and animates back to 0 when it is no longer hovered.
  4. The radius doesn’t jump to the mouse’s position but animates there over time.
  5. The entire image translates based on the mouse’s position in reference to the center of the image.

When creating WebGL effects, it’s important to remember that shaders don’t have a memory state that exists between frames. It can do something based on where the mouse is at a given time, but it can’t do something based on where the mouse has been all by itself. That’s why for certain effects, like animating the radius once the mouse has entered the slider or animating the position of the radius over time, we should use a JavaScript variable and pass that value to each frame of the slider. We’ll talk more about that process in the next section.
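In other words, JavaScript holds the state and eases it toward a target each frame; only the current value ever reaches the shader. A minimal sketch of that pattern (the names are mine, not from the demo):

```javascript
// Shaders are stateless, so JS remembers where the value currently is
// and nudges it toward the target a little each frame.
function stepToward(current, target, ease) {
  return current + (target - current) * ease;
}

// e.g. per frame: radius = stepToward(radius, hovered ? 0.1 : 0.0, 0.075)
// then write `radius` into the plane's uniform for that frame.
```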

Once we modify our shaders to invert the color outside of the radius and create the fisheye effect inside of the radius, we’ll get something like the demo below. Again, the point of this article is to focus on the connection between DOM elements and WebGL so I won’t go into detail about the shaders, but I did add comments to them.

But that’s not too exciting yet because the radius is not reacting to our mouse. That’s what we’ll cover in the next section.

I haven’t found a repository with a lot of pre-made WebGL shaders to use for regular websites. There’s ShaderToy and VertexShaderArt (which have some truly amazing shaders!), but neither is aimed at the type of effects that fit on most websites. I’d really like to see someone create a repository of WebGL shaders as a resource for people working on everyday sites. If you know of one, please let me know.

Add event listeners to connect your page with the WebGL effects

Now we can add interactivity to the WebGL portion! We need to pass in some variables (uniforms) to our shaders and affect those variables when the user interacts with our elements. This is the section where I’ll go into the most detail because it’s the core for how we connect JavaScript to our shaders.

First, we need to declare some uniforms in our shaders. We only need the mouse position in our vertex shader:

// The un-transformed mouse position
uniform vec2 uMouse;

We need to declare the radius and resolution in our fragment shader:

uniform float uRadius;    // Radius of pixels to warp/invert
uniform vec2 uResolution; // Used in anti-aliasing

Then let’s add some values for these inside the parameters we pass into our Curtains instance. We were already doing this for uResolution! We need to specify the name of the variable in the shader, its type, and then the starting value:

const params = {
  vertexShaderID: "slider-planes-vs", // The vertex shader we want to use
  fragmentShaderID: "slider-planes-fs", // The fragment shader we want to use

  // The variables that we're going to be animating to update our WebGL state
  uniforms: {
    // For the cursor effects
    mouse: {
      name: "uMouse", // The shader variable name
      type: "2f",     // The type for the variable - https://webglfundamentals.org/webgl/lessons/webgl-shaders-and-glsl.html
      value: mouse    // The initial value to use
    },
    radius: {
      name: "uRadius",
      type: "1f",
      value: radius.val
    },

    // For the antialiasing
    resolution: {
      name: "uResolution",
      type: "2f",
      value: [innerWidth, innerHeight]
    }
  },
};

Now the shader uniforms are connected to our JavaScript! At this point, we need to create some event listeners and animations to affect the values that we’re passing into the shaders. First, let’s set up the animation for the radius and the function to update the value we pass into our shader:

const radius = { val: 0.1 };
const radiusAnim = gsap.from(radius, {
  val: 0,
  duration: 0.3,
  paused: true,
  onUpdate: updateRadius
});

function updateRadius() {
  planes.forEach((plane, i) => {
    plane.uniforms.radius.value = radius.val;
  });
}

If we play the radius animation, then our shader will use the new value each tick.

We also need to update the mouse position when it’s over our slider for both mouse devices and touch screens. There’s a lot of code here, but you can walk through it pretty linearly. Take your time and process what’s happening.

const mouse = new Vec2(0, 0);

function addMouseListeners() {
  if ("ontouchstart" in window) {
    wrapper.addEventListener("touchstart", updateMouse, false);
    wrapper.addEventListener("touchmove", updateMouse, false);
    wrapper.addEventListener("blur", mouseOut, false);
  } else {
    wrapper.addEventListener("mousemove", updateMouse, false);
    wrapper.addEventListener("mouseleave", mouseOut, false);
  }
}

// Update the stored mouse position along with the WebGL "mouse"
function updateMouse(e) {
  radiusAnim.play();

  if (e.changedTouches && e.changedTouches.length) {
    e.x = e.changedTouches[0].pageX;
    e.y = e.changedTouches[0].pageY;
  }
  if (e.x === undefined) {
    e.x = e.pageX;
    e.y = e.pageY;
  }

  mouse.x = e.x;
  mouse.y = e.y;

  updateWebGLMouse();
}

// Updates the mouse position for all planes
function updateWebGLMouse(dur) {
  // Update the planes' mouse position uniforms
  planes.forEach((plane, i) => {
    const webglMousePos = plane.mouseToPlaneCoords(mouse);
    updatePlaneMouse(plane, webglMousePos, dur);
  });
}

// Updates the mouse position for the given plane
function updatePlaneMouse(plane, endPos = new Vec2(0, 0), dur = 0.1) {
  gsap.to(plane.uniforms.mouse.value, {
    x: endPos.x,
    y: endPos.y,
    duration: dur,
    overwrite: true,
  });
}

// When the mouse leaves the slider, animate the WebGL "mouse" to the center of the slider
function mouseOut(e) {
  planes.forEach((plane, i) => updatePlaneMouse(plane, new Vec2(0, 0), 1));

  radiusAnim.reverse();
}

We should also modify our existing updateProgress function to keep our WebGL mouse synced.

// Update the slider along with the necessary WebGL variables
function updateProgress() {
  // Update the actual slider
  animation.progress(wrapVal(this.x) / wrapWidth);

  // Update the WebGL slider planes
  planes.forEach(plane => plane.updatePosition());

  // Update the WebGL "mouse"
  updateWebGLMouse(0);
}

Now we’re cooking with fire! Our slider meets all of our requirements.

Two additional benefits of using GSAP for your animations are that it provides access to callbacks, like onComplete, and it keeps everything perfectly synced no matter the refresh rate (e.g. this situation).

You take it from here!

This is, of course, just the tip of the iceberg when it comes to what we can do with the slider now that it is in WebGL. For example,  common effects like turbulence and displacement can be added to the images in WebGL. The core concept of a displacement effect is to move pixels around based on a gradient lightmap that we use as an input source. We can use this texture (that I pulled from this displacement demo by Jesper Landberg — you should give him a follow) as our source and then plug it into our shader. 

To learn more about creating textures like these, see this article, this tweet, and this tool. I am not aware of any existing repositories of images like these, but if you know of one please, let me know.

If we hook up the texture above and animate the displacement power and intensity so that they vary over time and based on our drag velocity, then it will create a nice semi-random, but natural-looking displacement effect:

It’s also worth noting that Curtains has its own React version if that’s how you like to roll.

That’s all I’ve got for now. If you create something using what you’ve learned from this article, I’d love to see it! Connect with me via Twitter.

The post Creating WebGL Effects with CurtainsJS appeared first on CSS-Tricks.


Building an Images Gallery using PixiJS and WebGL

Sometimes, we have to go a little further than HTML, CSS and JavaScript to create the UI we need, and instead use other resources, like SVG, WebGL, canvas and others.

For example, the most amazing effects can be created with WebGL, because it’s a JavaScript API designed to render interactive 2D and 3D graphics within any compatible web browser, allowing GPU-accelerated image processing.

That said, working with WebGL can be very complex. As such, there’s a variety of libraries that make it relatively easier, such as PixiJS, Three.js, and Babylon.js, among others. We’re going to work with a specific one of those, PixiJS, to create a gallery of random images inspired by this fragment of a Dribbble shot by Zhenya Rynzhuk.

This looks hard, but you actually don’t need to have advanced knowledge of WebGL or even PixiJS to follow along, though some basic knowledge of JavaScript (ES6) will come in handy. You might even want to start by getting familiar with the basic concept of fragment shaders used in WebGL, with The Book of Shaders as a good starting point.

With that, let’s dig into using PixiJS to create this WebGL effect!

Initial setup

Here’s what we’ll need to get started:

  1. Add the PixiJS library as a script in the HTML.
  2. Have a <canvas> element (or add it dynamically from JavaScript) to render the application.
  3. Initialize the application with new PIXI.Application(options).

See, nothing too crazy yet. Here’s the JavaScript we can use as a boilerplate:

// Get canvas view
const view = document.querySelector('.view')
let width, height, app

// Set dimensions
function initDimensions () {
  width = window.innerWidth
  height = window.innerHeight
}

// Init the PixiJS Application
function initApp () {
  // Create a PixiJS Application, using the view (canvas) provided
  app = new PIXI.Application({ view })
  // Resizes renderer view in CSS pixels to allow for resolutions other than 1
  app.renderer.autoDensity = true
  // Resize the view to match viewport dimensions
  app.renderer.resize(width, height)
}

// Init everything
function init () {
  initDimensions()
  initApp()
}

// Initial call
init()

When executing this code, the only thing we will see is a black screen, as well as a message like this if we open up the console:
PixiJS 5.0.2 - WebGL 2 - http://www.pixijs.com/.

We are ready to start drawing on the canvas using PixiJS and WebGL!

Creating the grid background with a WebGL Shader

Next, we will create a background that contains a grid, which will allow us to clearly visualize the distortion effect we’re after. But first, we must know what a shader is and how it works. I recommended The Book of Shaders earlier as a starting point to learn about them, and this is where those concepts will come into play. If you have not done so yet, I strongly recommend that you review that material, and only then continue here.

We are going to create a fragment shader that prints a grid background on the screen:

// It is required to set the float precision for fragment shaders in OpenGL ES
// More info here: https://stackoverflow.com/a/28540641/4908989
#ifdef GL_ES
precision mediump float;
#endif

// This function returns 1 if `coord` corresponds to a grid line, 0 otherwise
float isGridLine (vec2 coord) {
  vec2 pixelsPerGrid = vec2(50.0, 50.0);
  vec2 gridCoords = fract(coord / pixelsPerGrid);
  vec2 gridPixelCoords = gridCoords * pixelsPerGrid;
  vec2 gridLine = step(gridPixelCoords, vec2(1.0));
  float isGridLine = max(gridLine.x, gridLine.y);
  return isGridLine;
}

// Main function
void main () {
  // Coordinates for the current pixel
  vec2 coord = gl_FragCoord.xy;
  // Set `color` to black
  vec3 color = vec3(0.0);
  // If it is a grid line, change the blue channel to 0.3
  color.b = isGridLine(coord) * 0.3;
  // Assign the final rgba color to `gl_FragColor`
  gl_FragColor = vec4(color, 1.0);
}

This code is drawn from a demo on Shadertoy,  which is a great source of inspiration and resources for shaders.

In order to use this shader, we must first load the code from the file it is in and, only after it has been loaded correctly, initialize the app.

// Loaded resources will be here
const resources = PIXI.Loader.shared.resources

// Load resources, then init the app
PIXI.Loader.shared.add([
  'shaders/backgroundFragment.glsl'
]).load(init)

Now, for our shader to work where we can see the result, we will add a new element (an empty Sprite) to the stage, which we will use to define a filter. This is the way PixiJS lets us execute custom shaders like the one we just created.

// Init the gridded background
function initBackground () {
  // Create a new empty Sprite and define its size
  background = new PIXI.Sprite()
  background.width = width
  background.height = height
  // Get the code for the fragment shader from the loaded resources
  const backgroundFragmentShader = resources['shaders/backgroundFragment.glsl'].data
  // Create a new Filter using the fragment shader
  // We don't need a custom vertex shader, so we set it as `undefined`
  const backgroundFilter = new PIXI.Filter(undefined, backgroundFragmentShader)
  // Assign the filter to the background Sprite
  background.filters = [backgroundFilter]
  // Add the background to the stage
  app.stage.addChild(background)
}

And now we see the gridded background with blue lines. Look closely because the lines are a little faint against the dark background color.

The distortion effect 

Our background is now ready, so let’s see how we can add the desired effect (Cubic Lens Distortion) to the whole stage, including the background and any other element that we add later, like images. For this, we need to create a new filter and add it to the stage. Yes, we can also define filters that affect the entire stage of PixiJS!

This time, we have based the code of our shader on this awesome Shadertoy demo that implements the distortion effect using different configurable parameters.

#ifdef GL_ES
precision mediump float;
#endif

// Uniforms from JavaScript
uniform vec2 uResolution;
uniform float uPointerDown;

// The texture is defined by PixiJS
varying vec2 vTextureCoord;
uniform sampler2D uSampler;

// Function used to get the distortion effect
vec2 computeUV (vec2 uv, float k, float kcube) {
  vec2 t = uv - 0.5;
  float r2 = t.x * t.x + t.y * t.y;
  float f = 0.0;
  if (kcube == 0.0) {
    f = 1.0 + r2 * k;
  } else {
    f = 1.0 + r2 * (k + kcube * sqrt(r2));
  }
  vec2 nUv = f * t + 0.5;
  nUv.y = 1.0 - nUv.y;
  return nUv;
}

void main () {
  // Normalized coordinates
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  // Settings for the effect
  // Multiplied by `uPointerDown`, a value between 0 and 1
  float k = -1.0 * uPointerDown;
  float kcube = 0.5 * uPointerDown;
  float offset = 0.02 * uPointerDown;

  // Get each channel's color using the texture provided by PixiJS
  // and the `computeUV` function
  float red = texture2D(uSampler, computeUV(uv, k + offset, kcube)).r;
  float green = texture2D(uSampler, computeUV(uv, k, kcube)).g;
  float blue = texture2D(uSampler, computeUV(uv, k - offset, kcube)).b;

  // Assign the final rgba color to `gl_FragColor`
  gl_FragColor = vec4(red, green, blue, 1.0);
}

We are using two uniforms this time. Uniforms are variables that we pass to the shader via JavaScript:

  • uResolution: This is a JavaScript object that includes {x: width, y: height}. This uniform allows us to normalize the coordinates of each pixel in the range [0, 1].
  • uPointerDown: This is a float in the range [0, 1], which allows us to animate the distortion effect, increasing its intensity proportionally.

Let’s see the code that we have to add to our JavaScript to see the distortion effect caused by our new shader:

// Target for pointer. If down, value is 1, else value is 0
// Here we set it to 1 to see the effect, but initially it will be 0
let pointerDownTarget = 1
let uniforms

// Set initial values for uniforms
function initUniforms () {
  uniforms = {
    uResolution: new PIXI.Point(width, height),
    uPointerDown: pointerDownTarget
  }
}

// Set the distortion filter for the entire stage
const stageFragmentShader = resources['shaders/stageFragment.glsl'].data
const stageFilter = new PIXI.Filter(undefined, stageFragmentShader, uniforms)
app.stage.filters = [stageFilter]

We can already enjoy our distortion effect!

This effect is static at the moment, so it’s not terribly fun just yet. Next, we’ll see how we can make the effect dynamically respond to pointer events.

Listening to pointer events

PixiJS makes it surprisingly simple to listen to events, even multiple events that respond equally to mouse and touch interactions. In this case, we want our animation to work just as well on desktop as on a mobile device, so we must listen to the events corresponding to both platforms.

PixiJS provides an interactive attribute that lets us do just that. We apply it to an element and start listening to events with an API similar to jQuery:

// Start listening events
function initEvents () {
  // Make stage interactive, so it can listen to events
  app.stage.interactive = true

  // Pointer & touch events are normalized into
  // the `pointer*` events for handling different events
  app.stage
    .on('pointerdown', onPointerDown)
    .on('pointerup', onPointerUp)
    .on('pointerupoutside', onPointerUp)
    .on('pointermove', onPointerMove)
}

From here, we will start using a third uniform (uPointerDiff), which will allow us to explore the image gallery using drag and drop. Its value will be equal to the translation of the scene as we explore the gallery. Below is the code corresponding to each of the event handling functions:

// On pointer down, save coordinates and set pointerDownTarget
function onPointerDown (e) {
  console.log('down')
  const { x, y } = e.data.global
  pointerDownTarget = 1
  pointerStart.set(x, y)
  pointerDiffStart = uniforms.uPointerDiff.clone()
}

// On pointer up, set pointerDownTarget
function onPointerUp () {
  console.log('up')
  pointerDownTarget = 0
}

// On pointer move, calculate coordinates diff
function onPointerMove (e) {
  const { x, y } = e.data.global
  if (pointerDownTarget) {
    console.log('dragging')
    diffX = pointerDiffStart.x + (x - pointerStart.x)
    diffY = pointerDiffStart.y + (y - pointerStart.y)
  }
}

We still won’t see any animation if we look at our work, but we can start to see how the messages we defined in each event handler function are correctly printed in the console.

Let’s now turn to implementing our animations!

Animating the distortion effect and the drag and drop functionality

The first thing we need to start an animation with PixiJS (or any canvas-based animation) is an animation loop. It usually consists of a function that is called continuously, using requestAnimationFrame, which in each call renders the graphics on the canvas element, thus producing the desired animation.

We can implement our own animation loop in PixiJS, or we can use the utilities included in the library. In this case, we will use the add method of app.ticker, which allows us to pass a function that will be executed in each frame. At the end of the init function we will add this:

// Animation loop
// Code here will be executed on every animation frame
app.ticker.add(() => {
  // Multiply the values by a coefficient to get a smooth animation
  uniforms.uPointerDown += (pointerDownTarget - uniforms.uPointerDown) * 0.075
  uniforms.uPointerDiff.x += (diffX - uniforms.uPointerDiff.x) * 0.2
  uniforms.uPointerDiff.y += (diffY - uniforms.uPointerDiff.y) * 0.2
})

Meanwhile, in the Filter constructor for the background, we will pass the same uniforms as in the stage filter. This allows us to simulate the translation effect of the background with this tiny modification in the corresponding shader:

uniform vec2 uPointerDiff;

void main () {
  // Coordinates minus the `uPointerDiff` value
  vec2 coord = gl_FragCoord.xy - uPointerDiff;

  // ... more code here ...
}

And now we can see the distortion effect in action, including the drag and drop functionality for the grid background. Play with it!

Randomly generate a masonry grid layout

To make our UI more interesting, we can randomly generate the sizing and dimensions of the grid cells. That is, each image can have different dimensions, creating a kind of masonry layout.

Let’s use Unsplash Source, which will allow us to get random images from Unsplash and define the dimensions we want. This will facilitate the task of creating a random masonry layout, since the images can have any dimension that we want, and therefore, generate the layout beforehand.
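As a sketch, the request URL can be built from each cell’s pixel dimensions. The endpoint pattern below is Unsplash Source’s at the time of writing; the helper name is mine:

```javascript
// Build an Unsplash Source URL for a random image at the given pixel size
function unsplashUrl(width, height) {
  return `https://source.unsplash.com/random/${width}x${height}`;
}
```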

To achieve this, we will use an algorithm that executes the following steps:

  1. We will start with a list of rectangles.
  2. We will select the first rectangle in the list and divide it into two rectangles with random dimensions, as long as both resulting rectangles have dimensions equal to or greater than the minimum established limit. We’ll add a check to make sure it’s possible and, if it is, add both resulting rectangles to the list.
  3. If the list is empty, we will finish executing. If not, we’ll go back to step two.

I think you’ll get a much better understanding of how the algorithm works in this next demo. Use the buttons to see how it runs: Next  will execute step two, All will execute the entire algorithm, and Reset will reset to step one.
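The three steps above can also be sketched in plain JavaScript. This mirrors the algorithm, not the demo’s actual Grid class (all names are mine, and `random` is injectable so the output can be made deterministic):

```javascript
// Generate a masonry-style list of rects (in grid cells) by repeatedly
// splitting rects along their longer side, never below `min` cells per side.
function generateRects(cols, rows, min, random = Math.random) {
  const pending = [{ x: 0, y: 0, w: cols, h: rows }]; // step 1: start list
  const done = [];
  while (pending.length) {                            // step 3: loop until empty
    const rect = pending.shift();                     // step 2: take the first rect
    const vertical = rect.w >= rect.h;                // split along the longer side
    const size = vertical ? rect.w : rect.h;
    if (size >= min * 2) {
      // Random cut point keeping both halves >= min
      const cut = min + Math.floor(random() * (size - min * 2 + 1));
      if (vertical) {
        pending.push({ x: rect.x, y: rect.y, w: cut, h: rect.h });
        pending.push({ x: rect.x + cut, y: rect.y, w: size - cut, h: rect.h });
      } else {
        pending.push({ x: rect.x, y: rect.y, w: rect.w, h: cut });
        pending.push({ x: rect.x, y: rect.y + cut, w: rect.w, h: size - cut });
      }
    } else {
      done.push(rect); // too small to split further; keep it
    }
  }
  return done;
}
```

Every split keeps both halves at or above the minimum, so the final rects always tile the original area exactly.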

Drawing solid rectangles

Now that we can properly generate our random grid layout, we will use the list of rectangles generated by the algorithm to draw solid rectangles in our PixiJS application. That way, we can see if it works and make adjustments before adding the images using the Unsplash Source API.

To draw those rectangles, we will generate a random grid layout that is five times bigger than the viewport and position it in the center of the stage. That allows us to move with some freedom to any direction in the gallery.

// Variables and settings for grid
const gridSize = 50
const gridMin = 3
let gridColumnsCount, gridRowsCount, gridColumns, gridRows, grid
let widthRest, heightRest, centerX, centerY, rects

// Initialize the random grid layout
function initGrid () {
  // Getting columns
  gridColumnsCount = Math.ceil(width / gridSize)
  // Getting rows
  gridRowsCount = Math.ceil(height / gridSize)
  // Make the grid 5 times bigger than viewport
  gridColumns = gridColumnsCount * 5
  gridRows = gridRowsCount * 5
  // Create a new Grid instance with our settings
  grid = new Grid(gridSize, gridColumns, gridRows, gridMin)
  // Calculate the center position for the grid in the viewport
  widthRest = Math.ceil(gridColumnsCount * gridSize - width)
  heightRest = Math.ceil(gridRowsCount * gridSize - height)
  centerX = (gridColumns * gridSize / 2) - (gridColumnsCount * gridSize / 2)
  centerY = (gridRows * gridSize / 2) - (gridRowsCount * gridSize / 2)
  // Generate the list of rects
  rects = grid.generateRects()
}

So far, we have generated the list of rectangles. To add them to the stage, it is convenient to create a container, since then we can add the images to the same container and facilitate the movement when we drag the gallery.

Creating a container in PixiJS is like this:

let container

// Initialize a Container element for solid rectangles and images
function initContainer () {
  container = new PIXI.Container()
  app.stage.addChild(container)
}

Now we can add the rectangles to the container so they can be displayed on the screen.

// Padding for rects and images
const imagePadding = 20

// Add solid rectangles and images
// So far, we will only add rectangles
function initRectsAndImages () {
  // Create a new Graphics element to draw solid rectangles
  const graphics = new PIXI.Graphics()
  // Select the color for rectangles
  graphics.beginFill(0xAA22CC)
  // Loop over each rect in the list
  rects.forEach(rect => {
    // Draw the rectangle
    graphics.drawRect(
      rect.x * gridSize,
      rect.y * gridSize,
      rect.w * gridSize - imagePadding,
      rect.h * gridSize - imagePadding
    )
  })
  // Ends the fill action
  graphics.endFill()
  // Add the graphics (with all drawn rects) to the container
  container.addChild(graphics)
}

Note that we have added a padding (imagePadding) to the calculations for each rectangle. This way, the images will have some space between them.

Finally, in the animation loop, we need to add the following code to properly define the position for the container:

// Set position for the container
container.x = uniforms.uPointerDiff.x - centerX
container.y = uniforms.uPointerDiff.y - centerY

And now we get the following result:

But there are still some details to fix, like defining limits for the drag and drop feature. Let’s add this to the onPointerMove event handler, where we effectively check the limits according to the size of the grid we have calculated:

diffX = diffX > 0
  ? Math.min(diffX, centerX + imagePadding)
  : Math.max(diffX, -(centerX + widthRest))
diffY = diffY > 0
  ? Math.min(diffY, centerY + imagePadding)
  : Math.max(diffY, -(centerY + heightRest))

Another small detail that makes things more refined is to add an offset to the grid background. That keeps the blue grid lines intact. We just have to add the desired offset (imagePadding / 2 in our case) to the background shader this way:

// Coordinates minus the `uPointerDiff` value, and plus an offset vec2 coord = gl_FragCoord.xy - uPointerDiff + vec2(10.0);

And we will get the final design for our random grid layout:

Adding images from Unsplash Source

We have our layout ready, so we are all set to add images to it. To add an image in PixiJS, we need a Sprite, which takes the image as its Texture. There are multiple ways of doing this. In our case, we will first create an empty Sprite for each image and, only when the Sprite is inside the viewport, we will load the image, create the Texture and add it to the Sprite. Sound like a lot? We’ll go through it step-by-step.

To create the empty sprites, we will modify the initRectsAndImages function. Please pay attention to the comments for a better understanding:

// For the list of images
let images = []

// Add solid rectangles and images
function initRectsAndImages () {
  // Create a new Graphics element to draw solid rectangles
  const graphics = new PIXI.Graphics()
  // Select the color for rectangles
  graphics.beginFill(0x000000)
  // Loop over each rect in the list
  rects.forEach(rect => {
    // Create a new Sprite element for each image
    const image = new PIXI.Sprite()
    // Set image's position and size
    image.x = rect.x * gridSize
    image.y = rect.y * gridSize
    image.width = rect.w * gridSize - imagePadding
    image.height = rect.h * gridSize - imagePadding
    // Set its alpha to 0, so it is not visible initially
    image.alpha = 0
    // Add image to the list
    images.push(image)
    // Draw the rectangle
    graphics.drawRect(image.x, image.y, image.width, image.height)
  })
  // Ends the fill action
  graphics.endFill()
  // Add the graphics (with all drawn rects) to the container
  container.addChild(graphics)
  // Add all image's Sprites to the container
  images.forEach(image => {
    container.addChild(image)
  })
}

So far, we only have empty sprites. Next, we will create a function that’s responsible for downloading an image and assigning it as a Texture to the corresponding Sprite. This function will only be called if the Sprite is inside the viewport so that the image only downloads when necessary.

On the other hand, if the gallery is dragged and a Sprite is no longer inside the viewport during the course of the download, that request may be aborted, since we are going to use an AbortController (more on this on MDN). In this way, we will cancel the unnecessary requests as we drag the gallery, giving priority to the requests corresponding to the sprites that are inside the viewport at every moment.

Let’s see the code to land the ideas a little better:

// To store image's URL and avoid duplicates
let imagesUrls = {}

// Load texture for an image, given its index
function loadTextureForImage (index) {
  // Get image Sprite
  const image = images[index]
  // Set the url to get a random image from Unsplash Source, given image dimensions
  const url = `https://source.unsplash.com/random/${image.width}x${image.height}`
  // Get the corresponding rect, to store more data needed (it is a normal Object)
  const rect = rects[index]
  // Create a new AbortController, to abort fetch if needed
  const { signal } = rect.controller = new AbortController()
  // Fetch the image
  fetch(url, { signal }).then(response => {
    // Get image URL, and if it was downloaded before, load another image
    // Otherwise, save image URL and set the texture
    const id = response.url.split('?')[0]
    if (imagesUrls[id]) {
      loadTextureForImage(index)
    } else {
      imagesUrls[id] = true
      image.texture = PIXI.Texture.from(response.url)
      rect.loaded = true
    }
  }).catch(() => {
    // Catch errors silently, to avoid showing the following error message if it is aborted:
    // AbortError: The operation was aborted.
  })
}
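The abort wiring can be exercised in isolation. This sketch replaces the fetch above with a plain Promise, just to show what calling controller.abort() does to the signal:

```javascript
// A minimal look at AbortController, independent of fetch or PixiJS.
const controller = new AbortController();
const { signal } = controller;

// A stand-in for an in-flight request: reject when the signal aborts.
const pending = new Promise((resolve, reject) => {
  signal.addEventListener('abort', () => reject(new Error('AbortError')));
});
pending.catch(() => { /* swallow the rejection, like the demo's .catch() */ });

console.log(signal.aborted); // false: nothing aborted yet
controller.abort();
console.log(signal.aborted); // true: any fetch bound to this signal now rejects
```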

Now we need to call the loadTextureForImage function for each image whose corresponding Sprite is intersecting with the viewport. In addition, we will cancel the fetch requests that are no longer needed, and we will add an alpha transition when the rectangles enter or leave the viewport.

// Check if rects intersect with the viewport
// and load the corresponding images
function checkRectsAndImages () {
  // Loop over rects
  rects.forEach((rect, index) => {
    // Get corresponding image
    const image = images[index]
    // Check if the rect intersects with the viewport
    if (rectIntersectsWithViewport(rect)) {
      // If the rect has just been discovered,
      // start loading the image
      if (!rect.discovered) {
        rect.discovered = true
        loadTextureForImage(index)
      }
      // If image is loaded, increase alpha if possible
      if (rect.loaded && image.alpha < 1) {
        image.alpha += 0.01
      }
    } else { // The rect is not intersecting
      // If the rect was discovered before, but the
      // image is not loaded yet, abort the fetch
      if (rect.discovered && !rect.loaded) {
        rect.discovered = false
        rect.controller.abort()
      }
      // Decrease alpha if possible
      if (image.alpha > 0) {
        image.alpha -= 0.01
      }
    }
  })
}
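The alpha fade in checkRectsAndImages is a fixed step toward 0 or 1 on every tick. A small sketch of that idea, with explicit clamping (which the demo approximates with its < 1 and > 0 guards):

```javascript
// Step a sprite's alpha toward its target each frame:
// toward 1 while intersecting the viewport, toward 0 while off-screen.
function stepAlpha(alpha, visible, step = 0.01) {
  return visible
    ? Math.min(1, alpha + step)  // fade in
    : Math.max(0, alpha - step); // fade out
}
```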

And the function that verifies if a rectangle is intersecting with the viewport is the following:

// Check if a rect intersects the viewport
function rectIntersectsWithViewport (rect) {
  return (
    rect.x * gridSize + container.x <= width &&
    0 <= (rect.x + rect.w) * gridSize + container.x &&
    rect.y * gridSize + container.y <= height &&
    0 <= (rect.y + rect.h) * gridSize + container.y
  )
}
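That check is a standard axis-aligned overlap test against the viewport rectangle. Here it is again as a sketch with the globals (gridSize, container, width, height) passed in explicitly, which makes it easy to unit test:

```javascript
// Does a grid-space rect, offset by the container position, overlap the viewport?
function rectIntersectsViewport(rect, offset, gridSize, viewWidth, viewHeight) {
  const left = rect.x * gridSize + offset.x;
  const top = rect.y * gridSize + offset.y;
  const right = (rect.x + rect.w) * gridSize + offset.x;
  const bottom = (rect.y + rect.h) * gridSize + offset.y;
  // Overlap exists unless the rect is entirely past one of the four edges
  return left <= viewWidth && right >= 0 && top <= viewHeight && bottom >= 0;
}
```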

Last, we have to add the checkRectsAndImages function to the animation loop:

// Animation loop
app.ticker.add(() => {
  // ... more code here ...

  // Check rects and load/cancel images as needed
  checkRectsAndImages()
})

Our animation is nearly ready!

Handling changes in viewport size

When initializing the application, we resized the renderer so that it occupies the whole viewport, but if the viewport changes its size for any reason (for example, the user rotates their mobile device), we should re-adjust the dimensions and restart the application.

// On resize, reinit the app (clean and init)
// But first debounce the calls, so we don't call init too often
let resizeTimer
function onResize () {
  if (resizeTimer) clearTimeout(resizeTimer)
  resizeTimer = setTimeout(() => {
    clean()
    init()
  }, 200)
}
// Listen to resize event
window.addEventListener('resize', onResize)

The clean function will clean any residuals of the animation that we were executing before the viewport changed its dimensions:

// Clean the current Application
function clean () {
  // Stop the current animation
  app.ticker.stop()
  // Remove event listeners
  app.stage
    .off('pointerdown', onPointerDown)
    .off('pointerup', onPointerUp)
    .off('pointerupoutside', onPointerUp)
    .off('pointermove', onPointerMove)
  // Abort all fetch calls in progress
  rects.forEach(rect => {
    if (rect.discovered && !rect.loaded) {
      rect.controller.abort()
    }
  })
}

In this way, our application will respond properly to the dimensions of the viewport, no matter how it changes. This gives us the full and final result of our work!

Some final thoughts

Thanks for taking this journey with me! We covered a lot of ground, picked up plenty of concepts along the way, and walked out with a pretty neat piece of UI. You can check the code on GitHub, or play with demos on CodePen.

If you have worked with WebGL before (with or without using other libraries), I hope you saw how nice it is working with PixiJS. It abstracts the complexity associated with the WebGL world in a great way, allowing us to focus on what we want to do rather than the technical details to make it work.

Bottom line is that PixiJS brings the world of WebGL closer for front-end developers to grasp, opening up a lot of possibilities beyond HTML, CSS and JavaScript.

The post Building an Images Gallery using PixiJS and WebGL appeared first on CSS-Tricks.



Techniques for Rendering Text with WebGL

As is the rule in WebGL, anything that seems like it should be simple is actually quite complicated. Drawing lines, debugging shaders, text rendering… they are all damn hard to do well in WebGL.

Isn’t that weird? WebGL doesn’t have a built-in function for rendering text, even though text seems like the most basic of functionalities. When it comes down to actually rendering it, things get complicated. How do you account for the immense amount of glyphs for every language? How do you work with fixed-width, or proportional-width fonts? What do you do when text needs to be rendered top-to-bottom, left-to-right, or right-to-left? Mathematical equations, diagrams, sheet music?

Suddenly it starts to make sense why text rendering has no place in a low-level graphics API like WebGL. Text rendering is a complex problem with a lot of nuances. If we want to render text, we need to get creative. Fortunately, a lot of smart folks already came up with a wide range of techniques for all our WebGL text needs.

We’ll look at some of those techniques in this article, including how to generate the assets they need and how to use them with ThreeJS, a JavaScript 3D library that includes a WebGL renderer. As a bonus, each technique is going to have a demo showcasing use cases.


A quick note on text outside of WebGL

Although this article is all about text inside WebGL, the first thing you should consider is whether you can get away with using HTML text or canvas overlaid on top of your WebGL canvas. As an overlay, the text can’t be occluded by the 3D geometry, but you get styling and accessibility out of the box. That’s all you need in a lot of cases.

Font geometries

One of the common ways to render text is to build the glyphs with a series of triangles, much like a regular model. After all, rendering points, lines and triangles is a strength of WebGL.

When creating a string, each glyph is made by reading the triangles from a font file of triangulated points. From there, you could extrude the glyph to make it 3D, or scale the glyphs via matrix operations.

Regular font representation (left) and font geometry (right)

Font geometry works best for a small amount of text. That’s because each glyph contains many triangles, which causes drawing to become problematic.

Rendering this exact paragraph you are reading right now with font geometry creates 185,084 triangles and 555,252 vertices. This is just 259 letters. Write the whole article using a font geometry and your computer fan might as well become an airplane turbine.

Although the number of triangles varies by the precision of the triangulation and the typeface in use, rendering lots of text will probably always be a bottleneck when working with font geometry.
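To put those figures in perspective, the numbers two paragraphs up are internally consistent: 555,252 vertices is exactly three vertices per triangle (unindexed geometry), and it works out to roughly 715 triangles per glyph. A quick back-of-the-envelope check:

```javascript
// Sanity-check the paragraph's numbers: unindexed triangles use 3 vertices each.
const triangles = 185084;
const vertices = 555252;
const letters = 259;

console.log(vertices === triangles * 3);      // true: unindexed geometry
console.log(Math.round(triangles / letters)); // 715 triangles per glyph, roughly
```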

How to create a font geometry from a font file

If it were as easy as choosing the font you want and rendering the text, I wouldn’t be writing this article. Regular font formats define glyphs with Bezier curves. Unfortunately, drawing those directly in WebGL is extremely expensive on the CPU and is also complicated to do. If we want to render text, we need to create triangles (triangulation) out of the Bezier curves.

I’ve found a few triangulation methods, but by no means are any of them perfect and they may not work with every font. But at least they’ll get you started for triangulating your own typefaces.

Method 1: Automatic and easy

If you are using ThreeJS, you pass your typeface through FaceType.js to read the parametric curves from your font file and put them into a .json file. The font geometry features in ThreeJS take care of triangulating the points for you:

const geometry = new THREE.TextGeometry("Hello There", {font: font, size: 80})

Alternatively, if you are not using ThreeJS and don’t need real-time triangulation, you could save yourself the pain of a manual process by using ThreeJS to triangulate the font for you. Then you can extract the vertices and indices from the geometry and load them in the WebGL application of your choice.

Method 2: Manual and painful

The manual option for triangulating a font file is extremely complicated and convoluted, at least initially. It would require a whole article just to explain it in detail. That said, we’ll quickly go over the steps of a basic implementation I grabbed from StackOverflow.

See the Pen
Triangulating Fonts
by Daniel Velasquez (@Anemolo)
on CodePen.

The implementation basically breaks down like this:

  1. Add OpenType.js and Earcut.js to your project.
  2. Get Bezier curves from your .ttf font file using OpenType.js.
  3. Convert Bezier curves into closed shapes and sort them by descending area.
  4. Determine the indices for the holes by figuring out which shapes are inside other shapes.
  5. Send all of the points to Earcut with the hole indices as a second parameter.
  6. Use Earcut’s result as the indices for your geometry.
  7. Breathe out.

Yeah, it’s a lot. And this implementation may not work for all typefaces. It’ll get you started nonetheless.
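Step 4 in that list (figuring out which shapes are holes) reduces to a point-in-polygon test: take a point from one shape and ask whether it falls inside another. A ray-casting sketch of that test, not tied to any particular font:

```javascript
// Ray casting: count how many polygon edges a horizontal ray from the
// point crosses. Odd count means the point is inside.
function pointInPolygon(point, polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    const crosses =
      (yi > point[1]) !== (yj > point[1]) &&
      point[0] < ((xj - xi) * (point[1] - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

Earcut then receives the concatenated points with the hole shapes’ start indices as its second argument.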

Using text geometries in ThreeJS

Thankfully, ThreeJS supports text geometries out of the box. Give it a .json of your favorite font’s Bezier curves and ThreeJS takes care of triangulating the vertices for you in runtime.

var font;
var text = "Hello World";
var loader = new THREE.FontLoader();
loader.load('fonts/helvetiker_regular.typeface.json', function (helvetiker) {
  font = helvetiker;
  var geometry = new THREE.TextGeometry(text, {
    font: font,
    size: 80,
    height: 5,
  });
});


Pros:

  • It’s easily extruded to create 3D strings.
  • Scaling is made easier with matrix operations.
  • It provides great quality depending on the amount of triangles used.


Cons:

  • This doesn’t scale well with large amounts of text due to the high triangle count. Since each character is defined by a lot of triangles, even rendering something as brief as “Hello World” results in 7,396 triangles and 22,188 vertices.
  • This doesn’t lend itself to common text effects.
  • Anti-aliasing depends on your post-processing aliasing or your browser default.
  • Scaling things too big might show the triangles.

Demo: Fading Letters

In the following demo, I took advantage of how easy it is to create 3D text using font geometries. Inside a vertex shader, the extrusion is increased and decreased as time goes on. Pair that with fog and standard material and you get these ghostly letters coming in and out of the void.

Notice that even with a small number of letters, the triangle count is already in the tens of thousands!

Text (and canvas) textures

Making text textures is probably the simplest and oldest way to draw text in WebGL. Open up Photoshop or some other raster graphics editor, draw an image with some text on it, then render these textures onto a quad and you are done!

Alternatively, you could use the canvas to create the textures on demand at runtime. You’re able to render the canvas as a texture onto a quad as well.

Aside from being the least complicated technique of the bunch, text textures and canvas textures have the benefit of needing only one quad per texture, or per given piece of text. If you really wanted to, you could write the entire Encyclopedia Britannica on a single texture. That way, you only have to render a single quad: six vertices and two faces. Of course, you would do it at some scale, but the idea still remains: you can batch multiple glyphs into the same quad. Both text and canvas textures have issues with scaling, particularly when working with lots of text.

For text textures, the user has to download all the textures that make up the text, then keep them in memory. For canvas textures, the user doesn’t have to download anything — but now the user’s computer has to do all the rasterizing at runtime, and you need to keep track of where every word is located in the canvas. Plus, updating a big canvas can be really slow.

How to create and use a text texture

Text textures don’t have anything fancy going on for them. Open up your favorite raster graphics editor, draw some text on the canvas and export it as an image. Then you can load it as a texture, and map it on a plane:

// Load the exported image as a texture (replace the path with your own file)
const texture = new THREE.TextureLoader().load('my-text.png');
const geometry = new THREE.PlaneBufferGeometry();
const material = new THREE.MeshBasicMaterial({ map: texture });
this.scene.add(new THREE.Mesh(geometry, material));

If your WebGL app has a lot of text, downloading a huge sprite sheet of text might not be ideal, especially for users on slow connections. To avoid the download time, you can rasterize things on demand using an offscreen canvas then sample that canvas as a texture.

Let’s trade download time for performance since rasterizing lots of text takes more than a moment.

function createTextCanvas(string, parameters = {}) {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");

  // Prepare the font to be able to measure
  let fontSize = parameters.fontSize || 56;
  ctx.font = `${fontSize}px monospace`;

  const textMetrics = ctx.measureText(string);

  let width = textMetrics.width;
  let height = fontSize;

  // Resize canvas to match text size
  canvas.width = width;
  canvas.height = height;
  canvas.style.width = width + "px";
  canvas.style.height = height + "px";

  // Re-apply font since canvas is resized.
  ctx.font = `${fontSize}px monospace`;
  ctx.textAlign = parameters.align || "center";
  ctx.textBaseline = parameters.baseline || "middle";

  // Make the canvas transparent for simplicity
  ctx.fillStyle = "transparent";
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);

  ctx.fillStyle = parameters.color || "white";
  ctx.fillText(string, width / 2, height / 2);

  return canvas;
}

let texture = new THREE.Texture(createTextCanvas("This is text"));

Now you can use the texture on a plane, like the previous snippet. Or you could create a sprite instead.

As an alternative, you could use more efficient libraries to create textures or sprites, like three-text2d or three-spritetext. And if you want multi-line text, you should check out this amazing tutorial.


Pros:

  • This provides great 1-to-1 quality with static text.
  • There’s a low vertex/face count. Each string can use as little as six vertices and two faces.
  • It’s easy to implement texture on a quad.
  • It’s fairly trivial to add effects, like borders and glows, using canvas or a graphics editor.
  • Canvas makes it easy to create multi-line text.


Cons:

  • It looks blurry if scaled, rotated or transformed after rasterizing.
  • On non-retina screens, text looks crunchy.
  • You have to rasterize all the strings used. A lot of strings means a lot of data to download.
  • On-demand rasterizing with canvas can be slow if you keep constantly updating the canvas.

Demo: Canvas texture

Canvas textures work well with a limited amount of text that doesn’t change often. So I built a simple wall of text with the quads re-using the same texture.

Bitmap fonts

Both font geometries and text textures experience the same problems handling lots of text. Having one million vertices per piece of text is super inefficient, and creating one texture per piece of text doesn’t really scale.

Bitmap fonts solve this issue by rasterizing all unique glyphs into a single texture, called a texture atlas. This means you can assemble any given string at runtime by creating a quad for each glyph and sampling the section of the texture atlas.

This means users only have to download and use a single texture for all of the text. It also means you only need to render as little as one quad per glyph:

A visual of bitmap font sampling

Rendering this whole article would take approximately 117,272 vertices and 58,636 triangles. That’s 3.1 times more efficient than font geometry rendering just a single paragraph. That’s a huge improvement!
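The arithmetic behind those figures is simply one quad per glyph: four vertices and two triangles each. (The 29,318-glyph count below is derived from the numbers above, not measured separately.)

```javascript
// Cost of bitmap-font text: one quad (4 vertices, 2 triangles) per glyph.
function bitmapTextCost(glyphCount) {
  return { vertices: glyphCount * 4, triangles: glyphCount * 2 };
}

console.log(bitmapTextCost(29318)); // { vertices: 117272, triangles: 58636 }
```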

Because bitmap fonts rasterize the glyphs into a texture, they suffer from the same problem as regular images. Zoom in or scale and you start seeing a pixelated and blurry mess. If you want text at a different size, you should send a secondary bitmap with the glyphs at that specific size. Or you could use a Signed Distance Field (SDF), which we’ll cover in the next section.

How to create bitmap fonts

There are a lot of tools to generate bitmaps. Here are some of the more relevant options out there:

  • Angelcode’s bmfont – This is by the creators of the bitmap format.
  • Hiero – This is a Java open-source tool. It’s very similar to Angelcode’s bmfont, but it allows you to add text effects.
  • Glyphs Designer – This is a paid MacOS app.
  • ShoeBox – This is a tool for dealing with sprites, including bitmap fonts.

We’ll use Angelcode’s bmfont for our example because I think it’s the easiest one to get started with. At the bottom of this section, you can find other tools if you think it lacks the functionality you are looking for.

When you open the app, you’ll see a screen full of letters that you can select to use. The nice thing about this is that you are able to grab just the glyphs you need instead of, say, shipping Greek symbols you’ll never render.

The app’s sidebar allows you to choose and select groups of glyphs.

The BmFont application

Ready to export? Go to Options → Save bitmap as. And done!

But we’re getting a little ahead of ourselves. Before exporting, there are a few important settings you should check.

Export and Font Option settings
  • Font settings: These let you choose the font and size you want to use. The most important item here is “Match char height.” By default, the app’s “size” option uses pixels instead of points, so you’ll see a substantial difference between your graphics editor’s font size and the font size that is generated. Select the “Match char height” option if you want your glyphs to make sense.
  • Export settings: For the export, make sure the texture size is a power of two (e.g. 16×16, 32×32, 64×64, etc.). Then you are able to take advantage of “Linear Mipmap linear” filtering, if needed.

At the bottom of the settings, you’ll see the “file format” section. Choosing either option here is fine as long as you can read the file and create the glyphs.

If you are looking for the smallest file size, I ran an ultra non-scientific test where I created a bitmap of all lowercase and uppercase Latin characters and compared each option. For font descriptors, the most efficient format is binary.

Font Descriptor Format | File Size
-----------------------|----------
Binary                 | 3 KB
Raw Text               | 11 KB

Texture Format         | File Size
-----------------------|----------
Targa                  | 64 KB
DirectDraw Surface     | 65 KB

PNG is the smallest file size for the texture format.

Of course, it’s a little more complicated than just file sizes. To get a better idea of which option to use, you should look into parsing time and run-time performance. If you would like to know the pros and cons of each format, check out this discussion.

How to use bitmap fonts

Creating bitmap font geometry is a bit more involved than just using a texture because we have to construct the string ourselves. Each glyph has its own height and width and samples a different section of the texture atlas. We have to create a quad for each glyph in our string so we can give it the correct UVs to sample its glyph.
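The UV bookkeeping for one glyph quad looks roughly like this. The field names (x, y, width, height) follow the BMFont descriptor format; the atlas dimensions are whatever you exported:

```javascript
// Compute the four UV corners for one glyph quad, given its pixel
// rectangle inside the texture atlas.
function glyphUVs(glyph, atlasWidth, atlasHeight) {
  const u0 = glyph.x / atlasWidth;
  const v0 = glyph.y / atlasHeight;
  const u1 = (glyph.x + glyph.width) / atlasWidth;
  const v1 = (glyph.y + glyph.height) / atlasHeight;
  // One UV pair per corner: top-left, top-right, bottom-right, bottom-left
  return [[u0, v0], [u1, v0], [u1, v1], [u0, v1]];
}
```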

You can use three-bmfont-text in ThreeJS to create strings using bitmaps, SDFs and MSDFs. It takes care of multi-line text, and batching all glyphs onto a single geometry. Note that it needs to be installed in a project from npm.

var createGeometry = require('three-bmfont-text')
var loadFont = require('load-bmfont')

loadFont('fonts/Arial.fnt', function (err, font) {
  // create a geometry of packed bitmap glyphs,
  // word wrapped to 300px and right-aligned
  var geometry = createGeometry({
    font: font,
    text: "My Text",
    width: 300,
    align: 'right'
  })

  var textureLoader = new THREE.TextureLoader();
  textureLoader.load('fonts/Arial.png', function (texture) {
    // we can use a simple ThreeJS material
    var material = new THREE.MeshBasicMaterial({
      map: texture,
      transparent: true,
      color: 0xaaffff
    })

    // now do something with our mesh!
    var mesh = new THREE.Mesh(geometry, material)
  })
})

Depending on whether your text is drawn as full black or full white, use the invert option.


Pros:

  • It’s fast and simple to render.
  • It’s a 1:1 ratio and resolution independent.
  • It can render any string, given the glyphs.
  • It’s good for lots of text that needs to change often.
  • It works extremely well with a limited number of glyphs.
  • It includes support for things like kerning, line height and word-wrapping at run-time.


Cons:

  • It only accepts a limited set of characters and styles.
  • It requires pre-rasterizing glyphs and extra bin packing for optimal usage.
  • It’s blurry and pixelated at large scales, or when rotated or transformed.
  • It requires one quad per rendered glyph.

Interactive Demo: The Shining Credits

Raster bitmap fonts work great for movie credits because we only need a few sizes and styles. The drawback is that the text isn’t great with responsive designs because it’ll look blurry and pixelated at larger sizes.

For the mouse effect, I’m making calculations by mapping the mouse position to the size of the view then calculating the distance from the mouse to the text position. I’m also rotating the text when it hits specific points on the z-axis and y-axis.

Signed distance fields

Much like bitmap fonts, signed distance field (SDF) fonts are also a texture atlas. Unique glyphs are batched into a single “texture atlas” that can create any string at runtime.

But instead of storing the rasterized glyph on the texture the way bitmap fonts do, the glyph’s SDF is generated and stored instead which allows for a high resolution shape from a low resolution image.

Like polygonal meshes (font geometries), SDFs represent shapes. Each pixel on an SDF stores the distance to the closest surface. The sign indicates whether the pixel is inside or outside the shape. If the sign is negative, then the pixel is inside; if it’s positive, then the pixel is outside. This video illustrates the concept nicely.
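The definition is easier to see with the simplest possible SDF. Here is the SDF of a circle, a stand-in shape rather than an actual glyph: the returned value is the distance to the outline, negative inside and positive outside.

```javascript
// Signed distance from a point (px, py) to a circle's outline.
function circleSDF(px, py, cx, cy, radius) {
  return Math.hypot(px - cx, py - cy) - radius;
}

console.log(circleSDF(0, 0, 0, 0, 5));  // -5: deep inside
console.log(circleSDF(10, 0, 0, 0, 5)); //  5: outside
console.log(circleSDF(5, 0, 0, 0, 5));  //  0: exactly on the edge
```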

SDFs are also commonly used for raytracing and volumetric rendering.

Because an SDF stores distance in each pixel, the raw result looks like a blurry version of the original shape. To output the hard shape you’ll need to alpha test it at 0.5, which is the border of the glyph. Take a look at how the SDF of the letter “A” compares to its regular raster image:

Raster text beside of a raw and an alpha tested SDF

As I mentioned earlier, the big benefit of SDFs is being able to render high resolution shapes from a low resolution SDF. This means you can create a 16pt font SDF and scale the text up to 100pt or more without losing much crispness.

SDFs are good at scaling because you can almost perfectly reconstruct the distance with bilinear interpolation, which is a fancy way of saying we can get values between two points. In this case, bilinear interpolating between two pixels on a regular bitmap font gives us the in-between color, resulting in a linear blur.

On an SDF, bilinear interpolating between two pixels provides the in-between distance to the nearest edge. Since these two pixel distances are similar to begin with, the resulting value doesn’t lose much information about the glyph. This also means that the bigger the SDF, the more accurate it is and the less information is lost.
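Bilinear interpolation itself is just two nested linear interpolations over the four surrounding texels. A sketch, with tx and ty being the fractional position between texels:

```javascript
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Interpolate a value between four texels: d00/d10 on the top row,
// d01/d11 on the bottom row.
function bilerp(d00, d10, d01, d11, tx, ty) {
  return lerp(lerp(d00, d10, tx), lerp(d01, d11, tx), ty);
}

console.log(bilerp(0, 1, 0, 1, 0.5, 0.5)); // 0.5, halfway between distances 0 and 1
```

When the stored values are distances that change linearly, the interpolated result is exact, which is why SDFs scale so gracefully.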

However, this process comes with a caveat. If the rate change between pixels is not linear — like in the case of sharp corners — bilinear interpolation gives out an inaccurate value, resulting in chipped or rounded corners when scaling an SDF much higher than its original size.

SDF rounded corners

Aside from bumping up the texture size, the only real solution is to use multi-channel SDFs, which is what we’ll cover in the next section.

If you want to take a deeper dive into the science behind SDFs, check out Chris Green’s master’s thesis (PDF) on the topic.


Pros:

  • They maintain crispness, even when rotated, scaled or transformed.
  • They are ideal for dynamic text.
  • They provide good quality to the size ratio. A single texture can be used to render tiny and huge font sizes without losing much quality.
  • They have a low vertex count of only four vertices per glyph.
  • Antialiasing is inexpensive as is making borders, drop shadows and all kinds of effects with alpha testing.
  • They’re smaller than MSDFs (which we’ll see in a bit).


Cons:

  • They can result in rounded or chipped corners when the texture is sampled beyond its resolution. (Again, we’ll see how MSDFs can prevent that.)
  • They’re ineffective at tiny font sizes.
  • They can only be used with monochrome glyphs.

Multi-channel signed distance fields

Multi-channel signed distance field (MSDF) fonts are a bit of a mouthful and a fairly recent variation on SDFs that is capable of producing near-perfect sharp corners by using all three color channels. They look quite mind-blowing at first, but don’t let that put you off because they are easier to use than they appear.

A multi-channel signed distance field file can look a little spooky at first.

Using all three color channels does result in a heavier image, but that’s what gives MSDFs a far better quality-to-space ratio than regular SDFs. The following image shows the difference between an SDF and an MSDF for a font that has been scaled up to 50px.

The SDF font results in rounded corners, even at 1x zoom, while the MSDF font retains sharp edges, even at 5x zoom.

Like a regular SDF, an MSDF stores the distance to the nearest edge, but changes color channels whenever it finds a sharp corner. We get the shape by drawing where two or more color channels agree, although there’s a bit more technique involved than that. Check out the README for this MSDF generator for a more thorough explanation.


Pros:

  • They support a higher quality-to-space ratio than SDFs and are often the better choice.
  • They maintain sharp corners when scaled.


Cons:

  • They may contain small artifacts, but those can be avoided by bumping up the texture size of the glyph.
  • They require filtering the median of the three values at runtime, which is a bit expensive.
  • They are only compatible with monochrome glyphs.
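The “median of the three values” mentioned in the cons is the core of MSDF sampling: the shape is wherever at least two channels agree. Here it is in JavaScript (the GLSL version is the same expression with min/max):

```javascript
// Median of three channel values, as used when sampling an MSDF texel.
function median(r, g, b) {
  return Math.max(Math.min(r, g), Math.min(Math.max(r, g), b));
}

console.log(median(0.1, 0.9, 0.5)); // 0.5: the two channels nearest the edge win
```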

How to create MSDF fonts

The quickest way to create an MSDF font is to use the msdf-bmfont-web tool. It has most of the relevant customization options and it gets the job done in seconds, all in the browser. Alternatively, there are a number of Google Fonts that have already been converted into MSDF by the folks at A-Frame.

If you are also looking to generate SDFs, or your typeface requires some special tweaking because of problematic glyphs, the msdf-bmfont-xml CLI gives you a wide range of options without making things overly confusing. Let’s take a look at how you would use it.

First, you’ll need to install it globally from npm:

npm install msdf-bmfont-xml -g

Next, give it a .ttf font file with your options:

msdf-bmfont ./Open-Sans-Black.ttf --output-type json --font-size 76 --texture-size 512,512

Those options are worth digging into a bit. While msdf-bmfont-xml provides a lot of options to fine-tune your font, there are really just a few options you’ll probably need to correctly generate an MSDF:

  • -t <type> or --field-type <msdf or sdf>: msdf-bmfont-xml generates MSDF glyph atlases by default. If you want to generate an SDF instead, you need to specify it by using -t sdf.
  • -f <xml or json> or --output-type <xml or json>: msdf-bmfont-xml generates an XML font file that you would have to parse to JSON at runtime. You can avoid this parsing step by exporting JSON straight away.
  • -s, --font-size <fontSize>: Some artifacts and imperfections might show up if the font size is super small. Bumping up the font size will get rid of them most of the time. This example shows a small imperfection in the letter “M.”
  • -m <w,h> or --texture-size <w,h>: If all of your characters don’t fit in one texture, a second texture is created to hold the overflow. Unless you are trying to take advantage of a multi-page glyph atlas, I recommend increasing the texture size so that all of the characters fit on one texture and you avoid the extra work.
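Putting those flags together, a hypothetical invocation that generates an SDF atlas as JSON on a single larger texture could look like this (the font file and sizes are placeholders, not recommendations):

```shell
msdf-bmfont ./Open-Sans-Black.ttf --field-type sdf --output-type json --font-size 76 --texture-size 1024,1024
```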

There are other tools that help generate MSDF and SDF fonts:

  • msdf-bmfont-web: A web tool to create MSDFs (but not SDFs) quickly and easily
  • msdf-bmfont: A Node tool using Cairo and node-canvas
  • msdfgen: The original command line tool that all other MSDF tools are based on
  • Hiero: A tool for generating both bitmaps and SDF fonts

How to use SDF and MSDF fonts

Because SDF and MSDF fonts are also glyph atlases, we can use three-bmfont-text like we did for bitmap fonts. The only difference is that we have to recover the glyph out of the distance field with a fragment shader.

Here’s how that works for SDF fonts. Since our distance field has a value greater than 0.5 inside our glyph and less than 0.5 outside it, we need to alpha test each pixel in a fragment shader, making sure we only render pixels with a value greater than 0.5, so that we draw just the inside of the glyphs.

const fragmentShader = `
  uniform vec3 color;
  uniform sampler2D map;
  varying vec2 vUv;

  void main() {
    vec4 texColor = texture2D(map, vUv);
    // Only render the inside of the glyph.
    float alpha = step(0.5, texColor.a);
    gl_FragColor = vec4(color, alpha);
    if (gl_FragColor.a < 0.0001) discard;
  }
`;

const vertexShader = `
  varying vec2 vUv;

  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    vUv = uv;
  }
`;

let material = new THREE.ShaderMaterial({
  fragmentShader,
  vertexShader,
  uniforms: {
    map: new THREE.Uniform(glyphs),
    color: new THREE.Uniform(new THREE.Color(0xff0000))
  }
});

Similarly, we can import the SDF shader that ships with three-bmfont-text, which comes with antialiasing out of the box, and use it directly on a RawShaderMaterial:

let SDFShader = require('three-bmfont-text/shaders/sdf');
let material = new THREE.RawShaderMaterial(SDFShader({
  map: texture,
  transparent: true,
  color: 0x000000
}));

MSDF fonts are a little different. They recreate sharp corners from the intersections of the color channels; two or more channels have to agree for a pixel to count as inside the glyph. Before doing any alpha testing, we need to take the median of the three color channels to see where they agree:

const fragmentShader = `
  uniform vec3 color;
  uniform sampler2D map;
  varying vec2 vUv;

  float median(float r, float g, float b) {
    return max(min(r, g), min(max(r, g), b));
  }

  void main() {
    vec4 texColor = texture2D(map, vUv);
    // Only render the inside of the glyph.
    float sigDist = median(texColor.r, texColor.g, texColor.b) - 0.5;
    float alpha = step(0.0, sigDist);
    gl_FragColor = vec4(color, alpha);
    if (gl_FragColor.a < 0.0001) discard;
  }
`;

const vertexShader = `
  varying vec2 vUv;

  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    vUv = uv;
  }
`;

let material = new THREE.ShaderMaterial({
  fragmentShader,
  vertexShader,
  uniforms: {
    map: new THREE.Uniform(glyphs),
    color: new THREE.Uniform(new THREE.Color(0xff0000))
  }
});

Again, we can import the MSDF shader from three-bmfont-text, which also comes with antialiasing out of the box, and use it directly on a RawShaderMaterial:

let MSDFShader = require('three-bmfont-text/shaders/msdf');
let material = new THREE.RawShaderMaterial(MSDFShader({
  map: texture,
  transparent: true,
  color: 0x000000
}));

Demo: Star Wars intro

The Star Wars crawl intro is a good example of where MSDF and SDF fonts work well, because the effect needs the text at multiple sizes. We can use a single MSDF and the text always looks sharp! Sadly, three-bmfont-text doesn’t support justified text yet; applying left justification would make for a more balanced presentation.

For the light saber effect, I’m raycasting against an invisible plane the same size as the text, drawing onto a canvas of the same size, and sampling that canvas by mapping the hit position in the scene to the texture coordinates.
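A minimal sketch of that last mapping step, assuming a plane centered at the origin (the function name and sizes here are made up for illustration, not taken from the demo):

```javascript
// Remap a point on a plane (origin at its center) to canvas pixel coords.
// planeW/planeH are the plane's dimensions in scene units;
// canvasW/canvasH are the canvas dimensions in pixels.
function sceneToCanvas(x, y, planeW, planeH, canvasW, canvasH) {
  const u = x / planeW + 0.5;        // [-planeW/2, planeW/2] -> [0, 1]
  const v = 1 - (y / planeH + 0.5);  // flip Y: canvas Y grows downward
  return [u * canvasW, v * canvasH];
}

// The center of the plane lands at the center of the canvas:
console.log(sceneToCanvas(0, 0, 4, 2, 400, 200)); // [200, 100]
```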

Bonus tip: Generating 3D text with height maps

Aside from font geometries, all the techniques we’ve covered render strings or glyphs onto a single quad. If you want to build 3D geometry out of a flat texture, your best choice is to use a height map.

A height map is a technique where the geometry’s height is bumped up using a texture. This is normally used to generate mountains in games, but it turns out to be useful for rendering text as well.

The only caveat is that you’ll need a lot of faces for the text to look smooth.
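Here’s a minimal, library-agnostic sketch of the idea, with invented sample data: each grid vertex samples a grayscale height value and is displaced by it. In practice, the heights would be sampled from a texture the text was rendered to:

```javascript
// Displace a flat grid of vertices using a grayscale height map.
// heights: row-major array of values in [0, 1] (e.g. sampled from a
// canvas the text was drawn to); scale: maximum displacement.
function displaceGrid(heights, cols, rows, scale) {
  const vertices = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const z = heights[r * cols + c] * scale; // white = raised, black = flat
      vertices.push([c, r, z]);
    }
  }
  return vertices;
}

const verts = displaceGrid([0, 1, 0.5, 0], 2, 2, 10);
// verts: [[0,0,0], [1,0,10], [0,1,5], [1,1,0]]
```

With a dense enough grid, feeding the displaced positions into a mesh produces extruded-looking text from a flat texture.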

Further reading

Different situations call for different techniques. None of the approaches we saw here is a silver bullet; they all have their advantages and disadvantages.

There are a lot of tools and libraries out there to help make the most of WebGL text, most of which actually originate from outside WebGL. If you want to keep learning, I highly recommend you go beyond WebGL and check out some of these links:

The post Techniques for Rendering Text with WebGL appeared first on CSS-Tricks.

