Putting everything together

In Part 1 and Part 2, we discussed some of the graphic tools that Darryl Huffman used to create his “Perlin Flow Field” on CodePen.

The way Darryl made several graphics tools and techniques work together piqued my curiosity, so I did some basic reverse engineering to explore his approach.

As a reminder, this is again a link to Darryl’s work:

See the Pen Perlin Flow Field by Darryl Huffman (@darrylhuffman) on CodePen.


In the previous post, we separated Darryl’s code into three sections:

  • The “context” canvas and the Hair class
  • The WebGL (Three.js), the shader, and the texture canvas
  • The interaction between the texture (aka Darryl’s perlinCanvas) and the “context” canvas

In this third and last post, we will look at how Darryl brought together the canvas figure and the noise function, and take a quick look at the oscillation effect of the “hairs”.

The “Screenshots”

We have the canvas strokes and the noise function. Now what?

Extracting that data directly from the WebGL API is not always an easy task. The canvas API, however, is a robust one and includes many methods that help overcome those difficulties. The image-related methods in particular are very powerful.

A common trick for extracting data from a graphics API, such as the WebGL API or the video API, to be used later for 2D animation on a canvas element is the “screenshot” method.

The approach consists of taking images of the source’s graphics that can then be translated into canvas API data. If you take a screenshot at every animation frame, you can reproduce the source’s graphics much like a GIF or a stop-motion film.
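
To make the idea concrete, here is a minimal sketch of the trick (the names are illustrative, not Darryl’s): copy the source’s canvas into a 2D canvas on every animation frame, then read the pixels through the canvas API.

// `sourceCanvas` stands for any drawable source: a WebGL canvas, a <video>...
const shot = document.createElement('canvas'),
      shotContext = shot.getContext('2d')

shot.width = sourceCanvas.width
shot.height = sourceCanvas.height

function captureFrame() {
	shotContext.clearRect(0, 0, shot.width, shot.height)
	shotContext.drawImage(sourceCanvas, 0, 0) // the "screenshot"
	const imgData = shotContext.getImageData(0, 0, shot.width, shot.height)
	// ...use imgData.data to drive 2D drawings...
	requestAnimationFrame(captureFrame)
}
requestAnimationFrame(captureFrame)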

This is the trick that Darryl used, with the perlinCanvas as the intermediary between both APIs.

Now, you might think this trick seems like overkill, especially because the WebGL API already renders to a canvas element. The main reason to use it is that, even though you can overlap elements of both APIs, trying to extract data directly from the source into the target can disrupt the execution of the source’s graphics, leading to data discrepancies and performance issues. It is better to work with a static representation at every frame instead.

Waves and Trigonometry

Another interesting aspect of the project is the technique used to create the effect of waves passing through the “hairs”.

Darryl wanted the strokes to oscillate right and left based on data coming from the WebGL renderer. More specifically, variations in the coloring of the pixels of each “screenshot” should drive changes in the rotation of each stroke, giving the illusion of swaying or waving.

The canvas API offers different solutions for rotating drawings. Examples are:

  • the arc method
  • the rotate method

Darryl used a different approach: he calculated an angle of rotation, positioned the pen at the origin point of the stroke (x, y), and redrew the stroke with the canvas’ lineTo method based on that angle.
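
As a quick comparison, here is a hedged sketch of both approaches (the variable names are mine), drawing the same segment of a given length rotated by angle from the point (x, y):

// 1) With the canvas transform methods:
context.save()
context.translate(x, y)   // move the origin to the start of the stroke
context.rotate(angle)     // rotate the whole coordinate system
context.moveTo(0, 0)
context.lineTo(length, 0) // draw "horizontally" in the rotated space
context.restore()

// 2) With plain trigonometry (Darryl's approach):
context.moveTo(x, y)
context.lineTo(x + Math.cos(angle) * length, y + Math.sin(angle) * length)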

For that he needed some formulas. And which functions are common to waves, rotations, and angles? Indeed, trigonometric functions.

Darryl used one of the values of the rgba-encoded colors to calculate the angles of rotation. Remember that rgb-encoded colors are represented by a vector of three values, each ranging from 0 to 255. He needed only one of those coordinates because in his project all three had the same value per pixel (grey-scale).
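
The mapping itself is a simple linear one, using the same names that will appear in the code later. A couple of sample values show the range:

const angle = (noise / 255) * Math.PI
// noise =   0  ->  angle = 0   radians
// noise = 128  ->  angle ≈ π/2 radians
// noise = 255  ->  angle = π   radians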

The Code

Let’s first see the use of the canvas interface, and then explore the swaying movement of the strokes.

DARRYL’S “PERLIN” CANVAS

As we previously mentioned in Part 1, the canvas that Darryl would use as an adapter (the “Perlin” canvas) was declared with similar properties to the “context” canvas but not appended to any HTML element:


const canvas = document.createElement('canvas'),       // the "context" canvas
      context = canvas.getContext('2d'),
      perlinCanvas = document.createElement('canvas'), // the "Perlin" canvas
      perlinContext = perlinCanvas.getContext('2d'),
      width = canvas.width = container.offsetWidth,
      height = canvas.height = container.offsetHeight,
...

document.body.appendChild(canvas) // only the "context" canvas is appended to the DOM

let perlinImgData = undefined

perlinCanvas.width = width   // same dimensions as the "context" canvas
perlinCanvas.height = height

The “Perlin” canvas (actually, the perlinContext) would later be associated with the renderer variable, which was declared at line 5 of the original code and linked to the WebGL renderer at line 231, inside the noiseCanvas function:


let container = document.body,
    startTime = new Date().getTime(),
    renderer // line 5: later linked to the WebGL renderer

function init() { // init sets up the canvas elements and renders the "context" canvas
...

	function render() { // this render function lives inside init and draws the "context" canvas
...

		perlinContext.clearRect(0, 0, width, height)
		perlinContext.drawImage(renderer.domElement, 0, 0) // line 70: the "screenshot"
		perlinImgData = perlinContext.getImageData(0, 0, width, height)
...

	}
	render()

}

function noiseCanvas() { // noiseCanvas focuses on the WebGL graphics and its rendering
...

	renderer = new THREE.WebGLRenderer({ alpha: true }) // line 231
...

}

How the WebGL renderer is associated with the perlinContext can be seen at line 70 of the original code, inside the canvas render function. There the perlinContext takes a “screenshot” of the WebGL renderer, and the data of the “screenshot” is then passed to perlinImgData using the getImageData method.
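
This works because Three.js exposes its drawing surface as a regular canvas element: renderer.domElement is an HTMLCanvasElement, and drawImage accepts canvas elements as image sources.

// renderer.domElement is the <canvas> that Three.js renders into:
console.log(renderer.domElement instanceof HTMLCanvasElement) // true
perlinContext.drawImage(renderer.domElement, 0, 0)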

STROKE’S WAVE MOVEMENT AND THE draw METHOD

In order to see how the data in perlinImgData was used, we have to go back to the draw method of each instance of the Hair class.


...
		draw(){
			let { position, length } = this,
			    { x, y } = position,
			    i = (y * width + x) * 4, // index of the pixel's red value in the data array
			    d = perlinImgData.data,
			    noise = d[i],
			    angle = (noise / 255) * Math.PI

			context.moveTo(x, y)
			context.lineTo(x + Math.cos(angle) * length, y + Math.sin(angle) * length)
		}
...

perlinImgData is the data consumed by the draw method of each instance. The rgba values of every pixel in the “screenshot” are listed in perlinImgData.data, which is in fact an array (a Uint8ClampedArray).

The array is searched using an index i, and the calculation of i gives a glimpse of how the color data is arranged. For a pixel located at position (x, y) of the image, its first rgba color value (the red channel) sits at index (y * width + x) * 4 in the array.
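
A small worked example makes the arithmetic clear. Assume an image 4 pixels wide:

const width = 4,
      x = 2,
      y = 1,
      i = (y * width + x) * 4 // i = (1 * 4 + 2) * 4 = 24
// d[24], d[25], d[26], d[27] hold the r, g, b, a values of that pixel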

From the perlinImgData.data array, Darryl extracted a single value per pixel using that “mysterious” index. He then fed the value into a formula whose output is an angle in radians, ranging from 0 (array value 0) to PI (array value 255).

The resulting angle was used to calculate the rotation of the stroke using trigonometric formulas.
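
Plugging in a few angles shows the range of motion. For a stroke of length 10 starting at (x, y):

// angle = 0    ->  tip at (x + 10, y)  (pointing right)
// angle = π/2  ->  tip at (x, y + 10)  (pointing "down": the canvas y-axis grows downward)
// angle = π    ->  tip at (x - 10, y)  (pointing left)

So as the noise value sweeps from 0 to 255, the tip of the stroke sweeps from right to left, which is exactly the oscillation we are after.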

In Action

Let's see the two graphics together.

Here we draw 700 "hairs" as an example (a minimal sketch of how they might be set up follows the next snippet). Remember that the "perlin" canvas was eventually given the same dimensions as the context canvas, but was not appended to any HTML element. perlinImgData was declared at a higher scope and set to "undefined".

let perlinImgData = undefined

perlinCanvas.width = width
perlinCanvas.height = height
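
As promised, here is a minimal sketch of how those 700 hairs might be set up (the Hair constructor signature is my assumption for illustration; the real class was covered in Part 1):

// Hypothetical setup: 700 hairs at random positions
const hairs = []
for (let n = 0; n < 700; n++) {
	hairs.push(new Hair({
		// integer coordinates keep the pixel index in draw() valid
		x: Math.round(Math.random() * (width - 1)),
		y: Math.round(Math.random() * (height - 1))
	}))
}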

So far we haven't used the data coming from the perlinCanvas.

Let's do it!

The perlinCanvas collects a screenshot of the noise flow at each animation frame. The data from the image is then passed to perlinImgData.
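
These are the same three lines we saw inside the render function:

perlinContext.clearRect(0, 0, width, height)
perlinContext.drawImage(renderer.domElement, 0, 0)
perlinImgData = perlinContext.getImageData(0, 0, width, height)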

You have seen this before! The draw method in class Hair. Notice perlinImgData.data, the index i, and the canvas API methods moveTo and lineTo.


    ...
    draw(){
        let { position, length } = this,
            { x, y } = position,
            i = (y * width + x) * 4,
            d = perlinImgData.data,
            noise = d[i],
            angle = (noise / 255) * Math.PI

        context.moveTo(x, y)
        context.lineTo(x + Math.cos(angle) * length, y + Math.sin(angle) * length)
    }
}

The index is used to look up one of the rgba-encoded (0-255) color values for the pixel in the data array perlinImgData.data. Both canvases have the same dimensions, so the extracted values correspond to pixels on the perlinContext screenshot sitting at exactly the same positions as the "hair" origins in the context canvas. The array's value is fed into a formula to get an "angle" value between 0 and PI.

This angle would be used to redraw the line, rotated around the origin point of the hair, using the lineTo method.

Now let's overlap both graphics.

Tada!

If you look closely at the last image, you might notice that the “hairs” swing right and left depending on how light or dark the passing noise flow is.

This passing noise flow is hidden from the viewer, giving the illusion of an invisible force flowing through the hairs.

So… What did we learn from this code?

There are a few things I learned from this single project:

  • the use of flowing noise functions to drive effects on canvas-based graphics
  • a review of the power of the canvas API as an intermediary for passing data between graphics APIs
  • a refresher on rotation programming techniques
  • … and more

In fact, there are still things we could dig deeper into with just this example, but I think we can stop here.

Final Remarks

I still find myself coming back to Darryl’s pen, watching the hairs move like a soft water stream passing across floating grass.

What I liked about this project is the way some nice effects were obtained with minimal effort. If you consider all the tools and technologies involved, you might wonder what I mean by “minimal effort”.

It is fair to say that only when you have at least a basic understanding of all those technologies and techniques does the effort of putting them together become anything close to “minimal”. And I would not disagree. But this project in particular is more about how few elements of each technology were needed to generate the final effect. For example, a single value from the WebGL rendering was enough to create a nice visual effect in the context canvas.

Now, what are your thoughts? Was there anything Darryl Huffman could have done differently? Is this pen one that could work for some of your projects? Is there any other effect that you would like to try using similar techniques?

I hope that all three of these posts were helpful for you. With this we have completed our analysis of this implementation. Time for something new. Meanwhile, I wish you happy coding!