A Perlin-like flow with canvas, shaders and three.js (Part 2)
In a previous post we discussed how Darryl Huffman created the figure in his “Perlin Flow Field” pen that is affected by the animation.
Here is the link to Darryl’s work again:
See the Pen Perlin Flow Field by Darryl Huffman (@darrylhuffman) on CodePen.
Darryl used the 2D canvas graphics API, Three.js (the 3D graphics library), and GLSL shaders. The way all those graphics tools were made to work together piqued my curiosity. In an attempt to determine how Darryl obtained that result, I did some basic reverse engineering.
One thing I found nice about Darryl’s project was his use of a noise function for the animation.
Adding noise with shaders is very nice!
Yeah… But let’s be honest - unless you truly understand shaders and what those noise functions do, it can be very challenging to programmatically get the effect you want.
In this post we will take a helicopter view of the noise function.
The Code (the noiseCanvas function)
In the previous post we separated Darryl’s code into three sections:
- The “context” canvas and the Hair class
- The noise function, the shader and the Three.js plane geometry
- The interaction between the texture (aka Darryl’s “perlinCanvas”) and the “context” canvas.
Our focus here is the second one. The noiseCanvas function in Darryl’s pen contains all of the functionality needed to render the GLSL graphics.
THE NOISE FUNCTION
For this project, Darryl Huffman used a noise function, as could be quickly guessed from the name of the pen. “Perlin” is in fact the surname of Ken Perlin, the developer of a noise function that bears his name, which has had a significant impact on computer graphics since its introduction in 1983.
However, it is worth noting that Darryl Huffman may have given his pen a slightly inaccurate name. If you read the JavaScript code of the pen, you will notice that the noise function Darryl uses was actually authored by Ian McEwan, who refers to it as a simplex noise function. Simplex noise is an improved algorithm designed by the same Ken Perlin as a successor to his classic Perlin noise. The simplex noise implementation by Ian McEwan, in collaboration with Stefan Gustavson, is in turn one of several efforts to make Perlin’s simplex noise fast and practical on GPUs.
Now, I won’t expand on the noise function here. If you are still looking for a good explanation of noise functions, and a clarification of how Perlin noise differs from simplex noise, I strongly recommend this excellent chapter of “The Book of Shaders”. Part of the work by Ian McEwan and Stefan Gustavson can be found in the (apparently defunct) Ashima Arts repository, or even in recent articles, like this scientific article from 2022.
The noise function is written in GLSL, the C-style language used to program the OpenGL graphics API, which is the basis of WebGL - OpenGL for the web. In Darryl’s pen the GLSL source of the noise function is provided as a string in the JavaScript code:
|
|
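The idea can be sketched as follows. This is an illustrative sketch, not Darryl’s actual code: the real simplex-noise body (around 100 lines, by Ian McEwan and Stefan Gustavson) is elided here, and only the way a GLSL function lives inside a JavaScript template literal matters.

```javascript
// Illustrative sketch: GLSL source stored as a JavaScript template
// literal, so it can later be spliced into a shader string.
// The real ~100-line simplex-noise body is elided; this stand-in
// body exists only so the snippet is self-contained.
const Noise3D = `
  float snoise(vec3 v) {
    // ... ~100 lines of simplex-noise math elided ...
    return 0.0; // placeholder body, for illustration only
  }
`;
```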
The simplex function is around 100 lines long, so a lot of lines are excluded here. I only wanted to highlight the snoise function (line 113 in Darryl’s code). Notice the deliberate use of gradients: those gradients are what give the noise function its flowing behaviour.
THE SHADERS
Shaders, as you probably know, are functions in OpenGL / WebGL that control pixel properties. The vertex shader controls the geometry of the scene, and the fragment shader controls the coloring of each pixel. The shaders are also written in GLSL and are therefore again provided as strings:
|
|
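A hedged sketch of the pattern (not Darryl’s actual shaders) looks like this. The fragment-shader body here is hypothetical; the point is how the noise source gets spliced in via template-literal interpolation:

```javascript
// Illustrative stand-in for the ~100-line noise source.
const Noise3D = `float snoise(vec3 v) { return 0.0; } // stand-in`;

// Vertex shader: relies on the built-in uniforms (projectionMatrix,
// modelViewMatrix) and the position attribute that Three.js injects.
const vertexShader = `
  void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Fragment shader: the noise source is interpolated in with ${Noise3D},
// then snoise drives a single value used for the output color.
// The exact expression is hypothetical, not Darryl's.
const fragmentShader = `
  uniform float time;
  ${Noise3D}
  void main() {
    float c = snoise(vec3(gl_FragCoord.xy * 0.01, time));
    gl_FragColor = vec4(vec3(c), 1.0);
  }
`;
```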
Notice that the simplex function is inserted into the fragment shader through template-literal interpolation (${Noise3D}). It is here, in the fragment shader, that the noise function (snoise) is eventually used.
In fact, it is in the fragment shader where the action takes place. A large part of that action is finally collected into a single value in the variable “c”, which is then used to set the final output, gl_FragColor. (OBSERVATION: recent versions of GLSL no longer force the use of gl_FragColor as the fragment shader output, allowing user-defined fragment shader outputs instead.)
By contrast, the code of the vertex shader is rather less “turbulent”. It relies entirely on built-in variables injected by Three.js - the projectionMatrix and modelViewMatrix uniforms and the position attribute. The way they are combined in Darryl’s code is like saying: “just render the geometry defined in the Three.js code as it is”.
THREE.JS AND THE PLANE
And all Darryl wanted as geometry was a simple plane. Building a plane in Three.js is quite simple. You instantiate a scene (usually with a camera and possibly some lights), make a renderer available, define the plane geometry and the material covering it, create a mesh from both, and finally add the mesh to the scene.
|
|
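The steps above can be sketched schematically. This is not Darryl’s code: the function name, camera choice and parameters are all illustrative, and three.js is passed in as a parameter just to keep the sketch self-contained.

```javascript
// Schematic sketch of a Three.js plane setup; all names are illustrative.
// THREE is passed in as a parameter (in a CodePen it is usually a global).
function buildPlaneScene(THREE, width, height) {
  // 1. Scene and camera (Darryl's pen may use a different camera/params).
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);

  // 2. Renderer, sized to the desired dimensions.
  const renderer = new THREE.WebGLRenderer();
  renderer.setSize(width, height);

  // 3. Plane geometry + material, combined into a mesh.
  const geometry = new THREE.PlaneGeometry(width, height);
  const material = new THREE.MeshBasicMaterial(); // a shader material in Darryl's case
  const mesh = new THREE.Mesh(geometry, material);

  // 4. Register the mesh in the scene.
  scene.add(mesh);

  return { scene, camera, renderer, mesh };
}
```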
So far, so simple. In Darryl’s project, though, the material of the plane was the shader - and furthermore, a dynamically changing shader material.
THE MATERIAL OF THE PLANE
If you look at how Darryl added the shader material to the plane using Three.js, you might agree that adding a shader material to a Three.js mesh is fairly simple. It is even simpler if the shader material doesn’t involve any dynamic changes and is instead a static material. But what if you would like the material to change based on values that change in your JavaScript code?
For that, you need some way to pass data from the outside into the GLSL shader context. This is done through uniforms: special GLSL variables that act as read-only inputs, set from the host JavaScript code and shared by all vertices and fragments in a draw call.
|
|
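In Three.js, uniforms are declared on the JavaScript side as an object whose entries each wrap their data in a `{ value: ... }` object. A minimal sketch (illustrative, not Darryl’s exact object) with just the time uniform:

```javascript
// Minimal uniforms object in the shape Three.js expects:
// each uniform is an object with a `value` property.
// The GLSL side declares the matching `uniform float time;`.
const uniforms = {
  time: { value: 0.0 },
};
```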
The uniform that matters for Darryl’s code is the time uniform. The time sets the pace at which the pixel colors advance along the gradient of the noise function.
With the corresponding uniforms and the shaders in hand, we can complete the construction of the Three.js ShaderMaterial instance, named shaderMaterial in the code.
|
|
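Assembling the pieces is a one-liner in Three.js. A hedged sketch (the helper function and its parameters are mine, not Darryl’s; THREE is again passed in only to keep the sketch self-contained):

```javascript
// Illustrative helper: build a ShaderMaterial from the uniforms object
// and the two shader strings defined earlier.
function makeShaderMaterial(THREE, uniforms, vertexShader, fragmentShader) {
  return new THREE.ShaderMaterial({
    uniforms,        // JS-side handles to the GLSL uniforms
    vertexShader,    // GLSL string
    fragmentShader,  // GLSL string (with the noise source spliced in)
  });
}
```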
THE RENDER FUNCTION
In Darryl’s project, the noiseCanvas function ends with the render function. The render function updates the graphics using requestAnimationFrame, but before rendering, the time uniform is updated.
|
|
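The pattern can be sketched like this (an illustrative sketch, not Darryl’s exact implementation; the requestAnimationFrame call is commented out so the snippet also runs outside a browser):

```javascript
// Illustrative render loop: advance the time uniform from the system
// clock before each frame is drawn.
const uniforms = { time: { value: 0.0 } };
const start = Date.now();

function updateTime() {
  // Seconds elapsed since the loop started.
  uniforms.time.value = (Date.now() - start) / 1000;
}

function render(renderer, scene, camera) {
  updateTime();
  renderer.render(scene, camera);
  // In the browser, schedule the next frame:
  // requestAnimationFrame(() => render(renderer, scene, camera));
}
```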
The value of the time uniform is increased at each rendered frame using a simple implementation based on the system clock.
Tada!
If you reviewed the code you might have noticed that the Three.js renderer and plane had dimensions, but that they were never added to any HTML element. That was done on purpose by the author - the changing texture of the Three.js plane is meant to stay invisible to the viewer.
If you are interested in seeing how the noise function behaves, I have revealed the function by adding it to an HTML element:
So… What did we learn from this code?
In this second part of our analysis of Darryl Huffman’s “Perlin Flow Field” we covered the basic ideas behind the use of shaders and noise functions in combination with Three.js. We also uncovered a couple of interesting details, like keeping the noise function invisible to the viewer.
In fact, Darryl kept it invisible because his only interest was to extract values from the noise function without showing the graphics, giving the idea of an “invisible force” affecting the movement of the “hairs” rendered on the “context” canvas.
However, he needed a way to extract the values from the invisible noise function into the visible canvas. There is a common trick for that, using the canvas API again as an adapter. We will go through it in a third and final post about Darryl’s pen.
For now, happy coding!