In a previous post we discussed how Darryl Huffman created the figure in his “Perlin Flow Field” pen, the one that is affected by the animation.

This is again a link to Darryl’s work:

See the Pen Perlin Flow Field by Darryl Huffman (@darrylhuffman) on CodePen.

Darryl utilized the canvas 2D graphics API, Three.js (the 3D graphics library), and GLSL shaders. The way all those graphics tools were made to work together piqued my curiosity. In an attempt to determine how Darryl obtained that result, I did some basic reverse engineering.

One thing I liked about Darryl’s project was his use of a noise function for the animation.

Adding noise with shaders is very nice!

Yeah… But let’s be honest - unless you truly understand shaders and what those noise functions do, it can be very challenging to programmatically get the effect you need.

In this post we will take a helicopter view of the noise function.

The Code (the noiseCanvas function)

In the previous post we separated Darryl’s code into three sections:

  • The “context” canvas and the Hair class
  • The noise function, the shader and the Three.js plane geometry
  • The interaction between the texture (aka Darryl’s “perlinCanvas”) and the “context” canvas.

Our focus is the second one. The noiseCanvas function in Darryl’s pen contained all of the functionality needed to render the GLSL graphics.

THE NOISE FUNCTION

For this project, Darryl Huffman utilized a noise function, as could quickly be inferred from the name of the pen. “Perlin” is in fact the surname of Ken Perlin, the developer of the noise function that bears his name, which has had a significant impact on computer graphics since he developed it in the early 1980s.

However, it is worth noting that Darryl Huffman may have given his pen a slightly misleading name. If you read the JavaScript code of the pen, you will notice that the noise function Darryl uses was actually authored by Ian McEwan, who refers to it as a simplex noise function. Simplex noise is an improved algorithm designed by the same Ken Perlin as a successor to his classic Perlin noise. The simplex noise implementation by Ian McEwan, in collaboration with Stefan Gustavson, is in fact one of several efforts to improve on Perlin’s simplex noise.

Now, I won’t go into depth about the noise function here. If you are looking for a good explanation of noise functions, and a clarification of how Perlin noise differs from simplex noise, I strongly recommend this excellent chapter of “The Book of Shaders”. Part of the work by Ian McEwan and Stefan Gustavson can be found in the (apparently defunct) Ashima Arts repository, or even in recent articles, like this scientific article from 2022.
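
To make the core idea concrete before we dive in, here is a minimal 1D value-noise sketch in plain JavaScript. Note the hedge: this is value noise, not Perlin or simplex noise, and the hash and its constants are my own illustrative choices, not taken from Darryl’s pen. The principle is the same, though: deterministic pseudorandom values are assigned to the integer points, and positions in between are smoothly interpolated.

```javascript
// Deterministic hash: maps an integer to a pseudorandom value in [0, 1).
// (Illustrative constants, a common shader-style trick.)
function hash(n) {
  const s = Math.sin(n * 127.1) * 43758.5453;
  return s - Math.floor(s);
}

// Smoothstep easing so the curve flattens out at the lattice points.
function smoothstep(t) {
  return t * t * (3 - 2 * t);
}

// Value noise at position x: smoothly interpolate between the
// pseudorandom values of the two nearest integer lattice points.
function valueNoise1D(x) {
  const i = Math.floor(x);
  const f = x - i;
  return hash(i) + (hash(i + 1) - hash(i)) * smoothstep(f);
}
```

Sampling valueNoise1D at slowly increasing x yields a smooth, organic-looking curve; Perlin and simplex noise extend the same idea to 2, 3 or 4 dimensions, using gradients at the lattice points instead of plain values.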

The noise function is written in GLSL, the C-style language used to program the OpenGL graphics API, on which the WebGL API (OpenGL for the web) is based. In Darryl’s pen, the GLSL script of the noise function is provided as a string in the JavaScript code:


	let Noise3D = `
//
// Description : Array and textureless GLSL 2D/3D/4D simplex 
//               noise functions.
...

float snoise(vec3 v)
{ 
const vec2  C = vec2(1.0/6.0, 1.0/3.0) ;
...

// Gradients: 7x7 points over a square, mapped onto an octahedron.
// The ring size 17*17 = 289 is close to a multiple of 49 (49*6 = 294)
float n_ = 0.142857142857; // 1.0/7.0
vec3  ns = n_ * D.wyz - D.xzx;
...

//Normalise gradients
vec4 norm = taylorInvSqrt(vec4(dot(p0,p0), dot(p1,p1), dot(p2, p2), dot(p3,p3)));
p0 *= norm.x;
p1 *= norm.y;
p2 *= norm.z;
p3 *= norm.w;

// Mix final noise value
vec4 m = max(0.6 - vec4(dot(x0,x0), dot(x1,x1), dot(x2,x2), dot(x3,x3)), 0.0);
m = m * m;
return 42.0 * dot( m*m, vec4( dot(p0,x0), dot(p1,x1), dot(p2,x2), dot(p3,x3) ) );
}
`


The simplex function is around 100 lines long, so a lot of lines are excluded here. I only wanted to highlight the snoise function (line 113 in Darryl’s code). Notice the deliberate use of gradients: those gradients are what give the noise function its flow behaviour.

THE SHADERS

Shaders, as you probably know, are programs in OpenGL / WebGL that run on the GPU. The vertex shader operates on the geometry of the scene, and the fragment shader controls the colour of each pixel. Shaders are also written in GLSL, and therefore are again provided as strings:


...
	const shaders = {
		fragment: `

uniform vec2 resolution;
uniform float time;

${Noise3D}

void main() {
float speed = 16.;
float scale = 3.5;

vec2 st = gl_FragCoord.xy/resolution.xy;
st.x *= resolution.x/resolution.y;
st *= scale;

float noise = snoise(vec3(st.x, st.y, time * speed * 0.01));
float c = (noise + 1.) / 2.;

gl_FragColor = vec4(c, c, c, 1.);
}

`,
		vertex: `

void main() {

gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
`
	}

...

Notice that the simplex function is interpolated into the fragment shader string through the template literal placeholder ${Noise3D}. It is here, in the fragment shader, where the noise function (snoise) is eventually used.

In fact, it is in the fragment shader where the action takes place. A large part of that action is ultimately collected in the variable “c” as a single value, which is then used to set the final output, gl_FragColor. (Note: recent versions of GLSL no longer force the use of gl_FragColor as the fragment shader output, allowing user-defined outputs instead.)
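
To see that arithmetic without the GLSL syntax, here is a plain-JavaScript mirror of the fragment shader’s per-pixel math. This is only a sketch: snoiseStub is a hypothetical stand-in for the real snoise, assumed to return values in roughly [-1, 1].

```javascript
// Stand-in for the GLSL snoise(vec3) call (assumption: real simplex
// noise returns values in roughly [-1, 1], as any smooth sine does).
function snoiseStub(x, y, z) {
  return Math.sin(x * 1.7 + y * 2.3 + z);
}

// Per-pixel math mirroring the fragment shader above.
function pixelGray(fragX, fragY, width, height, time) {
  const speed = 16.0;
  const scale = 3.5;

  // st = gl_FragCoord.xy / resolution.xy, with aspect-ratio correction
  const stX = (fragX / width) * (width / height) * scale;
  const stY = (fragY / height) * scale;

  const noise = snoiseStub(stX, stY, time * speed * 0.01);

  // Remap [-1, 1] -> [0, 1]; the same value feeds R, G and B,
  // so the output is a grayscale noise texture.
  return (noise + 1) / 2;
}
```

The key step is the (noise + 1.) / 2. remap: snoise ranges over roughly [-1, 1], while colour channels must sit in [0, 1].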

In contrast, the code of the vertex shader is rather less “turbulent”. It relies entirely on built-in variables supplied by Three.js: the projectionMatrix and modelViewMatrix uniforms and the position attribute. The way they are used in Darryl’s code amounts to saying: “just render the geometry defined in the Three.js code as it is”.

THREE.JS AND THE PLANE

And all Darryl wanted as geometry was a simple plane. Building a plane in Three.js is quite simple: instantiate a scene (usually with a camera and possibly some lights), make a renderer available, define the plane geometry and the material covering it, create a mesh from both, and finally add the mesh to the scene.


    ...
	let width = container.offsetWidth,
	    height = container.offsetHeight,
	    currentTime = 0,
	    timeAddition = Math.random() * 1000

	const scene = new THREE.Scene(),
		 camera = new THREE.OrthographicCamera( width / - 2, width / 2, height / 2, height / - 2, 0, 100 )
	
	renderer = new THREE.WebGLRenderer({ alpha: true })

	renderer.setSize( container.offsetWidth, container.offsetHeight )
	//container.appendChild(renderer.domElement)
    ...


	let geometry = new THREE.PlaneGeometry( width, height, 32 );
	let plane = new THREE.Mesh( geometry, shaderMaterial );
	scene.add( plane );
	plane.position.z = 0.5;


	camera.position.y = 0;
	camera.position.x = 0;
	camera.position.z = 100;
    ...

Up to here, all very simple. In Darryl’s project, though, the material of the plane was the shader. Furthermore, a dynamically changing shader material.

THE MATERIAL OF THE PLANE

If you look at how Darryl added the shader material to the plane using Three.js, you might agree that adding a shader material to a Three.js mesh is fairly simple. It is even simpler if the shader material doesn’t involve any dynamic changes, being rather a static material. But what if you would like the material to change based on values that change in your JavaScript code?

For that, you need some way to pass data from the outside into the GLSL shader context. That is done through uniforms, special GLSL variables that act as read-only inputs fed into the shaders from the code that runs them.
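
Here is a minimal plain-JavaScript sketch of that pattern. Only the { value } holder shape mirrors real Three.js usage; uploadToGPU is a hypothetical stand-in for what the renderer does internally before each draw call.

```javascript
// The JavaScript side owns plain { value } holders...
const uniforms = {
  time: { value: 0 },
  resolution: { value: { x: 800, y: 600 } }
};

// ...and the renderer re-uploads their current values to the GPU before
// every draw. (Stand-in: a real renderer would call gl.uniform1f etc.)
function uploadToGPU(uniforms) {
  return {
    time: uniforms.time.value,
    resolution: { ...uniforms.resolution.value }
  };
}

// One animation step: mutate the holder, then "render".
function step(elapsedSeconds) {
  uniforms.time.value = elapsedSeconds; // JavaScript side writes...
  return uploadToGPU(uniforms);         // ...GLSL side reads.
}
```

Three.js follows this holder convention: you mutate uniforms.time.value from JavaScript, and on the next frame the GLSL declaration uniform float time sees the new value.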

...

let uniforms = {
    time: { value: 1 + timeAddition },
    resolution: { value: new THREE.Vector2(container.offsetWidth, container.offsetHeight) }
}

...

The uniform that matters most in Darryl’s code is the time uniform. Its value sets the pace at which the colours of the pixels advance along the gradients of the noise function.

The uniforms and the shaders together complete the construction of the Three.js ShaderMaterial instance, named shaderMaterial in the code.

...

let shaderMaterial = new THREE.ShaderMaterial( {
    uniforms:       uniforms,
    vertexShader:   shaders.vertex,
    fragmentShader: shaders.fragment,
    //blending:       THREE.AdditiveBlending,
    depthTest:      false,
    transparent:    true,
    vertexColors:   true
});

...

THE RENDER FUNCTION

In Darryl’s project, the noiseCanvas function ends with the render function. The render function updates the graphics using requestAnimationFrame, but before rendering, it updates the time uniform.


    ...
	    currentTime = 0,
	    timeAddition = Math.random() * 1000
    ...

	function render() {
		var now = new Date().getTime();
		currentTime = (now - startTime) / 1000;
		uniforms.time.value = currentTime + timeAddition;

		requestAnimationFrame( render );
		renderer.render( scene, camera );
	}
	render();
    ...

The value of the time uniform is increased at each frame using a simple implementation based on the system clock.
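
Isolated from the loop, the time-keeping amounts to this (a sketch; makeClock is my own wrapper name, but the arithmetic is the same as in the pen):

```javascript
// Elapsed seconds since startTime, plus a random offset so that each
// page load starts at a different point of the noise field.
function makeClock(startTime, timeAddition) {
  return function timeUniformValue(now) {
    const currentTime = (now - startTime) / 1000; // milliseconds -> seconds
    return currentTime + timeAddition;
  };
}
```

With timeAddition = Math.random() * 1000, each visit to the page samples a different slice of the 3D noise volume, so the animation never starts at the same place twice.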

Tada!

If you reviewed the code you might have noticed that the Three.js renderer and plane had dimensions, but were never added to any HTML element. That was done on purpose by the author: the changing texture of the Three.js plane is meant to stay invisible to the viewer.

If you are interested in seeing how the noise function behaves, I have revealed the function by adding it to an HTML element:

So… What did we learn from this code?

In this second part of our analysis of Darryl Huffman’s “Perlin Flow Field” we got the basic ideas behind the use of shaders and noise functions in combination with Three.js. We also unveiled a couple of facts, like the noise function being kept invisible to the viewer.

In fact, Darryl kept it invisible because his only interest was to extract values from the noise function, without showing its graphics, creating the idea of an “invisible force” affecting the movement of the “hairs” rendered on the “context” canvas.

However, he needed a way to extract the values from the invisible noise function into the visible canvas. There is a common trick for that, using the canvas API once again as an adapter. We will go through it in a third and last post about Darryl’s pen.

For now, happy coding!