20201027

Random noise textures on the GPU in WebGL

Procedural rendering routines often need pseudorandom numbers. For graphics rendering, we usually prefer fast, correct-ish code over properly "correct" code. My approach to random number generation is usually to intermittently seed a weak pseudorandom number generator, like a linear-feedback shift register, from a stronger source like JavaScript's Math.random().
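As a sketch of that CPU-side approach (the LFSR taps and the reseed interval here are illustrative choices of mine, not anything from this post):

```javascript
// 16-bit Galois LFSR, periodically reseeded from Math.random().
// Taps 0xB400 give a maximal-length (2^16 - 1) sequence.
function makeLFSR(reseedEvery = 4096) {
  let state = (Math.random() * 0xffff) | 1; // nonzero 16-bit seed
  let count = 0;
  return function next() {
    if (++count >= reseedEvery) {
      state = (Math.random() * 0xffff) | 1; // pull fresh entropy
      count = 0;
    }
    const lsb = state & 1;
    state >>>= 1;
    if (lsb) state ^= 0xb400;
    return state / 0xffff; // roughly uniform in (0, 1]
  };
}
```

The LFSR does the bulk of the work cheaply; Math.random() only has to run once every few thousand samples.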

On the GPU, things are not so simple: we want a pseudorandom algorithm that is fast, local, and parallel. For compatibility, we should also have this RNG use an 8-bit RGBA texture for its internal state. Bitops are a bit tricky in WebGL shaders (though not impossible), so we'll use a linear congruential generator.


The linear congruential generator is quite simple: it applies a linear transformation $ax+c$, then takes the remainder modulo some constant $m$. 

$$x_{n+1} = (a x_n + c) \operatorname{mod} m$$
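In scalar code the recurrence is a one-liner. The constants below are the classic "minimal standard" Park–Miller choice ($a = 16807$, $c = 0$, $m = 2^{31}-1$), used here purely for illustration:

```javascript
// Minimal-standard LCG (Park-Miller): x_{n+1} = (16807 * x_n) mod (2^31 - 1).
// With c = 0 this is a multiplicative congruential generator.
// 16807 * x stays below 2^53, so the product is exact in doubles.
function lcg(seed) {
  let x = seed % 2147483647;
  if (x <= 0) x += 2147483646; // state must be nonzero
  return function next() {
    x = (16807 * x) % 2147483647;
    return x / 2147483647; // uniform in (0, 1)
  };
}
```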

Usually, we do this with integers, and $m$ is some prime number. While it's possible to interpret the RGBA texture values as storing a byte in [0,255] mapped to [0,1], running a weak RNG independently for each color channel is unsatisfactory: at best, the RNG has a period of 256, since the state stored between updates is effectively a single 8-bit unsigned integer per channel.

Rather than try to interpret the RGBA vec4 as a 32-bit integer, I opted to mix state between adjacent pixels. This turns each channel in the texture into a giant RNG with as much state as you have pixels. I can't make any claims that this is optimal or correct, only that the results are fast and look nice. 

Initialize this texture with random values. Set the texture sampling to nearest-neighbor with periodic boundary conditions  (and no mipmaps, if it matters). In some instances, you may need to manually implement periodic boundary conditions for non power-of-two sized textures.
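The manual wrap case boils down to reducing the texel coordinate modulo the texture size before sampling. A hypothetical helper (mine, shown in JavaScript for clarity; in a shader it would be a `mod()` on the coordinate):

```javascript
// Wrap an integer texel coordinate into [0, n), handling negatives,
// to emulate REPEAT addressing on a non-power-of-two texture.
function wrap(i, n) {
  return ((i % n) + n) % n;
}
```

The double-mod is needed because JavaScript's `%` returns negative results for negative operands.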

Here's the main code. It sums up pixels in the local neighborhood, multiplies them by some random float, then stores them back in the texture modulo 1.0 (that's modulo 256 if we're thinking of the color data as unsigned integers). I forget how I chose the constant for the linear recurrence, but it was almost surely a variant of the "try random things, smash the keyboard, and see what works" algorithm.

// Assumed context, not shown in the original post: `data` is the RNG
// state texture, W and H are its dimensions in pixels, and XY is the
// current pixel coordinate (e.g. gl_FragCoord.xy).
precision highp float;
uniform sampler2D data;
uniform float W, H;
#define XY gl_FragCoord.xy

void main() {
    vec2 scale = 1./vec2(W,H);
    // Sum the center pixel and its four neighbors (von Neumann neighborhood).
    vec4 c0 = texture2D(data,(XY+vec2( 0, 0))*scale);
    vec4 c1 = texture2D(data,(XY+vec2( 0, 1))*scale);
    vec4 c2 = texture2D(data,(XY+vec2( 1, 0))*scale);
    vec4 c3 = texture2D(data,(XY+vec2(-1, 0))*scale);
    vec4 c4 = texture2D(data,(XY+vec2( 0,-1))*scale);
    // Scale by the magic constant, then wrap back into [0,1).
    vec3 color = mod((c0+c1+c2+c3+c4).rgb
        *8.8857658763167322,1.0);
    gl_FragColor = vec4(color,1.);
}
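To make the update concrete, here is a CPU reference of the same step on a single channel, written in JavaScript (the array layout and function name are mine, not from the repository):

```javascript
// One update step of the noise shader on a single channel.
// state: Float32Array of length w*h, values in [0, 1).
function step(state, w, h) {
  const next = new Float32Array(w * h);
  // Periodic boundary conditions, matching REPEAT texture wrapping.
  const at = (x, y) =>
    state[((((y % h) + h) % h) * w) + (((x % w) + w) % w)];
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const sum = at(x, y) + at(x, y + 1) + at(x + 1, y)
                + at(x - 1, y) + at(x, y - 1);
      // Same constant as the shader; mod 1.0 keeps the value in [0, 1).
      next[y * w + x] = (sum * 8.8857658763167322) % 1.0;
    }
  }
  return next;
}
```

Because each output pixel reads five input pixels, the state of every channel mixes across the whole texture after enough steps.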

This gives you a uniform RNG in each color channel of the texture to play with. If you want Gaussian-distributed data, you can use the Box-Muller transform to convert the (R,G) and (B,A) channel pairs into two Gaussian random numbers each.
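The Box-Muller step itself is short (shown in JavaScript; in the shader you would do the same with two channel values, guarding against log(0)):

```javascript
// Box-Muller: turn two independent uniforms u1 in (0,1], u2 in [0,1)
// into two independent standard Gaussian samples.
function boxMuller(u1, u2) {
  const r = Math.sqrt(-2.0 * Math.log(u1));
  const theta = 2.0 * Math.PI * u2;
  return [r * Math.cos(theta), r * Math.sin(theta)];
}
```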

This random noise shader is up as an example on the webgpgpu repository. Click on the image below to view an example of Gaussian color noise: 

