Experiments in Volumetric Rendering via Raymarching in the Browser

A view of the volumetric rendering output showing the generated 3D noise projected onto a canvas
After reading about and browsing many other people's work in computer graphics, I decided to try implementing a raymarching application in the web browser. I was working heavily with noise generation and visualization at the time, creating my Noise Function Compositor, which generated 3D noise, projected it as a 2D movie, and then colorized it to RGB. I wanted to take things up to the next dimension by creating a 3D visualization of noise.

I was inspired by a lot of really impressive prior art (WebGL Volume Rendering Made Easy and Volumetric raymarched clouds on Shadertoy, to list a couple) and I wanted to try it out for myself from scratch. I found a library called GPU.JS that allows JavaScript code (a subset of it, anyway) to run on the GPU. It transforms the JavaScript into shader language, compiles it, executes it on the GPU via WebGL, and then returns the result back to normal JavaScript. My plan was pretty simple: I generated some 3D noise as values from 0.0 to 1.0 using the noise-rs library and stored it as a matrix. I then shipped that data over to GPU.JS using Emscripten (the whole asm.js and Rust component here was largely unnecessary, but I built this out of an existing framework that used it much more extensively).
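
To give a concrete picture of the data involved, here's a rough sketch of the kind of flat density buffer the renderer consumes. The real values came from noise-rs on the Rust side; the `hash3` helper and `SIZE` below are hypothetical stand-ins so the example is self-contained.

```js
// Hypothetical stand-in for the noise-rs step: fill a SIZE^3 volume with
// pseudo-random density values in [0, 1) and flatten it into a Float32Array.
const SIZE = 64;

// Cheap GLSL-style hash; purely illustrative, not what noise-rs produces.
function hash3(x, y, z) {
  const s = Math.sin(x * 12.9898 + y * 78.233 + z * 37.719) * 43758.5453;
  return s - Math.floor(s); // fract(), yielding a value in [0, 1)
}

const volume = new Float32Array(SIZE * SIZE * SIZE);
for (let z = 0; z < SIZE; z++) {
  for (let y = 0; y < SIZE; y++) {
    for (let x = 0; x < SIZE; x++) {
      // The flat layout is volume[x + y * SIZE + z * SIZE * SIZE]
      volume[x + y * SIZE + z * SIZE * SIZE] = hash3(x * 0.1, y * 0.1, z * 0.1);
    }
  }
}
```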

Once the data was there, I used GPU.JS to convert the buffer into a texture and ship it over to the GPU. At that point, the only remaining step was the linear algebra and the raymarching algorithm itself. I did my best to avoid using external guides or references for the algorithm, since it seemed pretty intuitive to me at the time, but after many hours of struggling and failed attempts I eventually ended up using some external resources for help. The hardest part was dealing with the "virtual screen" through which individual rays were fired (one per pixel) from the origin point in order to create the output image. There were all manner of normalizations, cross products, dot products, and other abstruse transformations that needed to be done to these vectors, and wrapping my head around it all was quite a challenge.
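
To make the virtual screen idea concrete, here's a minimal sketch of that setup and the marching loop written as a GPU.js kernel, reusing the `volume` buffer and `SIZE` from the sketch above. This isn't the code I actually shipped: the camera position, step count, and absorption factor are arbitrary illustration values, and the vector math is spelled out component by component since GPU.js kernels only support a subset of JavaScript.

```js
// Camera basis vectors (forward/right/up), one ray per pixel through the
// virtual screen, then a fixed-step march accumulating density along the ray.
const { GPU } = require('gpu.js');
const WIDTH = 256;
const HEIGHT = 256;

const gpu = new GPU();
const render = gpu
  .createKernel(function (volume, camX, camY, camZ) {
    const size = this.constants.size;

    // Forward: from the camera toward the centre of the volume, normalized.
    let fx = size * 0.5 - camX;
    let fy = size * 0.5 - camY;
    let fz = size * 0.5 - camZ;
    const fl = Math.sqrt(fx * fx + fy * fy + fz * fz);
    fx = fx / fl;
    fy = fy / fl;
    fz = fz / fl;

    // Right: cross(forward, worldUp) with worldUp = (0, 1, 0), normalized.
    // Its y component is zero, so only x and z are tracked.
    let rx = -fz;
    let rz = fx;
    const rl = Math.sqrt(rx * rx + rz * rz);
    rx = rx / rl;
    rz = rz / rl;

    // Up: cross(right, forward); already unit length.
    const ux = -rz * fy;
    const uy = rz * fx - rx * fz;
    const uz = rx * fy;

    // Map this pixel onto the virtual screen in [-0.5, 0.5] x [-0.5, 0.5].
    const u = this.thread.x / this.constants.width - 0.5;
    const v = this.thread.y / this.constants.height - 0.5;

    // Ray direction through this pixel, normalized.
    let dx = fx + u * rx + v * ux;
    let dy = fy + v * uy;
    let dz = fz + u * rz + v * uz;
    const dl = Math.sqrt(dx * dx + dy * dy + dz * dz);
    dx = dx / dl;
    dy = dy / dl;
    dz = dz / dl;

    // March along the ray, accumulating density while inside the volume.
    let acc = 0;
    for (let i = 0; i < 200; i++) {
      const px = camX + dx * i;
      const py = camY + dy * i;
      const pz = camZ + dz * i;
      if (px >= 0 && py >= 0 && pz >= 0 && px < size && py < size && pz < size) {
        const idx = Math.floor(px) + Math.floor(py) * size + Math.floor(pz) * size * size;
        acc = acc + volume[idx] * 0.02; // nearest-neighbour sample, crude absorption
      }
    }
    this.color(acc, acc, acc);
  })
  .setConstants({ size: SIZE, width: WIDTH, height: HEIGHT })
  .setOutput([WIDTH, HEIGHT])
  .setGraphical(true);

render(volume, -40, 80, -40); // arbitrary camera position outside the volume
document.body.appendChild(render.canvas);
```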

After a lot more linear algebra than I'd expected, I finally managed to create my renderer, very much from scratch. The performance was atrocious, probably due to a mix of inefficient algorithms and GPU.JS not being the best choice for this particular job (writing actual GLSL directly would have been a lot better), but it worked and I was very happy. Adding in the logic and math for panning and moving the camera took some additional effort, but the final product shows that working pretty well.
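
The camera movement itself isn't complicated once the basis vectors are recomputed from the camera position every frame; something along these lines (a hypothetical orbit helper, not my exact code) is enough to spin the camera around the volume:

```js
// Hypothetical orbit helper: convert spherical coordinates around the volume's
// centre into a camera position, then re-render with the kernel sketched above.
const CENTER = SIZE / 2;

function orbitCamera(radius, azimuth, elevation) {
  const x = CENTER + radius * Math.cos(elevation) * Math.cos(azimuth);
  const y = CENTER + radius * Math.sin(elevation);
  const z = CENTER + radius * Math.cos(elevation) * Math.sin(azimuth);
  return [x, y, z];
}

// Spin the camera around the volume, one render per animation frame.
let azimuth = 0;
function frame() {
  const cam = orbitCamera(120, azimuth, 0.4);
  render(volume, cam[0], cam[1], cam[2]);
  azimuth += 0.01;
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```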

I was originally planning to rewrite the renderer using raw GLSL shaders and add some nice additions like interactivity with the mouse and bilinear interpolation (I tried implementing that manually but wasn't successful, and the fact that it's a built-in feature in GLSL didn't exactly inspire me to keep hand-rolling it). However, I ended up moving on to other things and never really did any additional work there.
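
For reference, here's roughly what that interpolation looks like when done by hand on the flat density buffer (trilinear rather than bilinear, since the volume is 3D). It's written as plain JavaScript for clarity; inside a GPU.js kernel it would need to be unrolled into scalar variables like the raymarching sketch above.

```js
// Sketch: manual interpolation of the flat size^3 density buffer at a fractional
// position (x, y, z). GLSL gets the equivalent for free with GL_LINEAR sampling.
function sampleTrilinear(volume, size, x, y, z) {
  const x0 = Math.floor(x), y0 = Math.floor(y), z0 = Math.floor(z);
  const x1 = Math.min(x0 + 1, size - 1);
  const y1 = Math.min(y0 + 1, size - 1);
  const z1 = Math.min(z0 + 1, size - 1);
  const tx = x - x0, ty = y - y0, tz = z - z0;

  // Fetch the eight voxels surrounding the sample point.
  const at = (i, j, k) => volume[i + j * size + k * size * size];
  const c000 = at(x0, y0, z0), c100 = at(x1, y0, z0);
  const c010 = at(x0, y1, z0), c110 = at(x1, y1, z0);
  const c001 = at(x0, y0, z1), c101 = at(x1, y0, z1);
  const c011 = at(x0, y1, z1), c111 = at(x1, y1, z1);

  // Blend along x, then y, then z.
  const c00 = c000 * (1 - tx) + c100 * tx;
  const c10 = c010 * (1 - tx) + c110 * tx;
  const c01 = c001 * (1 - tx) + c101 * tx;
  const c11 = c011 * (1 - tx) + c111 * tx;
  const c0 = c00 * (1 - ty) + c10 * ty;
  const c1 = c01 * (1 - ty) + c11 * ty;
  return c0 * (1 - tz) + c1 * tz;
}
```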

That being said, I'm still fascinated by computer graphics and would love to work on a similar project in the future. Although this project didn't go too far or turn into anything particularly interesting, I think I succeeded in accomplishing what I set out to do (implement volumetric rendering from scratch in the web browser), and it gave me a very useful introduction to low-level computer graphics and linear algebra topics that I hadn't previously encountered.