Volume Rendering with WebGL

13 Jan 2019
Figure 1: Example volume renderings, using the WebGL volume renderer described in this post. Left: A simulation of the spatial probability distribution of electrons in a high potential protein molecule. Right: A CT scan of a Bonsai Tree. Both datasets are from the Open SciVis Datasets repository.

In scientific visualization, volume rendering is widely used to visualize 3D scalar fields. These scalar fields are often uniform grids of values, representing, for example, charge density around a molecule, an MRI or CT scan, or air flow around an airplane. Volume rendering is a conceptually straightforward method for turning such data into an image: by sampling the data along rays from the eye and assigning a color and transparency to each sample, we can produce useful and beautiful images of such scalar fields (see Figure 1). In a GPU renderer, these 3D scalar fields are stored as 3D textures; however, WebGL1 did not support 3D textures, requiring additional hacks to emulate them for volume rendering. WebGL2 recently added support for 3D textures, allowing an elegant and fast volume renderer to be implemented entirely in the browser. In this post we’ll discuss the mathematical background for volume rendering, and how it can be implemented in WebGL2 to create an interactive volume renderer that runs in the browser! Before we start, you can try out the volume renderer described in this post online.
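The per-sample color-and-transparency accumulation described above is the standard front-to-back emission-absorption compositing loop that a raycaster runs for each pixel. As a minimal sketch of that math (the transfer function and sample values here are made up for illustration, and a real renderer would run this in a fragment shader, not Python):

```python
def composite_ray(samples, transfer_function):
    """Accumulate color and opacity for scalar samples taken
    front-to-back along a single ray through the volume."""
    color = 0.0  # accumulated color (one channel, for simplicity)
    alpha = 0.0  # accumulated opacity
    for s in samples:
        c, a = transfer_function(s)
        # Standard front-to-back "over" compositing:
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:
            # Early ray termination: the rest of the ray is occluded
            break
    return color, alpha

# Example: a toy transfer function mapping the scalar value
# directly to a color and a scaled opacity.
tf = lambda s: (s, 0.5 * s)
color, alpha = composite_ray([0.2, 0.8, 0.5], tf)
```

Because opacity accumulates toward 1, samples behind an opaque region contribute nothing, which is what makes early ray termination a safe optimization.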


Comments in LaTeX

10 Jul 2018

When writing a paper in LaTeX, it’s common to leave notes and comments in the text, either to yourself or your co-authors. I used to write these as differently colored text using \textcolor{...}, with each author assigned a color, or all authors sharing one color. However, with more authors it gets hard to keep picking legible font colors. Furthermore, sometimes a different font color alone doesn’t stand out from the rest of the text quite as much as I’d like. More recently I’ve switched to using highlights for the comments, which works well with multiple authors and helps the comments stand out from the rest of the text. This is easy to do with the soul and xcolor packages.
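A minimal sketch of that setup, using soul’s \hl and \sethlcolor together with xcolor (the author macro names here are my own invention, not from the post):

```latex
\usepackage{xcolor}
\usepackage{soul}

% One highlight color per author. Wrapping in a group keeps
% \sethlcolor from leaking to later highlights.
\newcommand{\alice}[1]{{\sethlcolor{green}\hl{Alice: #1}}}
\newcommand{\bob}[1]{{\sethlcolor{cyan}\hl{Bob: #1}}}

% Usage in the text:
% ... as shown in Figure 2. \alice{Should this cite Figure 3 instead?}
```

Since each macro sets its own highlight color, every author’s comments get a consistent, easy-to-spot background, and removing them before submission is a simple search for the macro names.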


Distributed Rendering with Rust and Mio

02 Jan 2016

In this post we’ll take a look at adding distributed rendering to tray_rust, which will let us take advantage of multiple machines, such as a compute cluster, when rendering an image. To do this we’ll look at options for how to distribute the rendering job across multiple nodes and what sort of communication is needed to synchronize their work. We’ll also look into how we can use mio to write an efficient master process that can manage multiple workers effectively.

After implementing a simple technique to distribute the job, we’ll discuss the scalability of this approach and possible paths forward to improve it. I’ve also recently written a plugin for Blender so you can easily create your own scenes, and I’ll briefly cover how to run the ray tracer on Google Compute Engine (or AWS EC2) if you want to try out the distributed rendering yourself.


Rendering an Animation in Rust

16 Dec 2015

In this post we’ll look at adding a pretty awesome new feature to tray_rust, something I’ve never implemented before: animation! We’ll take a look at a simple way to sample time in our scene, how we can associate time points with transformations of objects to make them move, and how to compute smooth animation paths with B-splines. Then we’ll wrap up with rendering a really cool animation using 60 different machines spread across two clusters at my lab.

Find older posts in the Archive