Projects

ChameleonRT

ChameleonRT is an example interactive path tracer which runs on multiple CPU and GPU ray tracing backends: Embree, DXR, OptiX, Vulkan, and Metal. Each backend uses an identical path tracer that supports a subset of the Disney BSDF material model, so every backend produces the same rendered result. It supports loading OBJ and glTF files. Binaries are built automatically through GitHub Actions and preserved for download via tagged releases.

ispc-rs

ispc-rs is a small Rust library meant to be used as a compile-time dependency for Rust projects, allowing them to build and link with code written in ISPC. ISPC is a language that makes it possible to take advantage of the CPU’s vector units without needing hand-written intrinsics. Through this library it’s easy to write very fast vector code in ISPC and link it with (still quick!) higher-level code in Rust to get a good balance of high performance and ease of use.

The images shown above are from the rt example (top row), which demonstrates a simple, fast, parallel path tracer, and the ddvol example (bottom row), which is a scientific visualization volume renderer. Both use higher-level Rust code to read in a scene file and set up the objects in the scene, then call into ISPC to render in parallel, making good use of the CPU’s SIMD units.
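As a rough sketch of how a crate is typically wired up with ispc-rs (the calls below follow the crate's README; treat the exact names as assumptions if your version differs), the build script compiles the ISPC source and the library pulls in the generated bindings:

```rust
// build.rs: compile the ISPC source at build time.
// Assumes the crate has an `src/simple.ispc` file.
extern crate ispc;

fn main() {
    // Exits the build with an error if ISPC compilation fails.
    ispc::compile_library("simple", &["src/simple.ispc"]);
}
```

```rust
// lib.rs: import the bindings generated for the compiled ISPC module.
#[macro_use]
extern crate ispc;

// Exported ISPC functions become callable (as unsafe FFI calls) under `simple::`.
ispc_module!(simple);
```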

tray_rust

tray_rust is a toy physically based ray tracer built off of the techniques discussed in Physically Based Rendering. It began life as a port of tray to Rust to check out the language. The renderer is currently capable of path tracing, supports triangle meshes (MTL support coming soon) and various physically based material models (including measured data from the MERL BRDF Database), along with rigid body animation and image-parallel distributed rendering. More details on rendering performance and other features can be found in the README.

tray_rust can also render animations, and is the renderer I’ve used for the University of Utah’s teapot rendering competition after the first year. The video is my submission from 2016. To simplify making scenes for tray_rust I also wrote a Blender plugin, which I used to make the animation shown; it’s the easiest way to create scenes for the renderer, though some features are still missing.

WebGPU Volume Path Tracer

A volume path tracer implemented in WebGPU. The renderer uses Woodcock sampling to traverse and sample the volume, combined with scientific visualization-style coloring, to provide interactive physically-based visualization of 3D grid volumes. Try it out online!
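As a rough sketch of the sampling idea in CPU-side Rust (the renderer itself does this in WGSL; `sigma_t_at`, `sigma_t_max`, and `rng` are hypothetical stand-ins for a density lookup, the volume's majorant extinction, and a random number source), Woodcock sampling repeatedly proposes free-flight distances against the majorant and accepts an interaction based on the local extinction:

```rust
// A minimal sketch of Woodcock (delta) tracking along a ray through a volume.
fn woodcock_sample(
    sigma_t_at: &dyn Fn(f32) -> f32, // extinction at distance t along the ray
    sigma_t_max: f32,                // majorant: maximum extinction in the volume
    t_max: f32,                      // distance at which the ray exits the volume
    rng: &mut impl FnMut() -> f32,   // uniform random numbers in [0, 1)
) -> Option<f32> {
    let mut t = 0.0;
    loop {
        // Tentative free-flight distance sampled against the majorant.
        t -= (1.0 - rng()).ln() / sigma_t_max;
        if t >= t_max {
            return None; // the ray left the volume without scattering
        }
        // Accept the interaction with probability sigma_t(t) / sigma_t_max,
        // otherwise treat it as a "null" collision and keep marching.
        if rng() < sigma_t_at(t) / sigma_t_max {
            return Some(t);
        }
    }
}
```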

WebGPU glTF Renderer

A glTF WebGPU renderer and loader written from scratch. It supports loading glTF binary files and rendering them with a basic material color model. glTF primitives are rendered as WebGPU render bundles, with shaders generated for each material/attribute combination and cached using a shader caching system. Try it out online!

tray

tray is a toy physically based ray tracer built off of the techniques discussed in Physically Based Rendering. It currently has support for path tracing, bidirectional path tracing, and photon mapping. tray also supports physically based materials such as microfacet models like Torrance-Sparrow, along with measured data from the MERL BRDF Database.

This is the project tray_rust is based on, and as such it may not see much more development since most of my ray tracing work now happens there.

WebGPU Block-Compressed Marching Cubes

WebGPU Block-Compressed Marching Cubes is an algorithm for interactive isosurface extraction on massive compressed data sets that runs entirely on the GPU in the browser using WebGPU. The technique works on block-compressed volumes using ZFP’s fixed-rate compression algorithm. To compute an isosurface, the required blocks of the data are decompressed and cached on the fly using a GPU-driven LRU cache. This enables interactive visualization of large volumes entirely in the browser. This work was published at LDAV in 2020; you can learn more in the paper, get the code on GitHub, or try it out online!
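A much-simplified, CPU-side sketch of the caching idea (the actual system keeps the cache, eviction, and decompression entirely on the GPU; all names here are illustrative, not the paper's API): blocks are decoded on first use and the least recently used slot is evicted once the cache is full.

```rust
use std::collections::HashMap;

// Illustrative LRU cache of decompressed blocks: block id -> decoded samples.
struct BlockCache {
    capacity: usize,
    tick: u64,
    blocks: HashMap<u32, (u64, Vec<f32>)>, // id -> (last use, decompressed data)
}

impl BlockCache {
    fn get_or_decompress(
        &mut self,
        id: u32,
        decompress: impl FnOnce() -> Vec<f32>, // e.g. a ZFP fixed-rate block decode
    ) -> &Vec<f32> {
        self.tick += 1;
        if !self.blocks.contains_key(&id) {
            if self.blocks.len() >= self.capacity {
                // Evict the least recently used block to make room.
                let lru = self.blocks.iter().min_by_key(|(_, (t, _))| *t).map(|(k, _)| *k);
                if let Some(lru) = lru {
                    self.blocks.remove(&lru);
                }
            }
            self.blocks.insert(id, (self.tick, decompress()));
        }
        // Mark the block as just used and hand back the decoded data.
        let entry = self.blocks.get_mut(&id).unwrap();
        entry.0 = self.tick;
        &entry.1
    }
}
```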

WebGPU Marching Cubes

A GPU-parallel implementation of the classic Marching Cubes algorithm using WebGPU. Work is parallelized over the dual grid to find active voxels, whose IDs are compacted down via an exclusive scan and stream compaction; vertices are then output to a single buffer by computing per-voxel vertex offsets with another exclusive scan and writing each voxel’s vertices to its offset in parallel. Try it out online!
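A serial Rust sketch of the offset computation (the renderer performs this with parallel scans in compute shaders; the per-voxel counts below are made-up values standing in for the marching cubes case table):

```rust
// Serial sketch of the scan/compaction pattern used to pack vertices:
// an exclusive prefix sum over per-voxel vertex counts gives each voxel
// its write offset into a single shared output buffer.
fn exclusive_scan(counts: &[u32]) -> Vec<u32> {
    let mut offsets = Vec::with_capacity(counts.len());
    let mut sum = 0;
    for &c in counts {
        offsets.push(sum);
        sum += c;
    }
    offsets
}

fn main() {
    // Hypothetical per-active-voxel vertex counts.
    let counts = [3u32, 6, 0, 9, 3];
    let offsets = exclusive_scan(&counts);
    // Voxel i writes its vertices starting at offsets[i]; the total output
    // size is the last offset plus the last count.
    println!("{:?}", offsets); // [0, 3, 9, 9, 18]
}
```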

topo-vol

topo-vol is a topology-guided volume exploration and analysis tool, written for the final project in Bei Wang’s Computational Topology course. It is built on top of the Topology ToolKit and VTK for computation, and uses ImGui and a custom rendering system for the UI and volume rendering. By computing relevant topological structures (e.g., the contour tree) and classifying segments of data corresponding to the branches in this tree, we can avoid the occlusion issues of global transfer functions and create more useful, detailed renderings. See the report for more details.

WebGL Volume Raycaster

A scientific visualization style volume raycaster written using WebGL2 and JavaScript. The renderer uses an arcball camera which supports mouse or touch input, and dynamically adjusts the sampling rate to maintain a smooth framerate, even on mobile devices. The volumes are downloaded via XMLHttpRequest from Dropbox when selected. I’ve also written a post about volume rendering for scientific visualization and how I implemented it in WebGL to write this renderer. Try it out online!
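A rough sketch of the adjustment idea in Rust rather than the renderer's JavaScript (the target frame time and the clamping bounds are assumptions): scale the number of samples taken per voxel by the ratio of the target frame time to the measured one.

```rust
// Illustrative feedback loop: if the last frame took too long, take fewer
// samples along each ray (a coarser step); if it was fast, refine the image.
fn adjust_sampling_rate(samples_per_voxel: f32, last_frame_ms: f32) -> f32 {
    let target_ms = 32.0; // assumed ~30 FPS target
    let scaled = samples_per_voxel * target_ms / last_frame_ms;
    // Clamp so the sampling rate never becomes absurdly coarse or fine.
    scaled.clamp(0.25, 4.0)
}
```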

μPacket

μPacket is an extremely simple packet-based ray tracer that uses the AVX/AVX2 instruction set to trace eight rays at once through the scene. Currently it only supports spheres and planes with Lambertian BRDFs illuminated by a single point light. Illumination is computed with Whitted ray tracing, although recursion only goes as far as computing shadows since there are no reflective or transmissive materials at the moment.
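The project works directly with AVX intrinsics; as a portable sketch of the same idea in Rust, a packet stores eight rays in structure-of-arrays layout, mirroring how each AVX register holds one component for all eight rays, and the sphere test runs the same arithmetic across every lane:

```rust
// Structure-of-arrays packet of 8 rays, one array per component.
struct RayPacket8 {
    ox: [f32; 8], oy: [f32; 8], oz: [f32; 8], // origins
    dx: [f32; 8], dy: [f32; 8], dz: [f32; 8], // directions (assumed normalized)
    t: [f32; 8],                              // current closest hit distance
}

struct Sphere { cx: f32, cy: f32, cz: f32, radius: f32 }

// Test all 8 rays against one sphere and return a hit mask.
fn intersect_sphere8(rays: &mut RayPacket8, s: &Sphere) -> [bool; 8] {
    let mut hit = [false; 8];
    for i in 0..8 {
        // Standard quadratic ray/sphere test, done per lane.
        let (lx, ly, lz) = (s.cx - rays.ox[i], s.cy - rays.oy[i], s.cz - rays.oz[i]);
        let b = lx * rays.dx[i] + ly * rays.dy[i] + lz * rays.dz[i];
        let c = lx * lx + ly * ly + lz * lz - s.radius * s.radius;
        let disc = b * b - c;
        if disc >= 0.0 {
            let t0 = b - disc.sqrt();
            if t0 > 0.0 && t0 < rays.t[i] {
                rays.t[i] = t0;
                hit[i] = true;
            }
        }
    }
    hit
}
```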

Ray packets were first introduced by Wald et al., 2001 and are now widely used in high performance ray tracers like Embree due to the performance gain achieved with good packet (and now stream) tracing techniques.

Current plans for this project are to switch to tracing ray streams and to add a path tracing integrator for higher-quality images.

DXR Ambient Occlusion Baking

A demo of ambient occlusion map baking using DXR inline ray tracing. It uses xatlas to unwrap the mesh. The mesh is rasterized into the atlas image by using the UV coordinates as the output vertex positions, and the AO factor is computed using inline ray tracing in the fragment shader. The image shows the computed AO map on Sponza.
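The baking trick in isolation, sketched in Rust rather than HLSL (the v flip is an assumption about image orientation): the vertex stage emits each vertex's UV coordinate, remapped from [0, 1] to clip space, as its output position, so every triangle rasterizes into its chart of the atlas image.

```rust
// Remap an atlas UV coordinate in [0, 1] to clip-space [-1, 1] so the
// rasterizer draws the triangle into its chart of the AO atlas image.
// The v flip (an assumption here) accounts for image space having +y down.
fn uv_to_clip(u: f32, v: f32) -> (f32, f32) {
    (u * 2.0 - 1.0, 1.0 - v * 2.0)
}
```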

WebGL EWA Surface Splatter

An elliptical weighted average (EWA) surface splatter renderer, implemented in WebGL, which also supports painting on the surfaces. Try it out online! This implements the papers: Object Space EWA Surface Splatting: A Hardware Accelerated Approach to High Quality Point Rendering by Ren, Pfister and Zwicker, and High-Quality Point-Based Rendering on Modern GPUs by Botsch and Kobbelt, with a few shortcuts. It also uses the deferred shading for splatting approach described in High-quality surface splatting on today’s GPUs by Botsch, Hornung, Zwicker and Kobbelt. The renderer uses an arcball camera which supports mouse or touch input, and downloads datasets via XMLHttpRequest from Dropbox when selected. Built on top of webgl-util for some WebGL utilities, glMatrix for matrix/vector operations, and FileSaver.js for saving models.

WebGPU Implicit Isosurface Raycaster

An implicit isosurface raycaster using WebGPU. Ray-isosurface intersections are computed using the technique described by Marmitt et al. in ‘Fast and Accurate Ray-Voxel Intersection Techniques for Iso-Surface Ray Tracing’, 2004. Try it out online!
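A simplified CPU-side sketch of the refinement step (not the full algorithm, which fits a cubic along the ray inside each voxel; `field` is a hypothetical stand-in for sampling the trilinearly interpolated volume along the ray): once an interval with a sign change of `f(t) - isovalue` is found, the crossing is refined with a few rounds of repeated linear interpolation.

```rust
// Refine a ray-isosurface crossing once an interval [t0, t1] bracketing a
// sign change of f(t) - isovalue has been found.
fn refine_crossing(field: &dyn Fn(f32) -> f32, isovalue: f32, mut t0: f32, mut t1: f32) -> f32 {
    let mut f0 = field(t0) - isovalue;
    let mut f1 = field(t1) - isovalue;
    // A few rounds of repeated linear interpolation (regula falsi style),
    // similar in spirit to the iterative root finding of Marmitt et al.
    for _ in 0..3 {
        let t = t0 + (t1 - t0) * (-f0 / (f1 - f0));
        let f = field(t) - isovalue;
        if f0 * f > 0.0 {
            t0 = t;
            f0 = f;
        } else {
            t1 = t;
            f1 = f;
        }
    }
    // Final linear estimate of the crossing.
    t0 + (t1 - t0) * (-f0 / (f1 - f0))
}
```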

WebGPU Volume Animation Player

A web-based tool for playing back time-series volumetric data captures in the browser. Scientists can load a series of zipped WebP image stacks, which are uploaded to the GPU and volume rendered. The set of image stacks is played back as an animation to view the time-dependent behavior of the data set. Data sets and time series can be specified as URL parameters, letting scientists quickly share data visualizations with colleagues. Try it out online!

WebGL Marching Cubes

A WebGL + WebAssembly implementation of the classic Marching Cubes algorithm for extracting isosurfaces from 3D volume data. An isosurface is a surface representing the points in the 3D data that all have the same value (e.g., pressure, temperature). The isosurface extraction code is implemented in Rust and compiled to WebAssembly to accelerate extraction of the surface. Depending on your browser, the WebAssembly version is 10-50x faster than the pure JavaScript one! The surface is rendered as a triangle mesh and combined with the volume during the volume raycasting step, in a manner roughly similar to shadow mapping. Try it out online!

WebGPU Experiments

A series of examples written while learning about WebGPU: a glTF viewer, a web-based LiDAR viewer, and a data-parallel Marching Cubes implementation using compute shaders. The glTF viewer uses a custom glb importer to load data efficiently into WebGPU and supports the basic glTF features. The LiDAR viewer uses LAStools.js, a version of libLAS compiled to WebAssembly, to load las and laz files directly in the browser. The Marching Cubes example is a data-parallel implementation of marching cubes written using compute shaders to leverage GPU compute for interactive isosurface extraction. If you have a browser with WebGPU enabled, you can try them out: glTF Viewer, LiDAR Viewer, Marching Cubes.

WebGL Neuron Visualizer

A neuron visualization system using WebGL2. The renderer uses my WebGL2 volume renderer to display the RAW volume or imported TIFF stack, and renders the imported neuron traces as lines within the volume. It can import single-channel 8-bit and 16-bit TIFF image stacks, and can import neuron traces in the SWC file format. TIFF files are loaded using a build of libtiff compiled to WebAssembly. The neurons are composited within the volume by rendering out the depth buffer and using it to terminate rays early in the volume when they hit the geometry, roughly similar to shadow mapping. A default demo dataset is included which renders a stitched version of the DIADEM NC Layer 1 Axons from the DIADEM Challenge. Try it out online!

Charm++ Experiments

A set of different test applications to learn and try out Charm++ for distributed rendering applications. The image shown is from the path tracing test application, which renders distributed data with global illumination by routing rays around the network. There’s also a basic image-parallel and data-parallel scientific visualization volume renderer.

bspline

A Rust library for computing B-spline interpolating curves on generic control points. bspline can be used to evaluate B-splines of varying orders on any type that can be linearly interpolated, ranging from floats, positions, and RGB colors to transformation matrices and so on. The bspline logo (above) was generated using this library, with a cubic B-spline in 2D for the positioning of the curve and a quadratic B-spline in RGB space to color it.
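A small usage sketch for a 1D cubic curve, following the crate's README (treat the exact names, e.g. `BSpline::new`, `point`, and `knot_domain`, as assumptions if the API has changed):

```rust
// Sketch of evaluating a 1D cubic B-spline with the bspline crate.
fn main() {
    let degree = 3;
    // A clamped cubic: the knot vector repeats the end knots degree + 1 times,
    // and there are (number of control points) + degree + 1 knots in total.
    let points = vec![-1.0f32, 2.0, 0.0, -1.0];
    let knots = vec![0.0f32, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0];
    let spline = bspline::BSpline::new(degree, points, knots);

    // Evaluate across the valid parameter range of the curve.
    let (t_start, t_end) = spline.knot_domain();
    for i in 0..=10 {
        let t = t_start + (t_end - t_start) * i as f32 / 10.0;
        println!("spline({:.2}) = {:.3}", t, spline.point(t));
    }
}
```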

Spline Viewer

A viewer for B-spline curves and surfaces, initially written for a course on computer-aided geometric design. You can edit and create 2D B-splines and tweak some properties of loaded 3D curves and surfaces. It’s useful for learning about and playing with B-splines.

tobj

tobj is a tiny OBJ loader in Rust that draws inspiration for its API and design from Syoyo’s excellent library, tinyobjloader. The crate aims to be a simple, fast, and lightweight option for loading OBJ and MTL files for easy integration with realtime and offline renderers, or really any other project where you need to load OBJ files!
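A minimal loading sketch; the exact `load_obj` signature has changed between crate versions (newer releases take a `LoadOptions` struct, as shown here), so treat this as an approximation rather than the exact API:

```rust
// Load an OBJ file and print some basic statistics about each model.
fn main() {
    let (models, _materials) =
        tobj::load_obj("cornell_box.obj", &tobj::LoadOptions::default())
            .expect("failed to load OBJ file");

    for m in &models {
        // Positions are a flat [x, y, z, x, y, z, ...] buffer indexed by `indices`.
        println!(
            "model '{}': {} triangles, {} vertices",
            m.name,
            m.mesh.indices.len() / 3,
            m.mesh.positions.len() / 3
        );
    }
}
```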

The image shown is from a demo viewer written to test tobj, named tobj_viewer, displaying the Rungholt model. The model can be found on Morgan McGuire’s meshes page and was originally built by kescha.

ssao

This is sort of an implementation of Scalable Ambient Obscurance by McGuire et al.; however, I take a few simplifying shortcuts in my implementation and don’t achieve as good performance or quality. I was also unable to get their new recommended estimator to behave, so this implementation still uses the Alchemy AO estimator initially recommended in the paper. There’s a somewhat longer write-up available on my class page, since this was initially implemented as a class project.

The image shows just the ambient occlusion value for a view of the Crytek Sponza scene.

lfwatch

lfwatch is a lightweight file watcher for Windows, Linux, and OS X. It monitors desired directories for file changes and calls the callback set for each directory with information about the file change event. This was written to do hot reloading of GLSL shaders but could be useful for some other applications.