Blog

The VIS 2020 Streaming Infrastructure

Fri Nov 13, 2020

Now that it’s been a bit over a week since VIS 2020 ended I thought I’d write up some information on the streaming infrastructure we used during the conference. For context, IEEE VIS 2020, like all conferences this year and likely well into the next, was held as a virtual event. VIS 2020 was “hosted” by the University of Utah, as it was originally planned (pre-COVID) to be held in Salt Lake City. My advisor was one of the co-chairs, and asked if I’d volunteer to be on the Technology Committee for VIS 2020. The role of this committee is to manage the technical aspects of streaming the event. The change to a virtual format brings a lot of challenges, especially when pivoting late in the planning cycle (past in-person events are typically over a year in the making). However, the virtual format also brings improvements in terms of accessibility, cost to attendees, environmental impact, and archiving.

This post will be one part technical documentation and one part reflection. The feedback we received for VIS 2020 was overwhelmingly positive, and thus I hope that both the technical documentation on how we ran the event and the reflection on what worked and didn’t are helpful to organizers planning virtual events through the next year.

Before we begin, I must of course mention that this was not a solo effort. Alex Bock and Martin Falk were also on the Tech committee and provided valuable advice about their experience running EGEV 2020 as a virtual event earlier this year, which was also well received. We followed the same model for VIS, which aims to keep the feeling of a live conference while reducing surface area for technical issues. I must also mention the amazing work done by Alper Sarikaya, Hendrik Strobelt, Jagoda Walny, and Steve Petruzza on the web committee setting up the virtual conference webpage. The webpage was adapted from mini-conf, originally written by Alexander Rush and Hendrik Strobelt. Alper has written up a blog post about this, so I won’t cover it here. Finally, during the event we had a rotation of about 24 student volunteers who were responsible for managing the streams and assisting presenters with technical issues, without whom the event would not have been possible.

Continue

The RTX Shader Binding Table Three Ways

Wed Nov 20, 2019

DirectX Ray Tracing, Vulkan’s NV Ray Tracing extension, and OptiX (or collectively, the RTX APIs) build on the same execution model for running user code to trace and process rays. The user creates a Shader Binding Table (SBT), which consists of a set of shader function handles and embedded parameters for these functions. The shaders in the table are executed depending on whether or not a geometry was hit by a ray, and which geometry was hit. When a geometry is hit, a set of parameters specified on both the host and device side of the application combine to determine which shader is executed. The RTX APIs provide a great deal of flexibility in how the SBT can be set up and indexed into during rendering, leaving a number of options open to applications. However, with incorrect SBT access leading to crashes and difficult bugs, sparse examples or documentation, and subtle differences in naming and SBT setup between the APIs, properly setting up and accessing the SBT is an especially thorny part of the RTX APIs for new users.
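Up to naming, all three APIs pick the hit group record for a hit with the same arithmetic. As a quick reference, here it is as a small Rust function; the parameter names follow the DXR spec, and the function itself is just an illustration of the rule, not code from the post or from any of the APIs.

```rust
// Sketch of the hit group record indexing rule shared by the RTX APIs. The
// parameter names follow the DXR spec; Vulkan and OptiX use the same formula
// under different names. Illustrative only.
fn hit_group_record_index(
    ray_contribution: u32,      // "RayContributionToHitGroupIndex", passed to TraceRay
    geometry_multiplier: u32,   // "MultiplierForGeometryContributionToHitGroupIndex"
    geometry_index: u32,        // index of the geometry within its bottom-level AS
    instance_contribution: u32, // "InstanceContributionToHitGroupIndex", set per instance
) -> u32 {
    // The record's byte address is then the hit group table's base address
    // plus this index times the record stride.
    ray_contribution + geometry_multiplier * geometry_index + instance_contribution
}

fn main() {
    // e.g., 2 ray types (primary = 0, shadow = 1), the second geometry of an
    // instance whose contribution (offset into the table) is 4:
    assert_eq!(hit_group_record_index(1, 2, 1, 4), 7);
}
```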

In this post we’ll look at the similarities and differences of each ray tracing API’s shader binding table to gain a fundamental understanding of the execution model. I’ll then present an interactive tool for constructing the SBT, building a scene which uses it, and executing trace calls on the scene to see which hit groups and miss shaders are called. Finally, we’ll look at how this model can be brought back to the CPU using Embree, to potentially build a unified low-level API for ray tracing.

Updated 5/1/2020: Added discussion on KHR_ray_tracing for Vulkan.

Continue

Faster Shadow Rays on RTX

Fri Sep 6, 2019

To determine if a hit point can be directly lit by a light source in the scene we need to perform a visibility test between the point and the light. In a path tracer we must perform at least one visibility test per hit point to shade the point, or two if we’re using multiple importance sampling (one for the light sample, and one for the BSDF sample). When rendering just ambient occlusion, e.g., for baking occlusion maps, we may send even more shadow rays per hit point. Fortunately, shadow rays can be relatively cheap to trace, as we don’t care about finding the closest hit point or computing surface shading information, but just whether or not something is intersected by the ray. There are a few options and combinations of ray flags which we can use when deciding how to trace shadow rays on RTX (through DXR, OptiX or Vulkan). I recently learned a method for skipping all hit group shaders (any hit, closest hit) and instead using just the miss shader to determine if the ray is not occluded. This was a bit non-obvious to me, though it has been used by others (see Chris Wyman’s Intro to DXR and Sascha Willems’s Ray Tracing Shadows Example). After switching to this approach in ChameleonRT I decided to run a small benchmark comparing some of the options for tracing shadow rays. I’ll also discuss an extra trick we can use to simplify the shader binding table setup, which lets us skip creating an occlusion hit group entirely.
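To make the idea concrete, here is a minimal DXR/HLSL sketch of the miss-shader-only occlusion test. The payload layout, register binding, and miss shader index are illustrative assumptions for this sketch, not ChameleonRT’s actual code.

```hlsl
// Minimal sketch: assume the ray is occluded, and let only the miss shader
// clear that flag. No hit group shaders ever run for these rays.
struct OcclusionPayload {
    int occluded;
};

RaytracingAccelerationStructure scene : register(t0);

bool is_occluded(float3 origin, float3 dir, float t_max)
{
    RayDesc ray;
    ray.Origin = origin;
    ray.Direction = dir;
    ray.TMin = 1e-4f;
    ray.TMax = t_max;

    // Start from "occluded"; the miss shader flips it if nothing is hit.
    OcclusionPayload payload;
    payload.occluded = 1;

    TraceRay(scene,
             RAY_FLAG_FORCE_OPAQUE                          // skip any hit shaders
                 | RAY_FLAG_ACCEPT_FIRST_HIT_AND_END_SEARCH // stop at the first hit found
                 | RAY_FLAG_SKIP_CLOSEST_HIT_SHADER,        // never run a closest hit shader
             0xff,  // instance inclusion mask
             0, 0,  // hit group offset and geometry multiplier (no hit group runs anyway)
             1,     // miss shader index of the shadow miss shader (assumed SBT layout)
             ray, payload);

    return payload.occluded != 0;
}

[shader("miss")]
void ShadowMiss(inout OcclusionPayload payload)
{
    // The ray escaped the scene without hitting anything, so the point is visible.
    payload.occluded = 0;
}
```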

Continue

Volume Rendering with WebGL

Sun Jan 13, 2019

Figure 1: Example volume renderings, using the WebGL volume renderer described in this post. Left: A simulation of the spatial probability distribution of electrons in a high potential protein molecule. Right: A CT scan of a Bonsai Tree. Both datasets are from the Open SciVis Datasets repository.

In scientific visualization, volume rendering is widely used to visualize 3D scalar fields. These scalar fields are often uniform grids of values, representing, for example, charge density around a molecule, an MRI or CT scan, air flow around an airplane, etc. Volume rendering is a conceptually straightforward method for turning such data into an image: by sampling the data along rays from the eye and assigning a color and transparency to each sample, we can produce useful and beautiful images of such scalar fields (see Figure 1). In a GPU renderer, these 3D scalar fields are stored as 3D textures; however, in WebGL1 3D textures were not supported, requiring additional hacks to emulate them for volume rendering. Recently, WebGL2 added support for 3D textures, allowing for an elegant and fast volume renderer to be implemented entirely in the browser. In this post we’ll discuss the mathematical background for volume rendering, and how it can be implemented in WebGL2 to create an interactive volume renderer entirely in the browser! Before we start, you can try out the volume renderer described in this post online.
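As a preview of the math, the renderer boils down to the standard front-to-back compositing equations of the emission-absorption model, where \(C_i\) and \(\alpha_i\) are the color and opacity the transfer function assigns to the \(i\)-th sample along the ray; the full post walks through where these come from.

```latex
% Front-to-back compositing along a ray: fold each sample's color and opacity
% into the accumulated color \hat{C} and opacity \hat{\alpha}.
\begin{align*}
\hat{C}_i      &= \hat{C}_{i-1} + (1 - \hat{\alpha}_{i-1})\, \alpha_i C_i \\
\hat{\alpha}_i &= \hat{\alpha}_{i-1} + (1 - \hat{\alpha}_{i-1})\, \alpha_i
\end{align*}
```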

Continue

Comments in LaTeX

Tue Jul 10, 2018

When writing a paper in LaTeX, it’s common to leave notes and comments in the text, either to yourself or your co-authors. I used to write these as just different colored text using \textcolor{...}, with each author assigned a color, or all with the same color. However, with more authors it can get hard to keep picking legible font colors. Furthermore, sometimes a different font color alone doesn’t stand out quite as much as I’d like from the rest of the text. More recently I’ve switched to using highlights for the comments, which works well with multiple authors, and helps the comments stand out from the rest of the text. This is easy to do with the soul and xcolor packages.
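For instance, a minimal version of this setup looks something like the following; the macro names and colors are just placeholders for each author.

```latex
\usepackage{xcolor}
\usepackage{soul}

% One highlight-based comment macro per author; pick any legible colors.
% The extra braces keep the highlight color local to each comment.
\newcommand{\alice}[1]{{\sethlcolor{yellow}\hl{[Alice: #1]}}}
\newcommand{\bob}[1]{{\sethlcolor{cyan}\hl{[Bob: #1]}}}

% Usage in the document body:
%   \alice{I think this paragraph needs a citation.}
```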

Continue

Distributed Rendering with Rust and Mio

Sat Jan 2, 2016

In this post we’ll take a look at adding distributed rendering to tray_rust, which will let us take advantage of multiple machines, such as the nodes of a compute cluster, when rendering an image. To do this we’ll look at options for how to distribute the rendering job across multiple nodes and what sort of communication is needed to synchronize their work. We’ll also look into how we can use mio to write a master process that can manage multiple workers efficiently.

After implementing a simple technique to distribute the job we’ll discuss the scalability of this approach and possible paths forward to improve it. I’ve also recently written a plugin for Blender so you can easily create your own scenes, and I’ll mention a bit about how to run the ray tracer on Google Compute Engine (or AWS EC2) if you want to try out the distributed rendering yourself.
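One of the simplest options along these lines is to statically assign each worker a contiguous range of the image’s blocks. The sketch below illustrates that idea only; it is not tray_rust’s actual scheduling code, and the block size is an arbitrary choice.

```rust
// A minimal sketch of one way to distribute rendering work: statically assign
// each worker a contiguous range of the image's 8x8 pixel blocks.
const BLOCK_DIM: usize = 8;

/// Returns the half-open range [start, end) of block indices assigned to
/// `worker` out of `num_workers`.
fn blocks_for_worker(width: usize, height: usize, worker: usize, num_workers: usize) -> (usize, usize) {
    let blocks_x = (width + BLOCK_DIM - 1) / BLOCK_DIM;
    let blocks_y = (height + BLOCK_DIM - 1) / BLOCK_DIM;
    let total = blocks_x * blocks_y;
    let per_worker = (total + num_workers - 1) / num_workers;
    let start = worker * per_worker;
    let end = (start + per_worker).min(total);
    (start, end)
}

fn main() {
    // e.g., splitting a 1920x1080 frame across 4 workers
    for w in 0..4 {
        let (start, end) = blocks_for_worker(1920, 1080, w, 4);
        println!("worker {} renders blocks [{}, {})", w, start, end);
    }
}
```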

Continue

Rendering an Animation in Rust

Wed Dec 16, 2015

In this post we’ll look at adding a pretty awesome new feature to tray_rust, something I’ve never implemented before: animation! We’ll take a look at a simple way to sample time in our scene, how we can associate time points with transformations of objects to make them move, and how to compute smooth animation paths with B-splines. Then we’ll wrap up by rendering a really cool animation using 60 different machines spread across two clusters at my lab.
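As a simplified illustration of the keyframe idea: the post interpolates full transformations with B-splines, but the sketch below just linearly interpolates a translation between neighboring keyframes, and the type and field names are placeholders rather than tray_rust’s own.

```rust
// A simplified sketch of associating time points with transformations:
// each keyframe pairs a time with a position, and we lerp between the two
// keyframes that bracket the query time.
#[derive(Clone, Copy, Debug)]
struct Keyframe {
    time: f32,
    position: [f32; 3],
}

fn position_at(keyframes: &[Keyframe], t: f32) -> [f32; 3] {
    // Clamp to the ends of the animation
    if t <= keyframes[0].time {
        return keyframes[0].position;
    }
    if t >= keyframes[keyframes.len() - 1].time {
        return keyframes[keyframes.len() - 1].position;
    }
    // Find the pair of keyframes bracketing t and lerp between them
    let i = keyframes.iter().position(|k| k.time > t).unwrap();
    let (a, b) = (keyframes[i - 1], keyframes[i]);
    let s = (t - a.time) / (b.time - a.time);
    [
        a.position[0] + s * (b.position[0] - a.position[0]),
        a.position[1] + s * (b.position[1] - a.position[1]),
        a.position[2] + s * (b.position[2] - a.position[2]),
    ]
}

fn main() {
    let keys = [
        Keyframe { time: 0.0, position: [0.0, 0.0, 0.0] },
        Keyframe { time: 2.0, position: [4.0, 0.0, 0.0] },
    ];
    // Halfway through the animation the object is halfway along its path.
    println!("{:?}", position_at(&keys, 1.0)); // [2.0, 0.0, 0.0]
}
```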

Continue

Porting a Ray Tracer to Rust, part 3

Fri May 15, 2015

It’s been a little while since my last post on tray_rust as I’ve been busy with classes, but I’ve had a bit of free time to implement some extremely cool features. In this post we’ll look at porting over the path tracing code and adding a bounding volume hierarchy, along with adding support for triangle meshes and measured material data from the MERL BRDF Database, introduced by Matusik et al. in 2003 in A Data-Driven Reflectance Model. In the process of implementing the BVH we’ll get a taste of Rust’s generic programming facilities and use them to write a flexible BVH capable of storing any type that can report its bounds. In the spirit of fogleman’s gorgeous Go Gopher in Go, we’ll wrap up by rendering the Rust logo in Rust using a model made by Nylithius on BlenderArtists.
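To give a flavor of the generics involved, here is a rough sketch of what “any type that can report its bounds” looks like; the trait and type names are illustrative and not necessarily those used in tray_rust.

```rust
// Illustrative sketch: a BVH generic over anything that can report an
// axis-aligned bounding box.
#[derive(Clone, Copy, Debug)]
struct BBox {
    min: [f32; 3],
    max: [f32; 3],
}

trait Boundable {
    fn bounds(&self) -> BBox;
}

struct Sphere {
    center: [f32; 3],
    radius: f32,
}

impl Boundable for Sphere {
    fn bounds(&self) -> BBox {
        BBox {
            min: [self.center[0] - self.radius, self.center[1] - self.radius, self.center[2] - self.radius],
            max: [self.center[0] + self.radius, self.center[1] + self.radius, self.center[2] + self.radius],
        }
    }
}

/// The BVH can store any `T: Boundable`, e.g. `BVH<Sphere>` or `BVH<Triangle>`.
struct BVH<T: Boundable> {
    objects: Vec<T>,
    // node hierarchy omitted in this sketch
}

impl<T: Boundable> BVH<T> {
    fn new(objects: Vec<T>) -> Self {
        // A real build would partition the objects into a tree by their bounds.
        BVH { objects }
    }
}

fn main() {
    let bvh = BVH::new(vec![Sphere { center: [0.0, 0.0, 0.0], radius: 1.0 }]);
    println!("{:?}", bvh.objects[0].bounds());
}
```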

If you’ve been following Rust’s development a bit you have probably noticed that the timing of this post is not a coincidence, since Rust 1.0.0 is being released today!

Continue