August 30, 2021
This tutorial is an updated version of my previous one, with the code listings revised to match the finalizing WebGPU API. If you’ve read the previous version of this tutorial, you can skim through the code listings to get up to date.
In this second post of the series we’ll learn about Bind Groups, which let us pass buffers and textures to our shaders. When writing a renderer, we typically have inputs that do not make sense as vertex attributes (e.g., transform matrices, material parameters), or that simply cannot be passed as vertex attributes (e.g., textures). Such parameters are instead passed as uniforms in GLSL terms, or root parameters in HLSL terms. The application then associates the desired buffers and textures with the parameters in the shader. In WebGPU, this association of data to parameters is made using Bind Groups. In this post, we’ll use Bind Groups to pass a uniform buffer containing a view transform to our vertex shader, allowing us to add camera controls to the triangle from the previous post. If you haven’t read the updated first post in this series, I recommend reading that first, as we’ll continue directly from the code written there.
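As a rough preview (this is a sketch, not the post’s exact listing), binding a uniform buffer in the WebGPU JavaScript API looks roughly like the following; the names `viewParamsBuffer` and `viewParamsBindGroup`, the 4x4 matrix size, binding index 0, and the `device`, `renderPass`, and `viewMatrix` objects carried over from the first post are illustrative assumptions.

```js
// Sketch: a uniform buffer holding a 4x4 view matrix (64 bytes),
// exposed to the vertex shader through a bind group.
const viewParamsBuffer = device.createBuffer({
    size: 16 * 4, // 4x4 matrix of 32-bit floats
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

const viewParamsLayout = device.createBindGroupLayout({
    entries: [{binding: 0, visibility: GPUShaderStage.VERTEX, buffer: {type: "uniform"}}]
});

const viewParamsBindGroup = device.createBindGroup({
    layout: viewParamsLayout,
    entries: [{binding: 0, resource: {buffer: viewParamsBuffer}}]
});

// Each frame: upload the current camera matrix and attach the bind group
// before drawing. The pipeline layout must also reference viewParamsLayout.
device.queue.writeBuffer(viewParamsBuffer, 0, viewMatrix);
renderPass.setBindGroup(0, viewParamsBindGroup);
```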
Continue
August 29, 2021
This tutorial is an updated version of my previous one, with the code listings revised to match the finalizing WebGPU API. If you’ve read the previous version of this tutorial, you can skim through the code listings to get up to date. For an easy way to get started, I recommend grabbing my WebGPU + webpack starter template, which includes the code from this tutorial. You can start by deleting the code there and rewriting it by following this tutorial, or follow along in the code as you read this page.
WebGPU is a modern graphics API for the web, in development by the major browser vendors. Compared to WebGL, WebGPU provides more direct control over the GPU to allow applications to leverage the hardware more efficiently, similar to Vulkan and DirectX 12. WebGPU also exposes additional GPU capabilities not available in WebGL, such as compute shaders and storage buffers, enabling powerful GPU compute applications to run on the web. As with the switch from OpenGL to Vulkan, WebGPU exposes more complexity to the user than WebGL, though the API strikes a good balance between complexity and usability, and overall is quite nice to work with. In this series, we’ll learn the key aspects of WebGPU from the ground up, with the goal of going from zero to a basic glTF model renderer. This post marks our initial step on this journey, where we’ll set up a WebGPU context and get a triangle on the screen.
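To give a sense of that first step, a minimal sketch of acquiring a device and configuring a canvas with the finalizing API might look like the following; the canvas id "webgpu-canvas" is an assumption, and the post walks through each call in detail.

```js
// Sketch: request an adapter and device, then configure the canvas context.
if (!navigator.gpu) {
    throw new Error("WebGPU is not supported in this browser");
}
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const canvas = document.getElementById("webgpu-canvas");
const context = canvas.getContext("webgpu");
context.configure({
    device: device,
    format: navigator.gpu.getPreferredCanvasFormat(),
    usage: GPUTextureUsage.RENDER_ATTACHMENT
});
```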
Continue
December 20, 2020
The Apple M1, available in the MacBook Air, MacBook Pro 13”, and Mac Mini, has been the focus of a ton of benchmarking writeups and blog posts about the new chip. The overall performance, and especially the performance per watt, that Apple has achieved with the chip is very impressive. As a ray tracing person, what caught my eye the most was the performance AnandTech reported in their CineBench benchmarks. These scores were 1.6x higher than what I got on my old Haswell desktop, and 2x higher than on my new Tiger Lake laptop! I had also been interested in trying out the new ray tracing API for Metal that was announced at WWDC this year, which bears some resemblance to the DirectX, Vulkan, and OptiX GPU ray tracing APIs. So, I decided to pick up a Mac Mini to do some testing on my own interactive path tracing project, ChameleonRT, and to get it running on the new Metal ray tracing API. In this post, we’ll take a look at the new Metal ray tracing API to see how it lines up with DirectX, Vulkan, OptiX, and Embree, then we’ll make some fair (and some extremely unfair) ray tracing performance comparisons against the M1.
Continue
November 15, 2020
I recently picked up a Raspberry Pi 4 8GB model to use for some lightweight server tasks on my home network. After setting up Pi-Hole, OpenVPN, Plex, and Samba, I got curious about using it to re-encode some videos I had. The videos are on an external drive monitored by Plex and shared on the network by Samba, and some are quite large since they’re at a (likely unnecessarily) high bitrate. Trimming them down would help save a bit of space and give me an excuse to play around with Python, FFmpeg, and the Pi’s hardware accelerated video encoder. In this post, I’ll cover how to get FFmpeg set up to use the Pi 4’s video encoding hardware on a 64-bit OS, and the little encoding manager/dashboard, FBED, that I put together to monitor the progress of the encoding tasks.
Continue
November 13, 2020
Now that it’s been a bit over a week since VIS 2020 ended, I thought I’d write up some information on the streaming infrastructure we used during the conference. For context, IEEE VIS 2020, like all conferences this year and likely well into the next, was held as a virtual event. VIS 2020 was “hosted” by the University of Utah, as it was originally planned (pre-COVID) to be held in Salt Lake City. My advisor was one of the co-chairs, and asked if I’d volunteer to be on the Technology Committee for VIS 2020. The role of this committee is to manage the technical aspects of streaming the event. The change to a virtual format brings a lot of challenges, especially when pivoting late in the planning cycle (past in-person events have typically been over a year in the making). However, the virtual format also brings improvements in terms of accessibility, cost to attendees, environmental impact, and archiving.
This post will be one part technical documentation and one part reflection. The feedback we received for VIS 2020 was overwhelmingly positive, and thus I hope that both the technical documentation on how we ran the event and the reflection on what worked and didn’t are helpful to organizers planning virtual events through the next year.
Before we begin, I must of course mention that this was not a solo effort. Alex Bock and Martin Falk were also on the Tech committee and provided valuable advice about their experience running EGEV 2020 as a virtual event earlier this year, which was also well received. We followed the same model for VIS, which aims to keep the feeling of a live conference while reducing surface area for technical issues. I must also mention the amazing work done by Alper Sarikaya, Hendrik Strobelt, Jagoda Walny, and Steve Petruzza on the web committee setting up the virtual conference webpage. The webpage was adapted from mini-conf, originally written by Alexander Rush and Hendrik Strobelt. Alper has written up a blog post about this, so I won’t cover it here. Finally, during the event we had a rotation of about 24 student volunteers who were responsible for managing the streams and assisting presenters with technical issues, without whom the event would not have been possible.
Continue