From 0 to glTF with WebGPU: Rendering the Full glTF Scene

June 24, 2023

Figure 1: In this post, we'll look at how to fix our terribly broken 2 Cylinder Engine. Left: A buggy render of 2CylinderEngine.glb achieved when ignoring the glTF node transformations. Right: The correct rendering with meshes positioned based on the hierarchy of transforms specified in the glTF node tree.

Loading and drawing our first mesh from a glTF file was quite a bit of work in the previous post, but with this core piece in place we can start adding a lot more functionality to our renderer pretty quickly. If you tried loading up glTF files into the renderer from the previous post, you may have noticed that they didn’t look how you expected. This is because glTF files often contain many meshes that make up different parts of the scene geometry, most of which will be missing since we only loaded the first mesh last time. If we just add a simple loop through the meshes to load and draw them all, we’ll frequently end up with a scene like the broken engine on the left in the image above. This is because the meshes are referenced and transformed by the glTF node hierarchy, and we need to load and apply these nested transformations to render the correct scene shown on the right. The test model we’ll be using for this post is the 2CylinderEngine from the Khronos glTF samples repo, whose node hierarchy contains nested transformations that make it a great test case. So grab 2CylinderEngine.glb and let’s get started!
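
As a preview of the core idea, here’s a minimal sketch of flattening the node hierarchy into a list of meshes with their final transforms. It assumes a parsed glTF JSON header (header.nodes) and uses the gl-matrix library for the math; the function names here are illustrative placeholders, not the post’s exact code.

```js
import {mat4} from "gl-matrix";

// A node's local transform is either a "matrix" property or separate
// translation/rotation/scale properties (defaults per the glTF spec).
function readNodeTransform(node) {
    if (node.matrix) {
        return mat4.clone(node.matrix);
    }
    const m = mat4.create();
    mat4.fromRotationTranslationScale(m,
        node.rotation ?? [0, 0, 0, 1],
        node.translation ?? [0, 0, 0],
        node.scale ?? [1, 1, 1]);
    return m;
}

// Walk down from a scene root, multiplying each node's transform by its
// parent's so every mesh instance ends up with its world-space transform.
function flattenNode(header, nodeId, parentTransform, meshInstances) {
    const node = header.nodes[nodeId];
    const transform =
        mat4.mul(mat4.create(), parentTransform, readNodeTransform(node));
    if (node.mesh !== undefined) {
        meshInstances.push({mesh: node.mesh, transform: transform});
    }
    for (const child of node.children ?? []) {
        flattenNode(header, child, transform, meshInstances);
    }
}
```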


From 0 to glTF with WebGPU: Rendering the First glTF Mesh

May 16, 2023

Now that we’ve seen how to draw a triangle in the first post and hook up camera controls so we can look around in the second post, we’re at the point where the avocado really hits the screen and we can start drawing our first glTF primitives! I say the avocado hits the screen because that’s the glTF test model we’ll be using; you can grab it from the Khronos glTF samples repo. glTF files come in two flavors (not counting other extension-specific variants): a standard “.gltf” version that stores the JSON header in one file, with binary data and textures in separate files, and a “.glb” version that combines the JSON header and all binary and texture data into a single file. We’ll be loading .glb files in this series to reduce the number of files we have to deal with to get a model into the renderer, so grab the glTF-Binary Avocado.glb and let’s get started!
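
To give a taste of how that single-file layout translates to code, here’s a minimal sketch of pulling the JSON header out of a .glb file, assuming the file has been fetched into an ArrayBuffer. The constants come from the GLB spec: a 12-byte header (magic, version, total length) followed by length/type-prefixed chunks, the first of which must be the JSON.

```js
// Extract the JSON header from a .glb file's ArrayBuffer. The 12-byte
// GLB header is three uint32s: magic, version, and total file length;
// header[3] and header[4] are then the first chunk's length and type.
function parseGlbJson(buffer) {
    const header = new Uint32Array(buffer, 0, 5);
    if (header[0] !== 0x46546C67) {
        throw Error("Provided file is not a glB file");
    }
    if (header[1] !== 2) {
        throw Error("Provided file is not glTF version 2");
    }
    if (header[4] !== 0x4E4F534A) {
        throw Error("First chunk in the glB file must be JSON");
    }
    // The JSON chunk's data starts right after the 20 bytes we read.
    const jsonBytes = new Uint8Array(buffer, 20, header[3]);
    return JSON.parse(new TextDecoder("utf-8").decode(jsonBytes));
}
```

The binary chunk holding the vertex and texture data follows immediately after the JSON chunk, with its own 8-byte length/type prefix, and the JSON’s buffer views index into it.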

Figure 1: It takes quite a bit to get Avocado.glb on the screen, but this beautiful image of our expected final (and delicious) result should be enough motivation to keep us going!

From 0 to glTF with WebGPU: Bind Groups - Updated for Chrome 113 Release

April 11, 2023

This tutorial is an updated version of my previous one, with the code listings revised to match the finalized WebGPU API. If you’ve read the previous version of this tutorial, you can skim through the code listings to get up to date. The code for the blog series is also available on GitHub.

In this second post of the series we’ll learn about Bind Groups, which let us pass buffers and textures to our shaders. When writing a renderer, we typically have inputs that don’t make sense as vertex attributes (e.g., transform matrices, material parameters), or that simply can’t be passed as vertex attributes at all (e.g., textures). Such parameters are instead passed as uniforms in GLSL terms, or root parameters in HLSL terms. The application then associates the desired buffers and textures with the parameters in the shader. In WebGPU, this association of data to parameters is made using Bind Groups. In this post, we’ll use a Bind Group to pass a uniform buffer containing a view transform to our vertex shader, allowing us to add camera controls to our triangle from the previous post. If you haven’t read the updated first post in this series, I recommend reading that first, as we’ll build directly on the code written there.
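
To make the moving pieces concrete up front, here’s a minimal sketch of that uniform buffer setup, assuming a GPUDevice named device, a render pass named renderPass, and a Float32Array named projView holding the camera matrix already exist; the layout must match a @group(0) @binding(0) declaration in the shader.

```js
// Describe what the bind group will contain: one uniform buffer,
// visible to the vertex shader at binding 0.
const bindGroupLayout = device.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX,
        buffer: {type: "uniform"}
    }]
});

// A 4x4 float32 matrix is 64 bytes. COPY_DST lets us update the
// buffer each frame as the camera moves.
const viewParamsBuffer = device.createBuffer({
    size: 16 * 4,
    usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST
});

// Associate the buffer with binding 0 of the layout.
const viewParamsBindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [{binding: 0, resource: {buffer: viewParamsBuffer}}]
});

// Each frame: upload the new matrix and bind the group before drawing.
device.queue.writeBuffer(viewParamsBuffer, 0, projView);
renderPass.setBindGroup(0, viewParamsBindGroup);
```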


From 0 to glTF with WebGPU: The First Triangle - Updated for Chrome 113 Release

April 10, 2023

This tutorial is an updated version of my previous one, with the code listings revised to match the finalized WebGPU API. If you’ve read the previous version of this tutorial, you can skim through the code listings to get up to date. For an easy way to get started, I recommend grabbing my WebGPU + webpack starter template, which includes the code from this tutorial. You can start by deleting the code there and rewriting it following this tutorial, or follow along in the code as you read this page. The code for the blog series is also available on GitHub.

WebGPU is a modern graphics API for the web, developed by the major browser vendors. Compared to WebGL, WebGPU provides more direct control over the GPU, allowing applications to leverage the hardware more efficiently, similar to Vulkan and DirectX 12. WebGPU also exposes GPU capabilities not available in WebGL, such as compute shaders and storage buffers, enabling powerful GPU compute applications to run on the web. As with the switch from OpenGL to Vulkan, WebGPU exposes more complexity to the user than WebGL, though the API strikes a good balance between complexity and usability, and overall is quite nice to work with. In this series, we’ll learn the key aspects of WebGPU from the ground up, with the goal of going from zero to a basic glTF model renderer. This post marks our first step on that journey, where we’ll set up a WebGPU context and get a triangle on the screen.
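
As a taste of that setup step, here’s a minimal sketch of initializing WebGPU and configuring a canvas for rendering. It assumes it runs inside an async function and that the page has a canvas element with the id "webgpu-canvas".

```js
if (!navigator.gpu) {
    throw Error("WebGPU is not supported by this browser");
}
// Request an adapter (the physical GPU) and a device (our logical
// connection to it, used to create all other WebGPU objects).
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

// Configure the canvas's WebGPU context so render passes can target it.
const canvas = document.getElementById("webgpu-canvas");
const context = canvas.getContext("webgpu");
context.configure({
    device: device,
    format: navigator.gpu.getPreferredCanvasFormat(),
    usage: GPUTextureUsage.RENDER_ATTACHMENT
});
```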


A Dive into Ray Tracing Performance on the Apple M1

December 20, 2020

The Apple M1, available in the MacBook Air, MacBook Pro 13”, and Mac Mini, has been the focus of a ton of benchmarking writeups and blog posts. The overall performance, and especially the performance per watt, that Apple has achieved with the chip is very impressive. As a ray tracing person, what caught my eye most was the performance AnandTech reported in their Cinebench benchmarks: the scores were 1.6x higher than what I got on my old Haswell desktop and 2x higher than my new Tiger Lake laptop! I had also been interested in trying out the new ray tracing API for Metal, announced at WWDC this year, which bears some resemblance to the DirectX, Vulkan, and OptiX GPU ray tracing APIs. So, I decided to pick up a Mac Mini to do some testing with my own interactive path tracing project, ChameleonRT, and to get it running on the new Metal ray tracing API. In this post, we’ll take a look at the new Metal ray tracing API to see how it lines up with DirectX, Vulkan, OptiX, and Embree, then make some fair (and some extremely unfair) ray tracing performance comparisons against the M1.
