We are optimizing WebGL shaders for the Intel GMA 950 chipset, roughly the slowest WebGL-capable hardware we care about. Unfortunately, it’s also a fairly common chipset. On the plus side, if we run well on the GMA 950, we should run well pretty much anywhere. 🙂
When you’re writing GLSL in WebGL on Windows, your code is three layers of abstraction away from what actually runs on the GPU. First, ANGLE translates your GLSL into HLSL. Then, D3DX compiles the HLSL into optimized shader assembly bytecode. That shader bytecode is given to the driver, where it’s translated into hardware instructions for execution on the silicon.
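Before breaking out heavier tools, note that the first layer is sometimes visible straight from JavaScript: the WEBGL_debug_shaders extension can return ANGLE’s translated source for a compiled shader, though browsers often restrict or disable it. A minimal sketch (the toy shader source is just a placeholder):

// Sketch: ask ANGLE for the translated source of a compiled shader.
// The extension may be unavailable or restricted depending on the browser.
const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl")!;

const shader = gl.createShader(gl.VERTEX_SHADER)!;
gl.shaderSource(shader, "attribute vec4 vBoneIndices; void main() { gl_Position = vBoneIndices; }");
gl.compileShader(shader);

const debugShaders = gl.getExtension("WEBGL_debug_shaders");
if (debugShaders) {
  console.log(debugShaders.getTranslatedShaderSource(shader));
}

That only shows the output of the first translation step, though.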
Ideally, when writing GLSL, we’d like to see at least the resulting D3D shader assembly.
With a great deal of effort and luck, I finally succeeded in extracting Direct3D shader instructions from WebGL. I installed NVIDIA Nsight and Visual Studio 2013 in Boot Camp on my Mac. To use Nsight, you need to create a dummy Visual Studio project; without one, it won’t launch at all. Once you have your dummy project open, configure Nsight (via Nsight User Properties under Project Settings) to launch Firefox.exe instead of the project’s own executable. Firefox is easier to debug than Chrome because it runs in a single process.
If you’re lucky — and I’m not sure why it’s so unreliable — the Nsight UI will show up inside Firefox. If it doesn’t show up, try launching it again. Move the window around, open various menus. Eventually you should have the ability to capture a frame. If you try to capture a frame before the Nsight UI shows up in Firefox, Firefox will hang.
Once you capture a frame, find an interesting draw call, make sure the geometry is from your WebGL application, and then begin looking at the shader. (Note: again, this tool is unreliable. Sometimes you get to look at the HLSL that ANGLE produced, which you can compile and disassemble with fxc.exe, and sometimes you get the raw shader assembly.)
Anyway, check this out. We’re going to focus on the array lookup in the following bone skinning GLSL:
attribute vec4 vBoneIndices;
uniform vec4 uBones[3 * 68];

ivec3 i = ivec3(vBoneIndices.xyz) * 3;
vec4 row0 = uBones[i.x];
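For context, each bone here presumably occupies three consecutive vec4 elements of uBones (the three rows of a 3x4 bone matrix), which is why the index is multiplied by three. A sketch of how such an array might be filled from JavaScript, assuming an existing context and linked program (names are illustrative, not code from the original app):

// Upload 68 bones, three vec4 rows each, into uBones[0..203].
declare const gl: WebGLRenderingContext; // existing context (assumed)
declare const program: WebGLProgram;     // linked skinning program (assumed)

const NUM_BONES = 68;
const boneRows = new Float32Array(NUM_BONES * 3 * 4); // 3 rows of 4 floats per bone
// ... write each bone's 3x4 matrix into boneRows, one row per vec4 ...

gl.useProgram(program);
const uBonesLocation = gl.getUniformLocation(program, "uBones[0]");
gl.uniform4fv(uBonesLocation, boneRows);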
ANGLE translates the array lookup into HLSL. Note the added bounds check for security: the index is clamped to 203.0, the last valid element of uBones (3 * 68 - 1). (Why? D3D claims it already bounds-checks array accesses.)
int3 _i = (ivec3(_vBoneIndices) * 3);
float4 _row0 = _uBones[int(clamp(float(_i.x), 0.0, 203.0))];
This generates the following shader instructions:
# NOTE: v0 is vBoneIndices
# The actual load isn't shown here. This is just index calculation.

def c0, 2.00787401, -1, 3, 0
def c218, 203, 2, 0.5, 0

# r1 = v0, truncated towards zero
slt r1.xyz, v0, -v0
frc r2.xyz, v0
add r3.xyz, -r2, v0
slt r2.xyz, -r2, r2
mad r1.xyz, r1, r2, r3

mul r2.xyz, r1, c0.z # times three

# clamp
max r2.xyz, r2, c0.w
min r2.xyz, r2, c218.x

# get ready to load, using a0.x as index into uBones
mova a0.x, r2.y
That blob of instructions that implements truncation towards zero? Let’s decode it.
r1.xyz = (v0 < 0) ? 1 : 0
r2.xyz = v0 - floor(v0)
r3.xyz = v0 - r2
r2.xyz = (-r2 < r2) ? 1 : 0
r1.xyz = r1 * r2 + r3
Simplified further:
r1.xyz = (v0 < 0) ? 1 : 0
r2.xyz = (floor(v0) < v0) ? 1 : 0
r1.xyz = (r1 * r2) + floor(v0)
In short, r1 = floor(v0), UNLESS v0 < 0 and floor(v0) < v0, in which case r1 = floor(v0) + 1.
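To sanity-check that reading, here is the same sequence modeled per component in plain TypeScript (just an illustration of the decoded logic, not anything a compiler emits):

// Scalar model of the slt/frc/add/slt/mad sequence above.
function truncateTowardZero(v: number): number {
  const isNegative = v < -v ? 1 : 0;          // slt r1, v0, -v0
  const frac = v - Math.floor(v);             // frc r2, v0
  const floored = v - frac;                   // add r3, -r2, v0
  const hasFraction = -frac < frac ? 1 : 0;   // slt r2, -r2, r2
  return isNegative * hasFraction + floored;  // mad r1, r1, r2, r3
}

console.log(truncateTowardZero(2.75));   // 2
console.log(truncateTowardZero(-2.75));  // -2 (floor alone would give -3)

Which is exactly what the GLSL int() cast requires: truncation towards zero.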
That’s a lot of instructions just to calculate an array index. After the index is calculated, it’s multiplied by three, clamped to the array boundaries (securitah!), and loaded into the address register.
Can we convince ANGLE and the HLSL compiler that the array index will never be negative, and thereby avoid a bunch of generated code? (The compiler should already know this, since it clamps the index later anyway, but whatever.) Let’s tweak the GLSL a bit.
ivec3 i = ivec3(max(vec3(0), vBoneIndices.xyz)) * 3;
vec4 row0 = uBones[i.x];
Now the instruction stream is substantially reduced!
def c0, 2.00787401, -1, 0, 3
def c218, 203, 1, 2, 0.5

# clamp v0 positive
max r1, c0.z, v0.xyzx

# r1 = floor(r1)
frc r2, r1.wyzw
add r1, r1, -r2

mul r1, r1, c0.w # times three

# bound-check against array
min r2.xyz, r1.wyzw, c218.x

mova a0.x, r2.y
Clamping the bone indices against zero before converting them to integers lets the shader optimizer eliminate several instructions.
Can we get rid of the two floor instructions? We know that the mova instruction rounds to the nearest integer when converting a float to an index. Given that knowledge, I tried to eliminate the floor by making my GLSL match mova semantics, but the HLSL compiler didn’t seem smart enough to elide the two floor instructions. If you can figure this out, please let me know!
Either way, I wanted to demonstrate that, by reading the generated Direct3D shader code from your WebGL shaders, you may find small changes that result in big wins.