
Hands-On with Slang: A Practical Tutorial for Graphics Teams

The inaugural Khronos Shading Languages Symposium takes place February 12-13 in San Diego. Two back-to-back sessions will cover Slang's differentiable features (Shannon Woods, NVIDIA) and practical GLSL-to-Slang porting (Chris Hebert, NVIDIA). This tutorial gives you hands-on experience before you watch those talks, or serves as a practical companion if you can't attend.


If you've been following the shader language ecosystem, you've likely heard about Slang. Developed by NVIDIA and now governed by Khronos, Slang promises single-source authoring across Direct3D, Vulkan, Metal, WebGPU, and even GPU compute via Python.

But what does working with Slang actually look like?

This tutorial walks you through the basics: writing a simple Slang shader, compiling it to multiple backends, and using SlangPy to run automatic differentiation on the GPU. By the end, you'll have working code and a concrete sense of what Slang adoption involves.

Prerequisites

You'll need:

  • Vulkan SDK 1.3.296.0 or later (the first release to bundle slangc, the Slang compiler)
  • Python 3.8+ with slangpy, numpy, and matplotlib
  • A GPU with Vulkan support

Set up a virtual environment and install dependencies:

# Create a virtual environment (macOS / Linux)
python3 -m venv venv

# Activate the virtual environment (macOS / Linux bash or zsh)
source venv/bin/activate

# Install required packages into the virtual environment
pip install slangpy numpy matplotlib

Using a virtual environment keeps dependencies isolated and makes it clear what packages each project requires.

Part 1: Write a Slang Shader

Let's start with a simple differentiable function: a quadratic polynomial. This is deliberately minimal. We want to focus on the mechanics of Slang, not the math.

Create a file called example.slang:

// example.slang
// A differentiable polynomial: y = a*x^2 + b*x + c

[Differentiable]
float polynomial(float a, float b, float c, float x)
{
    return a * x * x + b * x + c;
}

// Forward-mode derivative helper: dy/dx for the quadratic
float polynomial_d(float a, float b, float c, float x)
{
    // Treat a, b, c as constants (zero derivative)
    let a_pair = diffPair(a, 0.0f);
    let b_pair = diffPair(b, 0.0f);
    let c_pair = diffPair(c, 0.0f);

    // Treat x as the variable of interest, seed dx = 1
    let x_pair = diffPair(x, 1.0f);

    // fwd_diff(polynomial) now receives four DifferentialPair<float> arguments
    let result = fwd_diff(polynomial)(a_pair, b_pair, c_pair, x_pair);

    // result.p is y, result.d is dy/dx
    return result.d;
}

// Backward-mode derivative helper: dy/dx
float polynomial_d_bwd(float a, float b, float c, float x)
{
    // All differentiable inputs become DifferentialPair<T>.
    // Use 'var' so bwd_diff can write gradients into the .d fields.
    var a_pair = diffPair(a, 0.0f);
    var b_pair = diffPair(b, 0.0f);
    var c_pair = diffPair(c, 0.0f);
    var x_pair = diffPair(x, 0.0f);

    // Seed dL/dy = 1, so dL/dx = dy/dx when L = y
    let dLdy = 1.0f;

    // This runs the synthesized backward pass of 'polynomial'
    bwd_diff(polynomial)(a_pair, b_pair, c_pair, x_pair, dLdy);

    // x_pair.d now holds dL/dx, which here is dy/dx
    return x_pair.d;
}

// Dummy compute entry to keep everything referenced
[shader("compute")]
[numthreads(1, 1, 1)]
void computeMain(
    uint3 tid : SV_DispatchThreadID,
    RWStructuredBuffer<float> outBuffer)
{
    // Only use thread (0,0,0) for our dummy work
    if (tid.x != 0 || tid.y != 0 || tid.z != 0)
        return;

    // Example evaluation
    outBuffer[0] = polynomial(1.0, -4.0, 3.0, 2.0);
    outBuffer[1] = polynomial_d(1.0, -4.0, 3.0, 2.0);
}

The [Differentiable] attribute tells Slang that this function can be differentiated. When you apply fwd_diff or bwd_diff, the compiler automatically synthesizes the corresponding forward-mode or reverse-mode derivative code for you.
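Before moving to the GPU, it's worth sanity-checking the math on the CPU. This standalone NumPy sketch (our own, not part of the shader) compares the analytic derivative dy/dx = 2ax + b against a central finite difference over the same grid the tutorial uses:

```python
import numpy as np

def polynomial(a, b, c, x):
    """CPU mirror of the Slang polynomial: y = a*x^2 + b*x + c."""
    return a * x * x + b * x + c

def polynomial_d_analytic(a, b, x):
    """Analytic derivative dy/dx = 2*a*x + b."""
    return 2.0 * a * x + b

# Same coefficients and grid as the shader example
a, b, c = 1.0, -4.0, 3.0
x = np.linspace(-1.0, 5.0, 200)

# Central finite difference approximates dy/dx
h = 1e-5
fd = (polynomial(a, b, c, x + h) - polynomial(a, b, c, x - h)) / (2 * h)

# For a quadratic the central difference is exact up to rounding error
err = np.max(np.abs(fd - polynomial_d_analytic(a, b, x)))
print("max finite-difference error:", err)
```

Whatever the GPU autodiff returns later should match this analytic derivative to float32 precision.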

Part 2: Compile to Multiple Backends

With slangc (included in the Vulkan SDK), you can target different APIs from the same source.

Compile to Metal:

slangc example.slang -target metal -entry computeMain -stage compute -o example.metal

Compile to HLSL:

slangc example.slang -target hlsl -entry computeMain -stage compute -o example.hlsl

Compile to WGSL (for WebGPU):

slangc example.slang -target wgsl -entry computeMain -stage compute -o example.wgsl

Compile to SPIR-V (for Vulkan):

slangc example.slang -target spirv -entry computeMain -stage compute -o example.spv

This is the "write once, compile everywhere" value proposition in action. The same shader source produces valid output for each platform.
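If you target several backends regularly, the four invocations above are easy to script. Here is a minimal Python sketch (helper names are our own; it assumes slangc from the Vulkan SDK is on your PATH):

```python
import subprocess
from pathlib import Path

# Backend target name -> output file extension, mirroring the commands above
TARGETS = {"metal": "metal", "hlsl": "hlsl", "wgsl": "wgsl", "spirv": "spv"}

def slangc_command(source, target, ext, entry="computeMain"):
    """Build one slangc command line for a given backend."""
    out = str(Path(source).with_suffix("." + ext))
    return ["slangc", source, "-target", target,
            "-entry", entry, "-stage", "compute", "-o", out]

def compile_all(source="example.slang"):
    """Run slangc once per backend; stops on the first failure."""
    for target, ext in TARGETS.items():
        subprocess.run(slangc_command(source, target, ext), check=True)

# Preview the command lines without running them
for target, ext in TARGETS.items():
    print(" ".join(slangc_command("example.slang", target, ext)))
```

Call compile_all() from your build script to regenerate all four artifacts whenever the Slang source changes.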

Troubleshooting (macOS):

If compilation to Metal fails with an error about metallib not found, make sure Xcode command line tools are installed and up to date:

xcode-select --install

Part 3: Use SlangPy for GPU-Accelerated Differentiation

Now let's run this shader from Python using SlangPy. This demonstrates how Slang bridges the gap between graphics shaders and machine-learning workflows.

Create run_slang.py:

import pathlib
import numpy as np
import matplotlib.pyplot as plt
import slangpy as spy

def main():
    # 1. Create device and load module
    device = spy.create_device(
        include_paths=[pathlib.Path(__file__).parent.absolute()],
    )
    module = spy.Module.load_from_file(device, "example.slang")

    # 2. Quadratic parameters: y = a*x^2 + b*x + c
    a = 1.0
    b = -4.0
    c = 3.0
    x_star_analytic = -b / (2.0 * a)  # Analytic minimum at x = 2.0

    # 3. Sample grid in x
    x_np = np.linspace(-1.0, 5.0, 200, dtype=np.float32)
    x = spy.Tensor.numpy(device, x_np)

    # 4. Forward pass: y(x)
    y: spy.Tensor = module.polynomial(a=a, b=b, c=c, x=x, _result="tensor")
    y_np = y.to_numpy()

    # 5. Derivatives on GPU

    # Forward-mode helper
    dx_fwd: spy.Tensor = module.polynomial_d(a=a, b=b, c=c, x=x, _result="tensor")
    dx_fwd_np = dx_fwd.to_numpy()

    # Backward-mode helper
    dx_bwd: spy.Tensor = module.polynomial_d_bwd(a=a, b=b, c=c, x=x, _result="tensor")
    dx_bwd_np = dx_bwd.to_numpy()

    # 6. Check that forward and backward agree
    diff = np.max(np.abs(dx_fwd_np - dx_bwd_np))
    print("max |dy/dx (fwd) - dy/dx (bwd)| =", diff)

    # diff should be zero. Forward and backward differentiation are two
    # ways of applying the chain rule through the same computation graph,
    # and for a scalar-to-scalar function (x -> y) they yield identical
    # values. This is not the case for vector-to-scalar functions.

    # 7. Use either derivative to locate the numeric minimum
    idx_min = int(np.argmin(np.abs(dx_fwd_np)))
    x_star_num = float(x_np[idx_min])
    y_star_num = float(y_np[idx_min])

    print("Analytic minimum x* =", x_star_analytic)
    print("Numeric minimum x* =", x_star_num)
    print("Numeric y(x*) =", y_star_num)

    # 8. Plot
    fig, (ax_f, ax_d) = plt.subplots(2, 1, figsize=(6, 8), sharex=True)

    # Parabola
    ax_f.plot(x_np, y_np, label="y = a x^2 + b x + c")
    ax_f.scatter([x_star_num], [y_star_num], marker="x", s=80, label="numeric minimum")
    ax_f.axvline(x_star_analytic, linestyle="--", label="analytic minimum")
    ax_f.set_ylabel("y")
    ax_f.legend()
    ax_f.grid(True, alpha=0.3)

    # Derivative
    ax_d.plot(x_np, dx_fwd_np, label="dy/dx (fwd_diff)")
    ax_d.axhline(0, color="red", linestyle="--", alpha=0.5)
    ax_d.axvline(x_star_num, linestyle=":", label="numeric minimum x*")
    ax_d.set_xlabel("x")
    ax_d.set_ylabel("dy/dx")
    ax_d.legend()
    ax_d.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig("slang_autodiff_result.png", dpi=150)
    print("Plot saved to slang_autodiff_result.png")

if __name__ == "__main__":
    main()

Run it:

python run_slang.py

Expected output:

max |dy/dx (fwd) - dy/dx (bwd)| = 0.0
Analytic minimum x* = 2.0
Numeric minimum x* = 1.9849246740341187
Numeric y(x*) = -0.9997728
Plot saved to slang_autodiff_result.png

The numeric minimum is slightly off from the analytic value because we're sampling a discrete grid. The key point: we computed the derivative on the GPU using Slang's automatic differentiation, not a hand-written backward pass.

Part 4: A Real-World Example - Autodesk Aurora

For a more substantial example, look at Autodesk's Aurora path tracer, which uses Slang for cross-platform ray tracing across Direct3D, Vulkan, and Metal.

The Aurora repository includes production shaders you can study.

If you've been following our coverage of OpenPBR and MaterialX, you'll recognize Standard Surface as the predecessor to OpenPBR. Aurora's Slang implementation demonstrates how a portable shader language can implement these material standards across multiple graphics APIs.

This connection is becoming more direct: MaterialX recently added a dedicated Slang shader generator, positioning Slang as a bridge for open-standard material interoperability. For teams investing in USD + MaterialX + OpenPBR pipelines, Slang provides a path to consistent shader implementations across renderers.

Want to see Slang in a full Vulkan pipeline?

Check out Sascha Willems' How to Vulkan in 2026, which uses Slang throughout and demonstrates how much Vulkan development has improved since 2016.

What You've Learned

In this tutorial you:

  1. Wrote a differentiable Slang shader using the [Differentiable] attribute and fwd_diff/bwd_diff constructs
  2. Compiled to multiple backends (Metal, HLSL, WGSL, SPIR-V) from a single source
  3. Ran GPU-accelerated differentiation from Python using SlangPy
  4. Verified forward and backward modes agree for a scalar-to-scalar function

This is the foundation for more advanced workflows: neural rendering, learned materials, differentiable path tracing, and gradient-based optimization of rendering parameters.
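As a taste of that last workflow, here is a CPU-only sketch (our own) of gradient descent on the same quadratic. The analytic derivative stands in for the gradient a Slang bwd_diff kernel would return; the update rule is identical either way:

```python
def polynomial(a, b, c, x):
    """y = a*x^2 + b*x + c, mirroring the Slang shader."""
    return a * x * x + b * x + c

def grad(a, b, x):
    # Stand-in for the gradient a Slang bwd_diff kernel would produce
    return 2.0 * a * x + b

a, b, c = 1.0, -4.0, 3.0
x = -1.0            # starting point
lr = 0.1            # learning rate

# Plain gradient descent: x <- x - lr * dy/dx
for step in range(100):
    x -= lr * grad(a, b, x)

print("optimized x =", x)                 # converges toward the minimum at 2.0
print("y(x) =", polynomial(a, b, c, x))   # converges toward -1.0
```

In a real differentiable-rendering loop, grad would be replaced by a SlangPy call into the backward pass, and x by thousands of material or scene parameters; the optimizer logic stays this simple.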

Next Steps

If you're evaluating Slang for your team:

  1. Try the Slang Playground: shader-slang.com/playground lets you experiment without installing anything
  2. Review the maturity table: Not all features are at the same stability level. Check github.com/shader-slang/slang/blob/master/docs/target-compatibility.md
  3. Watch the symposium talks: Shannon Woods on differentiable features and Chris Hebert on practical porting will give you deeper context
  4. Start with a subset: Pick a representative shader, compile it with Slang, and compare outputs

How 4D Pipeline Can Help

With over a decade of cross-platform graphics engineering experience, our team can help you evaluate, integrate, and deploy Slang within your rendering or compute stack. We've shipped pipelines across Unity, Unreal, WebGL, and custom Vulkan backends, and maintained large cross-API codebases.

Whether your goal is to unify shader authoring, modernize your compiler infrastructure, or explore differentiable rendering and ML integration, we can help you translate Slang's technical promise into measurable production gains.


This article is a companion to our earlier piece, Entering the Slang Ecosystem: A Newcomer's Guide for Graphics Teams, which covers the strategic and business case for Slang adoption.