
Entering the Slang Ecosystem: A Newcomer's Guide for Graphics Teams

Every time your team ships to a new platform, they rewrite shaders. Every API update means another round of testing and fixes. The result is duplicated engineering effort, slower releases, and a codebase that grows more fragile with each target you add.

Slang, developed by NVIDIA and now governed by Khronos, is built to eliminate that duplication. Valve migrated the entire shader codebase of Source 2, the engine behind Dota 2, Half-Life: Alyx, and Counter-Strike 2, by changing ten lines of code. Autodesk runs its cross-platform path tracer on it. One authoring language. One compiler. Every major graphics API.

This guide explains what Slang is, when adoption makes sense, and what it takes to integrate.

What is Slang?

Slang is a shading language and compiler infrastructure that allows developers to author GPU code once and target multiple graphics and compute APIs. According to its GitHub project, it is "designed to enable real-time graphics developers to work with large-scale, high-performance shader codebases in a modular and extensible fashion."

The Khronos press release describes Slang as "an open-source shading language and compiler" backed by NVIDIA, with fifteen years of research and development behind it.

The key idea is single-source authoring (write once) and portable deployment across HLSL, SPIR-V, MSL, WGSL, and CUDA/CPU compute. Crucially, Slang is a superset of HLSL, meaning existing HLSL code can be ingested directly with minimal modification. This is why Valve's migration required only ten lines of change. Slang includes both the language and the compiler.

A note on portability: Slang enables you to write shaders once and target many backends, but it does not remove the need to understand your target hardware. Each API and GPU generation exposes different capabilities, feature levels, and driver behaviors. When deploying across Direct3D, Vulkan, Metal, or WebGPU, always verify feature support. Portability means the same code can compile everywhere, not that it will run identically everywhere.
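As a concrete sketch of this workflow, the slangc command-line compiler can emit code for several backends from a single source file. The -entry, -stage, -target, and -o flags below are real slangc options, but the file and entry-point names are hypothetical:

```
# Compile one hypothetical Slang source to four backends with slangc.
slangc hello.slang -entry computeMain -stage compute -target spirv -o hello.spv
slangc hello.slang -entry computeMain -stage compute -target hlsl  -o hello.hlsl
slangc hello.slang -entry computeMain -stage compute -target metal -o hello.metal
slangc hello.slang -entry computeMain -stage compute -target wgsl  -o hello.wgsl
```

Verifying the generated output on each API and driver remains the team's responsibility, for the reasons noted above.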


Why Was Slang Created?

The shader ecosystem of the last decade has shown increasing signs of strain. At Vulkanised 2024, Theresa Foley from NVIDIA outlined several pressures:

Shader languages originally designed for programs of a few dozen lines now carry codebases of tens of thousands. API evolution (Vulkan extensions, SPIR-V changes) moves faster than language fundamentals can adapt. Variant explosion produces large numbers of shader permutations across platforms, features, and content. And the rise of machine learning and differentiable rendering exposes the limits of traditional shading languages for emerging workflows.

Slang addresses these challenges through modern language design, modular architecture, first-class support for differentiation, and cross-platform code generation.

Design Goals and Features

Modular, scalable shader codebases: Features like modules, generics, and interfaces allow clearer organisation and separation of concerns, similar to modern general-purpose languages. This makes code easier to reason about and maintain.
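A small illustrative sketch of what these features look like in Slang (names are hypothetical; exact syntax may vary by Slang version):

```
// lighting.slang -- a hypothetical module.
module lighting;

// An interface declares a capability, much as in modern general-purpose languages.
interface IBRDF
{
    float3 eval(float3 wi, float3 wo);
}

// A concrete material type conforming to the interface.
struct Lambertian : IBRDF
{
    float3 albedo;
    float3 eval(float3 wi, float3 wo)
    {
        return albedo / 3.14159265;
    }
}

// A generic function constrained to any IBRDF implementation,
// letting one shading routine serve many material types.
float3 shade<B : IBRDF>(B brdf, float3 wi, float3 wo)
{
    return brdf.eval(wi, wo);
}
```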

Portable deployment across APIs: The Slang compiler supports backends including HLSL (for Direct3D), SPIR-V (for Vulkan), MSL (for Metal), WGSL (for WebGPU), and compute targets like CUDA and CPU.

Incremental adoption: Existing HLSL and GLSL codebases can be ingested with minimal changes to benefit from the new infrastructure.

Performance parity: The language aims to offer modern type systems and abstractions without sacrificing GPU performance or platform-specific features.

Built-in automatic differentiation: Slang supports differentiation as a first-class language feature, enabling shaders and renderers to compute derivatives and integrate with machine-learning frameworks.

Developer tooling: Slang ships with the LunarG Vulkan SDK and integrates with the Khronos standards ecosystem.

Business Case

For graphics teams, the business motivations are compelling:

Reduced maintenance cost: Rather than maintaining independent shader compilers or translation layers for each API, teams use one unified infrastructure.

Faster platform coverage: Targeting a new API means authoring once and compiling for all supported backends, reducing duplication.

Low migration overhead: The Khronos press release states that Valve compiled the entire production Source 2 HLSL codebase with Slang while modifying only ten lines of code.

Developer productivity: Modularization and cleaner language abstractions lead to faster iteration, fewer bugs, and better code reuse.

Future-proofing for ML workflows: With automatic differentiation built in, studios exploring neural rendering, learned materials, or differentiable path tracers can adopt Slang without creating separate research pipelines.

Case Studies

Slang is still young but has had notable early adoption.

Valve: The Source 2 engine's entire HLSL codebase was compiled with Slang, requiring only around ten lines of change. This unified deployment across Direct3D and Vulkan for titles including Dota 2, Half-Life: Alyx, and Counter-Strike 2.

Autodesk: Autodesk's Aurora path tracer (see image below) uses Slang for single-source ray tracing across Direct3D, Vulkan, and Metal, simplifying cross-platform maintenance.

NVIDIA and Academia: Slang powers Omniverse, RTX Remix, and Portal RTX, and underpins differentiable rendering research developed with MIT, UCSD, and UW.


Screenshots of the Autodesk Telescope model rendered with Aurora. Model courtesy of Roberto Ziche.

The Intersection with Machine Learning

Historically, graphics and machine learning were distinct disciplines. Graphics engineers authored shaders in HLSL or GLSL to express deterministic rendering pipelines. Machine-learning practitioners worked with frameworks such as PyTorch and TensorFlow to train models through data-driven optimization. Today, these domains are converging. Neural rendering, differentiable path tracing, 3D Gaussian splatting, and learned materials all depend on integrating gradient-based learning into rendering workflows.

Until recently, bringing such methods into production rendering was impractical because differentiable pipelines required maintaining two shader implementations: one for the forward pass and one for its derivative. Any change to the forward shader demanded a corresponding update to the backward path, doubling maintenance effort and introducing risk of human error.

Slang eliminates that duplication through first-class automatic differentiation constructs. Marking existing shader functions as differentiable and invoking them through the fwd_diff and bwd_diff operators enables gradient-based optimization with minimal code changes. SlangPy exposes this functionality directly within Python, allowing rapid prototyping of shaders and compute kernels from a familiar scripting environment.
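A minimal sketch of this mechanism, based on the operators named above (illustrative only; exact stdlib details may differ by Slang version):

```
// A differentiable function: the compiler can synthesize its derivative code.
[Differentiable]
float square(float x)
{
    return x * x;
}

void example()
{
    // A DifferentialPair carries a primal value (.p) and a derivative (.d).
    var x = diffPair(3.0);

    // Backward mode: propagate an output gradient of 1.0 back to the input.
    bwd_diff(square)(x, 1.0);

    // x.d now holds d(x*x)/dx at x = 3, i.e. 6.0 -- no hand-written
    // backward shader to keep in sync with the forward one.
}
```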

From a strategic perspective, this convergence means renderers become trainable and invertible, graphics assets become learnable, and a unified codebase can serve both real-time rendering and training workflows.

WebGPU Relationship

Slang targets WGSL as one of its backends. The compiler supports WebGPU target code generation, which means graphics teams can author in Slang and deploy in browser or cloud-native environments.

For teams targeting WebGPU alongside native platforms, Slang provides one authoring language and multiple deployment targets.

When Slang May Not Be the Right Choice

No single technology is ideal for all scenarios:

If your deployment is browser-only (WebGPU and WGSL) and you have no native platforms or compute-intensive workflows, WGSL alone may suffice and migration cost may outweigh gains.

If your shader pipeline is mature, stable, and not undergoing change or growth, the benefit of switching may be limited.

If your team works at the ISA level (PTX, SASS) for a single platform and performance margins are the priority, a higher-level abstraction may introduce unwanted risk.

If your organisation prohibits external compiler toolchains or places strict constraints on third-party open-source components, you may prefer in-house or vendor-specific solutions.

A note on maturity: The Slang project maintains a feature maturity table that categorises each capability according to its development and stability status. Before integrating Slang into a production pipeline, review this table to confirm that the specific features you depend on are stable. The documentation also provides detailed notes on target-specific behaviours for SPIR-V, Metal, and WGSL.

Outlook

Looking forward, Slang is well positioned for large, evolving shader codebases where modularisation and cross-target compilation help manage growth. It suits multi-platform engine support spanning Windows, console, mobile, and browser in one codebase. It enables ML-augmented rendering pipelines where differentiable shaders and compute-graphics fusion provide strategic advantage. And it supports WebGPU and native convergence through a unified authoring model.

To make the most of Slang, graphics teams should consider an evaluation approach: start with a subset of your shader code, compile to existing targets and compare outputs, explore the differentiable features if ML workflows matter, and weigh migration cost against expected gains in iteration speed.

Conclusion

Slang is open source, governed by Khronos, and supported by an active, industry-wide community. It delivers a production-ready language and compiler infrastructure designed to unify GPU shader authoring and bridge the gap between graphics and machine learning workflows.

For graphics teams managing multi-API shader pipelines or investigating neural rendering, Slang offers a practical route to reduced integration cost, greater flexibility, and long-term portability. The next step is to select a representative shader set, compile it with Slang, verify correctness, and assess the overall migration effort.

How 4D Pipeline Can Help

With more than a decade of cross-platform graphics engineering experience, our team can help you evaluate, integrate, and deploy Slang within your rendering or compute stack. We have shipped pipelines across Unity, Unreal, WebGL, and custom Vulkan backends, and maintained large cross-API codebases. That practical background positions us to guide shader teams through incremental adoption: ingesting existing HLSL or GLSL sources, validating cross-compiled outputs, tuning for Metal and WebGPU targets, and establishing reliable CI build paths for Slang-based toolchains.

 

