At this year’s GDC event, Nvidia showed off an updated version of its Zorah demo from the GeForce RTX 50-series launch, with new controls that let viewers explore the scenes and see features such as RTX Mega Geometry in full action. In a panel discussion covering the demo and all things neural rendering, Nvidia confirmed one thing we all suspected and one thing that might surprise you: Zorah is 100% ray traced, and it’s faster this way than if it used rasterization.
For as long as I’ve been messing around with 3D graphics (almost 30 years), I’ve loved seeing GPU vendors release a cool standalone demo to showcase new rendering tricks or some fancy hardware feature. Over the years, they’ve somewhat fallen by the wayside, replaced by games as the best choice for showing off your new graphics card.
This isn’t to say that Nvidia’s Zorah demo isn’t visually impressive—it’s absolutely stunning when seen in real-time on a big OLED monitor—but the days of demos massively shifting the goalposts of what rendering can achieve are long gone. Zorah’s problem isn’t that it’s bad, it’s that the nearest reference points to it (e.g. games that use path tracing) are just as impressive.
Nvidia held a panel discussion at GDC 2025 to discuss its recent advances in GPU technology and graphics rendering, and one of the panellists dropped a little snippet that made me pay a lot more attention to what was being said (hey, I was jet-lagged to heck). John Spitzer, Nvidia’s VP of developer and performance technology, was enthusing about the Zorah demo:
“There’s no rasterization going on at all. This is all ray traced, including the primary rays. The amazing part is that it’s actually faster than rasterizing them, so it’s not done because it’s kind of cool to say that in the demo. It’s actually, in this case, the right thing to do.”
Given that Zorah is a showcase of Nvidia’s full suite of RTX neural rendering technologies, as well as RTX Mega Geometry, the fact that it’s a fully ray-traced demo makes total sense. However, the fact that it’s faster than rendering the same scene via traditional techniques perhaps marks the point at which we finally have the hardware to abandon rasterization altogether.
Well, not yet, as Zorah isn’t the speediest of demos, even on an RTX 5090, and it needs every performance trick that RTX neural rendering, RTX Mega Geometry, and DLSS 4 can bring to the table. That said, it does show what AI offers rendering: we’ve become familiar with it being used for upscaling and frame generation, but over the coming years we’ll see it leveraged more and more to deliver fancier graphics at playable frame rates.
And it does all of this through the power of approximation. RTX Neural Materials, for example, takes the long, complex shaders for hugely detailed materials and replaces them with a small neural network that represents how light interacts with the material, producing a good approximation of the original result much faster. Much like how AI can be used to interpolate an entire frame, it can now be used to approximate a specific shader.
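To make the idea concrete, here’s a toy sketch (not Nvidia’s actual implementation—the network size, training setup, and the Schlick Fresnel function standing in for an "expensive" material are all illustrative assumptions) of training a tiny neural network to mimic a shading function:

```python
# Toy illustration of the neural-material idea: fit a small MLP to an
# analytic shading term so it can be evaluated as a few matrix multiplies
# instead of running the original shader. All sizes/values are made up.
import numpy as np

rng = np.random.default_rng(0)

def fresnel_schlick(cos_theta, f0=0.04):
    # Stand-in for the "expensive" reference material shader.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Training data: viewing angles and the reference shader's output.
x = rng.uniform(0.0, 1.0, size=(2048, 1))
y = fresnel_schlick(x)

# One-hidden-layer MLP: 1 -> 16 -> 1 with ReLU activations.
w1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(3000):
    h = np.maximum(x @ w1 + b1, 0.0)          # forward pass
    pred = h @ w2 + b2
    err = pred - y
    loss = float(np.mean(err ** 2))
    # Backpropagate through both layers (plain gradient descent).
    g_pred = 2 * err / len(x)
    g_w2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = (g_pred @ w2.T) * (h > 0)
    g_w1 = x.T @ g_h; g_b1 = g_h.sum(0)
    w2 -= lr * g_w2; b2 -= lr * g_b2
    w1 -= lr * g_w1; b1 -= lr * g_b1

print(f"final approximation error (MSE): {loss:.5f}")
```

The real technology works on vastly more complex, multi-layered materials, but the principle is the same: a compact learned function substitutes for a long shader, trading a tiny loss in accuracy for speed.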
RTX Neural Radiance Cache does a similar thing: it applies a neural network to the result of a few rays bouncing once or twice around a scene, then infers what the end result of hundreds or thousands of bounces would be.
Of course, as with all approximations, all of this AI rendering stuff isn’t perfect and arguably may never be, but neither are the rasterization techniques that we’re all familiar with. It just needs to be good enough that you can’t tell while gaming.
Having a fully path-traced scene running at a playable frame rate is only possible because of AI’s strength at approximating stuff that follows set algorithms and physical laws.
And it’s not just Nvidia that’s going down this road: AMD and Intel didn’t add matrix cores to their GPUs just for upscaling or frame gen. Rendering has always been a game of approximation, of course, so all of this is just an evolution of how we turn glowing dots on a screen into worlds that make us believe they’re real.
Like it or not, AI really is the future of graphics.