DLSS 5 only takes 2D rendered frames and motion vectors as input, not 3D game engine data, confirms NVIDIA

The model does not have access to 3D geometry, depth buffers, or PBR material properties; all of these must be inferred, or outright guessed, by the AI.

Tech Reporter
TL;DR: NVIDIA's DLSS 5 processes only 2D frames and motion vectors, lacking access to 3D geometry or PBR data, causing AI to alter character details and ignore artistic intent. Despite developer controls, the technology risks generic results and temporal artifacts, indicating significant improvements are needed before release.

NVIDIA's latest DLSS 5 technology has been nothing if not controversial, with some hailing it as one of the most transformative leaps in real-time graphics, while others dismiss it as little more than 'AI slop'. In a conversation with YouTuber Daniel Owen, NVIDIA's Jacob Freeman revealed a lot more about the specifics of the technology, details that would otherwise be veiled behind layers of marketing language.

NVIDIA has now confirmed that DLSS 5 takes only a scene's 2D-rendered frame and its motion vectors as input. This stands in contrast to marketing claims that the technology is 'anchored to source 3D content,' a phrase that suggested a deeper understanding of the game engine. In reality, as the model sits at the end of the graphics pipeline, it only sees the 2D frame and remains blind to the 3D geometry of objects.
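To make the claim concrete, the per-frame input NVIDIA describes can be sketched as a simple data structure. This is an illustrative mock-up, not NVIDIA's actual API: the point is that there is simply no field for depth, geometry, or material data.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical sketch (names are illustrative, not NVIDIA's SDK): per NVIDIA's
# description, the model's entire per-frame input is a 2D color frame plus
# per-pixel motion vectors. Note the absence of depth or geometry buffers.
@dataclass
class DLSSFrameInput:
    color: np.ndarray           # (H, W, 3) rendered RGB frame
    motion_vectors: np.ndarray  # (H, W, 2) screen-space motion per pixel

def make_input(height: int, width: int) -> DLSSFrameInput:
    """Build a dummy input of the shape the article describes."""
    return DLSSFrameInput(
        color=np.zeros((height, width, 3), dtype=np.float32),
        motion_vectors=np.zeros((height, width, 2), dtype=np.float32),
    )
```

Everything else the model "knows" about the scene has to be reconstructed from those two arrays alone.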

Likewise, the model lacks access to the PBR (Physically Based Rendering) properties provided by the engine. As a result, it must infer what a material is supposed to look like rather than reading those properties directly, relying on semantic labeling to identify clusters of pixels as eyes, cheeks, lips, and so on. If the training data is biased toward perfect faces, the model risks reinterpreting or 'yassifying' a character's face to a generic standard, rather than preserving the developer's original intent.
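The bias problem described above can be illustrated with a toy example (this is not NVIDIA's model, just a sketch of the failure mode): with no PBR data, a semantic label can only be mapped to an appearance prior learned from training data, and if that prior is biased, every face drifts toward the same generic look.

```python
# Toy illustration of training-data bias: the labels and priors below are
# invented for demonstration. The model maps a semantic label to whatever
# appearance dominated its training set, overriding the source art.
LEARNED_PRIORS = {
    "lips": "glossy, saturated red",   # hypothetical biased prior
    "skin": "smooth, blemish-free",
    "hair": "dense, uniform strands",
}

def reinterpret(label: str, original_appearance: str) -> str:
    """Return what the model renders: the learned prior wins when one exists."""
    return LEARNED_PRIORS.get(label, original_appearance)
```

A character's deliberately pale, chapped lips would come out "glossy, saturated red", while unlabeled surfaces pass through untouched.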

NVIDIA says "the underlying geometry is unchanged," but the YouTuber showed a clear case in which the AI was caught generating, or rather hallucinating, hair and facial details that simply do not exist in the original character models. It seems that while the original 3D models are indeed preserved, DLSS 5 simply paints a new image over those pixels. This is likely a result of the AI's training data, where it 'decided' that a realistic version of that hairstyle required hair in those specific areas.

The YouTuber also pointed out that, with DLSS 5 on, the character Grace Ashcroft from Resident Evil Requiem appeared with unintended makeup and altered facial features, completely ignoring the scene's grim context and the character's lore. When pressed on this loss of artistic intent, NVIDIA responded that developers will have access to an intensity slider to blend the AI's output with the original frame, color grading tools (gamma, saturation, and contrast), and the ability to exempt certain objects from the AI's generative pass entirely. Still, developers cannot make the model context-aware or fine-tune it to better fit their artistic style.
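The controls NVIDIA described can be sketched roughly as follows. Function and parameter names here are illustrative assumptions, not NVIDIA's API; the sketch just shows how an intensity slider and per-object exemption mask would combine.

```python
import numpy as np

# Hedged sketch of the developer controls NVIDIA described: an intensity
# slider blending AI output with the original frame, and a per-pixel mask
# exempting objects from the generative pass. Names are hypothetical.
def blend_output(original: np.ndarray, ai_frame: np.ndarray,
                 intensity: float, exempt_mask: np.ndarray) -> np.ndarray:
    """Blend AI output with the original frame.

    intensity:   0.0 = original frame only, 1.0 = full AI output.
    exempt_mask: boolean (H, W) array; True pixels keep the original frame.
    """
    blended = (1.0 - intensity) * original + intensity * ai_frame
    # Exempted pixels bypass the generative pass entirely.
    return np.where(exempt_mask[..., None], original, blended)
```

Note what this kind of control cannot do: it only mixes pixels after the fact, which is why it cannot restore context-awareness or match a specific art style.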


To many, the technology, in its current state, feels less like a rendering revolution and more like a glorified 'Snapchat beauty filter' that masks a game's true depth and detail in favor of what the model deems the perfect photorealistic image. Furthermore, the presence of ghosting artifacts in NVIDIA's own promotional footage suggests that without deeper, lower-level access to the 3D scene, temporal stability remains challenging. As it stands, DLSS 5 certainly has room for improvement. If NVIDIA hopes to avoid being branded with the 'AI slop' tag, it must address these concerns before the technology hits consumer GPUs this fall.

News Source: youtube.com


Hassam is a veteran tech journalist and editor with over eight years of experience embedded in the consumer electronics industry. His obsession with hardware began with childhood experiments involving semiconductors, a curiosity that evolved into a career dedicated to deconstructing the complex silicon that powers our world. From benchmarking PC internals to stress-testing flagship CPUs and GPUs, Hassam specializes in translating high-level engineering into deep, unbiased insights for the enthusiast community.
