NVIDIA's latest DLSS 5 technology has been nothing if not controversial, with some hailing it as one of the most transformative leaps in real-time graphics and others dismissing it as little more than 'AI slop'. In a conversation with YouTuber Daniel Owen, NVIDIA's Jacob Freeman revealed far more about the specifics of the technology: details that would otherwise be veiled behind layers of marketing language.
NVIDIA has now confirmed that DLSS 5 takes only a scene's final 2D-rendered frame and its motion vectors as input. This stands in contrast to marketing claims that the technology is 'anchored to source 3D content,' a phrase that suggested a deeper understanding of the game engine. In reality, because the model sits at the end of the graphics pipeline, it sees only the 2D frame and remains blind to the 3D geometry of objects.
Likewise, the model lacks access to the PBR (Physically Based Rendering) properties provided by the engine. Unable to read material properties directly, it must infer what a surface is supposed to look like, relying on semantic labeling to identify clusters of pixels as eyes, cheeks, lips, and so on. If the training data is biased toward perfect faces, the model risks reinterpreting, or 'yassifying,' a character's face toward a generic standard rather than preserving the developer's original intent.
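Based on what NVIDIA has confirmed, the model's inputs can be sketched roughly as follows. The names and structure here are purely illustrative, not NVIDIA's actual SDK; the point is the contrast between what the model receives and the engine data it never sees:

```python
import numpy as np

# Illustrative sketch (hypothetical names, not NVIDIA's API): per NVIDIA's
# own account, the model receives only the finished 2D frame and per-pixel
# screen-space motion vectors.
def dlss5_inputs(height, width):
    return {
        "color": np.zeros((height, width, 3), dtype=np.float32),   # final rendered RGB frame
        "motion": np.zeros((height, width, 2), dtype=np.float32),  # screen-space motion vectors
        # Deliberately absent -- the model never receives engine-side data such as:
        #   depth, normals, albedo, roughness, metallic, or any 3D geometry.
    }
```

Everything the model "knows" about materials or faces must therefore be inferred from those two buffers alone, which is exactly where the semantic-labeling guesswork comes in.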
NVIDIA says "the underlying geometry is unchanged," but the YouTuber showed clear cases in which the AI was caught generating, or rather hallucinating, hair and facial details that simply do not exist in the original character models. It seems that while the original 3D models are indeed preserved, DLSS 5 simply paints a new image over those pixels. This is likely a consequence of the AI's training data, from which it 'decided' that a realistic version of that hairstyle required hair in those specific areas.
The YouTuber also pointed out that, with DLSS 5 on, the character Grace Ashcroft from Resident Evil Requiem appeared with unintended makeup and altered facial features, completely ignoring the scene's grim context and the character's lore. When pressed on this loss of artistic intent, NVIDIA responded that developers will have access to an intensity slider that blends the AI's output with the original frame, color grading tools (gamma, saturation, and contrast), and the ability to exempt certain objects from the AI's generative pass entirely. Still, developers cannot make the model context-aware or fine-tune it to better fit their artistic style.
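The controls NVIDIA described amount to post-processing rather than true model control. A minimal sketch of how such an intensity slider and color-grading pass might work is below; the function name, parameters, and exact formulas are assumptions for illustration, not NVIDIA's implementation:

```python
import numpy as np

def blend_with_original(original, ai_output, intensity,
                        gamma=1.0, saturation=1.0, contrast=1.0):
    """Hypothetical sketch of the developer controls NVIDIA described.
    `original` and `ai_output` are float RGB arrays in [0, 1]."""
    # Intensity slider: 0.0 keeps the original frame, 1.0 is full AI output.
    out = (1.0 - intensity) * original + intensity * ai_output
    # Simple color grading (one common formulation of each operation).
    out = np.clip(out, 0.0, 1.0) ** (1.0 / gamma)      # gamma
    luma = out.mean(axis=-1, keepdims=True)
    out = luma + saturation * (out - luma)             # saturation
    out = 0.5 + contrast * (out - 0.5)                 # contrast
    return np.clip(out, 0.0, 1.0)
```

Note that every one of these knobs operates on finished pixels after the fact; none of them gives the model any awareness of scene context, which is precisely the limitation developers are left with.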

To many, the technology, in its current state, feels less like a rendering revolution and more like a glorified 'Snapchat beauty filter' that masks a game's true depth and detail in favor of what the model deems the perfect photorealistic image. Furthermore, the presence of ghosting artifacts in NVIDIA's own promotional footage suggests that without deeper, lower-level integration with the 3D scene, temporal stability remains a challenge. As it stands, DLSS 5 certainly has room for improvement. If NVIDIA hopes to avoid being branded with the 'AI slop' tag, it must address these concerns before the technology hits consumer GPUs this fall.




