DLSS 5 is on track for a Fall 2026 debut and replaces a game’s original textures with AI-infused versions to make them hyperreal. Or, out of one uncanny valley and into another!
I think you think you are making an argument, but the hair is the best example. The strands haven’t been changed a bit: all the unique curls are still there. Generative AI would have changed that big time. You might be getting confused by some of the shots, like the Starfield ones, which were taken from different frames (look at the person in the background).
I’ve actually just been corrected: Jensen did refer to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that you could say I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
I’d suggest taking a look at the comparisons on Nvidia’s website, because they really make it obvious how much this is changing things: https://www.nvidia.com/en-us/geforce/news/dlss5-breakthrough-in-visual-fidelity-for-games/
If we look at the one that’s in the article thumbnail, the blonde woman in Resident Evil, you can see it has made significant changes to her face: her eyes are bigger, the outside corners of them have been moved up, and her lips are much fuller.
Edit: also it straight up changes the skin colour of the black football player in an orange shirt, and that’s presumably meant to be a representation of a specific real person. It’s not even a lighting change either, because the shirt is the exact same colour. It’s only his skin that changes
Not only have I done that, I overlaid one image on top of the other in GIMP to test it out with the opacity slider. Her eyes are not bigger, and the corners have not been moved up. The overlay is perfect, and transitions perfectly. I think that what you are referring to is the optical illusion of the eyes appearing to get “bigger” when they get brighter, but if you, say, place them next to a fixed reference, it is clear they remain the same size.
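If anyone wants to sanity-check the overlay trick without GIMP: the layer-opacity slider is just a per-pixel alpha blend, which you can reproduce in a few lines of Python. The pixel values below are made-up stand-ins, not sampled from the actual screenshots:

```python
def blend(a, b, alpha):
    """What the layer-opacity slider does: a per-channel linear mix of two RGB pixels.
    alpha=0.0 shows only the bottom layer, alpha=1.0 only the top layer."""
    return tuple(round((1 - alpha) * ca + alpha * cb) for ca, cb in zip(a, b))

# Stand-in pixels; in practice you'd sample the same coordinate
# from the DLSS-off and DLSS-on screenshots.
off_pixel = (40, 40, 40)   # DLSS off
on_pixel = (80, 80, 80)    # DLSS on

# Sliding the "slider" from 0 to 1 walks the pixel from one layer to the other.
for alpha in (0.0, 0.5, 1.0):
    print(alpha, blend(off_pixel, on_pixel, alpha))
```

If the features line up, the blend transitions smoothly at every alpha; if a feature has actually moved, you get visible ghosting partway through the fade.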
Regarding the football player, if you look at the entire scene, there’s a dark tone applied to everything, including the soccer ball. It seems to make dark scenes brighter and outdoor scenes darker. Having said that, I agree, the filter does exaggerate the skin color of the football player, but that’s what it alters: the lighting and material properties. There’s even a point where you can place the opacity slider where the transition is seamless enough that it appears to be the same shot of the face. To test whether this was the case, I put it into GIMP and, using just the brightness slider, tried to see whether I could make the colors match purely by changing the brightness - and I could.
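For what it’s worth, the brightness-slider test can be done numerically too: if one patch really is just a brightness-shifted copy of the other, a single multiplier should map every pixel onto its counterpart with near-zero residual. A rough sketch with toy values (not the actual screenshot data):

```python
def best_gain(src, dst):
    """Least-squares estimate of a single brightness multiplier mapping src onto dst."""
    num = sum(s * d for s, d in zip(src, dst))
    den = sum(s * s for s in src)
    return num / den

def residual(src, dst, g):
    """Worst per-pixel error left over after applying the gain g."""
    return max(abs(g * s - d) for s, d in zip(src, dst))

# Toy patches: dst is exactly src darkened by 20%, so a gain of 0.8
# explains the whole difference and the residual is ~0.
src = [200, 150, 100, 180, 120]
dst = [round(0.8 * v) for v in src]
g = best_gain(src, dst)
print(round(g, 3), residual(src, dst, g))
```

A near-zero residual means “brightness alone accounts for it”; a large residual would mean the pixels were actually redrawn, not just relit.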
What I actually found more interesting is that in every other example, even the clothing folds remained the same - this is the only example where the folds in the clothing seem to change. Looking at the background, there’s also some evidence it’s not the same frame. I doubt it’s from a material change; they’re probably just one frame apart.
Without using GIMP, you can also take the football player, any one of them, and zoom in close. Make a note of every feature in their face, because each is preserved, if exaggerated.
Here’s a screen recording of me doing exactly that and getting results that do not match what you’re saying
You are working with different frames, and you are also flickering between them as opposed to using the opacity slider, which makes it difficult to see how the brightness and material effects are being altered between the two. All you need to do is gradually shift the opacity of the top layer once you’ve aligned them. You are also working with the source images while I just snipped mine down and dirty; I’m going to try getting the source image of the side-by-side comparison from the same frame and see whether the higher definition makes a difference. I would make it a Streamable, but I have no experience doing that.
Yeah, just tried it out. The ones actually from the same frame are pretty low-res in comparison, but the high-res ones you are choosing are from different frames, so even if you align them using the pupil as a reference, zooming out shows just how uneven they are due to minor shifts in position. Unfortunately, that means having to resort to the lower-resolution alternative.
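The frame-mismatch problem is also easy to demonstrate in code: if two crops come from different frames, you can estimate the offset between them by brute force, and aligning at one reference point still leaves that offset everywhere else. A toy sketch on one-dimensional scanlines (the values are made up, not from the screenshots):

```python
def best_shift(a, b, max_shift=5):
    """Brute-force the integer offset that best aligns signal b to a,
    scored by sum of absolute differences over the overlapping window."""
    best = (float("inf"), 0)
    n = len(a)
    for s in range(-max_shift, max_shift + 1):
        cost = sum(abs(a[i] - b[i + s]) for i in range(max_shift, n - max_shift))
        best = min(best, (cost, s))
    return best[1]

# Toy scanline: frame_b is frame_a nudged 2 pixels to the right,
# the kind of drift an idle animation produces between frames.
frame_a = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 10, 50, 90, 50, 10, 0, 0, 0]
print(best_shift(frame_a, frame_b))  # 2
```

A nonzero best shift is the tell that you’re comparing different frames rather than the same frame with and without the filter.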
Smooth fades with the brightness upped for visibility: left eye, right eye, lips
Here are the source images for you: DLSS off and DLSS on
Streamable is just a video uploading site, you can put any video file on there for free (though it will be deleted after a while). I used OBS to screen-record, it’s free and fairly simple
Yep, got it to work (hardest part was the cropping): https://streamable.com/j0ryqe
Your images are coming from different frames. If you go to the YouTube link, you can see where they were copied from and how the idle animation distorts them. Unfortunately, they’ve only included the intro clip to the video as a side by side of the same frame. Here is your example, zoomed out - it was never going to match: https://imgur.com/a/vRu1Xxa
I mean, they’re the images that Nvidia chose to present as the comparison, but watching the video I do not see her eyes and lips growing like that in the idle animation
Imgur isn’t available in the UK, I’m afraid
With all due respect, I don’t think this shows what you think it shows. Here is that exact video downloaded, zoomed in, and brightened to clarify it: https://streamable.com/hpxx37
That’s ok, I can paste what you were trying to compare here:
I’m not seeing the relevance of your new video. This filter manipulates brightness and material at a pixel level, which my video shows at several points. At the level of zoom you are trying to show, there are still material differences being applied, like how light bounces off of the skin, eyes, and lips, and the filter is working over detail that, as I already warned you, the only frames that can be compared against each other are lacking.
My video already shows it applying well enough, but if you try to zoom in to the pixels of an image that does not have the quality to show what it’s departing from, and ignore what’s happening at the level of quality that can be made out, it can certainly be argued into a different story.
I think my example already does a decent job of showing that this isn’t just the typical image-generation AI, so I’m afraid we’ll have to disagree from here on out, as I don’t think either of us can make our case to the other any clearer. Regardless, if you are as interested in this as I am, it will be something true experts go over and point out when it gets released.