Reminder: Nvidia does not care about gamers and is not targeting gamers. They couldn't give a flying fuck if nobody ever bought a GeForce card again. This is them showcasing real-time genAI performance as an advertisement to other AI companies.
Useful for modifying live footage from, say, a security camera…
What could possibly go wrong.
Nvidia: “Those game devs are just wrong.”
If a fan released this as a shader injector, I think it would be celebrated.
Nvidia pushing it under their brand for upscaling is fuck-right-off territory.
If they’d talked to… anyone, beforehand, they’d know RE9’s graphics don’t need help. Show us how this affects unmodded Skyrim. Show us LA Noire actually looking like all the actors it stars. Real-time style transfer surely works the other way too; show us Doom Eternal as a cartoon. Make it a silly thing users can do, rather than yet another bullshit feature to bribe into new games and lord over AMD. Were the anti-competitive margins from CUDA not comfortable enough?
This is the same level of disrespect as taking someone else's artwork, putting it through some yassification filter, and then calling it "fixed".
DLSS in general just seems like shit, tbh. In every implementation I've seen, it's essentially necessary to enable it to get a decent framerate, but it makes every game look like blurry shit.
At least on poorly optimized games without DLSS, graphics are okay but sharp on low settings, and decent framerates are still possible. It seems that DLSS has enabled devs to do even shittier optimization, though, because DLSS will pick up the slack and enable higher framerates. So on DLSS-enabled games, the choice is no longer between high framerates with okay graphics and low framerates with great graphics; the choice is now low framerates with terrible graphics or a slideshow with great graphics.
All of the huge problems I've seen with it so far:

1. It completely changes the lighting so everything is perfectly lit like it was taken in a professional photo studio, even where it makes absolutely no sense, completely ruining the atmosphere the developers were intending to set.
2. It sometimes downright changes the looks of the characters so badly that they look like another person.
3. It creates a ton of distracting artefacts.
4. It makes everything look like it is of the same style.

Seriously, it's like they took the essence of the Half-Life 2 Cinematic Mod and made an AI that applies it to any game.
That’s too far.
MSAA, SMAA, temporal AA, heaven forbid FXAA, they all suck. So does resolution scaling; if you can’t run native, you take a massive hit.
I don’t want to go back to that world of juggling between them or suffering with AA off or hacked in. It’s easy to say “oh, just code it better,” but all these solutions are inefficient on modern hardware; go back, and you leave performance on the table.
DLSS/XeSS/FSR4 and Unreal's scaling are very convenient solutions. They antialias perfectly, and they scale to your monitor wonderfully. They're not universal, but they look fantastic as long as the hardware supports them and the base res and performance are alright.
Now, frame gen is too much of a mixed bag, and DLSS 5 as demoed is obviously too far.
I disagree with you, but I respect your opinion
> In every implementation I've seen, it's essentially necessary to enable it to get a decent framerate, but it makes every game look like blurry shit.
I mean, I've got a 4070 Super and rarely use DLSS even in 4K, and I get triple-digit fps…
There's a bunch of minor settings that eat up crazy resources. DLSS is only needed if your other settings are too high. Sometimes that trade-off can be worth it, but often not.
If you think you have no option but DLSS, you need to spend more time in settings.
A 4070 Super is an expensive and powerful card, though, so that's not a very good sample. DLSS 4 is more for lower-end cards, like a 4060, and only on games with bad optimization (which tend to use in-house engines, rather than something like UE5).
Hell, graphics haven't even improved all that much since my old 2070 days, and yet somehow it can't even run half of the new releases at 1440p. Some of that is those expensive special effects (which you can't always disable), but some of that is just really shit optimization.
I mean, it’s more expensive today than two years ago…
It's not like I was saying it was crap; it's a "1440p card" that can still easily do 4K if you change some settings.
> since my old 2070 days, and yet somehow it can't even run half of the new releases at 1440p.
An 8-year-old card won't run modern new releases at resolutions higher than it ran stuff when it was released?
Like, I'm pretty sure 1440p screens weren't even common in 2018; that card was made for 1080p.
It's just a weird spot to stop, generation-wise.
How many of our all-time favorite games even have photo-realistic graphics? Is this just the logical outgrowth, an endpoint, of a generation-long strategy to accomplish a goal that nobody really wants?
Plenty of games pursue photorealism - they just don’t brag about it like Kojima. I’d count every game that got as close as technology allowed, then changed just enough to dodge the uncanny valley. Halo Infinite is stylized; the original game is just old. LA Noire advertised its verisimilitude and now looks like any other seventh-gen title. People have been going “holy shit, it’s so real!” since, like, Night Driver.
The good ending from here is the end of that arms race.
Half the push for ballooning budgets and decade-long dev cycles has been escalating standards for what feels real-ish. RDR2 cost half a billion dollars and shipped with fifty gigabytes of visual assets. It already feels only as pretty as modded GTA V. But a filter like this presumably works fine on RDR… 1. A game that cost a lot less, took a fraction as long to come out, and desperately lacks several graphical features we now take for granted. If your graphical style is “like realism, but” then you can now jump straight into the uncanny valley from models someone banged out on a Friday afternoon.
In other words, 2027 kinda graphics, with 2007 kinda budgets.
What’s more likely to happen is that behemoth publishers will hire even more people to do everything the hard way, and then also fight this instant realism filter, so it only looks the way it already looked when they did things the hard way. Because nothing good is allowed to happen ever again.
Anybody got a good ootl summary of the controversy?
It's an AI-powered 'feature' that changes the details of facial features. These changes increase the level of detail to a higher resolution at the cost of: accurately portraying original lighting conditions (hair colour changes, shadows are misplaced), texture stability (a character's face changes randomly from scene to scene and is often unrecognizable from the original), and artistic intent (subjective: see comparisons to evaluate for yourself).
So that generally recognizable AI-slop appearance of high contrast and yellow lighting that those Coca-Cola ads are known for is coming to all your games, with the quality you've come to expect from sloppity slop slop, slop, at a cost in performance that almost certainly will require a new card (likely 2) that you can't buy because the irony eats itself.
tl;dr NVIDIA needs to l2 read the room.
Basically, Nvidia put out this video with much fanfare about DLSS 5 being the future. Pretty much everyone has been mocking how it changes lighting, art style, and character facial features, and introduces that weird AI blurriness in every instance shown.
lmao the comments in that video.
This one basically sums it up:
> Brave of them to turn the comments on.
dlss is AI slop
It’s wild how those who made this decision clearly think “hyper-realistic” is an improvement. Have they never heard of the Uncanny Valley? Have they ever actually played a video game? I’m guessing the answer is “no” to both.
I have spent more than enough time playing various games, and not once have I thought, “I wish the game I play to escape reality would better resemble reality.” Imagine the characters start looking like actual people you care about, and you’re in a game where you’re made to shoot them. Or what if one of the characters comes to resemble you, and now whatever that character says or does, it will be associated with you? And random strangers start to call out to you in public, treating you as if you’re actually that character?
Then of course, the Uncanny Valley itself is a treacherous pit to avoid. At a certain point in the journey toward hyper-realism, any minor flaw can become unsettling. That issue doesn’t happen with games that are clearly fictional renderings. There’s a reason many people consider realistic-looking porcelain dolls to be creepy, but they don’t feel the same way about a rag doll (except maybe for those with buttons for eyes. Thanks, Coraline.)
Well, this was a no-brainer. I don't know if anyone in the topic really believed game devs liked it. Executives and financing, though? They love it.
But… "leadership and executives love DLSS 5" sounds worse than "Game Devs Love DLSS 5".
The game devs' position gives a false sense of security of "oh, well, maybe it isn't actually a bad thing"… the other just sounds like cost saving.
Well, it’s the only way to make Starfield seem alright.
…Albeit only in stills or YT shorts.
I bet game devs don’t even want DLSS anyway. I certainly don’t. For me it’s native or nothing.
Using it for upscaling and frame interpolation was reasonable - that felt like a good use of AI to emulate higher performance for a lower cost. But using it as a filter over the entire vision of the game? Utterly insane, and it strongly smells of NVIDIA targeting other companies for more investment/sales, not gamers.