Uh huh…
First it’s optional, then it gets pushed more and more until it’s mandatory.
Nvidia has no history of forcing it on us. It’s always an option. They have a lot of options today.
Okay so ethics debate aside… it just looks bad? If they get this to work in real time, that’s technologically impressive, but it literally just looks like a “slop filter”.
- Overtuned contrast
- Overuse of back lighting
- Too many shiny and wet surfaces
- And the worst of all: faces get “yassified” to look like entirely different people - looking more like overly-edited supermodel photos than real people
Yes: hair and skin are notoriously difficult to render well in real time… but if you’re running a model like this, you can probably afford to calculate the SSS properly lol
Or idk. Maybe only run the slop filter on skin and hair. RTGI is already pretty much perfect these days if you do it right
RTGI is already pretty much perfect these days if you do it right
Ray Tracing the entire scene looks great. It’s also way more computationally expensive than upscaling.
DLSS is just a shortcut, and shortcuts have costs. I don’t like the image quality cost so I don’t use DLSS (XeSS looks better anyway) and so I just buy more powerful hardware. Someone on a low-end machine can’t simply enable raytracing and still have a playable game, DLSS gives them more options.
The Hogwarts Legacy demo was especially egregious. The mesh they applied made that teenager look like a thirty-year-old with the crap they did. It was absolutely horrible.
And yeah, even the Resident Evil screen: I know everyone is focused on the whole AI OnlyFans update, but just looking at the scene behind the character, everything was brighter, kind of ruining the moody, melancholy look and, I’m guessing, the intent of the original scene.
FYI, @TheObviousSolution@thebrainbin.org has been supporting this, with misinformation and toxic positivity, in every thread that has popped up. Likely a bot or a marketing aide.
Be aware that NV’s marketing is all over this, even on Lemmy.
One of the many aliases spamming positivity.
Yeah, I just downvoted and moved on. Thought about replying, still have it in draft, but figured it would be pointless given what the guy wrote originally, and then was proven right when the walls of text with nothing substantial came from it.
This is an alt of TheObviousSolution@lemmy.ca, yes. There’s another one, alts are not new.
supporting this in every thread that popped up
The support:

- NVIDIA is an abusive monopoly fueling international cartels partly responsible for the exaggerated PC prices.
- Requires high-end GPUs, meaning giving said monopoly more power.
- In actual gameplay, motion when the character talks creates this uncanny valley effect.
One of the many aliases spamming positivity.
… I just created this one. I think you just stated your real problem with my comments.
positivity.
I’m not completely negative about the technology.
misinformation
This was actually right, before I created the account, which is why I went through and corrected myself soon after. The purpose of this account is not to spread misinformation; it is to work on my mbin alts.
- Remember that Bethesda is owned by Microsoft, which wants to increase adoption of this technology for its own financial gain.
How would Microsoft benefit from this? I thought they were mainly in the LLM and user information hoarding business.
will all be under our artists’ control
Literally impossible. The entire point of the tech is that it’s autonomous, that it can “improve” things moment by moment. That is, by definition, outside of their control. Also, it had better be fucking optional, because only the 1% are playing games with dual 5090s. These fuckers are so out of touch.
Also,
This is a very early look
Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.
I’ve now read more about this and developers DO have a ton of control. They can choose what parts of the image to apply it to and with what intensity. So I guess it’s not literally impossible.
I don’t see why it wouldn’t work the same way as shaders. There’s just no way a developer making a 3D puzzle game would be forced to have it enabled.
You don’t understand, once DLSS 5 is released into the wild then nobody will have a choice. It’s basically Skynet, the end of the world, Snow Crash, a breach in the Black Wall.
It will install itself the moment a person searches for Godot tutorials and nobody can ever disable it. It would be LITERALLY IMPOSSIBLE (didn’t you see that they said ‘literally’?!) for an artist to control.
/s
I hate Nvidia and think this demo (mostly) looks like shit but these hyperbolic reactions are making me feel like the crazy one. I know it’s janky and running on 2 cards but it’s wild that it’s happening in real time and IMO it’s really interesting tech. There are so many cool ways this could be applied beyond hyper realism
You’re not crazy, you’re just reading a topic associated with AI and so it’s full of bots, their misinformation and outrage, and the idiots that are influenced by them.
Like all of these threads, we get these insane bad faith ‘arguments’, misinformation and heavy vote manipulation.
There are certainly valid criticisms of DLSS. It creates visual artifacts, it’s often used by games as a crutch for performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints, and I’ll probably enable DLSS 5 once and then go back to native… but I think a lot of comments here are just ridiculous, so you’re not alone there :P.
I read some more about it and it looks like developers have a lot of granular control. Not just the overall percentage applied, but options per object type, so they can max it out for faces, 50% for water, 25% for foliage, etc. (rough sketch of what that might look like below).
There are some legitimately awesome use cases for this, especially if they let developers train their own models. I didn’t play Death Stranding, but I know they’ve got detailed face scans of Norman Reedus… imagine if the Norman filter got applied to his character in-game.
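None of that API is public, so this is purely hypothetical, but I’d imagine the per-object dials boiling down to a mask-weighted blend between the native frame and the enhanced one. Made-up names, nothing from NVIDIA’s actual SDK:

```python
import numpy as np

# Purely hypothetical sketch -- none of these names come from NVIDIA's SDK.
# Per-object-type blend weights between the native and "enhanced" frame.
FACE, WATER, FOLIAGE, HUD = 0, 1, 2, 3
WEIGHTS = {FACE: 1.00, WATER: 0.50, FOLIAGE: 0.25, HUD: 0.00}

def blend_enhanced(native: np.ndarray, enhanced: np.ndarray,
                   id_mask: np.ndarray) -> np.ndarray:
    """Lerp per pixel between the native and enhanced frames,
    weighted by which object type each pixel belongs to."""
    out = native.astype(np.float32)
    for obj_id, w in WEIGHTS.items():
        m = id_mask == obj_id  # boolean mask of pixels for this object type
        out[m] = (1.0 - w) * native[m] + w * enhanced[m]
    return out.astype(np.uint8)
```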
If it’s that controllable that’s pretty cool. I could see it being useful to do things that are normally expensive (like raytracing shadows on grass) but which don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.
Not that it matters much for me, my next card will likely be AMD for Linux reasons.
Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.
Don’t worry, they’ll speed up the dev and QA time with AI.
Claiming something is still in progress and that major changes can happen before release is a classic tech industry public-relations game, and too many “influencers” take it at face value.
It’s infuriating.
If this was:
“under our artists’ control, and totally optional for players.” - Bethesda
Then Nvidia and Bethesda and whoever else was involved should have said so from the beginning.
Just like Jordan Gerblick from GamesRadar says, this is obvious damage control, as people are justifiably pissed with the onslaught of AI slop.
Seriously, shut the fuck up and give us what we really want:
- affordable, reasonably priced GPUs
- stable drivers
- and keep your AI filth out of our stuff and games 🖕

It was only a matter of time for AI slop to come to video games. But the solution is rather easy: just don’t buy those games.
This is no different from any TikTok filter. It’s slop. If I wanted my characters to look like Nazi Noem, I’d design them thusly.
PS: Yassified games will now be $80 and you’ll like it.
Didn’t Todd outright say “this is the way I would have wanted Starfield to look”? I swear that was said in the Digital Foundry video.
This is damage control until we can get it in our hands, and they hope we’ll have forgotten that we don’t like the slop they’re feeding their money pigs.
Time to…

I don’t like AI, but setting that aside, isn’t DLSS just frame generation to improve performance of games on weaker hardware? So if the AI bubble is buying up all your DDR5 and your DDR4 isn’t fast enough for the frame rate and resolution you want, isn’t DLSS a good thing? Like, yes, it’s using AI to figure out what goes in between the frames (toy sketch of the naive version below), but it’s basing that off of the human-created frames around it. Kind of like how those 60FPS 4K anime intros on YouTube work. Except a lot of people don’t like them either… but when it comes to gaming, people who want 60FPS or higher can leave the setting on, and people who don’t want it can suffer at 20-30FPS if that’s what they want.
I do realise that what people really want is affordable computer parts, but that’s not happening any time soon, and it won’t happen soon enough for the next generations of Xbox and PlayStation. While the current generations were marred by availability at launch due to scalpers, I feel the next generation will be marred by its compromises in the name of AI. And, while this is a PC gaming comm, the consoles do drive the industry, and PC gamers get what’s left. Fortunately — especially with Betheslop games — you can often mod some of the more egregious shit out.
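By “figure out what goes in between”, the dumbest version is just averaging the two surrounding frames, and anything moving turns into a double-exposed ghost, which is why real frame generation also feeds in motion vectors. A toy numpy sketch, nothing like the actual DLSS pipeline:

```python
import numpy as np

def naive_midframe(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Halfway blend of two rendered frames. Anything that moved shows up
    as a double-exposed ghost, which is why real frame generation also
    consumes per-pixel motion vectors to warp pixels along their motion."""
    mid = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2.0
    return mid.astype(np.uint8)
```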
That’s what the tech was touted as over a decade ago, when this started with both DLSS and FSR: giving an extension of life to your older cards.
Currently, that’s not what it’s used for. It’s now a tool that allows developers to not give a crap about optimizing the game or creating textures and models that look good out of the box. It’s THE tool that will get you up to 60 fps on a GPU with a ton of expensive RAM, because developers don’t have to care, just let the AI make its guess (better have money, because screw your low-end gaming). It’s THE tool that ensures native rendering and models don’t have to be good: just slap on what you want it to look like and let AI do the rest, artists’ intention be damned.
With it running on two 5090s (and yeah, yeah, they said it’ll run on one card, and we should all believe corporations), it looks like their little way of starting to make owning a gaming computer too expensive for anyone: why don’t you just subscribe to our cloud gaming instead, where you can rent the capability we decide to give you? Call me cynical, but that’s what I’m seeing here.
Except DLSS 5 isn’t just upscaling. It’s replacing the image.
And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.
How is that an improvement in efficiency? And all to achieve a look that lands deeper in the uncanny valley than anyone has ever been.
Except DLSS 5 isn’t just upscaling. It’s replacing the image.
Technically all upscaling replaces the frame with a higher resolution frame.
Even with non-AI upscaling, like linear or bicubic, the original frame isn’t copied and then upscaled. The upscaled image is built based on the old image and replaces the original frame in the frame buffer. DLSS doesn’t alter that process, it just uses a neural network instead of a linear/bicubic algorithm.
The new difference with DLSS 5 seems to be that instead of using the frame as the only input, it also takes in additional information from earlier in the rendering pipeline (motion vectors) prior to upscaling. This would theoretically create more accurate outputs.
It’s kind of like how asking an LLM a question becomes more accurate if you first paste the Wikipedia article which answers your question into the context. Having more information allows for better output quality.
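To make the “replaces the frame” point concrete, here’s the classic non-AI version as a toy sketch with numpy/Pillow (the real pipeline obviously lives on the GPU): the bicubic output is a brand-new buffer computed from the rendered one, and that’s what gets presented. DLSS performs the same swap, it just computes the new pixels with a neural network fed extra inputs like motion vectors.

```python
import numpy as np
from PIL import Image

def upscale_bicubic(frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Build a brand-new, larger frame from the rendered one. The original
    buffer isn't kept and stretched; this output replaces it for display.
    DLSS does the same replacement, computing the new pixels with a neural
    net instead of a fixed bicubic kernel."""
    h, w = frame.shape[:2]
    big = Image.fromarray(frame).resize((w * scale, h * scale),
                                        Image.Resampling.BICUBIC)
    return np.asarray(big)
```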
And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.
How is that an improvement in efficiency?
Based on the reporting the use of 2x 5090s in the demo was due to the VRAM requirements of the current iteration, it isn’t due to a higher compute requirement. The official DLSS5 release will run on a single card (according to NVIDIA).
It’s adding light sources and details that weren’t there, which it can’t possibly keep consistent from one scene to the next.
For the light sources especially, it’s removing shadows and adding light in ways that make no physical sense.
Using motion vectors and geometry data isn’t new. Previous generations of DLSS as well as framegen were already doing that.
What’s new here, is that they stopped inferring details, and started making them up.
The output will not be “more accurate”. It can’t be.
Even if this model doesn’t implement the randomness of other AI tech and remains deterministic, that still won’t allow devs to accurately control output for the literally infinite number of potential scenes players can create in a game.
I get your point, I don’t think it looks very good on the whole and I almost certainly won’t use it.
However, the direction they’re going, inserting it earlier in the rendering chain, seems a bit more promising than simply taking a low-res output and making it bigger.
I could easily see having the ability to add properties to materials/shaders which would exclude them from the process. An artist may not care too much about how the grass is enhanced, but they may want to disable it for parts of a character’s model or set pieces in the world.
That kind of thing isn’t really possible with DLSS as it stands now (and probably isn’t possible with DLSS 5), but the idea of attacking the problem earlier in the rendering sequence is interesting.
I just don’t give a shit about graphics anymore. I run almost anything at very high detail on an RTX 2070 Super, which I bought off eBay right before COVID hit. I don’t need graphics that require more hardware. If DLSS is just compensating for shitty vibe code and the base game runs like shit, I’m not going to play it. Got plenty of games in my backlog anyway.
but when it comes to gaming, people who want 60FPS or higher can leave the setting on, and people who don’t want it can suffer at 20-30FPS if that’s what they want.
The problem is that devs have stopped bothering with optimizing games, and instead ship with shit like DLSS or FSR on by default. And the only way to get 60fps is to keep it on.
I think it’s a testament to DLSS 5 that people are calling it AI slop and can’t seem to recognize that the geometry/shape aren’t being modified, just lighting and material effects. Reminds me of this: https://www.youtube.com/watch?v=DKCyk3CeUFY
There is a lot of valid criticism of AI slop, and there is even a lot of criticism of DLSS multi-frame generation, but people who misuse the same term for everything just strip away whatever meaning and credibility it, and they, had. For example, this technology wasn’t even trained on IP theft, for a change!
The term slop is essentially meaningless.
It’s like people that use ‘woke’ as an insult: it applies to everything they don’t like, despite nobody having a clear definition of what it actually means.
To me, slop is the mass-produced articles/videos created by generative AI, not ‘everything that is done with machine learning’.
Simply calling everything AI ‘slop’ is meaningless virtue signaling, like using ‘woke’.
I think that being a purist about terminology misses the point and substance of the complaints people have about this.
The point and substance of an argument are made with more precise and nuanced words, not by using fewer of them, poorly. There is no point and substance in deliberately trying to portray this as generative AI, which a lot of the comments are trying to do.
For example, you’ve said nothing and have absolutely not made the point and substance of your problem with DLSS 5 clear, while I actually have. People would have to take wild guesses to try to get to “the point and substance” of your issue with it.
Perhaps what you are actually referring to is the tendency for people to justify lying and throwing shade about a thing if they hate the thing associated with it enough. That’s just throwing sloppy shade to me. Judging by the downvotes and the correlation that exists between this tendency and them, I suspect this might apply here instead.
I’ve actually just been corrected: Jensen referred to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
Fine then. Make it clear how it is not appropriate to label this generative AI. That’s the basis of your claim that everyone else is being sloppy. Back it up with more than just your own declaration.
Even here you’ve not backed up your beliefs or statements with anything beyond restating your original point.
To anyone just glancing at the promo before-and-after image, this appears to just be applying image-generation AI toolchain tech on top of the preexisting frame generation. There is at least some amount of responsibility on Nvidia for using an image that gives off this look.
Pretending that a reasonable conclusion a large number of people are drawing simply isn’t reasonable, and that the reasons are entirely self-evident, is just masturbation.
I don’t think it’s possible to convince anyone with a closed mind, but sure.
This is doing the same thing as here: https://www.youtube.com/watch?v=DKCyk3CeUFY
It is not changing geometry or shapes.
It is changing lighting.
It is changing material properties.
There is no “image-generating AI toolchain tech” involved. There is no image generation happening.
To quote the literal title of a previous post, “Nvidia’s DLSS 5 AI-infused tech transforms pixels with photorealistic lighting and materials” - but it does not transform geometry. I know this because rather than live in my assumptions, I dared look up more information about it instead of tucking in my presumptions at the end of my comment.
It does involve AI (just like previous DLSS has), and just like previous versions, it looks at color and motion vectors. Its outputs are lighting and material properties, “applying a mask”. It can be criticized, but for different reasons. It seems to create an uncanny valley effect worse than generative AI would in actual usage, precisely because it is not changing geometry or shapes, not “image generating”.
This can be confirmed by looking at the examples. I urge you to do the same, but I don’t have a lot of hope. MAGA exists because of confirmation bias, and it does not have exclusivity on it. While you’re wrong and being an asshole about it, thanks for at least making some effort at an explanation this time.
I’ve actually just been corrected: Jensen referred to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label for the aforementioned reasons, but it does prove me wrong.
I’ve not been an asshole here; you’ve consistently talked down to everyone calling this slop over some minor technicality in terminology that you’ve still failed to back up or expand on beyond linking to the same video a second time.
You also have really zeroed in on some claims that I’ve literally never heard anyone make:
It is not changing geometry. It is changing lighting. It is changing material properties.
No one has said shit about geometry, lighting, or materials because that is not the level at which DLSS operates. Both in previous versions and in this latest version.
It’s not what anyone thinks is going on here, and it calls into question your own understanding of all this that you’ve now insisted upon it twice. It’s not making lighting and material changes. You’re confusing it with raytracing, which is often turned on and off in graphics presets alongside DLSS because of the intense resource usage, but which is not part of DLSS. Go download a mod for finer-grained graphics settings controls in Cyberpunk 2077 and that much will be made clear.
There are plenty of tools people can use to get an idea of how any game’s rendering pipeline works, such as Special K, as shouted out by the video you linked. Personally I like ReShade for getting a look at render passes, output targets, buffers, etc.
DLSS operates on a completed “flat” render output/buffer. As far as I’m aware, it has no knowledge of geometry, materials, or shaders, unless the devs are really doing wacky shit and have a direct line to Nvidia devs. Maybe they’re passing it the depth and normal buffers as well as the flat render output. That opens a lot of options (see Marty’s RTGI shader) but is demonstrably still just working with slightly more than what gets slapped on the screen as a flat raster image.
It can do edge detection and movement detection by comparing a number of previous input frames, using the kinds of techniques video compression uses to detect and handle movement, as the end of your video briefly mentions.
Usually it’s used on the output of the 3D render pipeline, before the flat HUD elements are slapped on top. Apparently a lot of the games the guy who made the video tested didn’t separate out the HUD layer, or maybe it had something to do with his previous methodology. I’m not watching multiple of his videos to check, and I find it kind of hilarious that someone would think they were some voice of knowledge on how this stuff works if they put in the kind of effort they indicated they had for their previous videos without using Special K.
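For reference, the codec-style block matching I mean looks roughly like this (slow toy numpy version on grayscale frames, nothing like any GPU implementation, and note that game engines hand DLSS exact motion vectors rather than estimating them this way):

```python
import numpy as np

def block_motion_vector(prev: np.ndarray, curr: np.ndarray, y: int, x: int,
                        block: int = 8, search: int = 4) -> tuple:
    """Find where a block of the current frame came from in the previous
    frame by minimizing the sum of absolute differences (SAD) over a
    small search window -- the classic video-codec motion estimation."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_vec = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] \
                    or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the frame
            cand = prev[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_vec = sad, (dy, dx)
    return best_vec  # (dy, dx) motion vector for this block
```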
I had already watched the video you linked. I’ve now watched it twice to ensure I didn’t miss anything.
It’s some guy playing with the features in Special K that allow you to utilize DLSS at arbitrary upscaling ratios while allowing HUD elements to render at the viewport resolution. It has nothing to do with the underlying tech or how DLSS works, beyond showing that the defaults in most games could be better tuned.
He has a short bit talking about older anti-aliasing tech, then says that DLSS is an advancement without actually getting into how it works.
In all 18 minutes, there is hardly 60 seconds discussing the actual tech, and it literally uses the term generation.
So to be clear, since you seem to be highly mistaken about this: DLSS uses image generation technology along with some very fancy edge detection to attempt to fill in gaps and generate extra details that are not present in the original image.
It is not rendering only the needed sections at higher resolution or anything along those lines, but I can see how someone may think that was implied by your video.
So again, now that I hopefully have shown you that I do in fact know more than a decent bit about how DLSS works, and you still have not provided more to back up your point beyond a video of some guy fucking around with Special K and going “whoa cool”…
What part of DLSS generating image data that does not exist in the lower-resolution source image, and using it to fill in what would otherwise be repeated pixels in a traditionally upscaled (nearest neighbor, bilinear, trilinear, etc.) image… how is that not generative?
Edit:
Would it kill you to not double the length of your goddamn comment after posting it?
I’ve got better things to do at this point than continue this, but at a glance I see that you took Nvidia’s news post’s wording as gospel.
Edit again:
It’s clear now, you got hung up on some misleading marketing wording in one of the headlines. You even admit it uses AI to generate additional image data. Stop being condescending.
Confirmation bias and closed-mindedness it is.
(DLSS 4.5)
DLSS 4 introduced a transformer model architecture with NVIDIA GeForce RTX 50 Series GPUs. That enabled a leap in image quality over our previous convolutional neural network. The second-gen transformer model for DLSS 4.5 Super Resolution uses 5x more compute and is trained on an expanded data set, so it has greater context awareness of every scene and more intelligent use of pixel sampling and motion vectors.
“It AlTeRs ThE fINaL iMaGe So It GeNeRaTeS iMaGe DaTa” at this point. I don’t think you are even bothering to check just how many things you could call image generation at that point.
you took Nvidia’s news post’s wording as gospel.
“ThE dEvElOpEr Is LyInG!”
NVIDIA might be many things, in marketing particularly so, but in this particular blog post it is not lying. Then again, it’s like what I said:
Perhaps what you are actually referring to is the tendency for people to justify lying and throwing shade about a thing if they hate the thing associated with it enough.
Ergo, nothing NVIDIA says can be trusted now. If you were going to be this reductive, I’m not sure why you didn’t open with that. It’s a clear win from your perspective, but I don’t think there’s any hope of a shared reality between us. It’s all a lie by big corpo, after all.
It’s funny how you complain about me not providing more links while calling the most direct ones lies. All I would have done is subject a creator to the same sort of shade you’re trying to throw at me. After all, if the primary source of information is lying, those reporting on it are just spreading lies.
Not gonna subject other people to downvotes and harassment from assholes; they get enough of that already. I’m afraid you’ll just have to disingenuously act as if you can’t perform searches yourself, or as if the results don’t exist.
I was already pretty certain nothing I said could convince you, but it’s going to be so funny when this take becomes so obviously bad in a few months. I like to type and edit, sue me, although it’s also funny how quickly you decided to participate in the endeavor anyway. Call it a chance to disengage.
It’s just tragic how, despite having the capacity to know better, some people fool themselves. This is not image generation, buddy, and that’s what AI slop typically refers to. The term AI long precedes the term AI slop.
Sorry, gonna have a wonderful day.
I call it how I see it: closed-minded because of how set you are on arguing against something that seems rather evident; an asshole because you downvote first and don’t provide explanations without an ordeal of an interaction that immediately begins with belittling me with false claims (there was plenty of backing up, which you skipped over with your downvotes across the threads); and compared to MAGA because they’re such an evident example of people stuck in their own bubble through extreme confirmation bias and closed-mindedness. I could be more respectful, but were you?
I think at this point in time, we have to come up with a term for these sorts of threads: circlejerk slop. Guys, stop making generative AI look good; as bad as it is, I’d choose it any day of the week over these circlejerk hallucinations. Do not expect them to carry across time and place.
I’ve actually just been corrected: Jensen, who presumably did not lie in this instance, referred to this as employing some form of generative AI. It’s also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
My guy, you literally linked some guy fucking around in Special K as supposedly an explanation of the tech, you misread a marketing headline as being technically descriptive, and you yourself even admit that it uses AI to generate, which is the common usage nowadays for the label of slop.
I definitely appreciate being called closed-minded, an asshole, and compared to MAGA for not agreeing with your personal stick up your ass about what you think is proper terminology, though.
Have a rotten day.
Please refrain from spreading misinformation and toxic trolling. We do not condone this kind of behaviour on our instance.
It’s a single shot, picked to showcase the technology. Even here the ear’s outline is messed with.
But more importantly, the material/texture being replaced is wrong. It’s way too bright and sharp. It’s no more realistic than the original; it simply has different drawbacks, and frankly it looks jarringly out of place. It also fucks up the eyes’ tracking and the water ripples on the ground.
Exactly. On the other post it looked like bot posts, but I think it was actual people… which is sad. Even if you hate Nvidia and AI… which is completely fine.
There’s plenty to criticize about DLSS 5, from the janky animation shifts when you aren’t looking at a still photo of a character’s face, to the amount of hardware you’d actually need to get these results and what it costs in today’s market, to whether photorealism is truly needed to enjoy a game, especially if it means giving money to the largest culprit of the AI bubble. But just jumping on the circlejerk of calling it AI slop is sort of its own slop.
I’ve actually just been informed that Jensen referred to this as employing some form of generative AI.