Literally impossible. The entire point of the tech is that it's autonomous, that it can “improve” things moment by moment. That is by definition outside of their control. Also, it had better be fucking optional, because only the 1% are playing games on dual 5090s. These fuckers are so out of touch.
Also,
This is a very early look
Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing-up stages by any reasonable professional timeline.
I’ve now read more about this, and developers DO have a ton of control. They can choose what parts of the image to apply it to and with what intensity. So I guess it’s not literally impossible.
I don’t see why it wouldn’t work the same way as shaders. There’s just no way a developer making a 3D puzzle game would be forced to have it enabled.
You don’t understand, once DLSS 5 is released into the wild then nobody will have a choice. It’s basically Skynet, the end of the world, Snow Crash, a breach in the Black Wall.
It will install itself the moment a person searches for Godot tutorials and nobody can ever disable it. It would be LITERALLY IMPOSSIBLE (didn’t you see that they said ‘literally’?!) for an artist to control.
/s
I hate Nvidia and think this demo (mostly) looks like shit, but these hyperbolic reactions are making me feel like the crazy one. I know it’s janky and running on two cards, but it’s wild that it’s happening in real time, and IMO it’s really interesting tech. There are so many cool ways this could be applied beyond hyper-realism.
You’re not crazy, you’re just reading a topic associated with AI, so it’s full of bots, their misinformation and outrage, and the idiots who are influenced by them.
Like in all of these threads, we get insane bad-faith ‘arguments’, misinformation, and heavy vote manipulation.
There are certainly valid criticisms of DLSS. It creates visual artifacts, it’s often used by games as a crutch to claw back performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints, and I’ll probably enable DLSS 5 once and then go back to native… but I think a lot of the comments here are just ridiculous, so you’re not alone there :P.
I read some more about it and it looks like developers have a lot of granular control. Not just % applied but options per object type. So they can max it out for faces, 50% for water, 25% for foliage, etc.
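Nobody outside Nvidia knows what DLSS 5’s actual API looks like, so every name below (`ENHANCE_INTENSITY`, `intensity_for`, `blend`) is invented purely for illustration — but the per-object-type controls described above would presumably boil down to something like a weight table plus a blend:

```python
# HYPOTHETICAL sketch only: DLSS 5's real API is not public, so these
# names and this structure are made up to illustrate the idea of
# per-object-type intensity control described in the comment above.

# Full strength on faces, half on water, a quarter on foliage,
# and fully off for excluded types like the HUD.
ENHANCE_INTENSITY = {
    "face": 1.0,
    "water": 0.5,
    "foliage": 0.25,
    "ui": 0.0,  # excluded entirely
}

DEFAULT_INTENSITY = 0.75  # fallback for object types not listed


def intensity_for(object_type: str) -> float:
    """Look up the enhancement weight for an object type, clamped to [0, 1]."""
    weight = ENHANCE_INTENSITY.get(object_type, DEFAULT_INTENSITY)
    return max(0.0, min(1.0, weight))


def blend(native_pixel: float, enhanced_pixel: float, object_type: str) -> float:
    """Linearly blend the native render with the model output,
    weighted by the per-type intensity."""
    w = intensity_for(object_type)
    return (1.0 - w) * native_pixel + w * enhanced_pixel
```

With a table like this, an excluded type (weight 0) passes the native render through untouched, which is exactly the “exclude faces or important set pieces” control people are asking for.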
There are some legitimately awesome use cases for this especially if they let developers train their own models. I didn’t play Death Stranding but I know they’ve got detailed face scans of Norman Reedus…imagine if the Norman filter got applied to his character in-game.
If it’s that controllable, that’s pretty cool. I could see it being useful for things that are normally expensive (like ray-traced shadows on grass) but which don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.
Not that it matters much for me, my next card will likely be AMD for Linux reasons.
Don’t worry, they’ll speed up the dev and QA time with AI.
Claiming that something is still in progress and that major changes can happen before release is a classic tech-industry public relations game, and too many “influencers” take it at face value.
It’s infuriating.