Uh huh…

  • CerebralHawks@lemmy.dbzer0.com · +5/−5 · 11 hours ago

    I don’t like AI, but setting that aside, isn’t DLSS just frame generation to improve performance of games on weaker hardware? So if the AI bubble is buying up all your DDR5 and your DDR4 isn’t fast enough for the frame rate and resolution you want, isn’t DLSS a good thing? Like yes it’s using AI to figure out what goes in between the frames, but it’s basing that off of human created frames around it. Kind of like how those 60FPS 4K anime intros on YouTube work. Except a lot of people don’t like them either… but when it comes to gaming, people who want 60FPS or higher can leave the setting on, and people who don’t want it can suffer at 20-30FPS if that’s what they want.

    I do realise that what people really want is affordable computer parts, but that’s not happening any time soon, and it won’t happen soon enough for the next generations of Xbox and PlayStation. While the current generations were marred by availability at launch due to scalpers, I feel the next generation will be marred by its compromises in the name of AI. And, while this is a PC gaming comm, the consoles do drive the industry, and PC gamers get what’s left. Fortunately — especially with Betheslop games — you can often mod some of the more egregious shit out.

    • inclementimmigrant@lemmy.worldOP · +4 · edited · 5 hours ago

      That’s what the tech was touted as over a decade ago, when this started with both DLSS and FSR: giving an extension of life to your older cards.

      Currently that’s not what it’s used for. It’s now a tool that lets developers not give a crap about optimizing the game or creating textures and models that look good out of the box. It’s THE tool that will get you up to 60 fps, provided you have a GPU with a ton of expensive RAM, because developers don’t have to care; just let AI make its guess. Better have money, because screw your low-end gaming. It’s THE tool to ensure that native rendering and models don’t have to be good: just slap on what you want it to look like and let AI do the rest, artist’s intention be damned.

      And it’s running on two 5090s. Yeah, yeah, they said it’ll run on one card, and we all should believe corporations. It looks like their little way of starting to make owning a gaming computer too expensive for anyone: why don’t you just subscribe to our cloud gaming instead, where you can rent the capability we decide to give you. Call me cynical, but that’s what I’m seeing here.

    • MentalEdge@sopuli.xyz · +9/−1 · edited · 10 hours ago

      Except DLSS 5 isn’t just upscaling. It’s replacing the image.

      And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.

      How is that an improvement in efficiency? And all to achieve a look that lands deeper in the uncanny valley than anything before it.

      • CerebralHawks@lemmy.dbzer0.com · +2 · 1 hour ago

        Well, then I’m either thinking of an older DLSS, or something else entirely.

        BBC ran an article with a bit more to it, showing the same Resident Evil image with/without DLSS 5, and it looked more different than I expected it to. So nah, I’m not gonna defend this.

        • MentalEdge@sopuli.xyz · +2 · edited · 51 minutes ago

          Yeah. Previous generations of DLSS were about achieving the same result with less.

          DLSS 5 is about “improving” what was previously the end result, even in cases where DLSS wouldn’t have been needed in the first place.

          Nvidia is claiming it achieves “Hollywood movie CGI” fidelity.

          In practice it looks like the result is just not pleasant.

      • FauxLiving@lemmy.world · +4/−1 · 9 hours ago

        > Except DLSS 5 isn’t just upscaling. It’s replacing the image.

        Technically all upscaling replaces the frame with a higher resolution frame.

        Even with non-AI upscaling, like linear or bicubic, the original frame isn’t copied and then upscaled. The upscaled image is built based on the old image and replaces the original frame in the frame buffer. DLSS doesn’t alter that process; it just uses a neural network instead of a linear/bicubic algorithm.
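        The replace-don’t-copy behaviour described above can be sketched in a few lines of Python. This is a toy nearest-neighbour upscaler standing in for bicubic or a neural network, and `framebuffer` is a hypothetical stand-in for a real frame buffer, not any actual graphics API:

```python
import numpy as np

def upscale_nearest(frame: np.ndarray, factor: int) -> np.ndarray:
    # Build a brand-new, larger image from the old one by
    # repeating each pixel `factor` times along both axes.
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

# Hypothetical frame buffer holding a 4x4 low-res frame.
framebuffer = np.arange(16, dtype=np.uint8).reshape(4, 4)

# Upscaling *replaces* the frame in the buffer; the 4x4 original is gone.
framebuffer = upscale_nearest(framebuffer, 2)
print(framebuffer.shape)  # (8, 8)
```

        Swapping `upscale_nearest` for a neural network changes how the new pixels are produced, not the fact that the result overwrites the old frame.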

        The new difference with DLSS 5 seems to be that, instead of using the frame as the only input, it also takes in additional information from earlier in the rendering pipeline (motion vectors) prior to upscaling. This would theoretically create more accurate outputs.
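        Feeding motion vectors in is essentially temporal reprojection: warp the previous frame by per-pixel motion so the upscaler has a better prior for each pixel. A minimal sketch of the warping step (the array shapes and the `reproject` helper are illustrative assumptions, not Nvidia’s API):

```python
import numpy as np

def reproject(prev_frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    # motion[y, x] = (dy, dx): how far that pixel moved since the last frame.
    h, w = prev_frame.shape
    ys, xs = np.indices((h, w))
    # Look up where each output pixel *came from* in the previous frame,
    # clamping at the edges where no history exists.
    src_y = np.clip(ys - motion[..., 0], 0, h - 1)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1)
    return prev_frame[src_y, src_x]

# One bright pixel that the game engine says moved one pixel to the right.
prev = np.zeros((3, 3), dtype=np.uint8)
prev[1, 1] = 255
motion = np.zeros((3, 3, 2), dtype=np.int64)
motion[..., 1] = 1  # everything shifted +1 in x
warped = reproject(prev, motion)
print(warped[1, 2])  # 255: the pixel lands where the motion vector predicted
```

        Because the motion comes from the engine’s own geometry rather than being guessed from pixels, the warped history is exact for rigid motion; that is the kind of extra information the upscaler gets to work with.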

        It’s kind of like how asking an LLM a question becomes more accurate if you first paste the Wikipedia article which answers your question into the context. Having more information allows for better output quality.

        > And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.

        > How is that an improvement in efficiency?

        Based on the reporting, the use of two 5090s in the demo was due to the VRAM requirements of the current iteration, not a higher compute requirement. The official DLSS 5 release will run on a single card (according to Nvidia).

        • MentalEdge@sopuli.xyz · +1 · edited · 5 hours ago

          It’s adding light sources and details that weren’t there, which it can’t possibly keep consistent from one scene to the next.

          For the light sources especially, it’s removing shadows and adding light in ways that make no physical sense.

          Using motion vectors and geometry data isn’t new. Previous generations of DLSS as well as framegen were already doing that.

          What’s new here is that they stopped inferring details and started making them up.

          The output will not be “more accurate”. It can’t be.

          Even if this model doesn’t implement the randomness of other AI tech and remains deterministic, that still won’t allow devs to accurately control output for the literally infinite number of potential scenes players can create in a game.

          • FauxLiving@lemmy.world · +1 · 5 hours ago

            I get your point. I don’t think it looks very good on the whole, and I almost certainly won’t use it.

            However, the direction they’re going in, inserting it earlier in the rendering chain, seems a bit more promising than simply taking a low-res output and making it bigger.

            I could easily see having the ability to add properties to materials/shaders that would exclude them from the process. An artist may not care too much about how the grass is enhanced, but they may want to disable it for parts of a character’s model or set pieces in the world.

            That kind of thing isn’t really possible with DLSS as it stands now (and probably isn’t possible with DLSS 5), but the idea of attacking the problem earlier in the rendering sequence is interesting.

    • Ibuthyr@feddit.org · +3 · 9 hours ago

      I just don’t give a shit about graphics anymore. I run almost anything at very high details on an RTX 2070 Super, which I bought off eBay right before COVID hit. I don’t need graphics that require more hardware. If DLSS is just compensating for shitty vibe code and the base game runs like shit, I’m not going to play it. Got plenty of games in my backlog anyway.

    • prole@lemmy.blahaj.zone · +3/−1 · 9 hours ago

      > but when it comes to gaming, people who want 60FPS or higher can leave the setting on, and people who don’t want it can suffer at 20-30FPS if that’s what they want.

      The problem is that devs have stopped bothering to optimize games, and instead ship with shit like DLSS or FSR on by default. And the only way to get 60fps is to keep it on.