Uh huh…

  • FauxLiving@lemmy.world
    11 hours ago

    Except DLSS 5 isn’t just upscaling. It’s replacing the image.

    Technically all upscaling replaces the frame with a higher resolution frame.

    Even with non-AI upscaling, like bilinear or bicubic, the original frame isn’t copied and then upscaled. The upscaled image is built from the old image and replaces the original frame in the frame buffer. DLSS doesn’t alter that process, it just uses a neural network instead of a bilinear/bicubic algorithm.
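
    Roughly what that looks like as a toy sketch (tiny grayscale “frame”; the bilinear kernel and the `framebuffer` name are just illustrative, not anyone’s real renderer):

    ```python
    # Classic non-AI upscaling: build a new, larger frame from the old one,
    # then swap it into the frame buffer. The original is not kept.

    def bilinear_upscale(frame, out_w, out_h):
        in_h, in_w = len(frame), len(frame[0])
        out = [[0.0] * out_w for _ in range(out_h)]
        for y in range(out_h):
            for x in range(out_w):
                # Map each output pixel back into source coordinates
                sx = x * (in_w - 1) / max(out_w - 1, 1)
                sy = y * (in_h - 1) / max(out_h - 1, 1)
                x0, y0 = int(sx), int(sy)
                x1, y1 = min(x0 + 1, in_w - 1), min(y0 + 1, in_h - 1)
                fx, fy = sx - x0, sy - y0
                top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
                bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
                out[y][x] = top * (1 - fy) + bot * fy
        return out

    framebuffer = {"current": [[0.0, 1.0], [1.0, 0.0]]}  # 2x2 "frame"
    # The upscaled image *replaces* the original in the buffer
    framebuffer["current"] = bilinear_upscale(framebuffer["current"], 4, 4)
    ```

    DLSS swaps the interpolation step for a neural network, but the replace-in-buffer shape of the pipeline is the same.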

    The new difference with DLSS 5 seems to be that instead of using the frame as the only input it also takes in additional information from earlier in the rendering pipeline (motion vectors) prior to upscaling. This would theoretically create more accurate outputs.
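
    As a toy illustration of why motion vectors help (1-D “frames”, all names made up, the real thing is obviously far more involved): reprojecting the previous frame along per-pixel motion gives the upscaler history samples that line up with the current frame.

    ```python
    # Reproject last frame's pixels along per-pixel motion vectors so the
    # history lines up with the current frame before upscaling/blending.

    def reproject(prev_frame, motion):
        """Shift each pixel of a 1-D 'frame' by its motion vector (pixels)."""
        n = len(prev_frame)
        out = [0.0] * n
        for x in range(n):
            src = x - motion[x]          # where this pixel was last frame
            if 0 <= src < n:
                out[x] = prev_frame[src]
        return out

    prev = [0.1, 0.5, 0.9, 0.3]
    motion = [0, 1, 1, 0]                # pixels 1 and 2 moved right by one
    history = reproject(prev, motion)    # aligned history for the upscaler
    current = [0.2, 0.1, 0.6, 0.3]
    # Blending aligned history with the current frame gives more samples
    # per pixel than the current frame alone -- the "more information".
    blended = [0.5 * h + 0.5 * c for h, c in zip(history, current)]
    ```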

    It’s kind of like how asking an LLM a question becomes more accurate if you first paste the Wikipedia article which answers your question into the context. Having more information allows for better output quality.

    And to achieve what it does, they used one 5090 to render the game normally, and an entire second 5090 just to run DLSS 5.

    How is that an improvement in efficiency?

    Based on the reporting, the use of 2x 5090s in the demo was due to the VRAM requirements of the current iteration, not a higher compute requirement. The official DLSS 5 release will run on a single card (according to NVIDIA).

    • MentalEdge@sopuli.xyz
      6 hours ago

      It’s adding light sources and details that weren’t there, which it can’t possibly keep consistent from one scene to the next.

      For the light sources especially, it’s removing shadows and adding light in ways that make no physical sense.

      Using motion vectors and geometry data isn’t new. Previous generations of DLSS as well as framegen were already doing that.

      What’s new here, is that they stopped inferring details, and started making them up.

      The output will not be “more accurate”. It can’t be.

      Even if this model doesn’t implement the randomness of other AI tech and remains deterministic, that still won’t allow devs to accurately control output for the literally infinite number of potential scenes players can create in a game.

      • FauxLiving@lemmy.world
        6 hours ago

        I get your point, I don’t think it looks very good on the whole and I almost certainly won’t use it.

        However, the direction they’re going, inserting it earlier in the rendering chain, seems a bit more promising than simply taking a low-res output and making it bigger.

        I could easily see having the ability to add properties to materials/shaders which would exclude them from the process. An artist may not care too much about how the grass is enhanced, but they may want to disable it for parts of a character’s model or for set pieces in the world.
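
        A sketch of what that opt-out could look like (everything here is hypothetical, none of it is a real DLSS API): a per-material flag becomes a per-pixel mask that gates which pixels the AI pass may replace.

        ```python
        # Hypothetical per-material opt-out: keep the original pixel
        # wherever the material is flagged as excluded from enhancement.

        def composite(original, enhanced, exclude_mask):
            return [o if excluded else e
                    for o, e, excluded in zip(original, enhanced, exclude_mask)]

        original = [0.2, 0.4, 0.6, 0.8]   # e.g. a character's face
        enhanced = [0.3, 0.5, 0.7, 0.9]   # AI-altered version of same pixels
        exclude  = [True, True, False, False]  # artist flagged one material
        final = composite(original, enhanced, exclude)
        ```

        Because the mask would come from earlier in the pipeline (the material system), this only works if the enhancement step runs before the final output, which is exactly the direction being discussed.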

        That kind of thing isn’t really possible with DLSS as it stands now (and probably isn’t possible with DLSS 5), but the idea of attacking the problem earlier in the rendering sequence is interesting.