• pivot_root@lemmy.world
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    1
    ·
    edit-2
    3 days ago

    Sometimes, I ask OpenClaw to generate some code

    https://github.com/lutris/lutris/discussions/6530#discussioncomment-16088355

    OpenClaw is extremely vulnerable to prompt injection. If the maintainer is using it to author code, you absolutely can not trust that the code is safe from exploits obfuscated as unintentional logic errors or bugs.

    There’s purity testing, and then there’s being cautious about running code made by someone who is doing something incredibly stupid and unsafe. This is the latter.

    • 9WhiteTeeth@lemmy.today
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      6
      ·
      3 days ago

      You are assuming the author is being unsafe & not auditing code for very basic security issues.

      Let me present this angle, small teams of volunteer open source developers finally have a way to help ease the amount of code they produce, but you want them to continue doing all the work manually because AI hurts your feefees.

      Further, you are openly declaring you don’t trust the devs to audit their own code.

      If you can find a security vulnerability in the code (it is open source after all) I’ll cede, but otherwise, I think it is a good thing responsible AI use can help shoulder the work these folks do for our benefit.

      • pivot_root@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        arrow-down
        1
        ·
        2 days ago

        For a research experiment, a university snuck malicious commits with subtle but exploitable bugs past the maintainers of the Linux kernel.

        I trust the Linux kernel maintainers to be capable of finding obfuscated exploits far more than I trust this guy, and even they failed to identify a bunch of them.

        • 9WhiteTeeth@lemmy.today
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          4
          ·
          2 days ago

          Two things, the experiment you are referring to was specifically designed to deceive whereas AI vulnerabilities would just be simple bugs.

          Secondly, the security requirements of the Linux Kernel are way more important/stringent than Lutris, which has no special access & is often even further sandboxed if installed via Flatpak.

          I just don’t see this as an issue until it proves to be one. People are always welcome to fork a “pure” version.

          • pivot_root@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            2 days ago

            the experiment you are referring to was specifically designed to deceive whereas AI vulnerabilities would just be simple bugs.

            In my original comment, I was specifically referring to OpenClaw. Given that it doesn’t live in a vacuum and can be influenced with prompt injection, it’s not safe to assume that whatever bugs it creates aren’t specifically designed to deceive.

            Secondly, the security requirements of the Linux Kernel are way more important/stringent than Lutris, which has no special access & is often even further sandboxed if installed via Flatpak.

            Sure, but that’s not the point I was trying to make. You said that I don’t trust the guy to audit the code for malicious intent before committing and I gave you a reason why nobody should: if multiple people with decades of experience in a specialized domain can’t catch vulnerabilities disguised as subtle bugs, one guy who isn’t scrutinizing the changes nearly as hard definitely won’t.