• BillyTheKid2@lemmy.ca
    1 day ago

    Not disagreeing with you, but Anthropic believes code is the path to AGI.

    I want to be clear so somebody doesn’t have a fit - I do not personally believe LLMs are capable of AGI. But this isn’t about what I believe.

    They believe that coding is the path because it’s verifiable and generatable. Frontier AI companies aren’t training on the global internet anymore; it’s poisoned with AI slop. Non-frontier AI companies still do, and we’ve all seen the results. But it’s my opinion that non-frontier AI companies are basically all but irrelevant (I’m not talking about open source/Hugging Face). Anthropic knows this, and their idea (again, not mine, don’t get mad at me please!) is that by training on code their AI will get better at non-coding activities as well, and that if they make it good enough at coding it’ll become truly intelligent in all ways.

    What I’m getting at is, there are lots of good reasons to avoid using LLMs/AIs/companies that shove AI down my throat (looking at you, Microsoft - I don’t fucking want Copilot in my fucking Notepad - if anybody from MS is reading this, fuck your AI in everything and fuck your AI-ridden operating system), but local LLMs are not a replacement for Opus, and Anthropic isn’t scraping the open internet anymore. I’m sure they did at first though.

    The biggest problem is when developers begin to depend on it too much without learning the nuance

    I couldn’t agree more. The brain is like a muscle: if you use it, it gets stronger; if you don’t, it gets weaker. “Vibe” coding uses your brain as little as possible, and if all you do is vibe out slop you’re not really learning much.

    • TheObviousSolution@lemmy.ca
      16 hours ago

      local LLMs are not a replacement for Opus

      https://www.bitdoze.com/best-open-source-llms-claude-alternative/

      Something tells me you haven’t even made the effort. They are not that good, in the same way that LibreOffice is not as good as Excel. But if you are going to make the argument you quoted, then you can work that brain muscle and adapt.

      And they aren’t training off of the Internet because they are training on your input. It’s mind-boggling to me how some people are so willing to train their replacements, and even pay for the privilege, in exchange for an advantage that will be very temporary in the future we are heading toward.

      A lot of your criticism doesn’t even apply to local LLMs: they are either trained by model distillation from more advanced models, or they are snapshots set in stone. It’s also telling how willing you seem to be to let the Internet burn, because the inevitability there is becoming a corporate slave and accepting ever-increasing subscription fees you can’t ignore, because “hey, they’ve got the most users, the Internet is too dead, your open alternatives are no replacement for us.” You say you are not, but you are saying everything an AGI astroturfer would be saying, and the irony of hearing this on an open source “federated” platform rather than somewhere like Reddit is striking.

      • Evotech@lemmy.world
        4 hours ago

        Sorry but it’s not even slightly comparable.

        Frontier models vs whatever you can realistically self-host, that is.

        • TheObviousSolution@lemmy.ca
          4 hours ago

          That you don’t want to or aren’t able to compare them doesn’t mean they can’t be compared. You do you, or more aptly, have an AI do you since you can’t bother.

          • Evotech@lemmy.world
            25 minutes ago

            Oh, I’ve tried. Don’t assume I haven’t.

            On paper, the functionality is similar. In terms of what they can realistically do, it’s not.

      • BillyTheKid2@lemmy.ca
        9 hours ago

        I could have worded that differently, I apologize.

        They aren’t a replacement for somebody like me who doesn’t have a screaming GPU.

        Yes, they train on input. I don’t like it either. It’s not just creepy; I’m sure it breaks privacy laws everywhere.

        Regardless, you’ve already decided who I am so I don’t see this conversation being productive.

        I again apologize for not making my previous comment more straightforward.

        • TheObviousSolution@lemmy.ca
          9 hours ago

          Oh, I don’t think I know who you are, I just think it’s indiscernible.

          They aren’t a replacement for somebody like me who doesn’t have a screaming GPU.

          You can run small LLMs that are still surprisingly good purely on modern CPUs, although I’m sure that’s part of the intent behind trying to lock down hardware supplies during the bubble.