• dreamkeeper@literature.cafe · ↑13 ↓12 · edited 2 days ago

    Dude, he’s just another greedy billionaire. The guy doesn’t deserve all the glaze he gets.

    Edit: He’s also incredibly wrong, like all other AI cultists. LLMs are a useful tool, but they’re nowhere even close to the level of computers or the Internet.

    • FauxLiving@lemmy.world · ↑22 ↓3 · 2 days ago

      LLMs are a useful tool, but they’re nowhere even close to the level of computers or the Internet.

      LLMs are not, certainly.

      But neural networks (“AI”) can do pretty incredible things, and the money being poured into LLMs is being spent on AI research (and on all of the RAM/graphics cards in the world).

      We’re only seeing LLMs and image generators first because that’s what we have the most training data for. The Internet doesn’t have hundreds of billions of MRIs or robotic motion plans, so those uses of AI will take longer to appear.

        • CriticalMiss@lemmy.world · ↑2 · 17 hours ago

          Valve is trying to use neural networks for their anti-cheat. They want to move it entirely server-side and rely less on the client, in order to make Linux gaming an industry standard.

          Instead of spying on your PC to see if you’re running something you’re not supposed to, they want to examine your in-game behavior and act based on that. I think this is a good use case.
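
          Purely as an illustration, a toy version of “score the behavior, not the client” might look like the sketch below. The feature names, weights, and threshold are invented, and Valve hasn’t published how their system actually works.

              import math

              def suspicion_score(sample: dict) -> float:
                  """Map a few per-round behavior stats to a 0..1 suspicion score."""
                  # Hypothetical server-side features (nothing is read from the client):
                  #   reaction_ms   - time from enemy visible to first shot
                  #   flick_degrees - crosshair travel just before a kill
                  #   headshot_rate - fraction of kills that are headshots
                  # Hand-picked weights stand in for a trained neural network.
                  z = (-0.02 * sample["reaction_ms"]
                       + 0.05 * sample["flick_degrees"]
                       + 3.0 * sample["headshot_rate"]
                       - 1.5)
                  return 1.0 / (1.0 + math.exp(-z))  # logistic squash to 0..1

              round_stats = {"reaction_ms": 60, "flick_degrees": 45, "headshot_rate": 0.95}
              if suspicion_score(round_stats) > 0.9:
                  print("flag account for human review")  # act on behavior, not client files

          A real system would presumably learn the weights from labeled match data instead of hand-picking them.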

        • SabinStargem@lemmy.today · ↑2 ↓1 · 19 hours ago

          Fixing my PC’s rebooting issue, for one. It diagnosed the error logs I gave it and suggested using my motherboard’s Load-Line Calibration feature to prevent the shutdowns. Before, I could get over a dozen reboots in a day.

          For me, not having sudden reboots gives me a great deal of mental peace.

        • Bo7a@piefed.ca · ↑8 · edited 1 day ago

          They are very good pattern-matching machines. Most of what our life scientists do is find patterns, like which antibodies pair with which cellular components, so that things like cancer can be predicted before they become a problem.

          They are also very good at determining locations for clinical trials based on criteria found in previous trials, which would be nearly impossible for a human to do.
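
          As a cartoon of what “finding patterns” means here, the sketch below fits a classifier to completely synthetic data; the “binding assay” features are invented, and it only shows the shape of the workflow (measurements in, learned decision rule out), not a real biomarker model.

              import numpy as np
              from sklearn.linear_model import LogisticRegression

              # Made-up measurements: 5 antibody-binding assays per sample.
              rng = np.random.default_rng(0)
              healthy = rng.normal(loc=1.0, scale=0.3, size=(200, 5))
              cancer = rng.normal(loc=1.6, scale=0.3, size=(200, 5))  # shifted pattern
              X = np.vstack([healthy, cancer])
              y = np.array([0] * 200 + [1] * 200)  # 0 = healthy, 1 = cancer

              model = LogisticRegression().fit(X, y)
              print("training accuracy:", model.score(X, y))  # the pattern is easy to find here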

        • FauxLiving@lemmy.world · ↑14 · edited 2 days ago

          Predict protein structures better than any other method.

          Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even in cases in which no similar structure is known. We validated an entirely redesigned version of our neural network-based model, AlphaFold, in the challenging 14th Critical Assessment of protein Structure Prediction (CASP14), demonstrating accuracy competitive with experimental structures in a majority of cases and greatly outperforming other methods.

      • XLE@piefed.social · ↑3 ↓7 · edited 2 days ago

        The fun part about those other uses, like MRIs, is that they require the work of skilled professionals and then apparently weaken the skills of those same professionals, which sure sounds like a nasty downward spiral.

        Using AI Made Doctors Worse at Spotting Cancer Without Assistance

        This is effectively pitching potential snake oil to the uninformed, while ignoring every real-life issue in the medical industry and the side effects it would cause.

        • FauxLiving@lemmy.world · ↑9 ↓1 · 2 days ago

          Sure, tools make people worse at doing the thing without tools.

          Using AutoCAD made draftsmen worse at drafting by hand, but that doesn’t matter because there is no occasion where you need to draft complex plans without a computer. If AI diagnosis makes doctors worse at reading MRIs… that would only matter in a world where they’re reading MRIs but don’t have access to a computer. There is no hospital with a working MRI machine that couldn’t also access these tools.

          The important thing is that the doctors, when using these AI tools, are measurably more effective. The result is the thing that matters for public health, not any individual’s ability to operate without their tools.

          https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-ai-detects-pancreatic-cancer-up-to-3-years-before-diagnosis-in-landmark-validation-study/

          Researchers used the AI model to analyze nearly 2,000 CT scans, including scans from patients later diagnosed with pancreatic cancer — all originally interpreted as normal. The system, called the Radiomics-based Early Detection Model (REDMOD), identified 73% of those prediagnostic cancers at a median of about 16 months before diagnosis — nearly double the detection rate of specialists reviewing the same scans without AI assistance.

          Doubling the early detection rate of one of the deadliest types of cancer will result in many more lives being saved.

            • FauxLiving@lemmy.world · ↑6 ↓1 · 2 days ago

              Machine Learning isn’t restricted to neural networks.

              Radiologists and pathologists have always had a massive error rate because of human cognitive bias.

              Seems like that’s a bad thing and we should be happy that there are tools which improve their accuracy.

          • XLE@piefed.social · ↑1 · 2 days ago

            Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI? Not only would that fit with their well-known profit motive, it is explicitly what AI companies claim they want to do.

            I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.

            • FauxLiving@lemmy.world · ↑4 ↓1 · 2 days ago

              Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI?

              Yes.

              Nothing about this tool replaces experts any more than a calculator or computer can replace a human mathematician.

              I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.

              I don’t assume that AI will always be used ethically (see: War, LLM propaganda bots, etc). Like every technology it is possible to do bad things with it and it will require regulations and laws addressing this.

              Dismissing a technology because it is used by bad people, if you actually applied that standard consistently in your life, would have you living naked in a cave without access to fire or tools.

              You don’t need to believe in a utopia to understand that a world where 70% of pancreatic cancer is detected 3 years earlier is better than one where 30% of pancreatic cancer is detected 3 years earlier.

              • XLE@piefed.social · ↑3 · 2 days ago

                FauxLiving, I appreciate your guarantees about the future, but can you demonstrate why the for-profit medical and AI industries wouldn’t cut corners if the AI behaved the way you hope it will?

                • FauxLiving@lemmy.world · ↑2 ↓2 · 2 days ago

                  First, this is a peer-reviewed result, not me expressing my hopes.

                  Second, this application does not replace radiologists. It is a tool for radiologists in one specific type of diagnosis.

                  If you have some hypothetical future outcome in mind, then the burden of proof is on you to prove your position, not on me to disprove it.

                  The data shows that this system works.

                  • XLE@piefed.social · ↑2 ↓2 · 22 hours ago

                    FauxLiving, the burden of proof is on you to show us why your AI utopian vision will happen as you predict it. A paper does not guarantee your fantasy.

            • SaveTheTuaHawk@lemmy.ca · ↑1 · 2 days ago

              Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI?

              That would create legal liability. The reality is that all radiology scans and pathology slide images are cross-checked by software, and if there is a discrepancy, another pathologist is consulted. This is because the error rate of pathologists and radiologists is, conservatively, 1%, which is far too high.
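
              Sketched with hypothetical function names (real systems are vendor-specific), that cross-check workflow is roughly:

                  def review_scan(scan, pathologist_read, software_read, second_opinion):
                      """Escalate to a second human reader whenever the two reads disagree."""
                      if pathologist_read == software_read:
                          return pathologist_read  # reads agree: sign off
                      return second_opinion(scan)  # discrepancy: consult another pathologist

                  # Hypothetical usage:
                  result = review_scan("slide_0042", "benign", "suspicious",
                                       second_opinion=lambda scan: "malignant")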

        • Flatfire@lemmy.ca · ↑8 ↓1 · 2 days ago

          There’s a balance to be struck here. Relying on automation tooling wholesale will always make you worse. There’s a reason that even though we have calculators, it’s important to know the fundamental maths that would let you perform those same calculations yourself. For the majority of people, it’s probably not critical, but if you need to validate that information, you certainly want to be able to understand how the original conclusion was drawn.

          The same goes for software engineering, where AI is seeing heavy use. People asking it to build whole programs receive bug-riddled and inefficient code, but software engineers who use it for rapid prototyping, or to reduce the work of rewriting common functions across projects, are going to be more effective because they understand what the resulting structure should look like.

          AI is not a replacement for the human, and if there’s a future for it, it will be assistive to the fundamentals and knowledge human specialists already possess. But that requires the continued education and development of skills within the industries these tools are deployed in.

          • XLE@piefed.social · ↑2 ↓2 · edited 2 days ago

            Code generation and medical result generation are similar enough to compare (I think), but to expound on the point I was making to the other person I replied to: there is far less medical data online than there is code. We basically have every coding textbook online, and tons of examples to create scaffolds from. We don’t have nearly as much medical data, and the people promoting these tools to the medical field tend to be the tech bros who don’t mention the caveats of what their products can do.

            In other words, for AI to be good in medicine, it would need to be rolled out by none of the people who are currently pushing for it, and the caveats would need to be explained in a way that none of them do. (It’s not objective, it will not create new science like OpenAI CEO Sam Altman claims, etc.) If AI boosters managed to convince the medical field of the same things they have already convinced politicians and journalists of, I think the result would be rapid degradation of treatment quality, deskilling, and a lot of unnecessary deaths. And boosters who promote the potential benefits without acknowledging that are being very reckless.