Sure, tools make people worse at doing the thing without tools.
Using AutoCAD made draftsmen worse at hand drafting, but that doesn’t matter because there is no occasion where you need to draft complex plans without a computer. If AI diagnosis makes doctors worse at reading MRIs, that would only matter in a world where they’re reading MRIs but don’t have access to a computer. No hospital with a functional MRI machine lacks access to these tools.
The important thing is that the doctors, when using these AI tools, are measurably more effective. The result is the thing that matters for public health, not any individual’s ability to operate without their tools.
> Researchers used the AI model to analyze nearly 2,000 CT scans, including scans from patients later diagnosed with pancreatic cancer — all originally interpreted as normal. The system, called the Radiomics-based Early Detection Model (REDMOD), identified 73% of those prediagnostic cancers at a median of about 16 months before diagnosis — nearly double the detection rate of specialists reviewing the same scans without AI assistance.
Doubling the early detection rate of one of the deadliest cancers will result in many more lives being saved.
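To put “nearly double” in concrete terms, here’s a back-of-the-envelope sketch. The 73% figure is reported above; the unassisted baseline (~37%, i.e. roughly half of 73%) and the cohort size are illustrative assumptions, not numbers from the study:

```python
# Back-of-the-envelope comparison of early-detection rates.
# 0.73 is the sensitivity reported above; the unassisted baseline
# and cohort size are illustrative assumptions only.
PREDIAGNOSTIC_CASES = 1000       # hypothetical cohort of missed scans
AI_SENSITIVITY = 0.73            # reported: 73% caught ~16 months early
UNASSISTED_SENSITIVITY = 0.37    # assumed: "nearly half" of 73%

caught_with_ai = PREDIAGNOSTIC_CASES * AI_SENSITIVITY
caught_without = PREDIAGNOSTIC_CASES * UNASSISTED_SENSITIVITY
extra_early_detections = caught_with_ai - caught_without

print(f"Extra early detections per {PREDIAGNOSTIC_CASES} cases: "
      f"{extra_early_detections:.0f}")
```

Under those assumed numbers, that is hundreds of additional patients per thousand whose cancer is flagged over a year earlier.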
Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI? Not only would that fit with their well-known profit motive, it is explicitly what AI companies claim they want to do.
I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.
> Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI?
Yes.
Nothing about this tool replaces experts any more than a calculator or computer can replace a human mathematician.
> I would love to live in a utopia where AI can be used ethically, but it is dangerous to promote the assumption that it magically just will be.
I don’t assume that AI will always be used ethically (see: war, LLM propaganda bots, etc.). Like every technology, it can be used to do bad things, and that will require regulations and laws.

Dismissing a technology because bad people use it is a standard that, applied consistently, would have you living naked in a cave without fire or tools.

You don’t need to believe in a utopia to understand that a world where 70% of pancreatic cancers are detected 3 years earlier is better than one where only 30% are.
FauxLiving, I appreciate your guarantees about the future, but can you demonstrate why the for-profit medical and AI industries wouldn’t cut corners if the AI behaved the way you hope it will?
FauxLiving, the burden of proof is on you to show us why your AI utopian vision will happen as you predict it. A paper does not guarantee your fantasy.
> Are you confident that the American healthcare system wouldn’t declare experts to be a redundancy and simply replace them with the AI?
That would create legal liability. The reality is that all radiology scans and pathology slide images are already cross-checked by software, and if there is a discrepancy, another pathologist is consulted. This is because the error rate of pathologists and radiologists is conservatively 1%, which is far too high.
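The double-read workflow described above can be sketched as a simple triage rule. The names and thresholds here are illustrative, not from any real PACS or lab system, which would be far more involved:

```python
# Sketch of the cross-check workflow described above: software
# reviews each human read, and any discrepancy escalates the case
# to a second specialist. Names/thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Read:
    finding: str        # e.g. "normal" or "suspicious"
    confidence: float   # reader or model confidence, 0..1

def needs_second_reader(human: Read, software: Read,
                        uncertainty_threshold: float = 0.2) -> bool:
    """Escalate when the human and software reads disagree, or when
    they agree but the software is unusually uncertain."""
    if human.finding != software.finding:
        return True
    return software.confidence < uncertainty_threshold

# A discrepant pair gets escalated to a second pathologist:
print(needs_second_reader(Read("normal", 0.9), Read("suspicious", 0.8)))
# An agreeing, confident pair does not:
print(needs_second_reader(Read("normal", 0.9), Read("normal", 0.95)))
```

The point of the sketch: the software is a tripwire that routes work to more humans, not a replacement for them.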
> Sure, tools make people worse at doing the thing without tools.
>
> Using AutoCAD made draftsmen worse at hand drafting, but that doesn’t matter because there is no occasion where you need to draft complex plans without a computer. If AI diagnosis makes doctors worse at reading MRIs, that would only matter in a world where they’re reading MRIs but don’t have access to a computer. No hospital with a functional MRI machine lacks access to these tools.
>
> The important thing is that the doctors, when using these AI tools, are measurably more effective. The result is the thing that matters for public health, not any individual’s ability to operate without their tools.
>
> https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-ai-detects-pancreatic-cancer-up-to-3-years-before-diagnosis-in-landmark-validation-study/
>
> Doubling the early detection rate of one of the deadliest cancers will result in many more lives being saved.
That’s not AI. Those algorithms are pattern matching developed at Carnegie Mellon 15 years ago; now they want to call it AI.

Radiologists and pathologists have always had a massive error rate because of human cognitive bias.
Machine Learning isn’t restricted to neural networks.
Seems like that’s a bad thing and we should be happy that there are tools which improve their accuracy.
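On the “that’s just pattern matching, not AI” point: machine learning predates and extends well beyond neural networks. A classic non-neural learner, such as a 1-nearest-neighbour classifier, is a few lines of code (the 2-D feature vectors and labels below are toy data, purely illustrative):

```python
# Minimal 1-nearest-neighbour classifier: machine learning with no
# neural network anywhere. Training data is toy 2-D feature vectors
# with made-up labels, purely for illustration.
import math

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

train = [((0.1, 0.2), "normal"),
         ((0.9, 0.8), "suspicious"),
         ((0.2, 0.1), "normal")]

print(nearest_neighbor(train, (0.85, 0.9)))  # suspicious
```

Whether you call this “AI” is a naming argument; it learns from labeled examples either way, which is the definition of machine learning.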
> FauxLiving, I appreciate your guarantees about the future, but can you demonstrate why the for-profit medical and AI industries wouldn’t cut corners if the AI behaved the way you hope it will?
First, this is a peer-reviewed result, not me expressing my hopes.
Second, this application does not replace radiologists. It is a tool for radiologists in one specific type of diagnosis.
If you have some hypothetical future outcome in mind, then the burden of proof is on you to prove your position, not on me to disprove it.
The data shows that this system works.