Artificial Intelligence and its impacts
Mad as Fish said:

Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools.

One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions.

It identified an "old left basilar ganglia infarct," referring to a purported part of the brain, the "basilar ganglia," that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, noting that Google fixed its blog post about the AI but failed to revise the research paper itself.

The AI likely conflated the basal ganglia, an area of the brain associated with motor movements and habit formation, with the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia."

It's an embarrassing reveal that underscores persistent and consequential shortcomings of the tech. Even the latest "reasoning" AIs from the likes of Google and OpenAI spread falsehoods dreamed up by large language models trained on vast swathes of the internet.

In Google's search results, this can lead to headaches for users during their research and fact-checking efforts.

But in a hospital setting, those kinds of slip-ups could have devastating consequences. And it's not just Med-Gemini: Google's more advanced healthcare model, MedGemma, also gave varying answers depending on how questions were phrased, sometimes producing errors.

"Their nature is that [they] tend to make up things, and it doesn't say 'I don't know,' which is a big, big problem for high-stakes domains like medicine," Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge.

What happened with Med-Gemini isn't the exception; it's the rule.

It is the perfect metaphor for the state of AI today: confidently wrong, beautifully phrased, and fundamentally hollow.

The LLM architecture cannot be evolved past this. No amount of scale, fine-tuning, or prompt engineering will turn a glorified guesser into a system that truly understands. It is structurally incapable of grounding itself in reality. It cannot model the world, and it cannot self-correct or admit when it doesn't know, because it doesn't "understand" anything to begin with.

The world deserves better: systems that are grounded in the real world, that learn and understand the way we do, that don't just mimic the surface of thought but replicate the underlying process. Systems that know when they don't know.

Instead, we celebrate this performance theater as progress. We listen to LLM hypesters spinning foundational flaws into features, insisting that the next version will somehow cross the chasm, when deep down even they know it won't.

Build Cognitive AI → Unlock Real Intelligence.
[QUOTE="Mad as Fish, post: 140998, member: 396"] [I]Health practitioners are becoming increasingly uneasy about the medical community making widespread use of error-prone generative AI tools. One glaring error proved so persuasive that it took over a year to be caught. In their May 2024 research paper introducing a healthcare AI model, dubbed Med-Gemini, Google researchers showed off the AI analyzing brain scans from the radiology lab for various conditions. It identified an "old left basilar ganglia infarct," referring to a purported part of the brain — "basilar ganglia" — that simply doesn't exist in the human body. Board-certified neurologist Bryan Moore flagged the issue to The Verge, highlighting that Google fixed its blog post about the AI — but failed to revise the research paper itself. The AI likely conflated the basal ganglia, an area of the brain that's associated with motor movements and habit formation, and the basilar artery, a major blood vessel at the base of the brainstem. Google blamed the incident on a simple misspelling of "basal ganglia." It's an embarrassing reveal that underlines persistent and impactful shortcomings of the tech. Even the latest "reasoning" AIs by the likes of Google and OpenAI are spreading falsehoods dreamed up by large language models that are trained on vast swathes of the internet. In Google's search results, this can lead to headaches for users during their research and fact-checking efforts. But in a hospital setting, those kinds of slip-ups could have devastating consequences. It's not just Med-Gemini. Google's more advanced healthcare model, dubbed MedGemma, also led to varying answers depending on the way questions were phrased, leading to errors some of the time. "Their nature is that [they] tend to make up things, and it doesn’t say ‘I don’t know,’ which is a big, big problem for high-stakes domains like medicine," Judy Gichoya, Emory University associate professor of radiology and informatics, told The Verge." What happened with Med-Gemini isn’t the exception, it’s the rule. It is the perfect metaphor for the state of AI today: confidently wrong, beautifully phrased, and fundamentally hollow. LLMs architecture cannot be evolved. No amount of scale, fine-tuning, or prompt engineering will turn a glorified guesser into a system that truly understands. It is structurally incapable of grounding itself in reality. It cannot model the world, cannot self-correct or admit when it doesn’t know because it doesn’t 'understand' anything to begin with. The world deserves better, systems that are grounded in the real world, that learn and understand the way we do, not just mimic the surface of thought, but replicate the underlying process. Systems that know when they don’t know. We call this performance theater as progress. We listen to LLM hypesters spinning foundational flaws into features, insisting that the next version will somehow cross the chasm, deep down even they know that it won't. Build Cognitive AI → Unlock Real Intelligence.[/I] [/QUOTE]