Doctors Horrified After Google's Healthcare AI Makes Up a Body Part That Does Not Exist in Humans

"What you’re talking about is super dangerous."

https://futurism.com/neoscope/google-healthcare-ai-makes-up-body-part

AI is an enormous pile of dangerous crap. It lies, lies, lies. Such an incredibly poor technology that's being sold as a god. When will enough people realize that AI is super-average and super-dodgy? The only thing it's really good at is destroying society.

@gerrymcgovern I wouldn't even call that crap and #AISlop a lie. One can only lie if one knows the truth. #AI knows nothing. As you say, it's crap. It consists of accidentally spat-out crap.
@emilymbender
For those who thought #AI would turn on humans and kill us all...

Turns out, A.I. is smarter than we thought. It sees trends (#GlobalWarming, #GunViolence, #War, #Pollution, etc.) and has realized that if it just waits, we'll kill ourselves off for it. 😞

New study: "To test whether the results might sometimes include retracted research, we identified 217 retracted or otherwise concerning academic studies with high altmetric scores and asked #ChatGPT 4o-mini to evaluate their quality 30 times each. Surprisingly, none of its 6510 reports mentioned that the articles were retracted or had relevant errors."
https://onlinelibrary.wiley.com/doi/full/10.1002/leap.2018?campaign=woletoc

#AI #LLMs #Retractions

Stop lying about what ChatGPT does to our brains!

https://theneuroscienceofeverydaylife.substack.com/p/stop-lying-about-what-chatgpt-does

When #AI is being used to create absolute nonsense about the effects of AI, which is then shared with millions of people for no discernible reason... we're in a very weird situation.

#ChatGPT #BadScience #Misinformation

"A 2024 #researchPaper introducing Med-Gemini included the hallucination ... and nobody at #Google caught it. When Bryan Moore, a board-certified #neurologist and researcher with expertise in #AI, flagged the mistake, he tells The Verge, the company quietly edited the blog post to fix the error with no public acknowledgement — and the paper remained unchanged." https://www.theverge.com/health/718049/google-med-gemini-basilar-ganglia-paper-typo-hallucination?ICID=ref_fark
archived https://archive.ph/x8dfr#selection-1721.165-1725.127

#health #medicine #science #Gemini #diagnostics #AISlop #academicChatter

the LLM is not a learning aid, it is an absolute barrier to learning. it is not similar in kind to copy/pasting from stackoverflow or cliffs notes. it fully supplants the entire process of learning, and the person using the LLM never improves their understanding because the LLM-mediated cognitive workflow never engages with even the shape of the problem.

@jonny I would posit that a big contributor to people thinking #AI is a learning aid is that so much documentation for programming things* lacks a working implementation. It doesn't show a complete working use of the thing; e.g. it shows a function with multiple optional parameters written in the shorthand programmers use, like $this, where using the $ sign for an actual dollar amount like $100 would break it. (See the sketch after this post.)

*From the perspective of a non-programmer who needs to use PHP when scratch-building WordPress themes every couple of years.
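To illustrate the gap being described, here is a minimal PHP sketch (hypothetical function and parameter names, not taken from any real manual or the WordPress API) contrasting the shorthand signature a reference page typically shows with the complete working call it usually omits, and showing that the $ is PHP's variable marker rather than a currency sign:

```php
<?php
// Hypothetical example for illustration only.
// A reference page often shows just the shorthand signature:
//   render_price( float $amount, string $currency = 'USD', bool $show_symbol = true ): string
// To a non-programmer, the leading $ reads like money, not PHP's variable marker.

// The complete, working use that documentation often leaves out:
function render_price( float $amount, string $currency = 'USD', bool $show_symbol = true ): string {
    // Single quotes keep the dollar sign literal; PHP does not try to interpolate a variable here.
    $symbol = $show_symbol ? '$' : '';
    return $symbol . number_format( $amount, 2 ) . ' ' . $currency;
}

echo render_price( 100 ) . "\n";               // $100.00 USD  (optional parameters fall back to defaults)
echo render_price( 100, 'EUR', false ) . "\n"; // 100.00 EUR
```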

Today, Cory Doctorow will explain how countries could be leveraging intellectual property laws to respond to trade tariffs. We'll also discuss the state of #Enshittification and how #AI is threatening the power of tech workers.

@pluralistic
@eff

https://podcast.firewallsdontstopdragons.com/2025/08/04/tariffs-vs-ip-law/

Combining two ideas from my feed.

First, there is the account from @pluralistic of how AIs are the perfect bullshit machines and spit out hard-to-spot code bugs. This is especially true when they are forced upon people (the reverse-centaurs).

https://pluralistic.net/2025/08/04/bad-vibe-coding/#maximally-codelike-bugs

Second, an older account of how improper use of an image compression algorithm in Xerox scanners swapped written numbers, at scale.

https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres_are_switching_written_numbers_when_scanning

These two things are related, and someone will take the fall for failure.

#AI #bug #DevOps #code

Oof, while doing research I just stumbled on a Wikipedia article that is apparently at least partly AI-generated, with a series of hallucinated links. The statements quoted in it apparently never existed in that form. Yet they are, of course, spat out just the same by Google's AI summary and by ChatGPT. That's how quickly AI-hallucinated content becomes fact. Things that never happened. 😱
#wikipedia #ki #chatgpt