@maxleibman

i despise #AI

but i heard an #NPR segment about a woman with throat cancer

before her voicebox was removed, she recorded hours of her voice

instead of a stephen hawking robot voice, with AI working on her recordings, she can talk like she always did, preserving her sarcasm, her inflections, etc

#AI is like #crypto: it has promise

but like crypto, most of AI is taken over by #techBros who use it for vile purposes

but there are a few good niches

https://www.npr.org/sections/shots-health-news/2025/07/22/nx-s1-5464154/oral-cancer-laryngectomy-glossectomy-ai-voice-text-to-speech

#disability

@benroyce @maxleibman

As a fellow-AI-despiser, I'd note that the term "AI" has gotten so squishy it can slither out of any criticism.

What this woman is using probably isn't generative AI - it certainly doesn't need to be. Rather, this could be achieved with a task-specific machine-learning algorithm.

An efficient algorithm for this would not be anything like generative AI. So I'd hold that there remain no legitimate uses for "AI" as that term is typically used.

@jmcclure @maxleibman

Yup

"Wait, so if you walk up to the glass door, it senses you and automatically slides the door open? That's some cool AI!"

That's just about where we're at with the term nowadays

Say "AI" and there's a big jump in financial and popular interest; the technical details apparently don't matter

@benroyce
@maxleibman
This is wonderful for her, but having to pay $99/month for a voice is cruel. It's similar to the stories of medical implants where the company goes out of business and the patients are left with obsolete, unsupported, and still-necessary tech in their bodies. The tech is a miracle but the product is a nightmare: she needs to be able to own her own voice.
@benroyce Yeah, Apple has a feature like this for the iPhone for people at risk of losing their voice. I think in the pre-LLM-boom days they would have described it as "machine learning."

Many of my most popular posts here have been mocking OpenAI and Microsoft for their LLM follies, but possible accessibility use-cases give even me baby-with-bathwater–style pause.
@maxleibman @benroyce It's perhaps not helpful to bundle together disparate technologies and use cases when so much is at stake? The term "AI" is meaningless now; it seems to be used for almost any tech. Feels like that term should be avoided because it muddies the waters and prevents meaningful discourse.

Lots of stuff is being branded AI that already existed. Machine translations have existed for ages, and human-like machine voices have been around for ages too. (Apparently Hawking deliberately kept his primitive robot voice as a personal trademark rather than out of technical limitation.)

@maxleibman

yeah unfortunately popular imagination has swallowed all use cases and technologies as "AI" and it just becomes a trigger

like many things, the problem is not the tech, it is us, what we do with it

with AI the real problem is that it is all centrally controlled by plutocrats to warp our minds and our politics, eventually, like with corporate social media

same as it's always been

@benroyce @maxleibman

There are literally zero artificial intelligences. First, there is no working definition of what intelligence even is. And for certain, there is not now nor has there ever been evidence that intelligence exists outside of a living organic meat body. None, ever. Zero.

There are LLMs, which are fascinating technologies, mostly wielded in public by criminal assholes.

There's a crazy number of "machine learning" technologies doing amazing things in aerial mapping, chemistry, metallurgy, image editing etc.

But there's no AIs.

And each use case or application needs to be assessed within its context(s).

All that said, generalized AI does not exist; the criminals pushing chat programs on us and on industry are scumbags, and not one can be trusted. They have unspoken agendas.

Other than the criminal LLM purveyors each of the machine learning technologies needs separate assessment. Or doesn't, really, not by me.

@tomjennings @benroyce @maxleibman

There is a big difference between Chatbots and actual intelligence.

An example.

If you asked a Historian a question about Physics, while they may give you an opinion they would know when their knowledge is insufficient to give a reliable answer, and would tell you so.

If you asked them a question about History not only would they give you a reliable answer, they would assess your understanding of the question to provide an answer you can understand.

They would also know what they don't know and tell you so, and then tell you where to go for a good answer.

Asking a Chatbot is like asking the loudmouth stranger in the pub, regurgitating something they overheard once from someone's mate.

They speak with confidence on subjects they have no expertise in, and they don't know what they don't know.

@benroyce @maxleibman

Consolidating all ML/DL into "AI" is definitely a gimmick to pass off LLMs as huge slabs of raw intelligence that can serve as everything machines. But I’ve been disappointed with some of the critical discourse around AI, which feels like schismogenesis.

Yes, tech is not neutral. And the abuse used to create and promote “AI” sucks. But shaming people who find use for any ML tech (even LLMs) isn’t useful. Pushing for regulation and education, and nudging people to alternatives, matters.

@benroyce @maxleibman Thank @pluralistic. He's brought up the term multiple times. I think he got it from David Graeber and David Wengrow (Dawn of Everything).

It's an extremely relevant term to describe some of the drivers of not just our politics but cultural evolution. https://doctorow.medium.com/schizmogenesis-755bbb6a8515