
Celerity

(53,639 posts)
Fri Jun 6, 2025, 10:17 AM

What Happens When People Don't Understand How AI Works



Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word.

https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/

https://archive.ph/401p7



On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed “Cellarius,” it warned of an encroaching “mechanical kingdom” that would soon bring humanity to its yoke. “The machines are gaining ground upon us,” the author ranted, distressed by the breakneck pace of industrialization and technological development. “Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life.” We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.

Today, Butler’s “mechanical kingdom” is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book—The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

To call AI a con isn’t to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. Altman brags about ChatGPT-4.5’s improved “emotional intelligence,” which he says makes users feel like they’re “talking to a thoughtful person.” Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be “smarter than a Nobel Prize winner.” Demis Hassabis, the CEO of Google’s DeepMind, said the goal is to create “models that are able to understand the world around us.”


These statements betray a conceptual error: Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate—understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.

snip
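For anyone curious what "statistically informed guesses about which lexical item is likely to follow another" looks like in practice, here is a minimal, hypothetical Python sketch of a toy bigram model. It is nothing like a real LLM's architecture or scale (the corpus and numbers are invented purely for illustration); it only shows the core idea of picking the next word by observed frequency rather than by understanding:

import random
from collections import Counter, defaultdict

# Toy corpus standing in for "nearly the entire internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Pick the next word in proportion to how often it followed 'prev':
    # a statistically informed guess, not comprehension.
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate text one probable word at a time.
word, output = "the", ["the"]
for _ in range(6):
    if not following[word]:  # dead end: nothing ever followed this word
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))

A real model replaces the bigram table with a neural network trained on vastly more text, but the output is still, at bottom, one probable word after another.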

patphil

(8,721 posts)
1. AI could be more correctly described as Artificial Idiocy.
Fri Jun 6, 2025, 10:30 AM

Last week I asked Google what the results were for the 2024 election in a certain US House race. It came back and told me the election hadn't happened yet! This was in late April of 2025!
AI is fine for simple factual stuff most of the time, but when it messes up, it really messes up.
Which leads me to question a lot of what it presents as fact.

Ms. Toad

(38,232 posts)
5. It actually isn't good for simple factual stuff.
Fri Jun 6, 2025, 10:57 AM

It abandons facts for conversational flow. If a fact would disrupt the flow, it omits it - or makes up alternate "facts." Because it does this so fluently, there aren't any context clues to tell you which "simple factual stuff" is correct vs. made up.

LLMs are built for conversation - not facts.

highplainsdem

(59,953 posts)
2. Thanks, Celerity! Great article! And I was particularly happy to read in the last paragraph that only 17%
Fri Jun 6, 2025, 10:41 AM

of American adults believe AI will make the US better.

AI marketing/proselytizing IS a con job. Probably the biggest and most dangerous we've ever seen - with the possible exception of the RW propaganda for Trump.

And the AI peddlers who have been cozying up to Trump would have remained a problem even if Harris had been elected.

Bernardo de La Paz

(60,320 posts)
3. LLMs are not the be-all end-all of AI. Just a plateau before the next advance.
Fri Jun 6, 2025, 10:44 AM

There are cycles of AI summers and AI winters in the biz. Between the advances that yield "summers" there are pullbacks and periods of scant investment ("winters").

First there was direct programming, which managed to find a few things people had not, in areas like mathematical theorems and simple robots.

Then came expert systems for things like medical diagnosis. They have not gone away, but of course have their limitations.

Now we are at LLMs. While there is hype, and while "hallucinations" get discounted, the systems are consequential and applicable to more contexts than just chat or search. People are expecting too much from them, and there will inevitably be some disappointment, which is already beginning to show up.

An AI winter is coming when disappointment becomes dominant, but that does not mean AI will go away or that AI is not very powerful. LLMs applied to domains other than language are already finding new drugs, new alloys, and new device designs. Those candidates are then vetted by humans who analyze them and run trials or mechanical tests.

AI, in its current form, is very real though over-hyped. Anyone who writes it off is making a big mistake.

TheProle

(3,898 posts)
4. A tool, when properly applied, can be most effective
Fri Jun 6, 2025, 10:46 AM

It's a game changer for parsing and analyzing data. AI can and will have profound implications in medicine, space exploration, archaeology, and much, much more.

Archaeologists use AI to discover 303 unknown geoglyphs near Nazca Lines

Archaeologists using artificial intelligence (AI) have discovered hundreds of new geoglyphs depicting parrots, cats, monkeys, killer whales and even decapitated heads near the Nazca Lines in Peru, in a find that nearly doubles the number of known figures at the enigmatic 2,000-year-old archaeological site.

A team from the Japanese University of Yamagata’s Nazca Institute, in collaboration with IBM Research, discovered 303 previously unknown geoglyphs of humans and animals – all smaller in size than the vast geometric patterns that date from AD200-700 and stretch across more than 400 sq km of the Nazca plateau.

The new figures, which date back to 200BC, provide a new understanding of the transition from the Paracas culture to the Nazcas, who later created the iconic hummingbird, monkey and whale figures that make up part of the Unesco World Heritage site, Peru’s most popular tourist attraction after Machu Picchu.


https://www.theguardian.com/world/2024/sep/26/nazca-lines-peru-new-geoglyphs
