General Discussion
What Happens When People Don't Understand How AI Works

Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word.
https://www.theatlantic.com/culture/archive/2025/06/artificial-intelligence-illiteracy/683021/
https://archive.ph/401p7

On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed "Cellarius," it warned of an encroaching mechanical kingdom that would soon bring humanity to its yoke. "The machines are gaining ground upon us," the author ranted, distressed by the breakneck pace of industrialization and technological development. "Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life." We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.
Today, Butler's mechanical kingdom is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna, in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking (and, soon, feeling) machines. Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us."
These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate, understandably, because of the misleading ways its loudest champions describe the technology, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
(snip)
5 replies
Thanks, Celerity! Great article! And I was particularly happy to read in the last paragraph that only 17%
highplainsdem
Jun 2025
#2
LLMs are not the be-all end-all of AI. Just a plateau before the next advance.
Bernardo de La Paz
Jun 2025
#3