
hunter

(40,514 posts)
5. The most clueless dogs I've met have better internal models of reality than any AI.
Tue Feb 10, 2026, 02:03 AM

I really don't understand how anyone attributes "intelligence" to these automated plagiarism machines.

There are some aspects of this paper that bother me. For example, I think it's absurd to talk about such things as "LLM Reasoning Failures" when there's no reasoning going on at all.

Are we all so conditioned by our education that we think answering questions or writing short essays for an exam is some kind of "reasoning"? It's not.

I'll give an example: Sometimes I meet Evangelical Christian physicians who tell me they don't "believe in" evolution. They might even "believe" that the earth is merely thousands of years old, not billions. They've obviously passed biology exams to become physicians, and they've witnessed the troublesome quirks of the human body that can only be explained by evolution, yet they've never applied any of that to their own internal model of reality. There's an empty space where those models ought to exist. (Or possibly they are lying to themselves, which is the worst sort of lie.)

With AI it's all empty space. The words go in and the words come out without anything in between.

Whenever I write, I'm always concerned that I'm letting the language in my head do my thinking for me; that I'm being the meat-based equivalent of an LLM. If I'm doing that, I don't really have anything to say. I want all my writing to represent my own internal models of reality as shaped by my own experiences.

LLMs don't have any experiences.

