
hunter

(40,517 posts)
26. You are using the language of the AI promoters.
Tue Feb 10, 2026, 02:22 PM

There is no "they" in AI and it's not "trying" to do anything.

It's as sentient as the filter paper in your chemistry lab or coffee maker, and there are still gaping holes and tears in that filter paper that let a lot of nonsense through.

Patching those holes and tears will never make the machines "intelligent."

I think we've still got a long way to go before we create a machine that's actually intelligent. It's one of those technologies that hangs just beyond our grasp, like fusion power plants or manned trips to Mars. I think it's going to remain so for a long, long time, no matter how many hucksters are trying to sell us futures in it today.


Kick SheltieLover Feb 9 #1
Thanks! highplainsdem Feb 10 #16
Yw! SheltieLover Feb 10 #17
Is it accepted that generative AI reasons? Iris Feb 9 #2
Depends on the person EdmondDantes_ Feb 10 #8
It's called reasoning by people working on and promoting AI, but it's really more a pretense of highplainsdem Feb 10 #13
The main problem is how to assess evidence. Happy Hoosier Feb 10 #23
Thank you for providing this context. Iris Thursday #30
The reasoning aspect is key. cachukis Feb 9 #3
This message was self-deleted by its author Whiskeytide Feb 10 #10
I like your Spock/Kirk analogy, but then I thought ... Whiskeytide Feb 10 #11
I think Spock recognized humanity as a whole cloth. cachukis Feb 10 #24
I wonder how this affects ... rog Feb 9 #4
Whether or not an AI model shows its reasoning - its pretense of reasoning - you should never trust highplainsdem Feb 10 #14
That's an issue that seems to be coming up again and again . . . hatrack Feb 10 #18
With the "bonus" of dumbing yourself down, de-skilling yourself, as you try to let the AI do the work. highplainsdem Feb 10 #19
Same reason I refuse to use AI when writing or researching . . . hatrack Feb 10 #20
You may be missing my point ... rog Feb 10 #21
Summarizing isn't something AI is good at, judging by examples I've seen. Organizing by subject or highplainsdem Feb 10 #25
I just got back from an appointment with my vascular surgeon. rog Feb 10 #27
The most clueless dogs I've met have better internal models of reality than any AI. hunter Feb 10 #5
I've never forgotten a software engineer and machine learning expert saying an amoeba is more intelligent than an LLM. highplainsdem Feb 10 #15
I wonder how Neuro-sama would do on the test sakabatou Feb 10 #6
In a way, this is a computerized version of odins folly Feb 10 #7
This explains why... purr-rat beauty Feb 10 #9
Sam Altman is a serial liar who's fired everywhere he's been - including Open AI. 617Blue Feb 10 #12
LLM's can't really reason. Happy Hoosier Feb 10 #22
You are using the language of the AI promoters. hunter Feb 10 #26
I work in software development. Such Anthropomorphic language is common. Happy Hoosier Feb 10 #29
AI expert Gary Marcus's response to that paper: highplainsdem Feb 10 #28