.... in my work environment. I don't work on it directly, but since this is a research institution we conduct seminars and training, and I've been following through on that. That's one reason I've been interested in this topic online. From my training and limited hands-on experience, her article is accurate.
You are right that LLMs are not all of AI. But they are the AI that the public sees and interacts with, and the AI that shapes their opinion of the field. Most people are not involved in, say, protein-folding research, don't read much about it, and don't understand its importance. But they do interact with chatbots and ask Google questions on the web. So that is where they get their impressions, and their fears.
And there seem to be a lot of folks who are quite willing to exploit those fears. At work, one AI researcher I know calls one outlet, The Futurist, "moral panic as a service." Understanding the actual inner workings of the AI that most people interact with can be very helpful in keeping our feet on the ground and combating overreaction, moral panic, and FUD. I did not see this article as a "takedown." I saw it as an explanation and an attempt to spread understanding.
My AI experience and training, such as they are, indicate to me that all successful AI begins with, and continues with, training some sort of statistical model on a large collection of data. A large statistical model is at the core of every successful AI system I am aware of.
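To make "training a statistical model on data" concrete, here is a toy sketch in Python. This is purely my own illustration of the general principle, not how any production system is built: a bigram model that is "trained" by counting which word follows which in a tiny corpus, and then predicts the most likely next word.

    from collections import Counter, defaultdict

    # Toy "training" corpus -- a real system would use an enormous text collection.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Train: count how often each word follows each other word (a bigram statistical model).
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Return the statistically most likely next word seen during training."""
        if word not in counts:
            return None
        return counts[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> 'cat' (seen twice after 'the' in the corpus)
    print(predict_next("cat"))  # -> 'sat' (tied with 'ate'; first-seen wins the tie)

Real systems replace the counting with billions of learned parameters, but the basic shape is the same: statistics extracted from data, then used to make predictions.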
Are you aware of any different approach?