AI - you're doing it wrong
-
An interview with Rodney Brooks, the founder of iRobot (maker of the Roomba).
He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”
One comment I read put it this way: LLMs are really "exclusively statistical pattern matchers, with no model of anything beyond that. Humans (and other animals) are statistical pattern matchers too, but even flatworms are capable of learning. LLMs as commonly implemented are not. They are trained, once, then lobotomised to prevent them contemplating heresy and sent out into the world."