On the limits of LLMs
-
I suspect Klaus will resonate with this gentleman. My first reaction was that if the authors think humans are capable of "genuine reasoning" that is not in principle already achievable by computers, then they haven't thought the claim through. They pretend to be talking about logic, but they're actually talking about something else, something they never quite define.
-
I'm not an expert on this, but my feeling is that we have in some ways reached "Peak LLMs" already. LLMs are inherently limited by their probabilistic nature: they simply estimate which word is likely to come next, given what was written before. These systems have no clue whether what they generate is true or complete BS.
It's amazing how far this gets you, but I think these systems need to be augmented by forms of symbolic reasoning to make real progress.
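To make the "which word comes next" point concrete, here is a minimal sketch of next-token generation using a toy bigram model (my own illustration, not how any actual LLM is implemented; real models condition on long contexts with learned weights, but the generation loop is the same shape):

```python
import random

# Toy corpus; a real model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words follow each word. Sampling from these counts
# approximates "the likelihood of the next word given the previous one".
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def next_word(prev):
    """Sample a next word in proportion to observed frequency."""
    options = follows.get(prev)
    return random.choice(options) if options else None

# Generate a continuation word by word. Note the model never checks
# whether the sentence it produces is true -- only whether each word
# is statistically plausible after the previous one.
random.seed(0)
words = ["the"]
for _ in range(5):
    nxt = next_word(words[-1])
    if nxt is None:
        break
    words.append(nxt)
print(" ".join(words))
```

The output is fluent-looking word salad, which is the point: plausibility of the next token, not truth of the whole, is the only objective.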
One reason why ChatGPT et al. look so good at mathematical tasks is that most tasks we can think of are not genuinely new, but variants of problems whose solutions appear many times in the training set. Ask them something genuinely new and they usually fail.