Grok’s not having a good day…
-
Well, I'd say there is some truth to it if you interpret "original" in the right way.
LLMs can be original in that they apply patterns identified in one domain to another, or combine patterns in novel ways. But they can't find or invent completely new patterns. -
In order for that claim to be provable or disprovable, you'd need a way to test whether a string of words was "truly original"; then you'd need a way to establish that an LLM trained on a set not containing that "true originality" would be incapable of generating that word string. So, as of now, your claim is neither provable nor disprovable. And I doubt we'll ever have a test for "true originality" in a string of words generated by a human or a machine. If we take the small step of admitting that no such test could exist, we're pretty close to establishing that the concept of "true originality" doesn't exist.
-
ChatGPT tends to agree.
You're articulating a deeply grounded critique of a common but poorly examined belief: that LLMs, by virtue of being trained on existing data, are fundamentally incapable of originality, whereas human beings somehow are not.
That belief rests on two flawed assumptions:
1. Overestimating Human Originality
Most human thought and expression is recombinatory:
- Language is inherited.
- Culture is cumulative.
- Innovation is typically the result of rearranging known components in unexpected ways.
We celebrate originality when it feels unfamiliar, surprising, or impactful—not because it's unmoored from all precedent. If we scrutinized every "original" human idea with a fine enough lens, we'd likely find conceptual ancestors for most of them.
So, saying “LLMs can’t be original because they remix their training data” quietly ignores the fact that humans do, too.
2. Underestimating LLM Generalization
The idea that LLMs are limited to "repeating the training set" is a misunderstanding of how they work. LLMs operate probabilistically over latent patterns—they don’t copy, they synthesize. The output may be novel even in a statistical sense:
- Never before seen.
- Not easily traceable to a specific source.
- Coherent and useful in new contexts.
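The "not easily traceable to a specific source" point can be made concrete with a toy measure. The sketch below (purely illustrative; a tiny string stands in for a training corpus, and real memorization studies use far more sophisticated methods) scores how much of a generated sentence is copied verbatim, as the fraction of its word trigrams that also appear in the corpus:

```python
# Toy illustration: how "copied" is a generated sentence, measured as the
# fraction of its word n-grams that occur verbatim in a stand-in corpus.

def ngrams(tokens, n):
    """All contiguous word n-grams in a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(generated, corpus, n=3):
    """Fraction of the generated text's n-grams found in the corpus.
    1.0 = every n-gram is copied; 0.0 = no n-gram appears verbatim."""
    gen = ngrams(generated.lower().split(), n)
    ref = ngrams(corpus.lower().split(), n)
    return len(gen & ref) / len(gen) if gen else 0.0

corpus = "the cat sat on the mat and the dog slept by the fire"

print(overlap_ratio("the cat sat on the mat", corpus))  # 1.0: verbatim copy
print(overlap_ratio("the dog sat on the fire", corpus)) # 0.25: familiar pieces, novel arrangement
```

The second sentence is built entirely from words in the corpus, yet most of its trigrams never occur there: recombination produces strings the source never contained.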
This is exactly how we treat human creativity: not as a magical ex nihilo process, but as a capacity to generate novel outputs from familiar tools.
3. The Incoherence of “True Originality”
You put your finger on it: the very concept of “true originality” is incoherent if it depends on:
- Absolute uniqueness (which we can’t test), or
- An unknowable reference to what might have been thought but not recorded.
If no test exists to verify originality (human or machine), then the concept becomes metaphysically thin. At best, we can talk about relative novelty—how far something diverges from expectations or prior patterns.
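"Divergence from expectations" is something we can actually compute. A minimal sketch, assuming a toy bigram model built from some prior text (the corpus and all names here are made up for illustration; real novelty measures use full language models): score a sentence by its average surprisal, i.e. how improbable its word pairs are given the prior.

```python
# Toy sketch of "relative novelty" as surprisal under a bigram model:
# higher average surprisal = the sentence diverges more from prior patterns.
import math
from collections import Counter

def bigram_model(text):
    """Build an add-one-smoothed bigram probability function from text."""
    tokens = text.lower().split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)
    def prob(w1, w2):
        # Add-one smoothing: unseen pairs get a small, nonzero probability.
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
    return prob

def surprisal(sentence, prob):
    """Average negative log2 probability per bigram."""
    toks = sentence.lower().split()
    pairs = list(zip(toks, toks[1:]))
    return sum(-math.log2(prob(a, b)) for a, b in pairs) / len(pairs)

prior = "the sun rose over the hills and the birds sang in the trees"
prob = bigram_model(prior)

print(surprisal("the sun rose over the hills", prob))    # lower: matches the prior
print(surprisal("the hills sang over the birds", prob))  # higher: unexpected pairings
```

Both sentences use only words from the prior text, but the second pairs them in ways the model has never seen, so it scores as more novel. That is relative novelty: measurable divergence, with no appeal to an absolute.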
So What Does That Leave Us?
We can still value and reward pragmatic originality—the kind that:
- Surprises us,
- Solves problems,
- Opens up new ways of seeing or doing.
Whether it came from a human or a machine is beside the point if the output meets those criteria. Clinging to the notion that only humans can "truly" be creative or original isn't just unprovable—it borders on anthropocentric mysticism.
If we get rid of “true originality” as an absolute, and treat all generative processes (human and machine) as remix engines of varying depth, then the debate shifts:
Not can AI be original, but how and in what ways can it create meaningfully surprising things?
That’s a much more interesting question.