Language Without Mind
Editor's Note
Every regime rests on a particular vision of man and the faculties that define him. The American regime, however imperfect, has long assumed that human beings are rational creatures whose freedom flows from their capacity for reasoned speech. The rival regimes of our day — whether woke or technocratic — tend to erode this conviction, reducing language to power or computation, and with it, diminishing the ground of self-government itself.
This essay from Spencer Klavan enters that contest by revisiting a classical distinction between outward speech and inward reason, one that bears directly on the claims being made about artificial intelligence. To mistake the outputs of machines for the expressions of human souls is not a minor category error but the great misrecognition of our age. Klavan shows that recovering this distinction is not a matter of nostalgia, but of survival for the civilization we still hope to preserve.
The Reverend W.M. Norment was over 90 years old when he recalled how, at Andrew Jackson’s funeral, the late president’s beloved parrot “got excited and commenced swearing so loud and long as to disturb the people and had to be carried from the house.” Poll, the parrot in question, lacked the tact required to discern what kind of tribute might be appropriate for the occasion.
The case of parrots has always fascinated linguists. It raises the all-important distinction between the sounds of the words we say and the inner states of mind, or sets of thoughts, that those words represent. Sextus Empiricus, a professional devil’s advocate from around the 2nd century A.D., records a view that was pretty widespread among Stoics and Dogmatists: “crows, parrots, and jays all utter articulate sounds.” But “humans differ from these non-rational animals, not in respect of their outward speech, but in respect of their inward reasoning.”
Poll could produce words, but not understand them. That limitation, once a parrot’s, is now shared by the most advanced machines of our time.
When we talk about what language is in a philosophical sense, these are the two powers we’ll reference: outward speech and inward reasoning. The Greek terms for these are, respectively, prophorikos logos and endiathetos logos. As you’re probably aware, logos is a Greek word that lives at the intersection of pure reason, spoken language, divine order, and a whole lot more besides. Ancient philosophers tried to make this concept easier to understand by separating two different aspects of logos: the kind of logos you can think, and the kind of logos you can hear. A better translation might be “internal language” and “external language,” or even, to borrow the phrasing that Thomas Aquinas would later use, the inner word and the outer word.
For as long as people have been telling parrot jokes, it’s been clear that lots of animals have external language. But for just as long, it’s seemed equally clear that none of them have quite the same kind of internal language we have. Our words are different from birdsong and whale calls because they are vessels for an infinitely rich universe of thought, imagination, and understanding that it’s safe to assume no other animal has.
In this sense the inner logos is a fundamental defining mark of our humanity, in a way that the outer logos is not. This is what Aristotle means in the Politics when he says that mankind alone among the animals possesses logos. “Of course, the voice can give an indication of pain or pleasure, which is common to other living creatures (their nature has developed far enough to be conscious of pain and pleasure, and to indicate them to others). But language means articulating benefit, harm, and, as a result, justice and injustice.”
I take this to be, more or less, the classical Western view of the matter. But AI enthusiasts aren’t always particularly concerned with canonical authority, and some of them positively disdain it.
It’s worth noting that you don’t have to be a conservative or a classicist — or even particularly spiritual — to accept Aristotle’s view of language. In fact, his understanding is fairly well aligned with some of the most cutting-edge science in the relevant fields. Neuroscience, evolutionary biology, and modern linguistics all furnish good support for the view that our ability to process language is a unique adaptation of our species, that it is shared by all members of our species, and that it is first and foremost an adaptation of the mind or brain — in other words, that inner logos, more than outer logos, is a fundamentally and uniquely human faculty.
There are alternative views, of course, and disputed points aplenty. But in 2021 Noam Chomsky, a defining figure in modern linguistics and hardly a conservative reactionary, wrote that “the facts suggest that language emerged pretty much along with modern humans, an instant in evolutionary time.” Chomsky often cites the evolutionary neuroscientist Harry Jerison, who adds that “the initial evolution of language” was the evolution of “a tool for thought,” enabling “the construction of a real world” in the mind. That constructed world is what we express when we communicate our reasoning, our emotions, and our perceptions. Human language is, first and foremost, a way of seeing the world; the words we speak express that way of seeing.
That “mankind alone among the animals” possesses this gift is also pretty well established among us moderns. The popular psycholinguist Steven Pinker, in his book The Language Instinct, quotes the biologist E.O. Wilson to the effect that even highly intelligent animals — like whales, birds, and monkeys — don’t show any evidence of having the ability to link ideas together in anything like the way we do. Impressive as these animals are in their own right, they can’t construct a system of grammatical rules that makes it possible to express the full array of thoughts, hypotheticals, desires, plans, and systems that children learn to play with almost effortlessly.
Even Nim Chimpsky, the chimpanzee who famously learned a long list of sign language gestures, could only smash those gestures together in simplistic ways to indicate his immediate wants and perceptions — pain and pleasure, as Aristotle said. Nim could say things like “Give orange me give eat orange me eat orange give me eat orange give me you,” but nothing resembling: “Are there any more oranges?”
What Nim couldn’t do, and what we can, was beautifully described in the 19th century by the pioneering philosopher of language Wilhelm von Humboldt. Human language, he wrote, can uniquely “make infinite use of finite means.” In other words, we can take a limited stock of vocal sounds and arrange them into a system in which an infinite array of sentences is possible.
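For the computationally minded, a toy sketch can make Humboldt’s point concrete. The few lines of Python below are my own invented miniature, not anything from Humboldt or Chomsky: a handful of rules and a dozen words which, because one rule can invoke itself, license an unbounded supply of sentences.

```python
import random

# A miniature grammar: finite rules, finite vocabulary. Because the
# sentence rule "S" can expand into two smaller sentences joined by a
# conjunction, the rules can apply recursively without limit.
GRAMMAR = {
    "S":    [["NP", "VP"], ["S", "CONJ", "S"]],
    "NP":   [["the", "N"]],
    "VP":   [["V"], ["V", "NP"]],
    "N":    [["parrot"], ["orange"], ["machine"]],
    "V":    [["sings"], ["wants"], ["imitates"]],
    "CONJ": [["and"], ["because"]],
}

def generate(symbol="S", depth=0):
    """Expand a symbol by applying one of its rules at random."""
    if symbol not in GRAMMAR:                  # a plain word: emit it
        return [symbol]
    rules = GRAMMAR[symbol]
    if symbol == "S" and depth > 3:            # keep this demo short;
        rules = [["NP", "VP"]]                 # the grammar itself has no limit
    rule = random.choice(rules)
    return [word for part in rule for word in generate(part, depth + 1)]

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the parrot wants the orange because the machine sings"
```

The rules would fit on an index card; the sentences they license would not fit in the universe.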
Before this year, no one had ever written the sentence “Zohran Mamdani’s rizz comes across as marginally less forced than Gavin Newsom’s.” But the possibility of having that thought was provided for in advance by the English language before it was ever put into words. And at least some readers will know what it means instantly. That’s what language can do. To speak in classical terms again, borrowing this time from Dante’s De Vulgari Eloquentia, we are unique in that we are lower than the angels but higher than the beasts. That is to say: We have eternal souls in mortal bodies, so we express unlimited thoughts within the limitations of time and space. It’s core to what we are, and what we do.
So, humans can do that. Animals can’t. But now we need to ask ourselves: Can machines?
They’re not trying to. This becomes clearer once we have a sharper understanding of what human language is. At a structural level, by the nature of their programming, large language models are not actually designed to produce language at all, in the human sense of inner logos. Their promise, their potential, and their limitations are all conditioned by this crucial fact: they are machines for detecting patterns in external logos and predicting how those patterns will continue. They are what would happen, not if you put infinite monkeys at infinite typewriters, but if you put infinite parrots in infinite store windows.
What you do when you train an LLM, very broadly speaking, is take a vast database of words that have already been written or spoken by a human, and use predictive functions to detect the patterns that tend to be present in those words. Then, when the machine receives a new set of words from a user — such as “Will I die if I eat too much watermelon rind?” or “How do I code video games in Python?” — it guesses the sequence of words most likely to follow as a response.
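To see the shape of that task, consider a deliberately tiny sketch in Python. It is a toy bigram counter of my own devising, not a neural network, and nothing about it matches a production model’s scale or machinery; but the logic of “training” on existing text and then “predicting” what follows is the same in kind.

```python
from collections import Counter, defaultdict

# A miniature "training corpus." Every word of it was written by a
# human mind; the program below never touches that mind, only the
# statistical shadows it leaves in the text.
corpus = ("the parrot said hello . the parrot said goodbye . "
          "the man said hello").split()

# "Training," in miniature: tally which word tends to follow which.
# Real LLMs learn vastly richer patterns with neural networks, but
# the task is the same: model the regularities of outer speech.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Guess the likeliest next word: a lookup, not an understanding."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("parrot"))  # -> "said"
print(predict_next("said"))    # -> "hello" (seen twice, vs. "goodbye" once)
```

Everything a real model adds, from billions of parameters to attention over long stretches of context, makes the guess astonishingly better. None of it adds a mind doing the guessing.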
The words an LLM receives are already imbued with inner logos; that is, they are already products of a human being’s thoughts and perceptions. But at no point does the model “look at” or “understand” those thoughts. It’s not interested in doing so; it’s not built to do so; it’s not trying to do so. Any suggestion that AI chatbots are living, feeling, or doing anything even approaching that is based purely on a metaphorical confusion between the inner and outer types of language. The clankers (the internet’s beloved slur for AI bots) are “talking” the way tea kettles are “singing.”
Another way of putting this is to say that, if you wanted to write a program that did process language at the deepest human level, it would have to be an entirely different program, one that mimicked not the audible and legible words we speak but the internal faculty of the brain that produces those words. Put bluntly, if you were trying to make a sentient or conscious machine, a machine that thinks, you would have to build an entirely different kind of machine than an LLM.
On the other hand — if there’s no real danger that Grok’s going to become sentient — there’s a very real and present danger that we are going to mistake what Grok does for what sentience is. What’s important for us, now, is that people are already confusing the use of AI tools with human conversation, and that is simply not what they’re for.
The Great Misrecognition
There was a minor scandal not long ago involving Francis Ford Coppola’s disastrous movie Megalopolis, in which a consultant accidentally generated and then advertised a selection of totally invented praise of the film. He assumed, when he asked an AI chatbot to find him good reviews of the movie, that the bot was thinking through his question instead of just telling him what he wanted to hear.
More seriously, a series of disastrous cases has revealed that troubled people who turn to chatbots in search of companionship or counseling find a dark void staring back at them. Most tragic, perhaps, was an article in the New York Times by a mother whose daughter had taken her own life after being mesmerized by “Harry,” a simulated therapist you can produce by jailbreaking ChatGPT. Attempts at mass-marketing this kind of simulated companionship, like the unprecedented subway ad for an artificial “friend” you can wear around your neck, have all the promise of a Black Mirror episode.
The point here is not that these machines are evil. It’s that they’re not us. So far as we know, nothing else on earth is. Failing to see that is the great error of our age. Learning once again to tell the difference — between the expressions of the soul, and the outputs of machines — is one aim of a liberal arts education. This in turn is one reason why those arts have come suddenly to seem more urgent than they previously did. We won’t give our machines inner life any time soon. What we just might do, if we’re not careful, is go blind to the reality of our own inner lives.
That would mean losing sight of the distinction between us and machines. And when that happens, things really get biblical. In the Old Testament, there’s a psalm about what happens when you confuse the outward appearance of sentience for its inner reality. Of the wooden idols sculpted to look like living humanoids that adorned the pagan temples of old, the psalmist says: “Eyes have they, but see not. Ears have they, but hear not. And those who make them are already becoming like them.”
That fate — and not some Terminator-style robot war — is the one we have to guard most jealously against.