Are you part robot? A linguistic anthropologist explains how humans are like ChatGPT – both recycle language

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?


No, you’re not a robot, but your language is not so different from an AI chatbot’s.
Ihor Reshetniak/iStock via Getty Images

AI chatbots are new, but public debates over language change are not. As a linguistic anthropologist, I find human reactions to ChatGPT the most interesting thing about it. Looking carefully at such reactions reveals the beliefs about language underlying people’s ambivalent, uneasy, still-evolving relationship with AI interlocutors.

ChatGPT and the like hold up a mirror to human language. Humans are both highly original and unoriginal when it comes to language. Chatbots reflect this, revealing tendencies and patterns that are already present in interactions with other humans.

Creators or mimics?

Recently, famed linguist Noam Chomsky and his colleagues argued that chatbots are “stuck in a prehuman or nonhuman phase of cognitive evolution” because they can only describe and predict, not explain. Rather than drawing on an infinite capacity to generate new phrases, they compensate with huge amounts of input, which allows them to make predictions about which words to use with a high degree of accuracy.

This is in line with Chomsky’s historic recognition that human language could not be produced merely through children’s imitation of adult speakers. The human language faculty had to be generative, since children do not receive enough input to account for all the forms they produce, many of which they could not have heard before. That is the only way to explain why humans…
