From the number of posts now on the site about artificial intelligence, you would think I am far more preoccupied with the topic than I in fact am. Actually, I’ve been far from impressed with most generally available AI tools. Tailored machine learning mechanisms and large language models can be far more useful, but ideas about AI “going rogue”, and the more revolutionary predictions for its capabilities, are rather overblown and unrealistic. Some of these misconceptions may come from the language we use, as a recent article in Science points out.
Titled “The Metaphors of Artificial Intelligence,” it is a refreshingly rational and reasonable approach to AI and to the origins of some of our misconceptions surrounding the technology. As authors and readers, we should all be more aware of how the specific words and phrases we choose shape how we come to understand a thing or an idea. Yet many of us probably haven’t considered just how pernicious these metaphors can be. It’s all about what becomes an assumption, and the article’s author, Mitchell, does an excellent job of highlighting how deeply some of those assumptions have become embedded over AI’s long history, both in the popular imagination and in technical development – for it would be wrong to suppose the “experts” are immune to them. Indeed, she highlights examples of researchers who fall for these metaphors and treat AI systems like individual minds, subjecting them to IQ tests, personality assessments, and even simulated human psychological experiments.
Of especial interest to this community may be the section on AI and copyright law. I’ve argued on this site that certain copyright concerns raised by LLMs are perhaps overblown because, if the content was already freely available online, an AI using it to generate answers and responses is little different from a human being doing something similar. This argument presupposes, however, that LLMs should be conceptualized as individual minds. Mitchell references a legal scholar and a linguist to provide a counterargument that cuts into these underlying assumptions, but neither she nor her references propose an alternative framework for approaching the question of AI and copyright.
AI is not the only field of human knowledge in which we can fall victim to questionable metaphors. Arguably, much of our language and communication is made up of metaphors and references, and this becomes an increasingly precarious arrangement the more abstract the ideas we want to communicate. Communicating many of the ideas of theoretical physics and cosmology, for instance, relies almost entirely upon metaphors which may or may not be as reflective of reality as we like to think. I, for one, am quite accustomed to acknowledging the barrier to understanding imposed by metaphor in such heady fields, but it is a more difficult matter to apply the same skepticism to commonplace metaphors (which are often presented not as metaphors but as facts). As Mitchell’s article makes clear, it is important that we make the effort to remember to do so.