On March 22, 2023, thousands of researchers and tech leaders – including Elon Musk and Apple co-founder Steve Wozniak – published an open letter calling for a slowdown in the artificial intelligence race. Specifically, the letter recommended that labs pause the training of any system more powerful than OpenAI’s GPT-4, the most sophisticated generation of today’s language-generating AI systems, for at least six months.
Sounding the alarm on risks posed by AI is nothing new – academics have issued warnings about the risks of superintelligent machines for decades now. There is still no consensus about the likelihood of creating artificial general intelligence, autonomous AI systems that match or exceed humans at most economically valuable tasks. However, it is clear that current AI systems already pose plenty of dangers, from racial bias in facial recognition technology to the increased threat of misinformation and student cheating.
While the letter calls for industry and policymakers to cooperate, there is currently no mechanism to enforce such a pause. As a philosopher who studies technology ethics, I’ve noticed that AI research exemplifies the “free rider problem.” I’d argue that this should guide how societies respond to its risks – and that good intentions won’t be enough.
Riding for free
Free riding is a common consequence of what philosophers call “collective action problems.” These are situations in which everyone, as a group, would benefit from a particular action, but each member, as an individual, would benefit from shirking it – provided enough others still act.
Such problems most commonly involve public goods. For example, suppose a city’s inhabitants have a collective interest in funding a subway system, which would require that each of them pay a small amount through taxes or fares. Everyone would benefit, yet it’s in each individual’s best interest to save money and avoid paying their fair share. After all, they’ll still be able to enjoy the subway if most other people pay.
Hence the “free rider” issue: Some individuals won’t contribute their fair share but will still get a “free ride” – literally, in the case of the subway. If every individual failed to pay, though, no one would benefit.
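The logic of this trap can be made precise with a little game theory. Below is a minimal sketch in Python of a standard public goods game; the player count, contribution size and return multiplier are hypothetical values chosen for illustration, not figures from this article. It demonstrates the free rider’s dilemma: no matter what the rest of the group does, each individual earns more by withholding their contribution, even though everyone is worse off when all of them do.

```python
# A toy "public goods game": each of n players either contributes a fixed
# amount or free rides. The pot of contributions is multiplied by a return
# factor and split equally among all n players, contributors and free
# riders alike. All numbers here (n=10, contribution=1, multiplier=3) are
# illustrative assumptions, not figures from the article.

def payoff(contributes: bool, num_contributors: int,
           n: int = 10, contribution: float = 1.0,
           multiplier: float = 3.0) -> float:
    """Payoff to one player, given the total number of contributors."""
    pot = num_contributors * contribution * multiplier
    share = pot / n                        # everyone enjoys the public good
    cost = contribution if contributes else 0.0
    return share - cost

# Whatever the rest of the group does, the individual earns more by
# free riding than by contributing...
for others in range(10):
    assert payoff(False, others) > payoff(True, others + 1)

# ...yet universal contribution beats universal free riding.
print(payoff(True, 10))    # 2.0: everyone contributes, everyone gains
print(payoff(False, 0))    # 0.0: everyone free rides, no one gains
```

What makes this a genuine dilemma is that the return multiplier sits between 1 and the number of players: the good is worth funding collectively, but no individual’s contribution pays for itself.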
Philosophers tend to argue that free riding is unethical, since free riders fail to reciprocate the contributions of those who do pay their fair share. Many philosophers also argue that free riders fail in their responsibilities under the social contract, the collectively agreed-upon cooperative principles that govern a society. In other words, they fail to uphold their duty to be contributing members of society.
Hit pause, or get ahead?
Like the subway, AI is a public good, given its potential to complete tasks far more efficiently than human operators: everything from diagnosing patients by analyzing medical data to taking over high-risk jobs in the military or improving mining safety.
But both its benefits and dangers will affect…