Anyone following the rhetoric around artificial intelligence in recent years has heard one version or another of the claim that AI is inevitable. Common themes are that AI is already here, that it is indispensable, and that people who are bearish on it only harm themselves.
In the business world, AI advocates tell companies and workers that they will fall behind if they fail to integrate generative AI into their operations. In the sciences, AI advocates promise that AI will aid in curing hitherto intractable diseases.
In higher education, AI promoters admonish teachers that students must learn how to use AI or risk becoming uncompetitive when the time comes to find a job.
And, in national security, AI’s champions say that either the nation invests heavily in AI weaponry, or it will be at a disadvantage vis-à-vis the Chinese and the Russians, who are already doing so.
The argument across these different domains is essentially the same: The time for AI skepticism has come and gone. The technology will shape the future, whether you like it or not. You can either learn how to use it or be left out of that future. Anyone trying to stand in the technology’s way is as hopeless as the hand weavers who resisted the mechanized looms of the early 19th century.
In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the ethical questions raised by the widespread adoption of AI, and I believe the inevitability argument is misleading.
History and hindsight
In fact, this claim is the most recent version of a deterministic view of technological development: the belief that innovations are unstoppable once people start working on them. In other words, some genies don’t go back in their bottles. The best you can do is harness them for your own good purposes.
This deterministic approach to tech has a long history. It’s been applied to the influence of the printing press, as well as to the rise of automobiles and the infrastructure they require, among other developments.
But I believe that when it comes to AI, the technological determinism argument is both exaggerated and oversimplified.
AI in the field(s)
Consider the contention that businesses can’t afford to stay out of the AI game. In fact, the case has yet to be made that AI is delivering significant productivity gains to the firms that use it. A report in The Economist in July 2024 suggested that, so far, the technology has had almost no economic impact.
AI’s role in higher education is also still very much an open question. Though universities have, in the past two years, invested heavily in AI-related initiatives, evidence suggests they may have jumped the gun.
The technology can serve as an interesting pedagogical tool. For…