Experts alone can’t handle AI – social scientists explain why the public needs a seat at the table

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already emerging at a rate that overwhelms modern democracies' ability to collectively work through those problems.

Broad public engagement, or the lack of it, has long shaped how societies assimilate emerging technologies, and it is key to tackling the dilemmas those technologies bring.

Ready or not, unintended consequences

Striking a balance between the awe-inspiring possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge. Almost 50 years ago, scientists and policymakers met in Pacific Grove, California, for what is often referred to as the Asilomar Conference to decide the future of recombinant DNA research, or transplanting genes from one organism into another. Public participation and input into their deliberations was minimal.

Societies are severely limited in their ability to anticipate and mitigate unintended consequences of rapidly emerging technologies like AI without good-faith engagement from broad cross-sections of public and expert stakeholders. And there are real downsides to limited participation. If Asilomar had sought such wide-ranging input 50 years ago, it is likely that the issues of cost and access would have shared the agenda with the science and the ethics of deploying the technology. If that had happened, the lack of affordability of recent CRISPR-based sickle cell treatments, for example, might have been avoided.

AI runs a very real risk of creating similar blind spots when it comes to intended and unintended consequences that will often not be obvious to elites like tech leaders and policymakers. If societies fail to ask “the right questions, the ones people care about,” science and technology studies scholar Sheila Jasanoff said in a 2021 interview, “then no matter what the science says, you wouldn’t be producing the right answers or options for society.”

Ethical debates should be central to efforts to regulate AI.

Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of…
