From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 29, 2023, to pause further training of the latest AI technologies or, barring that, urged governments to “impose a moratorium.”
These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don’t require technical knowledge to use.
Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.
To jump ahead to each response, here’s a list:
Human foibles and a moving target
Combining “soft” and “hard” approaches
Four key questions to ask
Human foibles and a moving target
S. Shyam Sundar, Professor of Media Effects & Director, Center for Socially Responsible AI, Penn State
The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.
Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of the notion that AI is infallible is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.
Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.
AI poses a…