How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power

Takeaways:

- A new federal agency to regulate AI sounds helpful but could become unduly influenced by the tech industry. Instead, Congress can legislate accountability.
- Instead of licensing companies to release advanced AI technologies, the government could license auditors and push for companies to set up institutional review boards.
- The government hasn’t had great success in curbing technology monopolies, but disclosure requirements and data privacy laws could help check corporate power.

OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

An agency to regulate AI?

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high, and low or minimal. This categorization recognizes that tools for social scoring by governments and automated hiring tools pose different risks than, for example, the use of AI in spam filters.

The U.S. National Institute of Standards and Technology likewise has an AI Risk Management Framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST AI Risk Management Framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing…
