Concern about generative artificial intelligence technologies seems to be growing almost as fast as the technologies themselves are spreading. The worries are driven by unease about the possible spread of disinformation at a scale never seen before, and by fears of losing jobs, of losing control over creative works and, more futuristically, of AI becoming so powerful that it causes the extinction of the human species.
These concerns have given rise to calls for regulating AI technologies. Some governments, the European Union for example, have responded to their citizens’ push for regulation, while others, such as the U.K. and India, are taking a more laissez-faire approach.
In the U.S., the White House issued an executive order on Oct. 30, 2023, on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It sets out guidelines for reducing both immediate and long-term risks from AI technologies. For example, it asks AI developers to share safety test results with the federal government and calls on Congress to enact consumer privacy legislation in the face of AI technologies soaking up as much data as they can get.
In light of the drive to regulate AI, it is important to consider which approaches to regulation are feasible. There are two aspects to this question: what is technologically feasible today and what is economically feasible. It’s also important to look at both the training data that goes into an AI model and the model’s output.
1. Honor copyright
One approach to regulating AI is to limit the training data to public domain material and copyrighted material that the AI company has secured permission to use. An AI company can decide precisely what data samples it uses for training and can use only permitted material. This is technologically feasible.
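To see why this is technologically straightforward, consider a minimal sketch in Python. It is purely illustrative: the sample structure, field names and set of permitted licenses here are assumptions, not any vendor’s actual pipeline. If each training sample carries license metadata, restricting the corpus to permitted material is a simple filtering pass before training:

```python
# Illustrative sketch: assumes each training sample is tagged with
# license metadata. Field names and the permitted set are hypothetical.
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    license: str  # e.g., "public-domain", "licensed", "unknown"

# Licenses the vendor has secured the right to train on (hypothetical set).
PERMITTED = {"public-domain", "licensed"}

def filter_corpus(corpus: list[Sample]) -> list[Sample]:
    """Keep only samples whose license permits training."""
    return [s for s in corpus if s.license in PERMITTED]

corpus = [
    Sample("Text of an 1850s novel...", "public-domain"),
    Sample("A scraped news article...", "unknown"),
    Sample("A licensed stock photo caption...", "licensed"),
]
training_set = filter_corpus(corpus)  # the "unknown" sample is dropped
```

The filtering itself is trivial; the real difficulty is acquiring and verifying the license metadata in the first place, which leads to the economic question.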
It is only partly economically feasible. The quality of the content AI generates depends on the volume and richness of its training data, so vendors have an economic incentive not to limit themselves to content they have permission to use. Nevertheless, some generative AI companies now market it as a selling point that they train only on content they have permission to use. One example is Adobe with its Firefly image generator.
2. Attribute output to a training data creator
Attributing the output of AI technology to a specific creator – artist, singer, writer and so on – or group of creators so they can be compensated is another potential means of regulating generative AI. However, the complexity of the AI algorithms used makes it impossible to say which input samples an output is based on. And even if that were possible, it would be impossible to determine how much each input sample contributed to the output.