Consumer protection is the opening salvo of US AI regulation

The Federal Trade Commission has launched an investigation of ChatGPT maker OpenAI for potential violations of consumer protection laws. The FTC sent the company a 20-page demand for information in the week of July 10, 2023. The move comes as European regulators have begun to take action, and Congress is working on legislation to regulate the artificial intelligence industry.

The FTC has asked OpenAI for details of every complaint it has received from users about “false, misleading, disparaging, or harmful” statements generated by its products, and about whether OpenAI engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm. The agency has also asked detailed questions about how OpenAI obtains its data, how it trains its models, its processes for human feedback, risk assessment and mitigation, and its mechanisms for privacy protection.

As a researcher of social media and AI, I recognize the immensely transformative potential of generative AI models, but I believe that these systems pose risks. In particular, in the context of consumer protection, these models can produce errors, exhibit biases and violate personal data privacy.

Hidden power

At the heart of chatbots such as ChatGPT and image generation tools such as DALL-E lies the power of generative AI models that can create realistic content from text, images, audio and video inputs. These tools can be accessed through a browser or a smartphone app.

Since these AI models are not built for a single, predefined use, they can be fine-tuned for a wide range of applications in domains ranging from finance to biology. The models, trained on vast quantities of data, can be adapted to different tasks with little to no coding, sometimes simply by describing the task in plain language.
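
As a rough illustration of what adapting a model “by describing a task in plain language” looks like in practice, here is a minimal sketch. It assumes the OpenAI Python SDK as it existed in mid-2023 (the `openai.ChatCompletion.create` interface) and an API key in the OPENAI_API_KEY environment variable; the model name, prompts and helper function are illustrative, not drawn from the article or the FTC filing.

```python
# Minimal sketch: one pretrained model, two unrelated tasks, no retraining.
# Assumes the OpenAI Python SDK as it existed in mid-2023 (openai==0.27.x)
# and an API key in the OPENAI_API_KEY environment variable.
# The model name, prompts and helper function are illustrative only.
import openai


def run_task(instruction: str, user_input: str) -> str:
    """Send a plain-language task description plus an input to the model."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": instruction},  # the task, described in plain English
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content


# Task 1: sentiment labeling, phrased for a finance setting.
print(run_task(
    "Label the sentiment of this earnings headline as positive, negative or neutral.",
    "Company X misses quarterly revenue estimates.",
))

# Task 2: plain-language summarization, phrased for a biology setting.
print(run_task(
    "Summarize this abstract in one sentence for a non-expert.",
    "We describe a gene-editing method that improves targeting accuracy in plant cells.",
))
```

The point is that nothing about the model changes between the two tasks; the “adaptation” lives entirely in the instructions.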

Given that AI models such as GPT-3 and GPT-4 were developed by private organizations using proprietary data sets, the public doesn’t know the nature of the data used to train them. The opacity of the training data and the complexity of the model architecture – GPT-3 has 175 billion variables, or “parameters” – make it difficult for anyone to audit these models. Consequently, it’s difficult to prove that the way they are built or trained causes harm.

Hallucinations

In language model AIs, a hallucination is a confident response that is inaccurate and not apparently justified by the model’s training data. Even generative AI models that were designed to be less prone to hallucinations have ended up amplifying them.

There is a danger that generative AI models can produce incorrect or misleading information that can end up being damaging to users. A study investigating ChatGPT’s ability to generate factually correct scientific writing in the medical field found that ChatGPT ended up either generating citations to nonexistent papers or reporting nonexistent results. My collaborators and I found similar patterns in our…