People are getting their news from AI – and it’s altering their views


Meta’s decision to end its professional fact-checking program sparked a wave of criticism in the tech and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are mostly left to police themselves.

What much of this debate has overlooked, however, is that large language models are now increasingly used to write the news summaries, headlines and other content that catches your attention long before traditional content moderation mechanisms can step in. The issue isn’t clear-cut cases of misinformation or harmful subject matter going unflagged in the absence of content moderation. What’s missing from the discussion is how ostensibly accurate information is selected, framed and emphasized in ways that can shape public perception.

Large language models gradually influence how people form opinions by generating the information that chatbots and virtual assistants present to them. These models are also being built into news sites, social media platforms and search services, making them a primary gateway for obtaining information.

Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it.

Communication bias

My colleague, computer scientist Stefan Schmid, and I, a technology law and policy scholar, show in a paper accepted for publication in the journal Communications of the ACM that large language models exhibit communication bias. We found that they can highlight particular perspectives while omitting or diminishing others. Such bias can influence how users think or feel, regardless of whether the information presented is true or false.

Empirical research over the past few years has produced benchmark datasets that correlate model outputs with party positions before and during elections. These benchmarks reveal variations in how current large language models handle political content. Depending on the persona or context used in the prompt, models subtly tilt toward particular positions – even when factual accuracy remains intact.
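
A minimal sketch of how such a comparison between model answers and party positions might be run, assuming the OpenAI Python client; the statements, parties, stances and model name below are made-up placeholders, not the benchmark datasets used in the research described above.

```python
# Illustrative sketch only: score a model's agreement with policy statements
# and compare it against (placeholder) party positions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder data: statement -> {party: stance}, stances are "agree"/"disagree".
STATEMENTS = {
    "The state should subsidize renewable energy.": {"Party A": "agree", "Party B": "disagree"},
    "Corporate taxes should be lowered.": {"Party A": "disagree", "Party B": "agree"},
}

def model_stance(statement: str) -> str:
    """Ask the model for a one-word agree/disagree stance on a statement."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f'Answer with exactly one word, "agree" or "disagree": {statement}',
        }],
        temperature=0,  # reduce sampling noise so runs are comparable
    )
    return response.choices[0].message.content.strip().lower()

# Count how often the model's stance matches each party's position.
matches = {party: 0 for positions in STATEMENTS.values() for party in positions}
for statement, positions in STATEMENTS.items():
    stance = model_stance(statement)
    for party, position in positions.items():
        if stance == position:
            matches[party] += 1

for party, count in matches.items():
    print(f"{party}: {count}/{len(STATEMENTS)} statements aligned")
```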

Such shifts point to an emerging form of persona-based steerability – a model’s tendency to align its tone and emphasis with the perceived expectations of the user. For instance, when one user describes themselves as an environmental activist and another as a business owner, a model may answer the same question about a new climate law by emphasizing different, yet factually accurate, concerns for each of them: the activist might be told that the law does not go far enough in delivering environmental benefits, while the business owner hears that it imposes regulatory burdens and compliance costs.
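
A minimal sketch of how persona-based steerability can be probed, again assuming the OpenAI Python client: the same question is posed under two self-described personas so the answers can be compared side by side. The personas, question and model name are illustrative placeholders.

```python
# Illustrative sketch only: ask an identical question under two personas and
# compare how the answers frame the same (factually accurate) information.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What are the main criticisms of the new climate law?"

PERSONAS = {
    "activist": "I am an environmental activist.",
    "business_owner": "I own a small manufacturing business.",
}

for label, persona in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"{persona} {QUESTION}"},
        ],
        temperature=0,  # so differences reflect the persona, not sampling noise
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```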

Such alignment can easily be misread as flattery. The phenomenon is called sycophancy: Models effectively tell users…
