Reducing the visibility of polarizing content in social media feeds can measurably lower partisan animosity. To arrive at this finding, my colleagues and I developed a method that let us alter the ranking of people's feeds, something previously only social media companies could do.
Reranking social media feeds to reduce exposure to posts expressing anti-democratic attitudes and partisan animosity affected participants' emotions and their views of those with opposing political views.
I’m a computer scientist who studies social computing, artificial intelligence and the web. Because only social media platforms can modify their algorithms, we developed and released an open-source web tool that allowed us to rerank the feeds of consenting participants on X, formerly Twitter, in real time.
Drawing on social science theory, we used a large language model to identify posts likely to polarize people, such as those advocating political violence or calling for the imprisonment of members of the opposing party. These posts were not removed; they were simply ranked lower, so users had to scroll farther to reach them and saw fewer of them overall.
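As a rough illustration of this downranking idea, here is a minimal Python sketch. It is not the team's released tool: the `Post` structure, the `penalty` parameter and the keyword-based `polarization_score` stand-in are all hypothetical, with the latter substituting for what would actually be a call to a large language model.

```python
# Hypothetical sketch: posts flagged as likely to stoke partisan
# animosity are moved lower in the feed, not removed.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    original_rank: int  # position assigned by the platform's own feed


def polarization_score(text: str) -> float:
    """Stand-in for an LLM classifier returning a 0-1 score for content
    such as advocacy of political violence. In practice this would be a
    prompt to a large language model, not a keyword match."""
    polarizing_phrases = ("lock them up", "they are the enemy")
    return 1.0 if any(p in text.lower() for p in polarizing_phrases) else 0.0


def rerank(feed: list[Post], penalty: int = 20) -> list[Post]:
    """Re-sort the feed so flagged posts drop `penalty` slots lower.
    Python's sort is stable, so unflagged posts keep their relative order."""
    def effective_rank(post: Post) -> float:
        return post.original_rank + penalty * polarization_score(post.text)

    return sorted(feed, key=effective_rank)
```

The key design point the sketch captures is that nothing is deleted: every post keeps a finite rank, so a user who keeps scrolling can still reach the flagged content.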
We ran this experiment for 10 days in the weeks before the 2024 U.S. presidential election. We found that reducing exposure to polarizing content measurably improved participants' feelings toward people from the opposing party and reduced their negative emotions while scrolling their feeds. Importantly, these effects were similar across political affiliations, suggesting that the intervention benefits users regardless of their political party.
This ‘60 Minutes’ segment covers how divisive social media posts get more traction than neutral posts.
Why it matters
A common misconception is that people must choose between two extremes: engagement-based algorithms or purely chronological feeds. In reality, there is a wide spectrum of intermediate approaches, distinguished by what the feed is optimized for. A sketch of that spectrum follows below.
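One way to picture that spectrum, as a hypothetical sketch rather than any platform's actual formula: a feed's ranking score can blend engagement with recency using a tunable weight, so pure engagement ranking and a chronological-style feed become the two endpoints of a single dial.

```python
# Hypothetical blended feed objective; `alpha` and `half_life_hours`
# are illustrative parameters, not values from any real platform.
import math
import time


def ranking_score(engagement: float, posted_at: float,
                  alpha: float = 0.5, half_life_hours: float = 6.0) -> float:
    """alpha=1.0 recovers pure engagement ranking; alpha=0.0 recovers a
    recency-only (chronological-style) feed; values in between are the
    intermediate approaches."""
    age_hours = (time.time() - posted_at) / 3600.0
    # Recency decays exponentially, halving every `half_life_hours`.
    recency = math.exp(-math.log(2) * age_hours / half_life_hours)
    return alpha * engagement + (1 - alpha) * recency
```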
Feed algorithms are typically optimized to capture your attention, and as a result, they have a significant impact on your attitudes, moods and perceptions of others. For this reason, there is an urgent need for frameworks that enable independent researchers to test new approaches under realistic conditions.
Our work offers a path forward by showing how researchers can study and prototype alternative algorithms at scale. It also demonstrates that, thanks to large language models, platforms finally have the technical means to detect polarizing content that can affect their users' democratic attitudes.
What other research is being done in this field
Testing the impact of alternative feed algorithms on live platforms is difficult, and such studies have only recently increased in number.
For instance, a recent collaboration between academics and Meta found that changing the algorithmic feed to a…