Facebook, YouTube, Twitter execs testify in Senate on algorithms

Executives from Facebook, Twitter and Google-owned YouTube testified before Congress Tuesday about the ways their algorithms influence users and sometimes serve harmful misinformation.

The hearing before the Senate Judiciary subcommittee on privacy and technology highlighted a key feature of the social media platforms that has amplified some of the most serious harms lawmakers have been seeking to address through a wide swath of bills.

Algorithms are essentially the formulas social media platforms use to decide what information to surface to people using an app or website. Both Facebook and Twitter have introduced more choice for users over whether to view an algorithmically curated timeline or not, but algorithms remain a useful way to surface the most engaging content for any given user, based on their interests and past activity.

While that can make for a better user experience, it can also drive users toward more polarizing content that reinforces their beliefs, rather than showing them content that challenges their viewpoints. Lawmakers have expressed concerns that algorithms can be used to drive users toward extremism or to surface inaccurate information, especially about the coronavirus and vaccines.
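To make that dynamic concrete, here is a minimal sketch of engagement-based feed ranking in Python. The signals, weights and field names are illustrative assumptions for this example only; no platform’s actual ranking formula is public.

```python
# Illustrative sketch of engagement-based feed ranking.
# All weights, signals and field names here are hypothetical assumptions;
# this is not any platform's actual system.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    hours_old: float
    topic_affinity: float  # 0-1: how closely the topic matches the user's past activity


def engagement_score(post: Post) -> float:
    # Weight interactive signals (comments, shares) more heavily than
    # passive likes, boost posts matching the user's interests, and
    # decay older posts so the feed stays fresh.
    interactions = post.likes + 3 * post.comments + 5 * post.shares
    recency = 1.0 / (1.0 + post.hours_old)
    return interactions * (1.0 + post.topic_affinity) * recency


def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed simply shows the highest-scoring posts first.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm news summary", likes=120, comments=4, shares=2,
             hours_old=2.0, topic_affinity=0.2),
        Post("Outrage-bait hot take", likes=80, comments=60, shares=40,
             hours_old=2.0, topic_affinity=0.9),
    ])
    for post in feed:
        print(f"{engagement_score(post):8.1f}  {post.text}")
```

Because comments and shares, the signals most associated with provocative posts, are weighted more heavily than passive likes in this toy model, the outrage-bait post outranks the calmer one despite having fewer likes, which is the dynamic witnesses described at the hearing.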

One of the most frequent targets of lawmaker criticism when it comes to platform regulation has been Section 230 of the Communications Decency Act, the shield that protects the platforms from being held liable for their users’ posts. While Section 230 reforms were brought up a handful of times at Tuesday’s hearing, the discussion also called attention to what could be a narrower way of reining in some of the most pervasive harms of internet platforms: focusing on transparency around their algorithms.

Sen. Ben Sasse, R-Neb., the ranking member on the subcommittee, noted at the end of the hearing that he is still skeptical of such proposals, even though members on both sides of the aisle have promoted them. He raised concerns about the First Amendment implications that reforming the shield could have.

“I think in particular some of the conversations about Section 230 have been well off-point to the actual topic at hand today,” he said. “And I think much of the zeal to regulate is driven by short-term partisan agendas.”

Focusing on algorithms could create a more immediately viable path toward regulation. Lawmakers at the hearing generally seemed to favor greater transparency from the platforms about how their algorithms surface content to users.

Policy executives from the three companies generally rejected the idea that their platforms are incentivized to create as much engagement as possible regardless of potential drawbacks. For example, Facebook vice president of content policy Monika Bickert said that when the platform decided in 2018 to surface more posts from friends rather than publishers, it anticipated a drop in engagement, which ultimately occurred. But, she said, Facebook determined the move was still in its long-term interest of keeping users invested in the platform.

Senators on the subcommittee remained skeptical, suggesting the platforms are in fact mainly incentivized to ramp up user engagement. Sen. Josh Hawley, R-Mo., for example, said that the platforms themselves are designed to create addiction.

Two experts on the witness panel testified that incentives driving amplification of misinformation do in fact exist.

Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, testified that he believes Facebook’s early business model of driving engagement still exists, and that sitting the executives down to talk about what they’re doing to fix the problem is “almost like having the heads of Exxon, BP and Shell, asking about what are you doing to responsibly stop climate change?”

“Their business model is to create a society that is addicted, outraged, polarized, performative and disinformed,” Harris said. “While they can try to skim the major harm off the top and do what they can — and we want to celebrate that, we really do — it’s just fundamentally, they’re trapped in something that they can’t change.”

Joan Donovan, research director of Harvard’s Shorenstein Center on Media, Politics and Public Policy, said that “misinformation at scale is a feature, not a bug” for the online platforms. She said that the way social media platforms repeat and reinforce messages to users can “lead someone down the rabbit hole” of an internet subculture.

Both experts emphasized that decisions over social media amplification will have profound impacts on democracy itself. Throughout the hearing, senators noted the role social media platforms played in allowing people to organize around events like the insurrection at the U.S. Capitol on Jan. 6.

To close the hearing, subcommittee Chairman Chris Coons, D-Del., said he looks forward to working with Sasse on bipartisan solutions, which could take the form of either voluntary reforms or regulation.

“None of us wants to live in a society that as a price of remaining open and free is hopelessly politically divided,” Coons said. “But I also am conscious of the fact that we don’t want to needlessly constrain some of the most innovative, fastest-growing businesses in the West. Striking that balance is going to require more conversation.”
