Facebook’s Mark Zuckerberg tells Congress how to tweak Section 230

With members of Congress seemingly dead set on passing legislation to address extremism and misinformation on the internet, Mark Zuckerberg is offering his input. The Facebook CEO appeared before a House committee on Thursday, where he suggested ways to amend Section 230 of the US Communications Decency Act. 

“The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically,” Zuckerberg said in his prepared remarks, delivered to a subcommittee of the House Energy and Commerce Committee. He appeared before the subcommittee along with Alphabet CEO Sundar Pichai and Twitter CEO Jack Dorsey.

Straight out of the gate, lawmakers expressed their anger at the social media leaders for failing to rein in misinformation on their platforms. Specifically, they called out content that spread misinformation about Covid-19 vaccines, as well as content that fomented anger and spread misinformation ahead of the attempted insurrection at the US Capitol in January.

“You have the means [to stop misinformation], but time after time you are picking engagement and profit” over healthy civic discourse or public health and safety, said Communications and Technology Subcommittee Chairman Mike Doyle (D-PA). “We will legislate to stop this.”

Lawmakers have for some time discussed changes to Section 230 of the Communications Decency Act, part of the Telecommunications Act of 1996. The law exempts online platforms from liability for content posted by third parties.

Zuckerberg suggested changing the law in a manner largely in line with Facebook’s existing practices.

“We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content,” he said. “Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.”

The definition of an “adequate system” could be proportionate to platform size and set by a third party, Zuckerberg suggested. Best practices, he added, shouldn’t include unrelated issues, such as encryption or privacy changes, that deserve a full debate in their own right.

Pichai, meanwhile, said in his prepared statement that “recent proposals to change Section 230… would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.”

Instead, he said, the industry should focus on “processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time.”

Dorsey’s opening statement didn’t address Section 230 but offered some principles that social platforms could adhere to, such as “algorithmic choice.”

“We believe that people should have transparency or meaningful control over the algorithms that affect them,” he said. “We recognize that we can do more to provide algorithmic transparency, fair machine learning, and controls that empower people.”
