CEOs of Facebook, Twitter and Google testify before Congress about misinformation

Members of the House Energy and Commerce Committee are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey over their platforms’ efforts to counter unfounded claims of electoral fraud and vaccine skepticism. Opaque algorithms that prioritize user engagement and promote misinformation could also come under scrutiny, a committee memo suggested.

The technology platforms, already under severe pressure to curb disinformation and foreign interference in the run-up to the 2020 election, came under even closer scrutiny in the months that followed. Even as some companies took new steps to tackle election conspiracy theories, those measures were not enough to stop hardline supporters of President Donald Trump from storming the US Capitol.

The hearing also marks the first time the CEOs have appeared before Congress since Trump was banned or suspended from their respective platforms following the Capitol riots. In their prepared remarks, some of the executives address the events of January 6 directly.

“The attack on the Capitol was a horrific attack on our values and our democracy, and Facebook is committed to helping law enforcement bring the insurrectionists to justice,” Zuckerberg’s written testimony reads. But it also adds: “We do more to address disinformation than any other company.”

The hearing coincides with legislation under active consideration in both the House and Senate to rein in the tech industry. Some bills focus on the companies’ economic dominance and alleged anti-competitive practices. Others target the platforms’ approach to content moderation or data privacy. The various proposals could impose tough new requirements on technology platforms, or expose them to greater legal liability in ways that could reshape the industry.

For the executives in the hot seat, Thursday’s session could also be their last chance to make their case to lawmakers in person before Congress decides on potentially sweeping changes to federal law.

At the heart of the coming policy battle is Section 230 of the Communications Act of 1934, the signature liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which courts have interpreted broadly and which is credited with enabling the development of the open Internet.


The CEOs’ written testimony ahead of Thursday’s high-profile hearing outlines areas of potential clashes with lawmakers and hints at where the companies want to work with Congress – and where Big Tech is likely to push back.

Zuckerberg plans to advocate narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, under which online platforms could be sued over user content if they fail to adhere to certain best practices established by an independent third party.
The other two CEOs do not wade into the Section 230 debate or discuss the government’s role in such detail, but they do lay out their overall views on content moderation. Pichai’s testimony calls for clearer content policies and a way for users to appeal content decisions. Dorsey’s testimony reiterates his call for more user-led content moderation, along with better settings and tools that let users customize their online experience.
The CEOs are by now seasoned witnesses before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November to discuss content moderation, and before that, Zuckerberg and Pichai testified before the House on antitrust issues last summer.
In the days leading up to Thursday’s hearing, the companies have argued that they have acted aggressively to root out misinformation. Facebook said Monday that it removed 1.3 billion fake accounts last fall and that more than 35,000 people now work on content moderation. Twitter said this month that it would place warning labels on misinformation about coronavirus vaccines, and that repeated violations of its Covid-19 policy could lead to permanent bans. YouTube said it removed tens of thousands of videos containing Covid vaccine misinformation this month, and in January, after the Capitol riots, it announced it would restrict channels that share false claims disputing the outcome of the 2020 election.

But those claims of progress are unlikely to mollify committee members, whose memo cited several research papers indicating that disinformation and extremism remain rampant on the platforms.
