Twitch will act on “serious” offenses that occur outside of the platform

Twitch is finally accepting its responsibility as a kingmaking microcelebrity machine, not just a service or a platform. Today, the Amazon-owned company announced a formal, public policy of investigating streamers' serious indiscretions in real life, or on services like Discord or Twitter.

Last June, dozens of women came forward with sexual misconduct allegations against prominent video game streamers on Twitch. On Twitter and other social media, they shared painful accounts of streamers using their relative fame to push boundaries, resulting in serious personal and professional harm. Twitch eventually banned or suspended several accused streamers, a few of whom were "partners," streamers able to earn money through Twitch subscriptions. At the same time, Twitch's #MeToo movement raised larger questions about the service's responsibility for the actions of its most visible users, both on and off stream.

While investigating those problem users, Twitch COO Sara Clemens tells WIRED, Twitch's moderation and law enforcement teams learned how challenging it is to assess and act on users' behavior IRL or on other platforms like Discord. "We realized that not having a policy to look at off-service behavior created a threat vector to our community that we hadn't addressed," says Clemens. Today, Twitch is announcing its solution: an off-services policy. In partnership with a third-party law firm, Twitch will investigate reports of offenses such as sexual assault, extremist behavior, and threats of violence that occur off stream.

“We’ve been working on it for a while,” says Clemens. “It is certainly an unknown space.”

Twitch is at the forefront of ensuring that not only the content but also the people who create it are safe for the community. (The policy applies to everyone: partners, affiliates, and even relatively unknown streamers.) Sites supporting digital celebrities have banned users for off-platform indiscretions for years. In 2017, PayPal cut off a number of white supremacists. In 2018, Patreon removed anti-feminist YouTuber Carl Benjamin, known as Sargon of Akkad, for racist speech on YouTube. Meanwhile, sites that grow or rely directly on digital celebrities don't tend to rigorously scrutinize their most famous or influential users, especially when those users relegate their problematic behavior to Discord servers or industry parties.

Despite never publishing a formal policy, kingmaking services like Twitch and YouTube have in the past removed users they believed were harmful to their communities for things those users said or did elsewhere. In late 2020, YouTube announced it was temporarily demonetizing prank channel NELK after its creators threw parties at Illinois State University when the state's limit on social gatherings was 10 people. Those actions, and public statements about them, are the exception rather than the rule.

“Platforms sometimes have special mechanisms to escalate this,” says Kat Lo, content moderation lead at the nonprofit technology company Meedan, referring to the direct lines that high-profile users often have to company employees. She says off-service moderation has been happening on the largest platforms for at least five years. But overall, she says, companies don’t often advertise or formalize these processes. “Investigating behavior off-platform requires a high capacity for investigation, finding evidence that is verifiable. It’s difficult to standardize.”

Twitch received 7.4 million user reports for “all types of violations” in the second half of 2020 and acted on reports 1.1 million times, according to its recent transparency report. During that time, Twitch acted on 61,200 cases of alleged hateful conduct, sexual harassment, and harassment. That’s a heavy lift. (Twitch also acted on 67 cases of terrorism and escalated 16 cases to law enforcement.) Although harassment and bullying make up a large portion of user reports, they are not among the listed behaviors Twitch will investigate off-platform unless they also occur on Twitch. Off-service conduct that will trigger an investigation includes what Twitch’s blog post calls “serious offenses that pose a substantial safety risk to the community”: deadly violence and violent extremism, explicit and credible threats of mass violence, membership in a hate group, and so on. While bullying and harassment are not included for now, Twitch says the new policy is designed to scale.
