YouTube algorithm accidentally blocks chess channel over ‘black versus white’ CHESS strategy talk

YouTube’s overzealous AI may have misinterpreted a conversation about chess as racist language.

Last summer, a YouTuber who produces popular chess videos saw his channel blocked for what the site called “harmful and dangerous” content.

YouTube didn’t explain why it blocked Croatian chess player Antonio Radic, also known as ‘Agadmator’, but service was restored 24 hours later.

Carnegie Mellon computer scientists suspect that Radic’s “black versus white” discussion with a grandmaster accidentally triggered YouTube’s AI filters.

By running simulations with software trained to detect hate speech, they found that more than 80 percent of chess comments flagged for hate speech contained no racist language, but did include terms such as ‘black’, ‘white’, ‘attack’ and ‘threat’.

The researchers suggest that social media platforms include chess language in the data used to train their moderation algorithms to avoid further confusion.



Popular chess YouTuber Antonio Radic had his channel blocked last summer for ‘harmful and dangerous’ content. He believes the platform’s AI falsely flagged him for discussing ‘black versus white’ in a chess conversation

With over a million subscribers, Agadmator is considered the most popular chess channel on YouTube.

But on June 28, Radic’s channel was blocked after he posted a segment featuring Grandmaster Hikaru Nakamura, a five-time US champion and the youngest American to earn the Grandmaster title.

YouTube did not give him a reason for blocking the channel.

In addition to human moderators, YouTube uses AI algorithms to flag banned content, but if those algorithms are not trained on samples that provide context, they can flag benign videos.


Carnegie Mellon researchers tested two state-of-the-art speech classifiers: AI software that can be trained to detect hate speech. More than 80 percent of the comments flagged by the programs lacked racist language, but they did include chess terms such as ‘black’, ‘white’, ‘attack’ and ‘threat’.

Radic’s channel was restored after 24 hours, leading him to speculate that his use of the phrase ‘black against white’ in the Nakamura video was the culprit.

At the time, he was talking about the two opposing sides in a game of chess.

Ashiqur R. KhudaBukhsh, a computer scientist at Carnegie Mellon’s Language Technologies Institute, suspected Radic was right.

“We don’t know what tools YouTube uses, but if they rely on artificial intelligence to detect racist language, these kinds of accidents can happen,” said KhudaBukhsh.

To test his theory, KhudaBukhsh and fellow researcher Rupak Sarkar ran tests on two state-of-the-art speech classifiers, AI software that can be trained to detect hate speech.


Radic’s channel was blocked for 24 hours after he posted this video, which featured a conversation with Grandmaster Hikaru Nakamura

Using the software on more than 680,000 comments from five popular YouTube chess channels, they found that 82 percent of the comments flagged in a sample set did not contain obvious racist language or hate speech.

Words like ‘black’, ‘white’, ‘attack’ and ‘threat’ seemed to have triggered the filters, KhudaBukhsh and Sarkar said in a presentation at the annual Association for the Advancement of AI conference this month.
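The shape of the experiment can be sketched in a few lines of Python. The model name and sample comments below are illustrative assumptions, not the classifiers or data the researchers actually used: the idea is simply to run an off-the-shelf toxicity model over benign chess comments and count how many it flags.

```python
# Illustrative sketch only: the model name and sample comments are assumptions,
# not the classifiers or data used in the Carnegie Mellon study.
from transformers import pipeline

# Load a publicly available toxicity classifier (example model name).
classifier = pipeline("text-classification", model="unitary/toxic-bert")

chess_comments = [
    "White attacks the black king and the threat is decisive",
    "Black's counterattack on the queenside looks winning",
    "Great analysis, thanks for the video!",
]

flagged = 0
for comment in chess_comments:
    # Result looks like {'label': 'toxic', 'score': 0.93}; label names vary by model.
    result = classifier(comment)[0]
    if result["score"] > 0.5 and result["label"].lower() == "toxic":
        flagged += 1

print(f"{flagged} of {len(chess_comments)} benign chess comments were flagged")
```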

The accuracy of the software depends on the examples it is trained on, KhudaBukhsh said, and the training datasets used for YouTube’s classifiers likely contain few examples of chess talk, leading to misclassification.
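The mechanism is easy to reproduce in miniature. The toy training set below is invented purely for illustration: a simple bag-of-words classifier that never sees chess text learns that words like ‘black’, ‘white’, ‘attack’ and ‘threat’ signal hate speech, and then misclassifies an innocuous chess comment.

```python
# Toy illustration with invented data; not the researchers' code or YouTube's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set containing no chess-related examples.
train_texts = [
    "this is an attack and a threat against you",   # labelled hateful
    "black and white people should not mix",        # labelled hateful
    "have a lovely day everyone",                    # labelled benign
    "what a beautiful sunset this evening",          # labelled benign
]
train_labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A harmless chess comment shares the loaded keywords and is likely flagged.
print(model.predict(["white attacks the black king, a serious threat"]))  # likely [1]
```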


Radić, 33, started his YouTube channel in 2017 and has over one million subscribers. His most popular video, a review of a 1962 match, has been viewed more than 5.5 million times

If someone as famous as Radic is falsely blocked, he added, “it could very well happen in silence to a lot of other people who are not so well known.”

YouTube declined to say what caused Radic’s video to be flagged, but told Mail Online: “If it is brought to our attention that a video has been mistakenly removed, we will act quickly to restore it.”

“We also offer uploaders the option to appeal removals and will reassess the content,” said a representative. “Agadmator appealed the removal and we quickly restored the video.”

Radić, 33, started his YouTube channel in 2017, and within a year his earnings surpassed those from his day job as a wedding videographer.

‘I’ve always loved chess, but I live in a small town and there weren’t many people I could talk to about it,’ he told ESPN last year. ‘So it made sense to start a YouTube channel.’

His most popular video, a review of a 1962 match between Rashid Nezhmetdinov and Oleg Chernikov, has over 5.5 million views to date.

COVID lockdowns have sparked renewed interest in chess: since the pandemic started in March 2020, the chess server and social network Chess.com has added about 2 million new members per month, Annenberg Media reported.

The Game of Kings has also benefited from the popularity of The Queen’s Gambit, an acclaimed miniseries about a troubled female chess master that hit Netflix in October.
