Google reportedly told AI scientists to “strike a positive tone” in research


Photo: Leon Neal / Staff (Getty Images)

Nearly three weeks after the abrupt exit of Black artificial intelligence ethicist Timnit Gebru, more details are emerging about the shady new set of policies Google has rolled out for its research team.

After reviewing internal communications and speaking to researchers affected by the rule change, Reuters reported Wednesday that the technology giant recently added a “sensitive topics” review process for its scientists’ papers, and on at least three occasions has explicitly requested that scientists refrain from casting Google’s technology in a negative light.

Under the new procedure, scientists must consult with special legal, policy, and PR teams before pursuing AI research related to so-called controversial topics, including facial analysis and categorizations of race, gender, or political affiliation.

In one example reviewed by Reuters, scientists studying recommendation AI — the technology used to populate user feeds on platforms such as YouTube, a Google property — drafted a paper outlining concerns that the technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.” After a review in which a senior manager instructed the researchers to strike a more positive tone, the final publication instead suggests that the systems can promote “accurate information, fairness, and diversity of content.”

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” states an internal webpage describing the policy.

In recent weeks — and especially following the departure of Gebru, a widely known researcher who reportedly fell out of favor with higher-ups after raising the alarm about censorship creeping into the research process — Google has faced increased scrutiny over potential bias in its internal research division.

Four staff researchers who spoke with Reuters validated Gebru’s claims, saying that they too believe Google is beginning to interfere with critical studies of technology’s potential to do harm.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” said Margaret Mitchell, a senior scientist at the company.

In early December, Gebru said she was fired by Google after pushing back against an order not to publish research claiming that AI capable of mimicking speech could put marginalized populations at a disadvantage.
