Google told scientists to use ‘a positive tone’ in AI research, documents show

Google has tightened control over its scientists’ papers this year by launching a review of “sensitive topics”, and in at least three cases it has asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Under Google’s new review process, researchers should consult legal, policy and public relations teams before pursuing topics such as facial and sentiment analysis and the categorization of race, gender or political affiliation, according to internal web pages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff said. Reuters was unable to determine the date of the post, though three current employees said the policy began in June.

Google declined to comment on this story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as the disclosure of trade secrets, eight current and former employees said.

For some projects, Google officials intervened at later stages. A senior Google manager who reviewed a study of content recommendation technology shortly before publication this summer told the authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.

The manager added, “This doesn’t mean we should hide from the real challenges” of the software.

Subsequent correspondence from a researcher to reviewers shows the authors “updated to remove all references to Google products”. A draft seen by Reuters had mentioned Google-owned YouTube.

Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.

Google states on its public-facing website that its scientists have “substantial” freedom.

Tensions between Google and some of its staff erupted this month after the abrupt departure of scientist Timnit Gebru, who, along with Mitchell, led a 12-person team focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming that AI that mimics speech could harm marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s paper underwent a “sensitive topics” review.

Jeff Dean, Google’s senior vice president, said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts underway to address them.

Dean added that Google supports AI ethics grants and is “actively working to improve our paper assessment processes because we know that too many checks and balances can become cumbersome.”

Sensitive topics

The explosion of research and development of AI in the tech industry has prompted authorities in the US and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate bias or compromise privacy.

Google has integrated AI into its services in recent years, using the technology to interpret complex search queries, decide recommendations on YouTube and auto-complete sentences in Gmail. Its researchers have published more than 200 papers on responsible AI development in the past year, among more than 1,000 projects in total, Dean said.

According to an internal web page, one of the “sensitive topics” under the company’s new policy is the study of Google services for bias. Dozens of other “sensitive topics” listed included the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, self-driving vehicles, telecoms, and systems that recommend or personalize web content.

The Google paper for which authors were told to strike a positive tone discusses recommendation AI, which services like YouTube use to personalize users’ content feeds. A draft reviewed by Reuters expressed “concerns” that this technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content”, as well as lead to “political polarization”.

The published version instead states that the systems can promote “accurate information, fairness and diversity of content”. The published version, entitled “What are you optimizing for? Aligning recommender systems with human values”, did not credit the Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes, following a request from company reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to “review and correct inaccurate translations”.

For a paper published last week, a Google employee described the review process as “long-term”, involving more than 100 email exchanges between researchers and reviewers, the internal correspondence showed.

The researchers found that AI can cough up personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been pulled from the internet to develop the system.

One draft described how such disclosures could infringe copyright or violate European privacy law, a person familiar with the matter said. After company reviews, the authors removed the legal risks and Google published the paper.
