Exclusive: Google pledges changes to research oversight after internal revolt

(Reuters) – Alphabet Inc’s Google will change its procedures for reviewing its scientists’ work before July, according to a town hall recording heard by Reuters, part of an effort to quell internal uproar over the integrity of its artificial intelligence (AI) research.

FILE PHOTO: The Google name is displayed outside the company’s office in London, Britain, November 1, 2018. REUTERS/Toby Melville

In remarks at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company ousted two prominent women and rejected their work, according to an hour-long recording, the content of which was confirmed by two sources.

Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, Maggie Johnson, the research unit’s chief operating officer, said at the meeting. This initial change will be rolled out by the end of the second quarter, and most papers will not require additional vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” rating for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified so as not to cast Google technology in a negative light, Reuters reported.

Jeff Dean, the Google senior vice president who oversees the division, said on Friday that the “sensitive topics” review “is and was confusing” and that he had tasked a senior research director, Zoubin Ghahramani, with clarifying the rules, according to the recording.

Ghahramani, a professor at the University of Cambridge who joined Google in September from Uber Technologies Inc, said at the town hall, “We need to be comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal email seen by Reuters provided new detail on Google researchers’ concerns, showing exactly how Google’s legal department had modified one of the three AI papers, titled “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)

The email, dated Feb. 8, from a co-author of the paper, Nicholas Carlini, went to hundreds of colleagues, trying to draw their attention to what he called “deeply insidious” edits by corporate lawyers.

“Let’s be clear here,” said the approximately 1,200-word email. “If we as academics write that we have a ‘problem’ or find something ‘troubling’ and a Google lawyer requires us to change it to sound better, this is very much Big Brother stepping in.”

Required edits, according to his email, included “negative-to-neutral” swaps, such as changing the word “concerns” to “considerations” and “dangers” to “risks.” Lawyers also required the removal of references to Google technology; the authors’ finding that AI leaked copyrighted content; and the words “breach” and “sensitive,” the email said.

Carlini did not respond to requests for comment. Google, in response to questions about the email, disputed its contention that lawyers were trying to control the paper’s tone. The company said it had no issue with the topics the paper investigated, but found some legal terms used inaccurately and conducted a thorough edit as a result.

RACIAL EQUITY AUDIT

Google last week also named Marian Croak, a pioneer of internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage 10 teams studying issues such as racial bias in algorithms and technology for people with disabilities.

Croak said at Friday’s meeting that it would take time to allay AI ethics researchers’ concerns and mitigate damage to Google’s brand.

“Please hold me fully responsible for trying to change that situation,” she said on the recording.

Johnson added that the AI organization is hiring a consulting firm for a wide-ranging racial equity assessment. The first-of-its-kind audit for the department would lead to recommendations “that are going to be pretty hard,” she said.

Tensions in Dean’s division had intensified in December after Google let go of Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity and of marginalizing employees from underrepresented backgrounds. Nearly 2,700 employees signed an open letter in support of Gebru. (bit.ly/3us5kj3)

During the town hall, Dean explained what kind of scholarship the company would support.

“We want to explore responsible AI and ethical AI research,” said Dean, citing as an example the study of technology’s environmental costs. But it is problematic to cite data that is off by “close to a factor of a hundred” while ignoring Google’s more precise statistics and its efforts to cut emissions, he said. Dean had previously criticized Gebru’s paper for omitting important findings on environmental impact.

Gebru defended her paper’s citations. “It’s a really bad look for Google to come out this defensively against a paper that was cited by so many of their peer institutions,” she told Reuters.

Over the past month, employees continued to post about their frustrations on Twitter as Google investigated and then fired ethical AI co-lead Margaret Mitchell for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about race and gender inequity, and to speak up about Google’s problematic firing of Dr. Gebru.”

Mitchell had collaborated on the paper that triggered Gebru’s departure, and a version published online last month without Google affiliation listed “Shmargaret Shmitchell” as a co-author. (bit.ly/3kmXwKW)

Asked for comment, Mitchell expressed disappointment through an attorney over Dean’s criticism of the paper and said her name had been removed from it on a company order.

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Lisa Shumaker
