Google caused a stir earlier this month when it fired Timnit Gebru, the co-leader of a team of researchers at the company that studies the ethical implications of artificial intelligence. Google claims it accepted her “resignation,” but Gebru, who is Black, says she was fired for drawing unwanted attention to the lack of diversity in Google’s workforce. She was also at odds with executives over their request that she withdraw a paper she co-authored on ethical concerns with certain types of AI models that are central to Google’s business.
On this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, MIT Technology Review’s senior AI reporter, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.
The following is a partial transcript of the interview. It has been slightly edited for clarity.
World Politics Review: First, can you tell us a bit about Gebru and the stature she has in the field of AI, given the groundbreaking research she’s done, and how she ended up at Google in the first place?
Karen Hao: Timnit Gebru, you could say, is one of the cornerstones of the field of AI ethics. She received her Ph.D. at Stanford, where she was advised by Fei-Fei Li, one of the pioneers of the entire field of AI. When Timnit completed her Ph.D., she went to Microsoft for a postdoctoral research position before joining Google, which approached her based on the impressive work she’d done. Google was starting its ethical AI team and thought she would be a great person to co-lead it. One of the studies she is best known for is a paper she wrote with another Black female researcher, Joy Buolamwini, about the algorithmic discrimination that occurs in commercial facial recognition systems.
The paper was published in 2018, and at the time its revelations were pretty shocking, because it audited commercial facial recognition systems that were already being sold by tech giants. The paper’s findings showed that these systems, which were being sold on the premise that they were highly accurate, were actually extremely inaccurate, particularly on the faces of darker-skinned women. In the two years since the paper’s publication, there have been a number of events that eventually led these tech giants to shut down or stop selling their facial recognition products to police. The seed of those actions was actually planted by the paper that Timnit co-authored. So she has a very big presence in the field of AI ethics, and she’s done a lot of groundbreaking work. She also co-founded a nonprofit organization called Black in AI that advocates for diversity in tech and specifically in AI. She is a force of nature and a very well-known name in this space.
We should think about how we can develop new AI systems that don’t rely on this brute-force method of scraping billions and billions of sentences from the internet.
WPR: What exactly are the ethical issues that Gebru and her co-authors identified in the paper that led to her firing?
Hao: The paper discussed the risks of large language models, which are essentially AI algorithms trained on an enormous amount of text. You can imagine they are trained on all the articles published on the internet, all the subreddits and Reddit threads, Twitter and Instagram captions, everything, and they try to learn how we construct sentences in English and then how to generate sentences in English. One of the reasons Google is very interested in this technology is because it helps power its search engine. In order for Google to give you relevant results when you type in a search query, it has to be able to capture or interpret the context of what you are saying, so that if you type three random words, it can figure out what you are actually looking for.
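To make the idea concrete, here is a minimal sketch, not Google’s actual system, of what such a model does: given a piece of text, it predicts likely continuations based on patterns learned from large web corpora. It uses the open-source Hugging Face transformers library and the publicly available GPT-2 model; the prompt is made up for illustration.

```python
# A minimal sketch (not Google's system): use the open-source Hugging Face
# "transformers" library and the publicly available GPT-2 model to show what
# a language model does: continue text based on patterns learned from the web.
from transformers import pipeline

# Load a small, publicly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt, standing in for the start of a sentence or query.
prompt = "The best way to learn a new language is"

# The model predicts likely next words, one token at a time.
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```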
What Timnit and her co-authors point out in the paper is that this relatively recent area of research has benefits, but it also has some pretty significant drawbacks that need to be talked about more. One is that these models consume a tremendous amount of electricity, because they run in really large data centers. And given that we are in a global climate crisis, the field should consider that this line of research could exacerbate climate change and have downstream impacts that disproportionately affect marginalized communities and developing countries. Another risk they point out is the fact that these models are so large that they are very difficult to scrutinize, and they also capture huge swaths of the internet that are very toxic.
So they end up normalizing a lot of sexist, racist or otherwise offensive language that we don’t want to perpetuate into the future. But because of our lack of understanding of these models, we can’t fully dissect them and remove the kinds of things they learn. Ultimately, the paper concludes that these systems have major benefits, but also major risks, and as a field, we should spend more time thinking about how to develop new language AI systems that don’t rely so much on this brute-force method of training on billions and billions of sentences scraped from the internet.
WPR: And how did Gebru’s supervisors at Google react to this?
Hao: What’s interesting is that Timnit has said, and this is supported by her former teammates, that the paper had actually been approved for submission to a conference. This is a very standard process for her team and within the broader Google AI research organization. The whole purpose of this kind of research is to contribute to the academic discourse, and the best way to do that is to submit it to an academic conference. They had prepared this paper with a number of outside collaborators and submitted it to one of the top conferences in the field of AI ethics for next year. It had been approved by her manager and by others, but then she got a last-minute message from superiors above her manager telling her to withdraw the paper.
She was told very little about why she had to withdraw it. She then asked a lot of questions about who had told her to withdraw the paper, why they had asked her to withdraw it, and whether changes could be made to make it more palatable for submission. She kept getting stonewalled and given no additional explanation, so just before leaving for the Thanksgiving holiday, she sent an email saying she wouldn’t retract the paper unless certain conditions were met first.
Silicon Valley has an idea of how the world works based on the disproportionate representation of a particular subset of the world’s population. They are mostly straight, white men from the upper class.
She asked who had provided the feedback and what the feedback was. She also asked for meetings with more senior executives to explain what had happened. The way her research had been handled was extremely disrespectful and not the way researchers had traditionally been treated at Google, and she wanted an explanation of why they had done that. If they didn’t meet those conditions, she would have a candid conversation with them about a last date at Google, so she could make a transition plan, leave the company smoothly and publish the paper outside of a Google context. Then she went on vacation, and halfway through it, one of her direct reports texted her to say they had received an email saying Google had accepted her resignation.
WPR: In terms of the issues Gebru and her co-authors address in their paper, what does it mean for the field of AI ethics that there is this sort of moral hazard, in which the communities most endangered by the impacts Gebru and her co-authors identified, environmental impacts and the like, are marginalized and often lack a voice in the tech space, while the engineers who build these AI models are largely insulated from those risks?
Hao: I think this goes to the heart of what has been an ongoing discussion within this community for the past few years, which is that Silicon Valley has an idea of how the world works based on the disproportionate representation of a particular subset of the world’s population. They are mostly straight, white men from the upper class. The values they hold, drawn from their cross-section of lived experience, have somehow become the values that everyone is expected to live by. But it doesn’t always work that way.
They make a cost-benefit analysis and decide that it is worth building these very large language models, and spending all that money and electricity, to get the benefits of that kind of research. But that analysis is based on their values and their lived experience, and it might not be the same cost-benefit analysis that someone in a developing country would make, someone who would rather not have to deal with the effects of climate change down the line. That was one of the reasons Timnit was so adamant about making sure there is more diversity at the decision-making table. If you have more people with different lived experiences who can analyze the impacts of these technologies through their own lenses and bring their voices into the conversation, then maybe we would have more technologies that don’t deliver their benefits to one group at the cost of others.