In solidarity with Dr. Timnit Gebru

For the past two years, Dr. Timnit Gebru ’08 MS ’10 Ph.D. ’15 was the technical co-lead of Google’s Ethical Artificial Intelligence team. This team plays an important role at Google, a company grappling with racial bias in the AI-powered search engine used by more than a billion people. Dr. Gebru, a Black woman, was also a prominent advocate for under-represented employees at Google, where only 3.7% of the workforce is Black. She recently co-authored a paper highlighting the risks of using large amounts of text data to train AI systems, such as the one that underpins Google’s search engine. But when a Google manager asked her to withdraw the paper or remove her name from it, Dr. Gebru refused and was subsequently fired.

We stand in solidarity with Dr. Gebru, along with more than 2,000 Google employees and 3,000 members of academia, industry and civil society who have signed an open letter supporting Dr. Gebru and calling on Google to increase the transparency and integrity of its research. These signatories include leading Stanford scholars studying issues related to AI, such as Dr. Dan Jurafsky, a computer science professor and MacArthur Fellow, and Marietje Schaake, the international policy director at the Cyber Policy Center.

In the coming days, we hope to see additional support for Dr. Gebru from Stanford’s most prominent voices on the ethical, social and technical dimensions of AI.

In addition to being one of Google’s foremost AI researchers, Dr. Gebru is also one of Stanford’s most prominent graduates working to improve the ethical and societal impact of AI systems. She holds three degrees in electrical engineering from Stanford, including a doctorate completed under the supervision of Dr. Fei-Fei Li. While at Stanford, she co-authored a groundbreaking paper illustrating major racial and gender bias in AI-powered facial recognition systems, which ultimately led Amazon, Microsoft and IBM to significantly scale back their facial recognition offerings. A respected leader in her field, Dr. Gebru has been instrumental in raising awareness of the flaws in AI systems more broadly – an issue that Stanford has devoted significant resources to addressing through new initiatives such as the Institute for Human-Centered Artificial Intelligence and the Ethics, Society and Technology Hub, as well as courses such as CS 182: Ethics, Public Policy, and Technological Change.

As co-chairs of the Stanford Public Interest Technology Lab – which is funded by the Ethics, Society, and Technology Hub – we advocate for the thoughtful development of technology. This includes initiatives that grapple with the ethical and societal implications of AI and address the racial inequality perpetuated by new technologies. While we write this piece in our personal capacities, our opinions are shaped by experiences that exist in part because Dr. Gebru demonstrated the need for this work and Stanford prioritized funding for it.

Working on AI ethics requires grappling with ethics itself, which means distinguishing right from wrong. Stanford professes a strong commitment to intellectual honesty and integrity; yet one of its closest industry partners, Google, violated these principles by firing Dr. Gebru. Stanford’s computer science department, the Institute for Human-Centered Artificial Intelligence and Dr. Fei-Fei Li’s AI4ALL have all called for more diversity in technology, but one of their financial supporters has created a culture that makes Black scientists feel “constantly dehumanized.” This discrepancy between Stanford’s values and the actions of its closest partner, Google, is what concerns us – especially as Stanford strives to embed ethics in AI systems.

We are writing this piece out of respect for Stanford, not in spite of it. We believe Stanford is a force for good when it lives up to its values. Stanford has pushed us to become more thoughtful and engaged citizens, and we hope to hold the institution to that same fundamental standard.

When we show solidarity with Dr. Gebru, we send a clear, resounding signal about our values. By making this point known, we are telling students – particularly students who are under-represented in technology – that we have their backs when they uncover flaws in AI systems. We are telling alumni to fight for “ethical technology,” even when what is “ethical” conflicts with Google’s business results. And we are telling Big Tech that academia can be an independent force that holds leaders accountable for trampling scientific integrity and firing Black scientists.

Stanford has developed many successful programs to address the ethics of AI, and we applaud these advancements. These efforts have given us the opportunity to help steer the development of AI toward the public good. The students who aspire to be Stanford’s next generation of “ethical technologists” are now watching how Stanford’s AI leaders and institutions respond to the firing of Dr. Gebru.

We look forward to declarations of solidarity from Stanford AI leaders and institutions, followed by actions defending the integrity of research and the dignity of under-represented researchers.

Nik Marda ’21 MS ’21 and Constanza Hasselmann ’21 MS ’22 are co-chairs of the Stanford Public Interest Technology Lab

Contact Nik Marda at nmarda ‘at’ stanford.edu and Constanza Hasselmann at cbh21 ‘at’ stanford.edu
