This year has had no shortage of bold claims about artificial intelligence breakthroughs. Industry commentators speculated that the GPT-3 language generation model may have achieved “artificial general intelligence,” while others lauded AlphaFold, the protein-folding algorithm from Alphabet’s DeepMind, for its ability to “transform biology.” While the basis of such claims is thinner than the effusive headlines suggest, this has done little to dampen enthusiasm across the industry, whose profits and prestige depend on AI’s proliferation.
It was against this backdrop that Google fired Timnit Gebru, our dear friend and colleague, and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for datasets and AI models. Ironically, this work, along with her outspoken advocacy for those underrepresented in AI research, is also the reason, she says, the company fired her. According to Gebru, after demanding that she and her colleagues retract a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, even though she had not resigned. (Google declined to comment for this story.)
Google’s horrific treatment of Gebru exposes a double crisis in AI research. The field is dominated by an elite, predominantly white male workforce, and it is controlled and funded mainly by major industry players: Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the politics of civility that held together the fledgling effort to build necessary guardrails around AI has been torn apart, bringing questions about the racial homogeneity of the AI workforce and the ineffectiveness of corporate diversity programs to the center of the discourse. But the episode has also made clear that, however sincere a company like Google’s promises may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.
This should concern us all. As AI proliferates in areas such as healthcare, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, while being embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and deploy them, while obscuring responsibility (and accountability) behind the veneer of complex computation. The stakes are high, and the incentives are decidedly perverse.
The current crisis exposes the structural barriers that limit our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are predominantly BIPOC people, women, religious and gender minorities, and the poor: those who have borne the brunt of structural discrimination. Here we see a clear racial divide between those who benefit (the companies and the predominantly white male researchers and developers) and those most likely to be harmed.
Take facial recognition technologies, for example, which have been shown to “recognize” darker-skinned people less accurately than lighter-skinned people. This alone is alarming. But these racialized “errors” are not facial recognition’s only problem. Tawana Petty, organizing director at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have successfully banned or pushed back against the use of facial recognition are predominantly white.
Without independent, critical research that centers the perspectives and experiences of those who bear the harms of these technologies, our ability to understand and challenge the industry’s overblown claims is significantly hampered. Google’s treatment of Gebru makes it increasingly clear where the company’s priorities lie when critical work pushes back against its business incentives. This makes it nearly impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.
Industry control of the field is further entrenched by the close ties between technology companies and ostensibly independent academic institutions. Researchers from companies and academia publish papers together and rub elbows at the same conferences, and some researchers even hold concurrent positions at technology companies and universities. This blurs the boundary between academic and corporate research and muddies the incentives behind such work. It also means the two groups look much alike: AI research in academia suffers from the same pernicious problems of racial and gender homogeneity as its corporate counterparts. Moreover, the top computer science departments accept large amounts of Big Tech research funding. We need only look to Big Tobacco and Big Oil for disturbing templates of how much influence large corporations can exert over the public understanding of complex scientific issues when knowledge creation is left in their hands.