

In a week that Western nations return with renewed urgency to the insistence that Black Lives Matter, protest placards remind white spectators that “silence is violence”.

Our institutions in the Global North have been complicit in the construction of “race” in order to justify slavery, and we share responsibility for the continuing consequences of using science in ignorant and self-serving ways to the same end. So, while there is little need for commentary on race from a white professor in Cambridge, staying silent is even less justifiable.

The violence of the systems that beat, imprison and murder black people in our country is a jarring contrast to the generous welcome and respect I received in Ethiopia at the start of the AI in Africa project. During six weeks working at Bahir Dar University, I only once met another person at the University who had white skin. For me, it is unusual to walk through streets and campuses feeling that everyone watches me, wonders why I am there, and interacts with me firstly on the basis of my skin colour, and only secondly on the basis of anything else I might do or be. Walking through a university as a racial minority is the everyday life of black people in Cambridge, so my own experiences in Ethiopia and Namibia of arriving with privilege, and returning to privilege, while possibly helping me gain a glimmer of understanding, are not comparable to the reality of black lives in the UK or USA.

The AI in Africa project does, of course, encounter the consequences of historical racism and colonialism at every turn. The science and technology of the AI laboratories at wealthy and famous universities can scarcely be separated from the systems of 20th-century white imperialism, and my own previous studies and employment as an AI engineer were often reliant on military research funding in the UK and USA. It seems inappropriate to draw attention to race in places like Bahir Dar, where the Ethiopian people are proud never to have been colonised. But my enquiries with Maori and Pasifika academics in Aotearoa New Zealand helped me to see some of the unquestioned assumptions of Eurocentric philosophies and neocolonial global corporations.

Maori academic Linda Tuhiwai Smith sets out an agenda for Decolonizing Methodologies [1], not as a new set of options for critically-minded academics like me, but as guidance for indigenous researchers who must find a way of accommodating themselves to the systems of colonial science without betraying the knowledge and customs of their own people, or discounting the injustices that have been committed and continue to be committed. In my work in Ethiopia and Namibia, and also in Aotearoa, the most urgent responsibility is to revisit those assumptions of AI research that have been built on the systemic violence of racist colonial science and business.

In doing so, and informed by the indigenous knowledge of Maori, Samoan and Tongan friends, my attention was drawn to questions that were already understood to be problematic from the standpoint of gender. The classical understanding of AI requires that “intelligence” be a category separable from human bodies. The attributes of this dis-embodied “intelligence”, after they have been separated from the human body, can then be simulated in a machine. It is no surprise that the methodological sieves by which the chaff of human bodies is threshed away from the philosophical essence of intelligence somehow turn out to privilege white male experience. Kate Devlin and Olivia Belton [2], drawing on N. Katherine Hayles, observe that the perspective through which the rational mind is separated from a body ‘could only come from the privileged group of white, able-bodied men: most outside that group are keenly aware that the ways their bodies are read by society impacts their day-to-day existence as a marginalized subject.’

How can one do theoretical research that moves forward constructively in a systemically racist context? This is an important question not only for the AI in Africa project, but also for the work I have been doing since arriving back here in Cambridge, as co-Director of Cambridge Global Challenges. The very title of CGC seems embedded in colonial thinking. Aren’t places like Cambridge part of the problem, not part of the solution? It could be easier to stay silent, get on with scientific business-as-usual, and simply teach the supposedly objective scientific knowledge in the textbooks we already have. But honest appraisal of this option makes it seem like another kind of silence/violence.

The work of CGC is not committed to business as usual, but to working with researchers and implementation partners in local communities, co-creating research agendas informed by interdisciplinary perspectives. For us, interdisciplinary does not mean simply combining arts and sciences, or social sciences and technology, but intentionally crossing knowledge boundaries between business and academia, between public and private sectors, and also between nationalities and cultures. Some prefer to use the term “transdisciplinary” for this ambition to find new knowledge systems and practices beyond the boundaries of traditional enquiry. But just as transgender enquiry demands attention to the systems of gender privilege and oppression that are imposed on female bodies, so transdisciplinary enquiry into AI demands attention to the bodies that intelligence comes in: bodies that have been colonised, disciplined, and alienated. The practical application of AI to global challenges requires recognition of the way that AI is embodied, as a craft practice of software construction, and as a configuration of materials and practices arising from the infrastructure of digital capitalism and colonialism.

The group Black in AI has established a community of Black scholars within the AI research establishment, and initiatives such as the Deep Learning Indaba and Data Science Africa work with partners in sub-Saharan Africa to ensure that new technologies are available to the communities of the African continent. These are great initiatives, educational and empowering. But we also need to ask whether AI itself is Black enough. Are we building systems that, while claiming intelligence is disembodied, do further violence to Black bodies? Kate Crawford and Vladan Joler’s powerful visualisation Anatomy of an AI System [3] shows us where those bodies are, what that violence looks like, and confirms that even AI researchers will have to work differently if Black Lives Matter.

Notes:

  1. Linda Tuhiwai Smith. Decolonizing Methodologies: Research and Indigenous Peoples. Zed Books Ltd., 2013.
  2. Kate Devlin and Olivia Belton. “The Measure of a Woman: Fembots, Fact and Fiction.” In AI Narratives, edited by Stephen Cave, Kanta Dihal and Sarah Dillon. Oxford University Press, 2020.
  3. Kate Crawford and Vladan Joler. Anatomy of an AI System. https://anatomyof.ai