Department of Computer Science and Technology

Professor Alan Blackwell’s vision is to build AI that works for sub-Saharan Africa.

Professor Alan Blackwell will spend a sabbatical year developing AI in sub-Saharan Africa. He’ll work with local academics and international organisations based in Africa, who are inventing AI that works for African contexts. Alan is Professor of Interdisciplinary Design at the Department of Computer Science and Technology. He spoke about his upcoming projects in Africa at the symposium AI for Social Good.

How is AI defined in another country?

Is there any reason to expect that the definition of AI would be different in other countries? It’s helpful to consider how much definitions of AI have already changed in the past. The definition of AI was very different 30 years ago, when key applications of expert systems were in industrial fields such as data fusion, diagnostics and predictive maintenance.

What is "AI for social good"?

How can we define “AI for social good” in 2019? It depends on the perspective you look at it from. There are multiple ways to view AI, such as whether you’re from a rich or poor country, or whether you think primarily in terms of technology or public policy.

What does AI currently mean for people focused on public policy in the UK? The catch-all term seems to be interpreted in many different ways, some having little in common:

  • Androids (i.e. movie robots)
  • Self-driving cars
  • Turing test (i.e. chatbots)
  • Any kind of algorithms
  • Machine vision & deep learning
  • Data science
  • Search engines & social media
  • Drones
  • Personal profiling

Is some AI inherently not good? We have already seen examples of this:

  • partly-autonomous vehicles, where algorithms are designed to circumvent regulation, as appears to have been the case with the Boeing 737 MAX
  • systems that turn humans into APIs: Uber drivers implementing Intelligent Agent protocols, and Amazon Mechanical Turk workers implementing Artificial General Intelligence

Are there any problems with our definition of “good”? What about the concept of simply “doing no evil”? Interestingly, Google removed its original motto “don’t be evil” from the preface of its corporate code of conduct.

Definitions of what is “good” are subject to change throughout history. In the past, many people thought slavery was acceptable, as well as a lack of gender equality. Will our definitions of what is “good” change in the future?

Working towards internationally-agreed goals for social good

Do different countries have different definitions? Can we reach an international consensus about what is “good”? The closest thing we have to this is the UN’s Sustainable Development Goals. Which of these goals count as “social good”? Again the answer differs according to perspective.

For rich countries: climate change, biodiversity, energy

For poor countries: clean water, maternal health, education, economic growth

Broad learning rather than deep learning

Alan promotes a manifesto for broad learning rather than deep learning. He emphasises the need to talk to a broad range of people about their needs, and the importance of reaching those furthest behind first. It’s not sufficient to talk only to philosophers, lawyers and politicians about what they expect from AI.

In “inventing” AI in Africa, we need to build on broadly established pragmatic engineering methods. It’s also imperative that we give people broad tools, not fragile solutions. We mustn’t assume that definitions of what constitutes “social good” are the same in Africa as those embedded in the AI that comes from corporate labs and rich universities.

Building African AI

So how do you invent AI in Africa? Crucially, Africans will be doing it. Alan works with African organisations including Data Science Africa, UN Global Pulse Lab, the African Institute of Mathematical Sciences, Deep Learning Indaba and Africa’s Voices Foundation. His approach will be to listen to what African academics, students and entrepreneurs want from AI, and do what they say, using a combination of fieldwork, ethnography and action research.

Professor Blackwell’s work will take into account different technology perspectives, including cultural and postcolonial studies. Alan will collaborate with local academics, while bearing in mind that they are often also members of elites within their own countries, with their own challenges connecting to other groups in society.

In creating AI, it’s important to be aware of where resources have come from, for example the use of non-renewable energy and minerals from conflict zones. It’s also vital to ensure business models that work for the African context. The supply-chain model generated for the Amazon Alexa shows how building AI consumes resources on a global scale.

Alan plans to spend eight months in sub-Saharan Africa, living and working in Namibia, Ethiopia, Uganda and Kenya. Part of the challenge will be finding suitable teaching materials and current toolkits for machine learning, as current resources such as MOOCs have been shown to disadvantage low-income students. AI in Africa will need to respond to different landscapes for investment and regulation. It’s not as easy to get funding for AI projects in Africa as it is in Cambridge.

For developing countries it’s vital to consider the sustainability of investment, and how long the AI boom will continue. Richer Western nations and organisations are better equipped to pivot to another field if interest in investing in AI suddenly declines, but research infrastructure in developing nations can’t be as flexible.

Alan’s work on building AI in sub-Saharan Africa is part of the University of Cambridge’s Global Challenges Initiative, where Alan is Director of Research.


Published by Jonathan Goddard on Monday 8th July 2019