
Department of Computer Science and Technology

Early Thoughts on Epistemic Accountability for AI
Nicola J Bidwell, December 2021

Over the past year, I have begun to use the term epistemic accountability when reflecting on relationships between the logics embedded in AI and communities with diverse knowledge systems. Mandates for accountable AI and algorithmic systems, and mechanisms for accountability, are increasingly considered vital to validate their utility toward the public interest and to redress potential harms. However, my emphasis in considering accountability is less about people’s responsibilities for the actions and decisions made by or with AI and algorithmic support (e.g. Lepri et al., 2018; Diakopoulos, 2016) and more about the knowledge systems involved in producing AI. Indeed, accountability is difficult to assess and to trace to specific people, since AI systems comprise ensembles of vast and distributed networks that interact with each other (e.g. Campagnolo, 2020; Pasquale, 2015). Thus, epistemic accountability makes explicit that any ethic for AI is situated in a certain set of logics, and that the power exercised by AI relates to knowledge systems as much as to people and policies. My aim is not to add to the proliferating sets of ethical frameworks for AI (see Peters et al., 2020) but rather to sensitize their use to the consequences of what Mignolo (2002) calls the “geopolitics of knowledge”.

Transparency and explicability are among the principles common to many ethical frameworks proposed for AI (e.g. Jobin et al., 2019) and are vital to enabling other principles, including accountability and intelligibility. Transparency relates to openness about how input data is collected, labelled, used and stored, as well as about revenue sources and business models, while explicability is about explaining output functions and underlying models. Realising commitments to a human right to explanation (Goodman & Flaxman, 2016), however, is challenged not only by differences in computational and data literacies (Bhargava et al., 2015) but also because different types of explanations suit different purposes and models (Hancox-Li, 2020). For instance, the domain experts and technical experts who hold accountability in an AI system require different types of explanations about it to guide their assessment. Using formal argumentation, Hancox-Li (2020) proposes that explanations about AI should be evaluated based on their robustness, or how well they indicate reality. However, there has been little discussion about what applying an ethic of explicability, or using any of the many methods to produce explanations, might mean when translating between different logics and knowledge systems.
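To make the notion of explanation robustness concrete, the short Python sketch below illustrates one common, simpler operationalisation: checking whether a post-hoc feature-attribution explanation stays stable when an input is perturbed slightly. This is an illustration I have added rather than code from the work cited above, and it uses input-perturbation stability rather than Hancox-Li's argument about explanations holding across equally good models; the synthetic data, the classifier and the finite-difference attribution are all hypothetical stand-ins.

# Illustrative sketch only: probing whether a simple local explanation is
# robust to small input perturbations. All names and choices are hypothetical.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)

def local_attribution(model, x, eps=0.1):
    # Crude local "explanation": the change in the predicted probability of class 1
    # when each feature is nudged by eps, one at a time (finite-difference sensitivity).
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    return np.array([
        model.predict_proba((x + eps * np.eye(len(x))[j]).reshape(1, -1))[0, 1] - base
        for j in range(len(x))
    ])

# Robustness probe: compare the explanation for one input with explanations for
# nearby inputs. Large drift suggests the explanation tracks noise rather than
# anything stable about the model or the phenomenon it is meant to indicate.
x0 = X[0]
e0 = local_attribution(model, x0)
drifts = [
    np.linalg.norm(local_attribution(model, x0 + rng.normal(0, 0.05, size=x0.shape)) - e0)
    for _ in range(20)
]
print("mean explanation drift under small perturbations:", float(np.mean(drifts)))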

I chose epistemic, or relating to knowledge itself, rather than epistemological, or how we come to know what we know, to account for the implications of translating between logics (Spivak, 2000). In fact, my use of the term epistemic accountability is inspired by the “epistemic violence” that Gayatri Chakravorty Spivak (1988) describes when a person’s meanings, contributions and communicative practices are undervalued, excluded, silenced, misrepresented or systematically distorted. In philosophy, epistemic accountability has been used to refer to how we judge another person’s testimony, rely on their reasoning or treat them as a partner in a common inquiry, based on perceived epistemic norms (Kauppinen, 2018). This use relates to the “epistemic injustice” (Fricker, 2007, 2017) that arises when we do not trust a person’s account or understand their experiences because their accounts or experiences do not align with familiar concepts. Miranda Fricker (2017) distinguishes epistemic injustices that are discriminatory, because they relate to a person’s identity (e.g. gender, race), from those that arise because of unequal access to education, expert advice or information. I apply the term epistemic accountability to recognize that judgements about AI are framed by the norms of privileged knowledge systems.

Justice and fairness principles, common to many ethical frameworks, describe respecting people’s basic human and legal rights throughout design, crediting and compensating the human labour and intellectual property involved in creating algorithms, and distributing fair access to the resulting products, services and profits. However, they do not explicitly relate justice and fairness to the inclusion of different knowledge systems, or consider what fairness might mean for the ways that reality is indicated in different worldviews. Indeed, discussion of how to ensure that all people benefit equally from AI, and are equally exposed to its risks and costs, does not extend to the impacts of privileging certain knowledge systems, whether in AI design or in the dynamic and messy socio-technical contexts in which AI should be evaluated, and it separates fairness from accountability (e.g. Veale et al., 2018).

Mechanisms are needed to adjudicate disagreements among stakeholders in order to avoid eradicating cultural and moral pluralism. Most ethical guidelines for AI are produced by European and G7 member countries and, even within this constrained geography, interpretations of them vary widely (Jobin et al., 2019). This raises the question: what must be done to ensure that groups in the Global South can critically relate the logics involved in AI systems to their own experiences, livelihoods and knowledge practices? In starting to answer this question, Alan, Helen and I have been working with colleagues at IUM and with Ju|’hoan people in Namibia to explore how school mathematics curricula can function in this relation. Currently, curricula originate in the Global North; for instance, in Namibia, where I lived for seven years, the Senior Secondary Certificate maths syllabus is based, as in many African countries, on the University of Cambridge’s International Examinations. Thus, building on an existing collaboration in the Nyae Nyae Conservancy (https://www.apc.org/en/huinom-project), we are exploring the use of stories and games to make AI design more epistemically accountable (Blackwell et al., 2021; Bidwell et al., 2022). See also: https://www.designinformatics.org/event/di-webinar-nicola-bidwell-translating-time/

References

Bhargava, R., Deahl, E., Letouzé, E., Noonan, A., Sangokoya, D., Shoup, N. (2015). Beyond data literacy: Reinventing community engagement and empowerment in the age of data. Data-Pop Alliance White Paper Series. http://datapopalliance.org/wp-content/uploads/2015/11/Beyond-Data-Literacy-2015.pdf

Bidwell, N.J., Arnold, H., Blackwell, A., Nqeisji, C., Kunta, /K., Ujakpa, M. (2022). AI design and everyday logics in the Kalahari. In E. Costa, P. Lange, N. Haynes & J. Sinanan (Eds.), The Routledge Companion to Media Anthropology. In press.

Blackwell, A. (2015). Interacting with an inferred world: The challenge of machine learning for humane computer interaction. In Proceedings of Critical Alternatives: The 5th Decennial Aarhus Conference, 169–180.

Blackwell, A., Bidwell, N.J., Arnold, H., Nqeisji, C., Kunta, /K., Ujakpa, M. (2021). Visualising Bayesian probability in the Kalahari. Psychology of Programming Interest Group 2021.

Campagnolo, G.M. (2020). Social Data Science Xennials, 73–90.

Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62.

Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.

Fricker, M. (2017). Evolving concepts of epistemic injustice. In The Routledge Handbook of Epistemic Injustice.

Goodman, B., Flaxman, S. (2016). European Union regulations on algorithmic decision-making and a “right to explanation”. In ICML Workshop on Human Interpretability in Machine Learning.

Hancox-Li, L. (2020). Robustness in machine learning explanations: Does it matter? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 640–647.

Heleta, S. (2016). Decolonisation of higher education: Dismantling epistemic violence and Eurocentrism in South Africa. Transformation in Higher Education, 1(1), 1–8.

Jobin, A., Ienca, M., Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.

Kauppinen, A. (2018). Epistemic norms and epistemic accountability. Philosophers' Imprint, 18.

Lepri, B., Oliver, N., Letouzé, E., Pentland, A., Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611–627.

Mignolo, W.D. (2002). The geopolitics of knowledge and the colonial difference. South Atlantic Quarterly, 101(1), 57–96.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Cambridge, MA: Harvard University Press.

Peters, D., Vold, K., Robinson, D., Calvo, R.A. (2020). Responsible AI—two frameworks for ethical design practice. IEEE Transactions on Technology and Society, 1(1), 34–47.

Spivak, G.C. (1988). Can the subaltern speak? In C. Nelson & L. Grossberg (Eds.), Marxism and the Interpretation of Culture. Basingstoke: Macmillan, 271–313.

Spivak, G.C. (2000). Translation as culture. Parallax, 6(1), 13–24.

Veale, M., Van Kleek, M., Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, 1–14.