Department of Computer Science and Technology

Date: 
Tuesday, 2 March, 2021 - 13:15 to 14:15
Speaker: 
Michelle Lee
Venue: 
Zoom
Abstract: 

Join us on Zoom: https://zoom.us/j/99166955895?pwd=SzI0M3pMVEkvNmw3Q0dqNDVRalZvdz09

With the surge in literature focusing on the assessment and mitigation of unfair outcomes in algorithms, several open-source 'fairness toolkits' have recently emerged to make such methods widely accessible. However, the differences in approach and capability among existing fairness toolkits, and whether they are fit for purpose in commercial contexts, remain little studied. To address this, this paper identifies the gaps between the capabilities of existing open-source fairness toolkits and the needs of industry practitioners. Specifically, we undertake a comparative assessment of the strengths and weaknesses of six prominent open-source fairness toolkits, and investigate the current landscape and its gaps through an exploratory focus group, a semi-structured interview, and an anonymous survey of data science/machine learning (ML) practitioners. We identify several gaps between the toolkits' capabilities and practitioner needs, highlighting areas requiring attention and future directions towards tooling that better supports 'fairness in practice'.
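To give a flavour of what the assessment side of such toolkits computes, here is a minimal, self-contained sketch of one common group-fairness metric, the demographic parity difference (the gap in positive-prediction rates between two groups). This is purely illustrative and not drawn from the talk or any specific toolkit; the function name and toy data are assumptions.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1)
    group:  binary protected-attribute values (0/1)
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    # Positive-prediction rate within each group
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy example: group 0 receives positive predictions at a 0.75 rate,
# group 1 at a 0.25 rate, so the disparity is 0.5.
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))  # 0.5
```

Toolkits such as Fairlearn and AIF360 expose this metric (among many others) through their own APIs; the hand-rolled version above is just the underlying arithmetic.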

Seminar series: 
Artificial Intelligence Research Group Talks
