
Department of Computer Science and Technology

Date: 
Friday, 24 May, 2024 - 17:15 to 18:00
Speaker: 
Sheresh Zahoor
Venue: 
Lecture Theatre 2, Computer Laboratory, William Gates Building

In healthcare, where accurate and reliable decision-making is paramount, interpretability is essential. Traditional Machine Learning (ML) models have provided valuable insights but often lack transparency in their reasoning, limiting their effectiveness. The recent surge in ML techniques across medical fields such as radiology, cardiology, mental health, and pathology holds great promise: these techniques can improve diagnostic accuracy, enhance workflow efficiency, minimise medical errors, and ultimately improve public health outcomes. However, the "black-box" nature of many ML algorithms raises significant concerns about interpretability. The opacity of these models' decision-making processes often prevents clear explanations for their predictions, undermining trust and hindering their integration into clinical practice. This issue has driven a growing movement towards interpretable models in healthcare, shifting away from traditional approaches.

Probabilistic graphical models (PGMs), and Causal Bayesian Networks (CBNs) in particular, are emerging as front-runners among interpretable models for healthcare. CBNs offer a framework for representing causal relationships between variables, fostering a deeper understanding of the mechanisms that influence healthcare outcomes. By integrating domain knowledge and expert clinical insight, CBNs can capture more accurate causal relationships between risk factors and health outcomes. The resulting models provide a more realistic understanding of healthcare phenomena, going beyond mere correlations to unveil the underlying causal drivers. By prioritising interpretable models like CBNs, we empower healthcare professionals to make informed decisions and develop improved preventative strategies, ultimately leading to superior patient outcomes.
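To make the CBN idea concrete, here is a minimal, self-contained sketch of a three-variable causal Bayesian network. The variables (Exercise, HighBP, CVD), the chain structure, and all probabilities are hypothetical illustrations, not figures from the talk; the point is only how a CBN factorises the joint distribution along the DAG and how an intervention do(X=x) cuts the edges into X:

```python
# Toy CBN over binary variables: Exercise -> HighBP -> CVD.
# All names and numbers below are made-up assumptions for illustration.

P_exercise = {1: 0.4, 0: 0.6}        # P(Exercise)
P_bp = {                              # P(HighBP | Exercise)
    1: {1: 0.2, 0: 0.8},
    0: {1: 0.5, 0: 0.5},
}
P_cvd = {                             # P(CVD | HighBP)
    1: {1: 0.3, 0: 0.7},
    0: {1: 0.1, 0: 0.9},
}

def joint(e, b, c):
    """Joint probability, factorised along the DAG:
    P(E, B, C) = P(E) * P(B | E) * P(C | B)."""
    return P_exercise[e] * P_bp[e][b] * P_cvd[b][c]

def p_cvd_given_do_exercise(e):
    """Interventional P(CVD=1 | do(Exercise=e)): the intervention removes
    any edges into Exercise, so we fix Exercise=e and sum out HighBP."""
    return sum(P_bp[e][b] * P_cvd[b][1] for b in (0, 1))

print(f"P(CVD | do(exercise))    = {p_cvd_given_do_exercise(1):.3f}")
print(f"P(CVD | do(no exercise)) = {p_cvd_given_do_exercise(0):.3f}")
```

Comparing the two interventional risks quantifies the effect of the modifiable risk factor, which is exactly the kind of question a CBN lets clinicians ask directly.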
Our research prioritises Non-Communicable Diseases (NCDs) such as diabetes and cardiovascular disease (CVD) because of their significant public health burden. These chronic illnesses are often preventable through lifestyle modifications, highlighting the importance of identifying key modifiable risk factors. To achieve this, we conducted an extensive analysis utilising various structure-learning algorithms, which helped us identify causal pathways among potential risk factors affecting the progression of these diseases. Based on these pathways, we developed novel CBNs that represent the identified causal relationships. These CBNs offer valuable insights into the progression and prevention of NCDs, giving healthcare professionals a powerful tool to combat these diseases at their root cause.
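The talk does not specify which structure-learning algorithms were used, but a score-based exhaustive search over small DAGs illustrates the general idea: generate data from a hypothetical "true" chain, score every acyclic graph with a BIC-style criterion, and keep the best. Variable names, probabilities, and sample size are all assumptions for this sketch; note that score-based methods recover structure only up to Markov equivalence, so edge directions within the equivalence class are not identified:

```python
import math
import random
from itertools import combinations, permutations

random.seed(0)
VARS = ["S", "B", "C"]   # hypothetical: Smoking, HighBP, CVD

def sample():
    """Synthetic data from an assumed 'true' chain S -> B -> C."""
    s = int(random.random() < 0.3)
    b = int(random.random() < (0.7 if s else 0.3))
    c = int(random.random() < (0.6 if b else 0.1))
    return {"S": s, "B": b, "C": c}

data = [sample() for _ in range(2000)]

def bic(child, parents):
    """Max log-likelihood of child given parents, minus a BIC penalty."""
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0, 0])[row[child]] += 1
    ll = 0.0
    for n0, n1 in counts.values():
        n = n0 + n1
        for k in (n0, n1):
            if k:
                ll += k * math.log(k / n)
    n_params = 2 ** len(parents)          # one Bernoulli per parent config
    return ll - 0.5 * n_params * math.log(len(data))

def is_acyclic(edges):
    """A graph is a DAG iff some variable ordering respects every edge."""
    return any(all(order.index(a) < order.index(b) for a, b in edges)
               for order in permutations(VARS))

# Exhaustively score every DAG on three variables and keep the best.
all_edges = [(a, b) for a in VARS for b in VARS if a != b]
best = None
for r in range(len(all_edges) + 1):
    for edges in combinations(all_edges, r):
        if not is_acyclic(edges):
            continue
        score = sum(bic(v, [a for a, b in edges if b == v]) for v in VARS)
        if best is None or score > best[0]:
            best = (score, edges)

print("best-scoring DAG edges:", sorted(best[1]))
```

Real structure-learning libraries replace the exhaustive loop with hill-climbing or constraint-based search, since the number of DAGs grows super-exponentially in the number of variables; the scoring logic, however, follows the same pattern.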

Seminar series: 
Artificial Intelligence Research Group Talks
