Explainability plays a pivotal role in building trust and fostering the adoption of artificial intelligence (AI) in healthcare, particularly in high-stakes domains such as neuroscience, where decisions directly affect patient outcomes. Although progress in AI interpretability has been substantial, there remains a lack of clear, domain-specific guidelines for constructing meaningful and clinically relevant explanations.
In this talk, I will explore how explainable AI (XAI) can be effectively integrated into neuroscience applications. I will outline practical strategies for leveraging interpretability methods to uncover novel patterns in neural data and discuss how these insights can guide the identification of candidate biomarkers. Drawing on recent developments, I will highlight adaptable XAI frameworks that enhance transparency and support data-driven discovery.
To validate these concepts, I will present illustrative case studies involving large language models (LLMs) and vision transformers applied to neuroscience. These examples serve as proofs of concept, showing how XAI can not only translate complex model behavior into human-understandable insights but also support the discovery of novel patterns and candidate biomarkers relevant to both clinical practice and research.
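As a minimal sketch of the kind of interpretability workflow referred to above, the example below computes a simple gradient-based saliency map for a hypothetical PyTorch classifier of multichannel neural recordings; the model architecture, input shape, and two-class setup are illustrative assumptions rather than the actual case studies presented in the talk.

```python
# Minimal sketch: gradient-based saliency for a hypothetical classifier
# of neural time-series data. Model, data shapes, and labels are
# placeholders; any differentiable PyTorch model would work the same way.
import torch
import torch.nn as nn

# Hypothetical classifier mapping a (channels x timepoints) recording
# to two classes (e.g., patient vs. control).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 256, 128),
    nn.ReLU(),
    nn.Linear(128, 2),
)
model.eval()

# One synthetic recording standing in for real neural data.
x = torch.randn(1, 64, 256, requires_grad=True)

# Forward pass, then gradient of the predicted class score w.r.t. the input.
scores = model(x)
predicted = scores.argmax(dim=1).item()
scores[0, predicted].backward()

# The absolute input gradient is a basic saliency map: channels and
# timepoints with large values contributed most to the prediction and
# can be flagged for closer inspection as potential biomarker candidates.
saliency = x.grad.abs().squeeze(0)          # shape: (64, 256)
top_channels = saliency.mean(dim=1).topk(5).indices
print("Most influential channels:", top_channels.tolist())
```

In practice, richer attribution methods (e.g., integrated gradients or attention-based analyses for transformers) follow the same pattern: attribute a model's output back to its inputs, then inspect the highlighted features for scientific or clinical meaning.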