This seminar examines the mechanisms of catastrophic forgetting in large-scale AI systems, with particular emphasis on applications in neuroscience. We explore how continual learning on real-world data can lead to knowledge degradation, in which sequential training progressively erodes previously acquired representations. We discuss current mitigation approaches, including replay strategies, parameter regularization methods such as Elastic Weight Consolidation (EWC), gradient-based protection techniques, and context-dependent learning, as they apply to medical and neuroimaging foundation models. Finally, we consider practical and conceptual strategies to reduce forgetting and support stable, long-term learning in large neuroscience models.
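To make the regularization idea concrete, the sketch below illustrates the core of EWC: after training on a first task, each parameter is anchored to its learned value by a quadratic penalty weighted by an estimate of that parameter's importance (the diagonal Fisher information). All values here are hypothetical toy numbers chosen for illustration, not from any real model.

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC penalty anchoring theta to the task-A optimum
    theta_star, weighted per-parameter by diagonal Fisher information."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Hypothetical task-A optimum and Fisher estimates (high value = parameter
# was important for task A and should be protected from forgetting).
theta_star = np.array([1.0, -0.5, 2.0])
fisher = np.array([10.0, 0.1, 5.0])

# Task B is a toy objective 0.5 * ||theta - 0||^2, so its gradient is theta.
task_b_grad = lambda th: th
# Gradient of the EWC penalty with respect to theta.
penalty_grad = lambda th, lam: lam * fisher * (th - theta_star)

# Gradient descent on task B plus the EWC penalty, starting from theta_star.
theta = theta_star.copy()
for _ in range(200):
    theta -= 0.01 * (task_b_grad(theta) + penalty_grad(theta, lam=1.0))

# High-Fisher parameters stay near theta_star (task-A knowledge preserved);
# the low-Fisher parameter moves freely toward the task-B optimum at 0.
```

Without the penalty, all three parameters would converge to the task-B optimum and the task-A solution would be forgotten; with it, movement is selectively restricted where the Fisher estimate says task A depends on the parameter.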
