Ferdinand Mom is a Research Engineer at Hugging Face with a background in large-scale pretraining and efficient deep learning systems.
-
16 Feb
-
17 Feb
Do Large Language Models (LLMs) think and reason? Are they perpetual information machines, producing endless coherent and correct text from finite training data? We explore how LLMs work and ask whether they are genuinely capable of rational thought and of generating endless information.
-
17 Feb
This seminar examines the mechanisms of catastrophic forgetting in large-scale AI systems, with particular emphasis on applications in neuroscience. We explore how continual learning on real-world data can lead to knowledge degradation, where sequential training progressively erodes previously acquired representations.
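As a minimal illustration of the mechanism (a toy sketch of ours, not code from the seminar; the synthetic tasks, network, and hyperparameters are all assumptions), a small classifier trained on one task and then on a second typically loses accuracy on the first:

```python
# Toy demonstration of catastrophic forgetting (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift: float):
    # Points centred at (shift, shift), labelled by the line x0 + x1 = 2*shift.
    x = torch.randn(400, 2) + shift
    y = (x.sum(dim=1) > 2 * shift).long()
    return x, y

task_a = make_task(0.0)  # earlier task
task_b = make_task(6.0)  # later task, in a different input region

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

@torch.no_grad()
def accuracy(x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

train(*task_a)
print("task A accuracy after training on A:", accuracy(*task_a))  # high
train(*task_b)  # sequential training on B, with no rehearsal of A
print("task A accuracy after training on B:", accuracy(*task_a))  # degraded
```

Rehearsal buffers or regularizers such as EWC are the usual countermeasures; the sketch omits them to expose the raw effect.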
-
18 Feb
Algorithmic information theory has a wide range of applications, despite the fact that its core quantity, Kolmogorov complexity, is incomputable.
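For reference (standard definitions in our notation, not necessarily the speaker's): fixing a universal prefix machine $U$, the Kolmogorov complexity of a string $x$ is the length of its shortest description,

\[
  K_U(x) \;=\; \min\{\, \lvert p \rvert \;:\; U(p) = x \,\}.
\]

By the invariance theorem, $\lvert K_U(x) - K_V(x) \rvert \le c_{U,V}$ for all $x$ and any two universal machines $U, V$, so the choice of machine shifts the measure only by a constant. Incomputability follows from a Berry-style argument: if $K$ were computable, a short program could search for and print the first string whose complexity exceeds that program's own length, a contradiction.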
-
18 Feb
-
18 Feb
Gaussian processes (GPs) are often considered to be the gold standard in settings where well-calibrated predictive uncertainty is of key importance, such as decision making.
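For context, the standard GP regression equations (our notation): with training inputs $X = \{x_i\}_{i=1}^n$, noisy targets $y$, noise variance $\sigma^2$, and kernel $k$, the posterior at a test input $x_*$ is Gaussian with

\[
  \mu(x_*) = k_*^{\top} (K + \sigma^2 I)^{-1} y,
  \qquad
  \operatorname{var}(x_*) = k(x_*, x_*) - k_*^{\top} (K + \sigma^2 I)^{-1} k_*,
\]

where $K_{ij} = k(x_i, x_j)$ and $(k_*)_i = k(x_i, x_*)$. The closed-form predictive variance is what underwrites the calibrated-uncertainty claim: for typical stationary kernels, uncertainty grows away from the data without any extra machinery.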
-
19 Feb
Synthetic theories such as homotopy type theory axiomatize classical mathematical objects, such as spaces up to homotopy. Although, on paper, theorems in synthetic theories translate to theorems about the axiomatized structures, this fact has not yet been exploited in proof assistants.
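As a small taste of the synthetic style in a proof assistant (a generic Lean 4 fragment of ours, in plain Lean rather than a dedicated HoTT system): identity proofs play the role of paths, and transport carries structure along them.

```lean
-- Transport: moving a datum along an identity proof. In synthetic homotopy
-- theory this is the primitive that, on paper, corresponds to moving a point
-- or a structure along a path in a space.
def transport {A : Type} (P : A → Type) {x y : A} (p : x = y) : P x → P y :=
  fun px => p ▸ px
```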
-
20 Feb
-
20 Feb
-
20 Feb
