Gemini 1.5 has shown that large-scale foundation models can process information across millions of tokens of context. This talk will explore the importance and applications of these long-context AI models, examining the current state of their capabilities and highlighting open research problems in the field of long-context understanding.