Geometrization of AI Speaker Series Explores Theory and Practice of Large Language Models

February 20, 2026

Portrait of Misha Belkin, machine learning researcher and professor at the University of California, San Diego.

The Geometrization of AI theme within the Translational Data Analytics Institute (TDAI) welcomed Misha Belkin, Professor at the University of California, San Diego, for a two-part speaker series examining how modern artificial intelligence systems learn, represent knowledge, and can be interpreted and controlled.

Held on February 18–19, 2026, the series featured a seminar followed by an interactive workshop, both hosted in Pomerene Hall and led by TDAI Geometrization of AI theme leads Dena Asta and Subhadeep Paul. Together, the events brought faculty, postdoctoral researchers, and graduate students into conversation around emerging theoretical and practical questions in large language models (LLMs).

The seminar focused on foundational questions about what large language models “know,” how that knowledge is encoded internally, and how reliably model behavior can be interpreted or steered. Belkin introduced the linear representation hypothesis as a framework for understanding internal representations in modern neural networks and discussed Recursive Feature Machines, a method for extracting meaningful structure from high-dimensional data. He demonstrated how fixed directions in a model’s activation space can correspond to semantic concepts and how manipulating these representations enables targeted monitoring and steering of model behavior.

Building on these ideas, the workshop shifted toward interactive exploration and collaboration. It featured short talks by Ohio State faculty and lightning presentations by graduate students working on feature learning, representation, and model steering. These presentations set the stage for guided discussion on how geometric and statistical perspectives can help explain generalization, robustness, and controllability in contemporary AI systems.

Dr. Belkin is widely recognized for his foundational contributions to machine learning theory, deep learning, and representation learning. His work seeks to explain how modern AI systems learn and organize information, with recent research focused on feature learning and the interpretability of large language models.

The Geometrization of AI speaker series is part of TDAI’s broader effort to foster interdisciplinary research and dialogue around the theoretical underpinnings of artificial intelligence and their implications for reliability, transparency, and control.