MBZUAI Nexus Speaker Series
Hosted by: Prof. Elizabeth Churchill
"Designing the next generation of human-computer interactions requires a deeper understanding of how cognition unfolds in context, shaped not only by the user’s mental and bodily states but also by their dynamic interaction with the surrounding environment. In this talk, I present a research agenda that brings together cognitive neuroscience, brain-computer interfaces (BCIs), and wearable sensing to inform the design of ubiquitous, adaptive, and unobtrusive interactive systems. Using tools such as mobile EEG, eye-tracking, motion sensors, and environment-aware computing, my work investigates how people perceive, act, and make decisions in natural settings, from high-load operational tasks such as flying a plane to everyday behaviors like walking around a city or eating a meal. This approach moves beyond screen-based interaction to develop systems that respond to users in real time, based on the continuous coupling between brain, body, and environment. By embedding cognitive and contextual awareness into system design, we can move toward calm, seamless technologies that adapt fluidly to the user’s moment-to-moment needs."
Hosted by: Prof. Natasa Przulj
The rapid growth of open-access omics data has enabled large-scale exploration of cellular states across species, tissues, and molecular modalities. Building on these resources, cellular foundation models use self-supervised learning to derive general cell representations that can be adapted to diverse downstream biological tasks, including the prediction of responses to chemical and genetic perturbations. This presentation reviews their use in modeling cellular perturbations, describing common learning frameworks, data requirements, and evaluation practices, as well as key challenges specific to single-cell data. We note emerging gaps between reported results and standardized evaluations, which highlight persistent issues in how performance is quantified across studies and benchmarks. Overall, this presentation provides an overview of the current landscape of single-cell foundation models, emphasizing both their progress and limitations in capturing perturbation-specific responses.
Hosted by: Prof. Preslav Nakov
To move beyond tools and towards true partners, AI systems must bridge the gap between perception-driven deep learning and knowledge-based symbolic reasoning. Current approaches excel at one or the other, but not both, limiting their reliability and preventing us from fully trusting them. My research addresses this challenge through a principled fusion of learning and reasoning, guided by the goal of building AI that is "Trustworthy by Design." I will first describe work on embedding formal logic into neural networks, creating models that are not only more robust and sample-efficient, but also inherently more transparent. Building on this foundation, I will show how neuro-symbolic integration enables robots to reason about intent, anticipate human needs, and perform task-oriented actions in unstructured environments. Finally, I will present a novel training-free method that leverages generative models for self-correction, tackling the critical problem of hallucination in modern AI. Together, these contributions lay the groundwork for intelligent agents that can be instructed, corrected, and ultimately trusted: agents that learn from human knowledge, adapt to real-world complexity, and collaborate seamlessly with people in everyday environments.

December 15 – 18, 2025 — MBZUAI Reception at International Conference on Statistics and Data Science (ICSDS) 2025
Hangzhou, China
Sharm El Sheikh, Egypt
Rotterdam, Netherlands
Vienna, Austria
United Kingdom
Vancouver, Canada