Welcome to AI Nexus

Introducing AI Nexus

AI Talks

All Previous AI Speaker Series at MBZUAI

Abstract
Center Humans, Shape Intelligence: Human-AI Collaboration in Immersive Training and Generative Creation

The advent of Generative AI has shifted the challenge of Human-Computer Interaction (HCI) from 'hard execution' to 'hard specification'—transforming our mission from simply enabling people to use AI to empowering them to augment their capabilities, understand, and co-create with it. My research vision focuses on Human-AI Collaboration, bridging the gap between human intent and reliable outcomes by proposing a Human-AI loop paradigm. In this talk, I will highlight two primary research thrusts: (1) Immersive Simulation for Skill Training: I will demonstrate how immersive environments optimize human perception and decision-making, including using VR to bridge abstract theory and professional practice, and utilizing multi-agent simulations to study survival decisions in emergencies. (2) Human-AI Collaboration Systems: Transitioning to generative workflows, I will present expert-in-the-loop co-creation tools and scalable multi-agent orchestration engines. I will also discuss how this paradigm extends to robust AI content governance. Finally, I will conclude by outlining my future vision for AI co-creation systems, transfer-focused immersive training, and responsible generative media, demonstrating how we can center humans to effectively shape intelligence.
Abstract
Architecting Physical Intelligence: Cross-Stack Co-Design from Systems to Silicon

Physical intelligence – where embodied agents perceive, reason, plan, and act in the physical world – is emerging as a new computing frontier spanning robotics, autonomous systems, and spatial AI. However, today's physical intelligence systems remain constrained by high latency, energy cost, and fragile reliability, due to a fundamental mismatch between their compositional nature and existing computing architectures. The core challenge extends beyond algorithms to how we architect computing systems and silicon that natively support intelligence that reasons and adapts under real-world constraints. In this talk, I will present a principled cross-stack system-architecture-silicon co-design approach to building the computational foundations for physical intelligence. First, I will introduce REASON, a flexible hardware architecture and programmable SoC tapeout for efficient neuro-symbolic cognition, demonstrating how tightly integrated memory-centric computing, heterogeneous architectures, an end-to-end compilation flow, and adaptive power management enable efficient cognition in silicon. Building on this foundation, I will present ReCA, an integrated hardware architecture that bridges high-level cognition and low-level autonomy under stringent power and latency constraints by leveraging spatial-aware runtimes, heterogeneous fabrics, and hybrid memory hierarchies. Finally, I will highlight our agile SoC design flows that translate evolving cognition and autonomy workloads into efficient silicon implementations. By bridging computer architecture, system software, and silicon validation, my research establishes adaptive, accelerator-rich computing substrates for physical intelligence. This work advances a vision in which AI and hardware are co-designed to co-reason and co-adapt, architecting future computing systems as active enablers of intelligence in the physical world.
Guillaume Adrien Sartoretti
High-Dimensional Multi-Agent Robot Learning
 February 23, 2026

Guillaume Adrien Sartoretti Assistant Professor in Mechanical Engineering at National University of Singapore (NUS)

Hosted by: Prof. Yoshihiko Nakamura
Robotics
Abstract
High-Dimensional Multi-Agent Robot Learning

As robotic systems grow more capable and ubiquitous, their increasing scale and complexity necessitate a shift toward robust, scalable controllers and automated synthesis methods. My group has approached this challenge by turning to distributed (multi-agent) reinforcement learning (MARL) approaches, with an emphasis on understanding and eliciting emergent coordination/cooperation in multi-robot systems and articulated robots (where agents are individual joints). There, our focus lies in improving information representations and neural architectures, as well as devising learning techniques that can help them explore their high-dimensional joint policy space, to identify and reinforce high-quality policies that naturally fit together towards team-level cooperation. In this talk, I will discuss the three main areas my group has been investigating: imitation learning, modularized/hierarchical neural structures, and learning scaffolding. I will describe these techniques within a wide variety of robotic applications, such as multi-agent pathfinding, autonomous exploration/search, traffic signal control, collaborative manipulation, and legged loco-manipulation. Finally, I will also briefly touch on some of our ongoing and future work. Throughout this journey, my goal will be to highlight the key challenges surrounding learning representation, policy space exploration, and scalability/robustness of learned policies, and outline some of the open avenues for research in this exciting area of robotics.
Imon Banerjee
Statistical Inference with Time-Dependent Data
 February 20, 2026

Imon Banerjee Research Assistant Professor in Industrial Engineering and Management Sciences at Northwestern University

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Abstract
Statistical Inference with Time-Dependent Data

Historically, tools developed for statistical inference and control have relied heavily on the independence of the samples. However, the advent of methods that continuously draw samples from a single source makes the samples dependent. Statistical inference is far more challenging for dependent data without assuming strict structures such as autoregressive or moving-average models. This talk concerns regenerating stochastic processes: a structure richer than simple dependence models like AR/ARIMA, yet still amenable to rigorous statistical guarantees in both time-homogeneous and time-inhomogeneous settings.
Alexey Naumov
Machine Learning
Abstract
Stochastic optimal control approach to generative modelling and Schrödinger potential estimation

The stochastic optimal control problem with a final constraint provides a natural way to construct a Schrödinger bridge between two distributions, making it well-suited for generative modelling. In this problem, the optimal control can be expressed through the Schrödinger potential, which depends on the target distribution (typically unknown in practice). We address the problem of estimating this potential from finite samples. Focusing on estimators that minimize the empirical Kullback–Leibler (KL) divergence, we study their generalization abilities. Despite the loss function's unusual structure, we show that it exhibits favourable geometric properties under mild assumptions that hold for a broad class of target distributions. We derive non-asymptotic, high-probability upper bounds for the potential estimation accuracy, measured in terms of excess KL-risk. In the second part of the talk, we show that the Schrödinger system can be rewritten in terms of a single positive transformed potential that satisfies a nonlinear fixed-point equation, and estimate this potential by empirical risk minimization over a function class. The talk is based on joint work with D. Belomestny, N. Puchkin and D. Suchkov.
Safwan Hossain
Information Design for the Information Age
 February 18, 2026

Safwan Hossain PhD candidate in Computer Science at Harvard University

Hosted by: Prof. Eric Moulines
Machine Learning
Abstract
Information Design for the Information Age

Information design is a seminal concept in economics wherein a party with an information advantage can strategically reveal it to influence the actions of a rational decision-maker. This talk centers on my efforts to bridge this model to emerging computational and machine learning paradigms. While the classic model assumes that only the quantitative structure of information matters, behavioral economics and psychology emphasize that the framing of information also plays a key role. My recent work formalizes a language-based notion of framing for information design and combines analytical methods for designing information structures with LLMs that optimize the language/framing. I explore, both theoretically and empirically, when this LLM-augmented approach is tractable. I will also discuss a second work that uses information design as a lightweight approach to content moderation on social media. Doing so requires a new framework where the information advantage originates from a machine learning model and the interaction is dynamic with long-term intervention effects. I will conclude by connecting these threads to my broader research agenda on strategic decision-making in multi-agent systems.
Abstract
Designing for Complexity Across the Flight Project Lifecycle. Why Navigating Ambiguity, Emotion, and Power Dynamics in Aerospace Remains a Human-Only Mission

After spending years working through every phase of the flight project lifecycle, I’ve realised that the most critical part of the system isn't the hardware—it’s the humans. I’m here to talk about why Human-Centered Design is our most effective tool for risk mitigation. We’re often told AI is the future, but AI fundamentally lacks the ability to understand why we are building these systems and for whom we are building them.
Luo Mai
Bringing LLM Inference to Wafer Scale Systems
 February 18, 2026

Luo Mai Associate Professor, University of Edinburgh

Hosted by: Youcheng Sun
Computer Science
Abstract
Bringing LLM Inference to Wafer Scale Systems

Emerging AI accelerators increasingly adopt wafer-scale integration, combining hundreds of thousands of cores with massive on-chip memory and ultra-high bandwidth. Yet, existing LLM inference systems—designed primarily for GPUs—cannot fully exploit this architecture. In this talk, I will present WaferLLM, the first LLM inference system designed specifically for wafer-scale accelerators. WaferLLM introduces new approaches for wafer-scale prefill and decode parallelism, KV-cache management, and high-performance kernels—MeshGEMM and MeshGEMV—to maximize hardware utilization. On commodity hardware (Cerebras WSE-2), WaferLLM achieves 2,700 tokens per second for a single user, translating to less than one millisecond per token and demonstrating its potential for efficient scaling in test-time compute. 
Christian Andersson Naesseth
SDE Matching: Simulation-Free Learning of Stochastic Dynamics
 February 16, 2026

Christian Andersson Naesseth Assistant Professor of Machine Learning at University of Amsterdam

Hosted by: Prof. Eric Moulines
Machine Learning
Abstract
SDE Matching: Simulation-Free Learning of Stochastic Dynamics

Stochastic differential equations (SDEs) provide a flexible framework for modeling time series, dynamical systems, and sequential data. However, learning SDEs from data typically relies on adjoint sensitivity methods, which require repeated simulation, time discretization, and backpropagation through approximate SDE solvers, leading to significant computational overhead and limited scalability. We introduce SDE Matching, a simulation- and discretization-free approach for learning stochastic dynamics directly from data. Building on recent advances in score matching and flow matching for generative modeling, we extend these ideas to the dynamical setting, enabling direct learning of SDE drift and diffusion terms without numerical simulation. SDE Matching replaces solver-based training with a regression-like objective defined on transformed data samples, eliminating the need for backpropagation through stochastic trajectories. Empirically, SDE Matching achieves accuracy comparable to adjoint sensitivity-based methods while substantially reducing computational cost, offering a scalable alternative for learning stochastic dynamical systems. We demonstrate these results across a range of synthetic and real-world dynamical modeling tasks.
Bibhas Chakraborty
Computational Biology
Abstract
Reinforcement Learning in Health-related Sequential Decision Problems: From Dynamic Treatment Regimes to Mobile Health

In recent years, Reinforcement Learning (RL) has gained a prominent position in addressing health-related sequential decision-making problems. In this talk, we will discuss two such sequential decision-making problems: (1) dynamic treatment regimes (DTRs), i.e., clinical decision rules for adapting the type, dosage and timing of treatment according to an individual patient's characteristics and evolving health status; and (2) just-in-time adaptive interventions (JITAIs) in mobile app-based behavioral nudges in population health. Specifically, we will illustrate the similarities and differences between these two types of RL problems (e.g., offline vs. online RL), common algorithms used in these two settings (e.g., Q-learning vs. Thompson sampling), and real-life case studies.
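The online (JITAI) side of this contrast is easy to illustrate. Below is a minimal Beta-Bernoulli Thompson sampler choosing between two hypothetical nudge interventions; the success rates, function names, and reward model are invented for illustration and are not taken from the talk.

```python
import random

def thompson_step(successes, failures, rng):
    """Sample a success rate from each arm's Beta posterior and
    pick the arm with the highest draw."""
    samples = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

def run(true_rates, n_rounds, seed=0):
    """Simulate n_rounds of nudge selection against Bernoulli outcomes."""
    rng = random.Random(seed)
    k = len(true_rates)
    successes, failures, pulls = [0] * k, [0] * k, [0] * k
    for _ in range(n_rounds):
        arm = thompson_step(successes, failures, rng)
        pulls[arm] += 1
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return pulls

# Two hypothetical nudges: arm 1 is genuinely more effective.
pulls = run([0.2, 0.6], n_rounds=500)
```

After a few hundred simulated decisions, most pulls concentrate on the better nudge; in a real JITAI the "success" signal would be a proximal outcome such as activity following the prompt.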
Abstract
MLMC: Visualizing Multi-Label Classification. A Tool for Intuitively Evaluating and Comparing Classifiers at Global, Label and Instance Levels

Machine learning classifiers are increasingly applied to complex tasks such as audio tagging, image labeling, and text classification -- many of which require multi-label classification. Traditional evaluation tools, often limited to single metrics such as accuracy, fall short of providing insight into classifier behavior across multiple labels. To address this, we present MLMC, an interactive visualization tool for evaluating and comparing multi-label classifiers. Based on expert interviews, MLMC supports analysis at instance-, label-, and classifier-level views, offering a scalable, more interpretable alternative. We demonstrate its use across three different domains and describe its core algorithms and user interface. Two pilot studies (N=6 each) provided insight into MLMC's usability and showed improved task accuracy, consistency, and user confidence compared to confusion matrices. Results highlight MLMC's potential as a practical tool for intuitive evaluation of multi-label classifiers, with implications for a broad range of machine learning applications. Our approach follows the Design Study Methodology, which is rooted in Human-Centered Design.
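The label-level view that such tools build on starts from per-label confusion counts, which generalize the binary confusion matrix to the multi-label setting. A minimal sketch, assuming each sample's labels are given as a set of indices (the function name and toy data are illustrative, not MLMC's API):

```python
def per_label_confusion(y_true, y_pred, n_labels):
    """Count TP/FP/FN/TN separately for every label; each sample's
    true and predicted labels are given as sets of label indices."""
    stats = {l: {"tp": 0, "fp": 0, "fn": 0, "tn": 0} for l in range(n_labels)}
    for true, pred in zip(y_true, y_pred):
        for l in range(n_labels):
            if l in true:
                key = "tp" if l in pred else "fn"
            else:
                key = "fp" if l in pred else "tn"
            stats[l][key] += 1
    return stats

# Three toy samples over three labels.
y_true = [{0, 2}, {1}, {0, 1, 2}]
y_pred = [{0}, {1, 2}, {0, 2}]
stats = per_label_confusion(y_true, y_pred, n_labels=3)
```

From these counts, per-label precision/recall follow directly, which is the kind of information a single accuracy number hides.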
Abstract
Big Data and the Global Past: AI, Complexity Science and the Co-Evolution of Human Cultures and Environments

Understanding the deep human past requires analytical frameworks capable of integrating diverse datasets and tracing long-term trajectories of cultural and environmental change. Archaeology—uniquely positioned at the intersection of material culture, ecology, and human behaviour—holds unparalleled potential to address these challenges. This talk presents a suite of pioneering studies in which artificial intelligence, network science, and complexity theory are applied to Eurasian archaeological datasets, offering the most robust quantitative framework to date for modelling cooperation, exchange, and cultural co-evolution. The first part of the talk focuses on the origins of metallurgy in the Balkans between the 6th and 3rd millennia BC, where copper production and circulation first took recognisable regional form. Using trace element and lead isotope analyses from 410 artefacts across c. 80 sites (6200–3200 BC), we apply seven community detection algorithms—including Louvain, Leiden, Spinglass, and Eigenvector methods—to reconstruct prehistoric copper-supply networks. These models reveal stable and meaningful supply communities that correlate strikingly with regional archaeological cultures such as Vinča, KGK VI and Bodrogkeresztúr. By critically evaluating algorithm performance on archaeological compositional data, this case study not only demonstrates the power of network science for reconstructing prehistoric exchange but also challenges the traditional, typology-based concept of “archaeological culture.” It exemplifies how AI and complexity science can rigorously decode patterns of cooperation, resource movement, and social boundaries in the deep past.
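The talk applies Louvain, Leiden, Spinglass, and Eigenvector methods from standard network-science libraries; as a self-contained stand-in from the same family, here is asynchronous label propagation run on a toy six-node "supply network" (the graph and all values are invented for illustration):

```python
import random

def label_propagation(adj, seed=0, max_iter=200):
    """Asynchronous label propagation: each node repeatedly adopts the
    most common label among its neighbours until labels stabilise."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if counts:
                best = max(counts, key=counts.get)
                if labels[v] != best:
                    labels[v] = best
                    changed = True
        if not changed:
            break
    return labels

# Toy "supply network": two dense clusters joined by one weak tie (2-3).
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
labels = label_propagation(adj)
```

On this toy graph the dense triangles end up sharing labels; on real compositional data, the talk's point is precisely that different algorithms can recover different, and differently stable, communities.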
Junchi Yan
Machine Learning for Combinatorial Optimization
 February 11, 2026

Junchi Yan Professor in Artificial Intelligence at Shanghai Jiao Tong University

Hosted by: Prof. Zhiqiang Xu
Machine Learning
Abstract
Machine Learning for Combinatorial Optimization

In this talk, I will discuss the development of machine learning for combinatorial optimization, covering general methodology and especially generative models for AI4Opt. I will show how the idea of diffusion models could be introduced to solve the notoriously hard combinatorial problems. I will also share some forward-looking ideas on future research directions.
Subhasis Chaudhuri
Continual Learning (… and Forgetting too)
 February 11, 2026

Subhasis Chaudhuri Professor in Electrical Engineering at Indian Institute of Technology Bombay

Hosted by: Muhammad Haris Khan
Computer Vision
Abstract
Continual Learning (… and Forgetting too)

We spend a lot of time training a network to recognize a fixed set of object types in a scene. If we later need to induct new object classes into the recognition engine, should we retrain the network from scratch? Can we tweak the network so that it incrementally learns new object classes? Unfortunately, any attempt to incrementally learn new concepts may also lead to forgetting, often catastrophic, of previously learnt concepts. Conversely, can we selectively forget a few concepts when this is required for socio-technical reasons? In this talk, we shall discuss how some of these objectives can be achieved.
Abstract
Trapped in the Sweet Porridge: Reclaiming Autonomy in the Age of AI

Mark Weiser imagined technology as “refreshing as a walk in the woods.” Today, however, the digital landscape often resembles a dense and opaque environment that limits autonomy and traps users in systems designed to maximise data collection. Modern “dumb-smart” technologies frequently solve problems that do not exist, offering the convenient but ultimately constraining “sweet technological porridge” that reduces critical engagement with the tools we rely on. The rise of AI makes this challenge urgent. Intelligent systems increasingly shape our information, decisions, and everyday interactions. Users who cannot interrogate or influence these systems risk losing control over both their data and their autonomy.
Shaimaa Lazem
Beyond Universal Models: Weaving non-Western threads for a Pluralistic AI Future
 February 11, 2026

Shaimaa Lazem Associate Research Professor at City of Scientific Research and Technological Applications

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Abstract
Beyond Universal Models: Weaving non-Western threads for a Pluralistic AI Future

In this talk, I will share some of my past and current work on technology design in the Arab and African contexts, highlighting the importance of addressing cultural specificities and understanding their subtleties and nuances. Particularly, I am excited to share information and recent progress on my recent Google project with Prof. Elizabeth Churchill on designing primitives for culturally situated human-AI collaboration in the Arab world. This work addresses a gap in investigating how Arab users collaborate and build trust with AI tools. I would welcome input and thoughts from MBZUAI, a world-leading AI institute, to help us shape the future of culturally-localized and trustworthy AI.
Zhouchen Lin
Machine Learning
Abstract
Stepsize anything: A unified learning rate schedule for budgeted-iteration training

The expanding computational costs and limited resources underscore the critical need for budgeted-iteration training, which aims to achieve optimal learning within predetermined iteration budgets. While learning rate schedules fundamentally govern the performance of different networks and tasks, particularly in budgeted-iteration scenarios, their design remains largely heuristic, lacking theoretical foundations. In addition, finding the optimal learning rate schedule requires extensive trial-and-error, making the training process inefficient. In this work, we propose the Unified Budget-Aware (UBA) schedule, a theoretically grounded learning rate schedule that consistently outperforms commonly-used schedules across diverse architectures and tasks under different constrained training budgets. First, we bridge the gap by constructing a novel training-budget-aware optimization framework, which explicitly accounts for robustness to landscape curvature variations. From this framework, we derive the UBA schedule, controlled by a single hyper-parameter φ that provides a trade-off between flexibility and simplicity, eliminating the need for per-network numerical optimization. Moreover, we establish a theoretical connection between φ and the condition number, adding interpretation and justification to our approach, and we prove convergence for different values of φ. We offer practical guidelines for its selection via theoretical analysis and empirical results. Extensive experiments show that UBA consistently surpasses commonly-used schedules across diverse vision and language tasks, spanning network architectures (e.g., ResNet, OLMo) and scales, under different training-iteration budgets.
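The UBA formula itself is not reproduced in this abstract, so no attempt is made to implement it here. For orientation only, the sketch below shows what a budget-aware schedule looks like in the simplest commonly used case: cosine decay parameterized by the fraction of the iteration budget already consumed (the budget and learning-rate values are illustrative assumptions).

```python
import math

def cosine_schedule(step, total_steps, lr_max, lr_min=0.0):
    """Budget-aware cosine decay: the learning rate depends only on the
    fraction of the iteration budget used so far."""
    t = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t))

# A 100-iteration budget starting from lr_max = 0.1.
lrs = [cosine_schedule(s, total_steps=100, lr_max=0.1) for s in range(101)]
```

Making `total_steps` an explicit argument is the "budgeted" part: shrinking the budget reshapes the whole curve rather than truncating it mid-decay.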
Nirav Bhatt and Ramkrishna Pasumarthy
Controllability of Functional Brain Networks
 February 10, 2026

Nirav Bhatt and Ramkrishna Pasumarthy IIT Madras Zanzibar

Hosted by: Prof. Eduardo Beltrame
Computational Biology
Abstract
Controllability of Functional Brain Networks

Recent research has imported tools from network science and control theory to study the controllability properties of brain circuits and to investigate the possibility of restoring or enhancing brain activity using brain stimulation. However, a fundamental challenge is that current notions of controllability, based on the structural connections of the human brain, may be inadequate for the study of human brain function. We use system identification, network science, stability analysis, and control theory to probe functional circuit dynamics during working-memory task performance. Our main finding is that network controllability decreases with working-memory load and that salience network (SN) nodes show the highest functional controllability. Our findings reveal dissociable roles of the SN and the frontoparietal network (FPN) in systems control and provide novel insights into the dynamic circuit mechanisms by which cognitive control circuits operate asymmetrically during cognition.
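Network controllability in this literature is typically grounded in linear dynamics x' = Ax + Bu and the Kalman rank condition: the system is controllable when [B, AB, ..., A^(n-1)B] has full rank. The toy sketch below applies that test to a three-node chain; the matrices are invented for illustration and this is not the speakers' actual analysis pipeline.

```python
def mat_mul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def rank(M, tol=1e-9):
    """Matrix rank via Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def is_controllable(A, B):
    """Kalman rank test: [B, AB, ..., A^(n-1)B] must have rank n."""
    n = len(A)
    blocks, cur = [], B
    for _ in range(n):
        blocks.append(cur)
        cur = mat_mul(A, cur)
    C = [sum((blk[i] for blk in blocks), []) for i in range(n)]  # stack blocks
    return rank(C) == n

# Toy 3-node chain (1-2-3), driven from node 1 only.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
B = [[1], [0], [0]]
```

Driving the chain from an end node is controllable, whereas driving it from the symmetric middle node is not: the input cannot distinguish the two end nodes, a small analogue of why the choice of stimulation site matters.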
Siyuan Guo
Physics of Learning and Structure
 February 9, 2026

Siyuan Guo PhD Candidate at the University of Cambridge and Max Planck Institute for Intelligent Systems

Hosted by: Prof. Chih-Jen Lin
Machine Learning
Abstract
Physics of Learning and Structure

In physics, phenomena such as light propagation and Newtonian mechanics obey the principle of least action: the true trajectory is a stationary point of the Lagrangian. In our recent work [1], we investigated whether learning, too, follows a least-action principle. We model learning as stationary-action dynamics on information fields. Concretely, we derive classical learning algorithms as stationary points of information-field Lagrangians, recovering Bellman optimality from a reward-based Hamiltonian and Fisher-information–aware updates for estimation. This potentially yields a unifying variational view across reinforcement learning and supervised learning, and suggests optimisers with testable properties. Conceptually, it treats the training of a learning system as the dynamical evolution of a physical system in an abstract information space. Structure is also central to learning, enabling interventional reasoning and scientific understanding. Causality provides a framework for discovering structure from data under the hypothesis that causal mechanisms are independent. In earlier work [2], we formalise independent mechanisms as independent latent variables controlling each mechanism, and show how this perspective extends across effect estimation, counterfactual reasoning, representation learning, and reinforcement learning. Methodologically, in collaboration with Prior Labs, we developed Do-PFN [3], a pre-trained foundation model that performs in-context causal inference. This serves as a promising out-of-the-box tool for practitioners working across diverse scientific domains.

References
[1] Siyuan Guo and Bernhard Schölkopf. Physics of Learning: A Lagrangian Perspective to Different Learning Paradigms. arXiv preprint arXiv:2509.21049, 2025.
[2] Siyuan Guo*, Viktor Tóth*, Bernhard Schölkopf, and Ferenc Huszár. Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data. Advances in Neural Information Processing Systems (NeurIPS), 2023.
[3] Jake Robertson*, Arik Reuter*, Siyuan Guo, Noah Hollmann, Frank Hutter, and Bernhard Schölkopf. Do-PFN: In-Context Learning for Causal Effect Estimation. Advances in Neural Information Processing Systems (NeurIPS), 2025. (Spotlight; acceptance rate 3.19%.)
Chaithanya Band
Statistics and Data Science
Abstract
Orchestrating Agents Under Constraints: Optimization, Evaluation, and Small-Model Proxies

Tool-using LLM agents can be best understood as resource-constrained decision systems. Each run implicitly solves an operations problem: how to allocate scarce budget (tokens, latency, tool-call limits, and verification/judging compute) across planning, execution, recovery, and checking—under uncertainty about tool reliability, user intent, and when to stop. In this talk, I’ll connect modern agent design to classic OR ideas—sequential decision-making, budgeted optimization, scheduling, and robust objectives—and show how this framing leads to systems that are measurably more reliable, not just larger. I’ll walk through a unified set of results across three themes: (1) tool orchestration in realistic multi-tool environments, with evaluation designed to be diagnostic and trajectory-agnostic; (2) open-ended research agents evaluated via structured rubrics that surface systematic failure modes and make iteration scientific; and (3) cost-aware evaluation protocols, where debate/deliberation and budgeted stopping explicitly trade off accuracy against compute to trace a cost–accuracy frontier.  Finally, I’ll discuss why small-model proxies (“analogs”) are a practical accelerator for this agenda: they enable faster experimentation on orchestration policies and evaluation designs at a fraction of the cost, while preserving the failure modes that matter. I’ll close with how these ideas translate into ongoing research collaborations with startups, developing deployable agents with explicit budgets, measurable guarantees, and clear reliability trade-offs.
Xuegong Zhang
From AI for Biological and Medical Science to Virtual Cells 
 February 6, 2026

Xuegong Zhang Professor in Bioinformatics and Pattern Recognition and Director of the Bioinformatics Division at Tsinghua University

Hosted by: Prof. Aziz Khan
Computational Biology
Abstract
From AI for Biological and Medical Science to Virtual Cells 

Many tasks in biological and medical science can be modeled as pattern recognition tasks, and AI is playing an increasingly important role in them. With the enrichment of single-cell-level high-throughput omics data, it is now even possible to build digital virtual cells with advanced AI foundation models. Prof. Xuegong Zhang has been one of the leading researchers in using AI for cutting-edge pattern recognition tasks in biology and medicine, and in promoting the concept and practice of developing AI virtual cell models. In this seminar, he will provide an overview of both fields based on his group's work in the past two decades, and discuss future trends in AI for biology and medicine.
Abstract
Time Will Tell: Transforming Digital Health Data into Meaningful Distributions

Modern digital devices continuously record physiological signals such as heart rate and physical activity, generating rich but complex data that evolve over time and across individuals. This talk introduces flexible statistical frameworks that move beyond modeling averages to capture full outcome distributions and dynamic time patterns. By representing responses through quantile functions and allowing data‐driven transformations of time, the proposed methods provide a unified way to study how entire distributions change with covariates and over the course of daily life. These approaches enable more nuanced questions: not only how a typical heart rate responds to activity, but how variability, extremes, and temporal dynamics differ across individuals and contexts. Applications to continuously monitored wearable data demonstrate how the methods reveal interpretable features of human behavior and physiology, offering powerful tools for digital health research and personalized monitoring.
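Representing a response through its quantile function, as described above, is straightforward to illustrate. A minimal sketch of an empirical quantile function over a toy heart-rate sample (the readings are invented for illustration):

```python
import math

def quantile_function(samples, p):
    """Empirical quantile function Q(p): the smallest sample value x
    such that at least a fraction p of the data is <= x."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    xs = sorted(samples)
    return xs[math.ceil(p * len(xs)) - 1]

# One toy day of resting-to-active heart-rate readings (bpm).
heart_rates = [62, 65, 70, 71, 74, 80, 88, 95, 110, 130]
curve = [quantile_function(heart_rates, p / 10) for p in range(1, 11)]
```

The curve Q(p) as a whole, rather than a single mean, becomes the modeled object, so variability and extremes (here the 110 and 130 bpm tail) are carried into the analysis instead of being averaged away.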
Marco Romano-Silva
Cognitive Impairment and Peripheral Inflammation
 February 6, 2026

Marco Romano-Silva Professor of Psychiatry at Universidade Federal de Minas Gerais

Hosted by: Eduardo Beltrame
Computational Biology
Abstract
Cognitive Impairment and Peripheral Inflammation

Cognitive impairment is increasingly recognized as a systemic phenomenon rather than a purely brain-restricted disorder. Across neurodevelopmental conditions, psychiatric disorders, post-infectious syndromes such as long COVID, cancer-related cognitive impairment, and neurodegenerative diseases, peripheral inflammation emerges as a shared and biologically meaningful contributor to cognitive vulnerability. This convergence across diagnostic categories suggests that inflammatory processes act as cross-cutting modifiers of brain function rather than disease-specific epiphenomena. Our results show that inflammatory burden outside the central nervous system is consistently associated with selective cognitive deficits. Importantly, these associations are detectable before overt neurological or psychiatric deterioration, indicating a role in shaping cognitive trajectories rather than merely reflecting established disease. Rather than acting as a nonspecific background factor, peripheral inflammation appears to organize distinct and clinically relevant cognitive phenotypes, with implications for risk stratification, prognosis, and early intervention. This perspective reframes cognitive impairment as a dynamic outcome of systemic brain–body interactions, opening new avenues for prevention-oriented approaches to brain health.
Wolfgang Lehner
Reproducible Query Optimization Research for Data Systems
 February 5, 2026

Wolfgang Lehner Professor in Database Research and Director of the Institute of Systems Architecture at Dresden University of Technology

Hosted by: Prof. Xiaosong Ma
Computer Science
Abstract
Reproducible Query Optimization Research for Data Systems

Identifying reasonably good plans to execute complex queries in large data systems is a crucial ingredient for a robust data management platform. The traditional cost-based query optimizer approach enumerates different execution plans for each individual query, assesses each plan based on its costs, and selects the plan that promises the lowest execution costs. However, as we all know, the optimal execution plan is not always selected, opportunities are missed, and complex analytical queries might not even work. Thus, query optimization for data systems is a highly active research area, with novel concepts being introduced continuously. The talk will discuss this research area by addressing three distinct themes. First, the talk shows the potential of optimizer improvements by sharing insights from a comprehensive and in-depth evaluation. Based upon this analysis, the talk introduces TONIC and FASTgres. TONIC is a novel cardinality estimation-free extension for generic SPJ query optimizers, revising operator decisions for arbitrary join paths based on learned query feedback. FASTgres is a context-aware classification strategy for steering existing optimizers using hint set prediction. Finally, the talk sheds light on PostBOUND, a novel optimizer development and benchmarking framework that enables rapid prototyping and common-ground comparisons, serving as a base for reproducible optimizer research.
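The enumerate-cost-select loop of a cost-based optimizer can be sketched in a few lines: exhaustively enumerate left-deep join orders and score each by total intermediate-result cardinality. The table names, cardinalities, and uniform-selectivity cost model below are invented toy assumptions, far simpler than a real optimizer's.

```python
from itertools import permutations

def cheapest_join_order(card, sel):
    """Enumerate left-deep join orders over the tables in `card` and
    return (cost, order) minimizing summed intermediate cardinality."""
    best = (float("inf"), None)
    for order in permutations(card):
        rows = card[order[0]]
        cost = 0.0
        for t in order[1:]:
            rows = rows * card[t] * sel  # uniform join selectivity assumption
            cost += rows                 # cost = size of each intermediate result
        if cost < best[0]:
            best = (cost, order)
    return best

# Toy catalog: base-table row counts and one shared join selectivity.
card = {"orders": 10_000, "customers": 1_000, "items": 100_000}
cost, order = cheapest_join_order(card, sel=1e-4)
```

Joining the two smaller tables first keeps intermediate results small; real optimizers reach the same conclusion with dynamic programming and per-predicate cardinality estimates rather than brute-force enumeration.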
Alessandra Carbone
Decoding missense variants
 February 2, 2026

Alessandra Carbone Professor in Computer Science at Sorbonne University

Hosted by: Prof. Aziz Khan
Computational Biology
Abstract
Decoding missense variants

Natural protein sequences observed today are the result of evolutionary processes selecting for function. They can inform us about which sequence variations affect proteins’ biological functions, and how, a central question in biology, bioengineering and medicine. The increasing wealth of genomic data has enabled the accurate prediction of complete mutational landscapes. State-of-the-art methods addressing this problem explicitly or implicitly model inter-dependencies between all positions in the sequence of interest to predict the effect of a particular mutation at a particular position. They infer hundreds of thousands of parameters from very large multiple sequence alignments. They require large variability in the input data and remain time-consuming. Here, we present PRESCOTT (https://prescott.lcqb.upmc.fr/), a fast, scalable and interpretable method to predict mutational outcomes. PRESCOTT considers the evolutionary history that relates natural sequences, structural information, and allele frequency in human populations, when available. I will present the problem, the model, the impacts in genomic medicine, some applications guiding experiments in LLPS, and PRESCOTT’s answers to the recent international CAGI7 challenges.
Yidong Zhou
Causal Inference Beyond Euclidean Data
 January 26, 2026

Yidong Zhou Postdoctoral Scholar in Department of Statistics at University of California, Davis

Hosted by: Mladen Kolar
Statistics and Data Science
Watch Now Abstract
Causal Inference Beyond Euclidean Data

Adjusting for confounding and imbalance when establishing statistical relationships is an increasingly important task, and causal inference methods have emerged as the most popular tool to achieve this. Causal inference has been developed mainly for regression relationships with scalar responses, and also for distributional responses. We introduce here a general framework for causal inference when responses reside in general geodesic metric spaces, drawing on a novel geodesic calculus that facilitates scalar multiplication for geodesics and the quantification of treatment effects through the concept of the geodesic average treatment effect. Using ideas from Fréchet regression, we obtain a doubly robust estimator of the geodesic average treatment effect and establish consistency and rates of convergence for the proposed estimators. Examples and practical implementations include simulations and data illustrations for compositional responses, as encountered in U.S. statewise energy source data, where we study the effect of coal mining; network data from New York taxi trips, where the effect of the COVID-19 pandemic is of interest; and brain connectivity networks, where we study the effect of Alzheimer's disease.
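The geodesic average treatment effect generalizes the classical average treatment effect to metric-space responses. As background for the double robustness mentioned in the abstract, here is a minimal sketch of the standard doubly robust (AIPW) estimator in the ordinary scalar case; it illustrates the idea only, not the authors' metric-space method, and the function name and inputs are hypothetical.

```python
import numpy as np

def aipw_ate(y, t, ps, mu0, mu1):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    y   : observed scalar outcomes
    t   : binary treatment indicators (0/1)
    ps  : estimated propensity scores P(T=1 | X)
    mu0 : outcome-model predictions E[Y | X, T=0]
    mu1 : outcome-model predictions E[Y | X, T=1]
    """
    y, t, ps, mu0, mu1 = map(np.asarray, (y, t, ps, mu0, mu1))
    # The estimator remains consistent if either the propensity model
    # or the outcome model is correctly specified (double robustness).
    psi1 = mu1 + t * (y - mu1) / ps
    psi0 = mu0 + (1 - t) * (y - mu0) / (1 - ps)
    return float(np.mean(psi1 - psi0))
```

Fréchet-regression-based geodesic methods replace these scalar regression fits with metric-space analogues.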
Feng Liang
Bayesian Smoothing and Feature Selection via Variational Automatic Relevance Determination
 January 23, 2026

Feng Liang Professor in Statistics at the University of Illinois Urbana-Champaign

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Watch Now Abstract
Bayesian Smoothing and Feature Selection via Variational Automatic Relevance Determination

This study introduces Variational Automatic Relevance Determination (VARD), a novel approach for fitting sparse additive regression models in high-dimensional settings. VARD stands out by independently assessing the smoothness of each feature while precisely determining whether its contribution to the response is zero, linear, or nonlinear. Additionally, we present an efficient coordinate descent algorithm for implementing VARD. Empirical evaluations on both simulated and real-world datasets demonstrate VARD’s superior performance compared to alternative variable selection methods for additive models.
Abstract
Towards a True AI Partner: Fusing Learning and Knowledge for Trustworthy Human-AI Synergy

For robots to move from automated tools to reliable collaborators, they must tightly couple perception, decision-making, and action. Today’s robotic systems rely heavily on deep learning for sensing and control, yet lack explicit reasoning, which limits robustness, interpretability, and trust in real-world deployment. My research addresses this gap by unifying learning-based perception with knowledge-based reasoning under a trustworthy-by-design framework. I will first present methods for embedding formal logic into neural models, enabling robots to learn from limited data while maintaining structured constraints that improve robustness and transparency. Building on this, I will show how neuro-symbolic integration allows robots to reason about human intent, anticipate goals, and plan task-oriented actions in unstructured, human-centered environments. Finally, I will introduce a training-free self-correction approach using generative models, aimed at reducing hallucinations and unsafe behavior in robotic decision pipelines. Together, these results point toward robotic agents that can be instructed, corrected, and trusted: systems that combine learning with explicit knowledge, adapt online to real-world uncertainty, and collaborate effectively with humans in everyday settings.
Chong Liu
Accelerated Bayesian Optimization for Drug Discovery
 January 20, 2026

Chong Liu Assistant Professor in Computer Science at the University at Albany, State University of New York

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Watch Now Abstract
Accelerated Bayesian Optimization for Drug Discovery

Traditional drug discovery is an extremely time-consuming, high-risk, and cost-intensive process, taking on average 10–15 years and approximately $2.8 billion to bring a new drug to market. A central bottleneck is drug screening, which involves sequential decision-making under severe cost and time constraints, where each wet-lab validation experiment can take days or even weeks. Bayesian optimization (BO) is widely used to guide these decisions, but standard BO methods often require too many experimental rounds to be practical for real-world discovery pipelines. In this talk, I will present recent advances from my lab on accelerated BO that substantially reduce the number of experiments needed to identify high-quality drug candidates. The first part introduces procedure-informed BO, which learns optimization trajectories from related source tasks to enable rapid adaptation and strong performance in few-shot settings. The second part focuses on transfer BO with provable acceleration guarantees, in which differences between source and target tasks are explicitly modeled to achieve lower regret and faster convergence than standard BO. The final part explores the potential of quantum computing for next-generation accelerated BO. Together, these components form a unified framework for incorporating procedural knowledge, task similarity, and emerging computational paradigms into accelerated BO. Through experiments on drug discovery benchmarks, I will show how these methods significantly speed up optimization, enabling faster identification of promising compounds under tight experimental budgets. The results point to a principled and scalable path toward knowledge-driven optimization systems that can keep pace with modern high-throughput drug discovery workflows.
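The standard BO loop that the abstract builds on can be sketched compactly. Below is a minimal, numpy-only 1-D Gaussian-process Bayesian optimizer with an expected-improvement acquisition; this is a generic textbook baseline, not the speaker's accelerated methods, and every name, kernel choice, and parameter here is illustrative.

```python
import math
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Standard-normal pdf/cdf without scipy.
_pdf = lambda z: np.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
_cdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def bayes_opt(f, lo, hi, n_init=3, n_iter=15, noise=1e-6, seed=0):
    """Minimal 1-D GP Bayesian optimization (maximization) using an
    expected-improvement acquisition over a fixed candidate grid."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)          # initial random design
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        K = rbf(X, X) + noise * np.eye(len(X))
        Kinv = np.linalg.inv(K)
        ks = rbf(grid, X)
        mu = ks @ Kinv @ y                   # GP posterior mean
        var = np.clip(1.0 - np.sum((ks @ Kinv) * ks, axis=1), 1e-12, None)
        sd = np.sqrt(var)                    # GP posterior std dev
        z = (mu - y.max()) / sd
        ei = (mu - y.max()) * _cdf(z) + sd * _pdf(z)  # expected improvement
        x_next = grid[np.argmax(ei)]         # query the most promising point
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()
```

On a toy objective such as `lambda x: -(x - 0.3) ** 2` over [0, 1], the loop typically locates the maximizer within a handful of evaluations; the techniques in the talk aim to cut the number of such (wet-lab) evaluations much further.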
Lei Clifton
AI, Machine Learning, and Medical Statistics
 January 15, 2026

Lei Clifton Programme Director, Nuffield Department of Primary Care Health Sciences & Governing Body Fellow of Reuben College, Oxford University.

Hosted by: Prof. Eran Segal
Computational Biology
Watch Now Abstract
AI, Machine Learning, and Medical Statistics

"There is considerable interest in AI for health data science, driven by the rapid growth of available data and declining computational costs. The debate over when to use AI versus classical statistical methods in medical research is long-standing, but merits fresh consideration in light of major methodological advances and increased policy attention. AI-based approaches offer substantial opportunities, while recognising that we may be near the peak of the Gartner hype cycle for AI. Lei argues that AI and classical statistics are best suited to different scenarios and are often complementary. In some domains, AI is widely regarded as essential because of the complexity and multimodality of the data, which are frequently free-form. A key example is unstructured clinical text, where clinical reasoning and summarisation tasks are increasingly addressed by contemporary large language models, a class of generative AI. In domains where either AI or classical statistics could plausibly be used, combining the strengths of both approaches is often the most effective strategy. In this talk, Lei will illustrate how she has integrated AI, machine learning, and medical statistics in her research through worked examples (and her own paintings). The session has two parts: Part I: Large language models (LLMs) for risk prediction and clinical tasks Part II: Combining machine learning and medical statistics This talk is suitable for a mixed audience interested in data modelling and its application in real-world clinical settings."
Jingang Yi
Motion Control of Underactuated Balance Robots
 January 15, 2026

Jingang Yi Professor in mechanical engineering, Rutgers University

Hosted by: Prof. Dezhen Song
Robotics
Watch Now Abstract
Motion Control of Underactuated Balance Robots

Underactuated balance robots, such as rotational inverted pendulums, bicycles, and bipedal walkers, have more degrees of freedom than control inputs and must perform balancing and tracking tasks simultaneously. The balancing task requires the robot to maintain its motion around unstable equilibrium points, while the tracking task requires following desired trajectories. In this talk, I first review model-based control design for underactuated balance robots. A balance equilibrium manifold is proposed to capture both external trajectory tracking and internal balance performance. I will then present a machine learning-based control for underactuated balance robots. A Gaussian process is used to estimate the system dynamics, and the learning process requires neither prior physical knowledge nor successful balance demonstrations. Additional attractive properties of the design include guaranteed stability and closed-loop performance. Experiments on a Furuta pendulum and a bikebot demonstrate the performance of the learning-based control design. Finally, I will present several mechatronic design and motion control applications of underactuated balance robots, such as mobile manipulation with a bikebot, an autonomous bikebot with leg assistance, and autonomous vehicle ski-stunt maneuvers.
Xiang Li 
Statistics and Data Science
Watch Now Abstract
What Can Statistics Offer to Language Models: Watermarking and Evaluation

"Large language models (LLMs) have transformed how we generate and process information, yet two foundational challenges remain: ensuring the authenticity of their outputs and accurately evaluating their true capabilities. In this talk, I argue that both challenges are fundamentally statistical problems, and that statistical thinking plays a central role in advancing reliable and principled research on large language models. I will present two lines of work that address these problems from a statistical perspective. The first part introduces a statistical framework for language watermarks, which embed imperceptible signals into model-generated text for provenance verification. By formulating watermark detection as a hypothesis testing problem, this framework identifies pivotal statistics, provides rigorous Type I error control, and derives optimal detection rules that are theoretically grounded and computationally efficient. It clarifies the theoretical limits of existing detection methods and guides the design of more robust and powerful detectors. The second part focuses on language model evaluation, where I study how to quantify the unseen knowledge that models possess but may not reveal through limited queries. I introduce a statistical pipeline, based on the smoothed Good–Turing estimator, to estimate the total amount of a model’s knowledge beyond what is observed in benchmark datasets. The findings reveal that even advanced LLMs often articulate only a fraction of their internal knowledge, suggesting a new perspective on evaluation and model competence. Together, these projects represent an ongoing effort to develop statistical foundations for trustworthy and reliable language models. This talk is based on the following works: https://arxiv.org/abs/2404.01245 and https://arxiv.org/abs/2506.02058, and will briefly mention follow-up studies: https://arxiv.org/abs/2411.13868 and https://arxiv.org/abs/2510.22007"
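For intuition about the Good–Turing idea behind the evaluation pipeline, here is the classic unsmoothed missing-mass estimator: the probability mass of never-observed items is estimated by the fraction of observations that are singletons. The paper's smoothed variant is more involved; this toy function is only illustrative.

```python
from collections import Counter

def unseen_mass(samples):
    """Good-Turing estimate of the total probability mass of items
    that were never observed in `samples` (missing mass = N1 / N,
    where N1 is the number of items seen exactly once)."""
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)  # singleton count
    return n1 / len(samples)
```

If every item appears many times, the estimate is zero; if every item is unique, the estimate is one, reflecting that the sample has barely scratched the underlying distribution.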
Jianyu Xu
Machine Learning
Watch Now Abstract
Algorithmic Foundations of Online Decision-Making: From Operational Constraints to Generative AI

"Online decision-making is the core engine behind intelligent systems that must learn from incomplete feedback and act in real time, with ubiquitous applications ranging over adaptive recommendation systems, e-commerce platforms, autonomous vehicle navigation, and personalized healthcare assistance. To operate effectively, these agents must balance exploration against exploitation while navigating uncertainty and satisfying complex constraints. In this talk, I will present a research program for reliable and adaptive sequential decision-making that bridges theoretical foundations with crucial real-world deployments. I will begin by briefly outlining decision-making in dynamic pricing under censored feedback, before extending this to various operational constraints such as fairness, supply, and multi-stage bottlenecks. Then I will introduce "Generative Online Learning," which combines the traditional decision-making framework with the emerging power of Generative AI, where agents strategically decide to either generate novel actions or select from the existing action list. I will demonstrate the impact of this framework through the architecture and deployment of a safe, adaptive maternal health chatbot. Finally, I will conclude with future directions in multi-party online learning and adaptive in-context decision planning."
Xiaocong Xu
Estimation and Inference in Proportional High Dimensions
 January 13, 2026

Xiaocong Xu Research Associate, USC Marshall School of Business

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Watch Now Abstract
Estimation and Inference in Proportional High Dimensions

"Many modern learning problems are studied in a proportional high-dimensional regime, where the feature dimension is of the same order as the sample size. In this talk, I will discuss how working in this regime affects both estimation and uncertainty quantification, and how we obtain useful and sharp characterizations for widely used estimators and algorithms. The first part will focus on ridge regression in linear models. We derive a distributional approximation for the ridge estimator via an associated Gaussian sequence model with “effective” noise and regularization parameters. This reduction provides a convenient way to analyze prediction and estimation risks and to support practical tuning rules, such as cross-validation and generalized cross-validation. It also yields a simple inference procedure based on a debiased ridge construction. The second part will take an algorithmic perspective. Instead of analyzing only the final empirical risk minimizer, we view gradient descent iterates as estimators along an optimization path. We characterize the distribution of the iterates and use this characterization to construct data-driven estimates of generalization error and debiased iterates for statistical inference, including in settings beyond linear regression. I will conclude with simulations that illustrate the practical implications for tuning and inference."
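As background, the ridge estimator and the generalized cross-validation tuning rule discussed in the first part can be written in a few lines. This is a plain textbook implementation for reference, not the proportional-regime distributional analysis itself; the function names and the candidate grid are illustrative.

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator: solve (X'X + n*lam*I) beta = X'y."""
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ y)

def gcv(X, y, lams):
    """Select lambda by generalized cross-validation:
    GCV(lam) = mean squared residual / (1 - trace(H)/n)^2,
    where H is the ridge hat matrix."""
    n, p = X.shape
    best_lam, best_score = None, np.inf
    for lam in lams:
        H = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
        resid = y - H @ y
        score = np.mean(resid ** 2) / (1.0 - np.trace(H) / n) ** 2
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam
```

The talk's results characterize how estimators and tuning rules like these behave when p grows proportionally with n, where classical fixed-dimension intuition breaks down.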
Ekaterina Khrameeva
Computational Biology
Watch Now Abstract
AI for Longevity Science: Computational Approaches to Understanding and Measuring Aging

Aging is a multifactorial process characterized by progressive functional decline and increasing vulnerability to disease, driven by complex, nonlinear interactions among genes, proteins, metabolites, and environmental factors. This complexity makes it challenging to quantify how “old” a cell, tissue, or organism truly is. To address this gap, researchers have developed aging clocks – computational models that estimate biological age from molecular data. Aging clock approaches have evolved over time, from first-generation clocks predicting chronological age (a poor proxy for biological age) to third-generation clocks trained using longitudinal data from the Dunedin Study (a cohort followed for several decades with repeated physiological, cognitive, and functional assessments) and providing a sensitive tool for detecting short-term effects of lifestyle changes or interventions. But do they bring us any closer to understanding the fundamental nature of aging? The ML approaches currently used to construct aging clocks are not designed to address the root causes of aging, as they focus on learning correlations rather than causal relationships: they are not trained to distinguish between passengers and drivers of aging. The features and coefficients of most clocks remain difficult to interpret, and mechanistic or actionable insights derived from them are extremely scarce, with only a few recent works offering promising leads. AI-based approaches have been advancing exponentially over the past few years and can now operate with massive volumes of longitudinal data, enabling a more comprehensive assessment of human health by directly predicting future life events. For example, “large health models” (LHMs) represent human health as a sequence of events allowing us to identify which dysregulation events occur first, and to analyze how the conditional probability of one event (e.g., atherosclerosis) affects the occurrence of another (e.g., stroke). 
By uncovering these complex pathways of health-related events, we can gain a more nuanced, albeit observational, understanding of how human health evolves over time. LHMs will arguably become more beneficial for practical longevity research than the much-debated aging clocks. Already, they inherently encompass the properties required of aging clocks and mortality predictors, at least regarding health assessment. The recently proposed LHMs, including BEHRT, Life2Vec, and Delphi-2M, clearly demonstrate how access to vast amounts of longitudinal data enables deep insights and accurate predictions of individuals’ health and even their socioeconomic status. Yet, their utility for deepening our understanding of aging—like that of aging clocks—remains to be shown.
Willy Zwaenepoel
Software For Fast Storage Hardware
 January 12, 2026

Willy Zwaenepoel Professor of Computer Science, University of Sydney

Hosted by: Prof. Xiaosong Ma
Computer Science
Watch Now Abstract
Software For Fast Storage Hardware

"Storage technologies have entered the market with performance vastly superior to conventional storage devices. This technology shift requires a complete rethinking of the software storage stack. In this talk I will give two examples of our work with Optane-based solid-state (block) devices that illustrate the need for and the benefit of a wholesale redesign. First, I will describe the KVell key-value (KV) store. The key observation underlying KVell is that conventional KV software on fast devices is bottlenecked by the CPU rather than by the device. KVell therefore focuses on minimizing CPU intervention. Second, I will describe the KVell+ OLTP/OLAP system built on top of KVell. The key underlying observation here is that these storage devices have become so fast that the conventional implementation of snapshot isolation – maintaining multiple versions – leads to intolerable space amplification. KVell+ therefore processes versions as they are created. This talk describes joint work with Oana Balmau (McGill University), Khaled Elmeleegy (Coupang), Karan Gupta (Nutanix), Kimberley Keeton (Google), Baptiste Lepers (INRIA), Xiaoxiang Wu and Yuben Yang (Sydney)."
Samuele Cornell
Conversational Speech Processing: Challenges & Opportunities
 January 8, 2026

Samuele Cornell Postdoctoral Research Associate, Carnegie Mellon University

Hosted by: Prof. Preslav Nakov
Natural Language Processing
Watch Now Abstract
Conversational Speech Processing: Challenges & Opportunities

State-of-the-art ASR systems excel on close-talk benchmarks but struggle with far-field conversational speech, where error rates remain above 20%. Current benchmark datasets inadequately assess generalization across domains and real-world conditions, often relying on oracle segmentation that yields overly optimistic results. Distant ASR (DASR) faces unique challenges including overlapping speech, long-form processing and varied recording setups, and dynamic speaker interactions that significantly complicate system development. Despite these difficulties, spontaneous conversational speech represents the next frontier for developing more human-like AI agents capable of natural multi-party communication. This presentation examines the challenges of conversational speech processing and outlines two promising research directions. The first is end-to-end integration, which can mitigate the cascading errors that plague modular approaches. The second tackles data scarcity—a persistent bottleneck given the privacy concerns surrounding conversational recordings and the substantial cost of annotation. Here, the talk explores how large language models and text-to-speech synthesis can generate effective training data, alongside self-supervised learning techniques which can further dramatically reduce reliance on labeled corpora.
Xueguang Ma
Breaking Information Silos: Advancing Search Systems for Unified Information Seeking
 January 8, 2026

Xueguang Ma Final-year PhD student, David R. Cheriton School of Computer Science, University of Waterloo

Hosted by: Prof. Preslav Nakov
Natural Language Processing
Watch Now Abstract
Breaking Information Silos: Advancing Search Systems for Unified Information Seeking

Information seeking has been fundamental to human advancement, enabling knowledge acquisition, decision-making, and innovation across disciplines. However, traditional information retrieval systems often rely on specialized pipelines optimized for specific retrieval tasks, causing information silos that hinder unified information seeking. In this talk, I will present our work in building unified document retrieval systems that break these information silos across three dimensions: (1) domain and language silos, where I demonstrate how LLM-based dense retrievers achieve strong generalizability across retrieval tasks and present frameworks for training small, generalizable retrievers through diverse LLM augmentation; (2) modality silos, where I introduce a paradigm shift from text-based retrieval that relies on content extraction to directly encoding document screenshots, preserving all information including text, images, and layout in unified dense representations; and (3) space silos, where we show the importance of LLM-powered search agents in seeking and gathering information across disparate sources, and present fair and transparent evaluation benchmarks for assessing deep-search systems. I will conclude by discussing future directions that further pave the way toward building truly unified retrieval systems for seamless information seeking across world knowledge.
Kenton Murray
Natural Language Processing
Watch Now Abstract
Improving Artificial Intelligence Using Multilinguality

Artificial Intelligence and Natural Language Processing are overly focused on English-only and English-centric models. Fortunately, there has been a growing interest in making models more multilingual. Yet, whereas most researchers in this field are focused on broadening coverage of people and cultures, my interests are two-fold: expanding access, and making core machine learning improvements that translate back to monolingual English methods. By focusing on other languages, we are able to design more robust methods and create novel algorithms that drive advances across all aspects of Artificial Intelligence and Machine Learning, not just multilingual applications. In this talk, I will cover improvements my students and I have made throughout all parts of an LLM pipeline, from data curation to pretraining, post-training, evaluation, and inference. We show how this can result in faster training time, less GPU memory usage, and fewer parameters, as well as many other advancements. While these methods were developed with a focus on multilinguality, they have been applied to improve monolingual, English-only models as well.
Jon Saad-Falcon
Natural Language Processing
Watch Now Abstract
Intelligence Per Watt: Measuring the Intelligence Efficiency of Local and Cloud AI

Large language model (LLM) queries are predominantly processed by frontier models in centralized cloud infrastructure. Rapidly growing demand strains this paradigm, and cloud providers struggle to scale infrastructure at pace. Two advances enable us to rethink this paradigm: small LMs (<=20B active parameters) now achieve competitive performance to frontier models on many tasks, and local accelerators (e.g., Apple M4 Max) run these models at interactive latencies. This raises the question: can local inference viably redistribute demand from centralized infrastructure? Answering this requires measuring whether local LMs can accurately answer real-world queries and whether they can do so efficiently enough to be practical on power-constrained devices (i.e., laptops). We propose intelligence per watt (IPW), task accuracy divided by unit of power, as a metric for assessing capability and efficiency of local inference across model-accelerator pairs. We conduct a large-scale empirical study across 20+ state-of-the-art local LMs, 8 accelerators, and a representative subset of LLM traffic: 1M real-world single-turn chat and reasoning queries. For each query, we measure accuracy, energy, latency, and power. Our analysis reveals 3 findings. First, local LMs can accurately answer 88.7% of single-turn chat and reasoning queries with accuracy varying by domain. Second, from 2023-2025, IPW improved 5.3x and local query coverage rose from 23.2% to 71.3%. Third, local accelerators achieve at least 1.4x lower IPW than cloud accelerators running identical models, revealing significant headroom for optimization. These findings demonstrate that local inference can meaningfully redistribute demand from centralized infrastructure, with IPW serving as the critical metric for tracking this transition. We release our IPW profiling harness for systematic intelligence-per-watt benchmarking.
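Read literally, the metric divides task accuracy by average power draw. A one-line rendering of that definition follows; the function name and unit conventions are assumptions for illustration, not taken from the released profiling harness.

```python
def intelligence_per_watt(accuracy, energy_joules, elapsed_seconds):
    """IPW = task accuracy / average power (watts).

    Average power is energy consumed divided by elapsed time,
    so IPW = accuracy * elapsed_seconds / energy_joules.
    """
    power_watts = energy_joules / elapsed_seconds
    return accuracy / power_watts
```

For example, a model answering at 0.8 accuracy while drawing 30 J over 2 s averages 15 W, giving an IPW of about 0.053 per watt under this reading.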
Zeyu Jia
Statistical Foundations of Outcome-Based Reinforcement Learning: from RLHF to Reasoning
 December 22, 2025

Zeyu Jia Final-year PhD student in the Department of Electrical Engineering and Computer Science at MIT

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Abstract
Statistical Foundations of Outcome-Based Reinforcement Learning: from RLHF to Reasoning

A central question in reinforcement learning for complex reasoning tasks is how feedback should be provided: should learning rely on fine-grained, step-by-step supervision (process supervision), or only on evaluations of final outcomes (outcome supervision)? Conventional wisdom holds that outcome-based supervision is inherently more difficult, due to trajectory-level coverage challenges, motivating substantial effort to collect detailed process annotations. In this talk, I offer two complementary perspectives that revisit this assumption. First, in the offline setting, I introduce a transformation algorithm that converts outcome-supervision data into process-supervision data, and show through its analysis that, under standard coverage assumptions, outcome supervision is statistically no more difficult than process supervision. This result suggests that observed performance gaps arise from algorithmic limitations rather than fundamental statistical barriers. In addition, our results provide a finer-grained analysis of the Direct Preference Optimization (DPO) algorithm. Second, I turn to the online setting and present provably sample-efficient algorithms that achieve strong performance guarantees using only trajectory-level feedback. At the same time, I identify sharp separations: there exist classes of MDPs in which outcome-based feedback incurs an exponential disadvantage relative to step-level supervision. These results precisely characterize when—and why—process supervision is genuinely necessary. I conclude by outlining my broader research vision for the role of statistics in the age of large language models.
Katie Houlahan
The etiology and evolution of complex amplifications in breast cancer
 December 10, 2025

Katie Houlahan Assistant Professor and Principal Investigator, Centre for Discovery in Cancer Research (CDCR) at McMaster University

Hosted by: Prof. Aziz Khan
Computational Biology
Watch Now Abstract
The etiology and evolution of complex amplifications in breast cancer

Breast cancer is defined clinically by Estrogen Receptor (ER), Progesterone Receptor (PR), and Human Epithelial Growth Factor Receptor 2 (HER2) status, but subtypes based on these receptors only partially capture its biological diversity. We assembled a meta-cohort of 1,828 breast tumours spanning pre-invasive to metastatic stages with whole-genome and transcriptome sequencing. We show that the mutational rearrangement processes driving a subset of ER⁺ tumours are identical to those in HER2⁺ disease, but instead of amplifying ERBB2, they target alternative oncogenes such as MYC, CCND1, and FGFR1. These complex amplifications arise early, in ductal carcinoma in situ, and persist through metastasis, suggesting they are founding events. Integrating germline and tumour data from 5,870 cases, we find that inherited variation influences which tumours can acquire these complex somatic amplifications. Tumours arising in individuals with high germline epitope burden in these loci show reduced amplification, consistent with immune selection against highly antigenic clones. This germline–somatic interaction shapes subtype development, immune landscape, and patient outcome. Together, these data reveal that breast cancer subtypes emerge through the intersection of shared mutational processes and germline-mediated immune editing, linking inherited variation to the evolutionary trajectories of tumour genomes.
Daniel Dobos
Human-Computer Interaction
Watch Now Abstract
Mobile Computational Action Through a Modern AI Lens

What are the advantages and disadvantages of open-source Large Language Models? Where can they already be used efficiently, and how do they help answer the two big global societal AI questions: "Will AI scale faster than any technology before?" and "What type of global AI arms race are we currently in?" I will give examples from the Swiss AI model Apertus, and discuss exchanges with other LLM builders, such as the team behind the Falcon model series from the UAE.
Watch Now Abstract
Healthcare Agents: Language Model Agents in Health Prediction and Decision-Making

Recent advances in foundation models have enabled powerful general-purpose reasoning systems, yet their application to health remains limited by safety concerns, hallucination, and the inability to operate over long-horizon physiological trajectories. In this talk, I will present a line of research that builds from single-agent to multi-agent systems capable of clinical reasoning, wearable understanding, and scientific discovery. Together, these advances outline a path toward the next generation of safe, interpretable, and continuously learning personal health agents.
Morgan Cole Thomas
Computational Biology
Watch Now Abstract
Chemical Language Models and Reinforcement Learning for Drug Design

Chemical language models (CLMs) with Reinforcement Learning (RL), although relatively simple, remain the most widely adopted and robust generative models for de novo molecular design in industry. In this work, I present advances in the RL learning efficiency of these models, enabling the use of more computationally expensive oracles; investigate cooperative agent learning and scaling laws in molecular rediscovery; and introduce inference-time methods to constrain CLMs for practical scaffold elaboration and fragment linking. In addition, I will share successful case studies that led to the discovery of novel binders of the adenosine A2A receptor with an 88% success rate. Lastly, I will compare CLMs to newer generative models conducting de novo design in 3D, and postulate where research is going, and where it should go.
Davide Casciano
Toward New Directions for an Anthropology‑Informed HCI/HCAI
 November 25, 2025

Davide Casciano Marie Skłodowska‑Curie Global Fellow at KU Leuven and San José State University

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Abstract
Toward New Directions for an Anthropology‑Informed HCI/HCAI

Anthropology has been part of Human–Computer Interaction (HCI) since at least the 1980s, fostering interdisciplinary collaborations that laid the foundations for a productive dialogue that continues today. Yet many of its current applications remain limited, both methodologically and theoretically — two dimensions deeply intertwined in anthropological practice. In an era defined by artificial intelligence and by increasing calls for genuinely human-centered approaches, I argue that contemporary anthropology can reshape the conceptual and ethical coordinates of both HCI and HCAI. By enabling deeper reflection on what it means to be “human” and on how we understand the “contexts” in which technologies are designed and adopted, anthropology provides critical tools for engaging with technological complexity. As artificial intelligence grows increasingly opaque, often eluding even its developers, anthropology offers unique means to explore socio-technical complexity — conceived as an assemblage of relations and dense meanings among humans and non-humans. This perspective supports the development of responsible design and research practices, capable of anticipating innovation’s impacts rather than merely reacting ex post, while rethinking human–machine interaction as co-constitutive relationships in which human and more-than-human layers — consciously or not, visibly or subtly, at every level — shape the global reality we inhabit and co-produce, from Silicon Valley to the smallest towns in Africa. In this sense, not only can HCI and HCAI continue to evolve through anthropological insights, but anthropology itself can be revitalized through new interdisciplinary hybridizations within academic and research environments prepared to address the challenges posed by continuously emerging technologies.
Ricardo Baeza-Yates
The Limitations of Data, Machine Learning & Us
 November 25, 2025

Ricardo Baeza-Yates WASP Professor at KTH, Sweden, as well as Professor at UPF, Spain

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Watch Now Abstract
The Limitations of Data, Machine Learning & Us

Machine learning (ML), particularly deep learning, is being used everywhere. However, it is not always used well, ethically, or scientifically. In this talk, we first do a deep dive into the limitations of supervised ML and data, its key input. We cover small data, datafication, all types of biases, predictive optimization issues, evaluating success instead of harm, and pseudoscience, among other problems. The second part is about our own limitations in using ML, including different types of human incompetence: cognitive biases, unethical applications, lack of administrative competence, misinformation, and the impact on mental health. In the final part we discuss regulation of the use of AI and responsible AI principles that can mitigate the problems outlined above.
Watch Now Abstract
Integrating Large-Scale Genomics and Artificial Intelligence in Personalized Medicine

Over the past decade, Genotek Ltd. has established the largest genetic testing facility in Eastern Europe, pioneering the integration of large-scale sequencing, artificial intelligence, and clinical bioinformatics. In this talk, we will begin by presenting our progress in developing and applying the variable-depth whole genome sequencing (vdWGS) technology — a novel approach that significantly outperforms microarray-based genotyping in accuracy, coverage, and efficiency. For more than 15 years, our team has been developing computational frameworks for personal DNA testing and the interpretation of individual genetic data. We will discuss advances in polygenic risk scoring, machine learning models for complex disease prediction, population genetics and local ancestry inference, as well as applications in nutrigenetics, sports genetics, and pharmacogenetics. Our unique data collection — encompassing over 500,000 genomes linked with electronic health records and questionnaires — represents an invaluable resource for biomedical research. We will highlight our own recent studies conducted at Genotek Ltd.: GWAS, oral microbiome analysis for complex diseases (including type 1 and type 2 diabetes), deep learning methods for modeling epistatic effects, graph neural networks for genetic relatives networks, etc. In addition, we will discuss the implementation of AI technologies in telemedicine and deep learning for MRI image analysis. Genotek’s research has been published in leading journals, including Nature, Nature Genetics, EClinicalMedicine (The Lancet), and Scientific Reports. The company actively participates in international collaborations, such as the COVID-19 Host Genetics Initiative, and maintains research partnerships with academic institutions including Charité Clinic, the University of Berlin and the University of Copenhagen. 
Finally, we will share our experience in developing bioinformatics educational programs and supervising student research projects based at Genotek.
Frederic Gmeiner
Human-Computer Interaction
Abstract
Designing Interactions to Empower Thoughtful Human-AI Co-Creation

Generative AI (GenAI) promises to transform how we think, create, and solve problems. Yet its current integration into professional practice remains limited. Users frequently face misalignment between outputs and intentions, uncertainty in how to guide the system, and reduced cognitive engagement when tasks are overly delegated to automation. These issues limit GenAI’s impact in precisely the kinds of complex, open-ended domains where human creativity and judgment matter most. My research addresses these challenges by rethinking human-AI interaction: how can we design systems that amplify rather than offload human cognitive work? Drawing on the long-standing HCI vision of augmenting human intellect, I explore interaction techniques that scaffold reflection, sharpen problem formulation, and support deliberate engagement in tasks where human judgment and creativity are essential. I will present examples from recent projects—including SocraBot, a voice-based agent for reflective engagement in mechanical design, and IntentTagger, a patented input technique for steering AI-generated content in PowerPoint—that demonstrate how new forms of interaction can unlock more productive, empowering human-AI co-creation. I will end by outlining a forward-looking agenda for research and education—advancing human-centered AI systems, methods, and curricula that empower people to think more deeply, create more meaningfully, and innovate more responsibly in the age of intelligent machines.
Callum Parker
Designing Intelligent Interactions for Public Spaces
 November 24, 2025

Callum Parker Lecturer in Interaction Design at the University of Sydney

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Watch Now Abstract
Designing Intelligent Interactions for Public Spaces

Public spaces, from city streets to virtual worlds, are increasingly shaped by systems that sense, predict, and adapt to how we move, communicate, and experience our surroundings. As these technologies become embedded in everyday environments, a critical question emerges: how can we design interfaces that are intelligent while also being inclusive and responsive to human needs in these shared contexts? In this talk, I will draw on a series of projects exploring how people interact with autonomous and intelligent systems. These include studies on communication between pedestrians and autonomous vehicles, adaptive public displays that respond to behaviour and context, and inclusive environments within the metaverse. I conclude by reflecting on how AI is transforming our collective experience of space, not only through automation and sensing, but also through its capacity to personalise and, at times, fragment the environments we share. As intelligent systems increasingly adapt to individuals, our challenge as designers and researchers is to ensure that AI enhances connection rather than isolation, supporting a future where technology deepens rather than divides our shared public environments.
Antonietta Mira
On estimating and exploiting data intrinsic dimension
 November 21, 2025

Antonietta Mira Professor of Statistics, Founder and Director of the Data Science Lab at USI

Hosted by: Prof. Eric Moulines
Statistics and Data Science
Watch Now Abstract
On estimating and exploiting data intrinsic dimension

Real-world datasets often exhibit a high degree of (possibly) non-linear correlations and constraints among their features. Consequently, despite residing in a high-dimensional embedding space, the data typically lie on a manifold with a much lower intrinsic dimension (ID), which—under the presence of noise—may depend on the scale at which the data are analyzed. This situation raises interesting questions: How many variables or combinations thereof are necessary to describe a real-world dataset without significant information loss? What is the appropriate scale at which one should analyze and visualize data? Although these two issues are often considered unrelated, they are in fact strongly entangled and can be addressed within a unified framework. We introduce an approach in which the optimal number of variables and the optimal scale are determined self-consistently, recognizing and bypassing the scale at which the data are affected by noise. To this end, we estimate the data ID in an adaptive manner. Sometimes, within the same dataset, it is possible to identify more than one ID, meaning that different subsets of data points lie on manifolds with different IDs. Identifying these manifolds provides a clustering of the data. Examples of exploitation of data ID will be presented ranging from gene expression to protein folding, and pandemic evolution, all the way to fMRI, financial and network data. All these real-world applications show how a simple topological feature such as the ID allows us to uncover a rich data structure and improves our insight into subsequent statistical analyses.
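The intrinsic-dimension idea can be made concrete with a standard estimator such as TwoNN, sketched below on toy data; the function name `twonn_id` and the synthetic manifold are illustrative assumptions, and the talk's own adaptive, scale-aware estimator is more sophisticated than this minimal version:

```python
import numpy as np

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate: the ratio mu = r2/r1 of each
    point's two nearest-neighbour distances follows a Pareto(d) law on a
    d-dimensional manifold, so the maximum-likelihood estimate of d is
    N / sum(log mu)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # brute-force distances
    np.fill_diagonal(D, np.inf)        # ignore self-distances
    D.sort(axis=1)
    mu = D[:, 1] / D[:, 0]             # second- over first-neighbour distance
    return len(X) / np.log(mu).sum()

# A 2-dimensional manifold embedded isometrically in 10 ambient dimensions:
rng = np.random.default_rng(0)
Z = rng.uniform(size=(600, 2))
X = np.hstack([Z, np.zeros((600, 8))])
print(twonn_id(X))  # close to 2, despite the 10-dimensional embedding
```

Because the estimator only uses ratios of nearest-neighbour distances, it ignores the ambient dimension entirely, which is exactly the property that makes ID a useful topological summary of the data.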
Abhishek Bhattacharjee
Catalyzing computing for brain-computer interfaces
 November 21, 2025

Abhishek Bhattacharjee A. Bartlett Giamatti Professor of Computer Science at Yale University

Hosted by: Prof. Abdulrahman Mahmoud
Computer Science
Abstract
Catalyzing computing for brain-computer interfaces

Brain–computer interfaces have the potential to treat debilitating neurological disorders, reveal new insights into brain function, and ultimately redefine the relationship between biological and artificial intelligence. Realizing this vision requires computer systems that carefully balance power, latency, and bandwidth to decode neural activity, stimulate neurons, and control assistive devices with precision. This talk presents my group’s design of a standardized, general-purpose computer architecture for future brain interfaces. Our architecture supports the treatment of multiple neurological conditions—most notably epilepsy and movement disorders—and is built around end-to-end hardware acceleration, spanning from the microarchitectural level to distributed systems. We validate these ideas through custom chip implementations and real-time experiments interfacing our chips with the brains of two human patients in the operating room.
Watch Now Abstract
When Agents Trade: Live Multi-Market Benchmarking of LLM-Driven Trading Systems

As large language models (LLMs) evolve beyond static reasoning toward dynamic decision-making, their application in real-time trading environments opens a new frontier for financial AI. This talk introduces the Agent Market Arena (AMA), the first real-time, lifelong benchmark for evaluating LLM-driven trading agents across multiple markets. Developed by The Fin AI and collaborators at Columbia, Harvard, and other institutions, AMA compares diverse agent architectures such as InvestorAgent, TradeAgent, HedgeFundAgent, and DeepFundAgent, powered by LLMs including GPT-4.1, Claude-3.5, and Gemini-2.0. Using verified live data from stocks and cryptocurrencies, AMA reveals that profitability depends more on agent architecture and coordination logic than on the LLM backbone itself. The results highlight how memory, debate, and risk-control mechanisms shape financial decision-making, paving the way for more adaptive and cooperative AI traders. Click here for my slides: https://docs.google.com/presentation/d/1VrgSciscCD2UKlp0VXCBX2dqCJPzoBgt/edit?usp=drive_link&ouid=107320101831769930525&rtpof=true&sd=true
Mark Podolskij
On nonparametric estimation of the interaction function in particle system models
 November 20, 2025

Mark Podolskij Professor of statistics and probability, University of Luxembourg

Hosted by: Maxim Panov
Statistics and Data Science
Watch Now Abstract
On nonparametric estimation of the interaction function in particle system models

This talk discusses the challenging problem of nonparametric estimation for the interaction function within diffusion-type particle system models. We introduce an estimation method based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors associated with the proposed procedure, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the corresponding estimation error of the interaction function converges to zero with a parametric rate that is minimax optimal. This result is rather surprising given the complexity of the underlying estimation problem and the rather large class of interaction functions for which the above parametric rate holds. Furthermore, we investigate convergence rates in the conventional $L^2$-norm and discuss their optimality in some cases. The presentation is based upon joint work with D. Belomestny and S.-Y. Zhou (https://arxiv.org/pdf/2402.14419).
Abderrahmane Kheddar
Towards Human-Like Machines: The Journey of Humanoids from Research to Deployment
 November 20, 2025

Abderrahmane Kheddar Director of Research, Centre National de la Recherche Scientifique (CNRS)

Hosted by: Prof. Yoshihiko Nakamura
Robotics
Watch Now Abstract
Towards Human-Like Machines: The Journey of Humanoids from Research to Deployment

Humanoid robots have matured from research laboratories into increasingly capable systems that promise to interact, assist, and even collaborate with humans in real-world settings. In this talk, I chart the evolution of humanoid machines, from early research prototypes focused on balance, locomotion and manipulation, to today's multimodal platforms aiming to operate alongside people in factories, homes, healthcare and other services. Drawing on our work in multi-contact locomotion, haptic interaction, embodiment and human-robot teaming, I highlight key enablers such as contact-aware control, vision- and force-based interaction, adaptable posture and locomotion, and thought-based or tele-operated embodiment. At the same time, I cover the critical challenges that remain: AI physical embodiment, safe and reliable deployment in human-centred environments, learning and adaptation in unstructured settings, and the economic pathway from research to fielded machines. Looking ahead, I propose that the next stage will hinge on seamless human-robot symbiosis: humanoids as cyber-physical avatars, physical companions, and general-purpose agents embedded in the digital society. By mapping this trajectory from research to deployment, this talk offers a roadmap for how we might realise truly human-like machines, not in appearance alone, but in purpose, interaction, adaptability and societal integration.
Pengtao Xie
Billion-Parameter Foundation Model for Single-Cell Transcriptomics
 November 19, 2025

Pengtao Xie Adjunct Assistant Professor of Machine Learning at MBZUAI and Associate Professor at UCSD

Hosted by: Prof. Jin Tian
Machine Learning
Watch Now Abstract
Billion-Parameter Foundation Model for Single-Cell Transcriptomics

Single-cell RNA sequencing (scRNA-seq) has revolutionized the study of cellular heterogeneity by providing gene expression data at single-cell resolution, uncovering insights into rare cell populations, cell-cell interactions, and gene regulation. Foundation models pretrained on large-scale scRNA-seq datasets have shown great promise in analyzing such data, but existing approaches are often limited to modeling a small subset of highly expressed genes and lack the integration of external gene-specific knowledge. To address these limitations, we present sc-Long, a billion-parameter foundation model pretrained on 48 million cells. sc-Long performs self-attention across the entire set of 28,000 genes in the human genome. This enables the model to capture long-range dependencies between all genes, including lowly expressed ones, which often play critical roles in cellular processes but are typically excluded by existing foundation models. Additionally, sc-Long integrates gene knowledge from the Gene Ontology using a graph convolutional network, enriching its contextual understanding of gene functions and relationships. In extensive evaluations, sc-Long surpasses both state-of-the-art scRNA-seq foundation models and task-specific models across diverse tasks, including predicting transcriptional responses to genetic and chemical perturbations, forecasting cancer drug responses, and inferring gene regulatory networks.
Veselin Stoyanov
Natural Language Processing
Watch Now Abstract
Why Wait for AGI? Artificial Superintelligence is Here and Solving Real Problems

Research in the AI community remains fixated on achieving Artificial General Intelligence. Whether and why autonomous AGI will arrive is a matter of dispute. At the same time, Artificial Superintelligence (ASI) already exists in narrow but valuable domains and it is amazing. Today's AI systems demonstrate genuinely superhuman capabilities—processing millions of documents in seconds, extracting insights with breadth and speed that humans cannot match. In this talk, I will first demonstrate ASI in action powering Lightfield's AI CRM, which launched just recently. Our system represents Relationship Superintelligence by understanding relationship dynamics across vast interaction histories. Second, I'll share a research project with colleagues at MBZUAI on evidence-based generation. While LLMs can already process vast amounts of text with superhuman capability, they are not always reliable and have limitations on effective input size. To fully enable this ASI potential, models must be able to provide evidence—precise references to where information comes from—as well as process increasingly larger amounts of information at decreasing computational cost. I will discuss how evidence-based generation enables these advances and share some current results.
Zhengjun Yue
Toward Interpretable and Inclusive Speech Technology for Healthcare
 November 17, 2025

Zhengjun Yue Tenured Assistant Professor, Delft University of Technology (TU Delft)

Hosted by: Prof. Preslav Nakov
Natural Language Processing
Watch Now Abstract
Toward Interpretable and Inclusive Speech Technology for Healthcare

Speech is a powerful and natural channel for human communication. It reflects not only a person’s linguistic ability, but also their cognitive, neurological, and emotional state. AI-driven speech technology is transforming how people access services, receive care, and engage with information. However, mainstream systems remain largely inaccessible to individuals with speech impairments, particularly those affected by neurological, developmental, or motor disorders. These underrepresented groups of people often find their speech excluded or misinterpreted. This technological gap not only limits access to digital services, but also impedes the development of reliable tools for health monitoring, clinical decision support, and communicative assistance. My research is centered on interpretable AI-driven speech-oriented multimodal technology for healthcare, with a mission to make voice a clinically useful and socially inclusive biomarker. In this talk, I will present my research and recent progress on automatic detection, recognition and analysis of pathological and atypical speech, highlighting methods that enhance robustness and interpretability. I will also discuss how advances in speech and language modeling can enable context-aware, explainable, and embodied assistive systems, for instance, through social robots that support pathological speakers and other underrepresented user groups.
Yaqi Xie
Natural Language Processing
Watch Now Abstract
Towards a True AI Partner: Fusing Learning and Knowledge for Trustworthy Human-AI Synergy

To move beyond tools and towards true partners, AI systems must bridge the gap between perception-driven deep learning and knowledge-based symbolic reasoning. Current approaches excel at one or the other, but not both, limiting their reliability and preventing us from fully trusting them. My research addresses this challenge through a principled fusion of learning and reasoning, guided by the principle of building AI that is "Trustworthy by Design." I will first describe work on embedding formal logic into neural networks, creating models that are not only more robust and sample-efficient, but also inherently more transparent. Building on this foundation, I will show how neuro-symbolic integration enables robots to reason about intent, anticipate human needs, and perform task-oriented actions in unstructured environments. Finally, I will present a novel training-free method that leverages generative models for self-correction, tackling the critical problem of hallucination in modern AI. Together, these contributions lay the groundwork for intelligent agents that can be instructed, corrected, and ultimately trusted, agents that learn from human knowledge, adapt to real-world complexity, and collaborate seamlessly with people in everyday environments.
Victor Curean
Cellular Foundation Models in Biology - Towards understanding disease and therapeutic targets
 November 13, 2025

Victor Curean PhD candidate, Iuliu Hațieganu University of Medicine and Pharmacy, Romania

Hosted by: Prof. Natasa Przulj
Computational Biology
Watch Now Abstract
Cellular Foundation Models in Biology - Towards understanding disease and therapeutic targets

The rapid growth of open-access omics data has enabled large-scale exploration of cellular states across species, tissues, and molecular modalities. Building on these resources, cellular foundation models use self-supervised learning to derive general cell representations that can be adapted to diverse downstream biological tasks, including the prediction of responses to chemical and genetic perturbations. This presentation reviews their use in modeling cellular perturbations, describing common learning frameworks, data requirements, and evaluation practices, as well as key challenges specific to single-cell data. We note emerging gaps between reported results and standardized evaluations, which highlight persistent issues in how performance is quantified across studies and benchmarks. Overall, this presentation provides an overview of the current landscape of single-cell foundation models, emphasizing both their progress and limitations in capturing perturbation-specific responses.
Watch Now Abstract
Toward Ubiquitous HCI: Connecting Minds, Bodies, and Environment Through Wearable Sensing

Designing the next generation of human-computer interactions requires a deeper understanding of how cognition unfolds in context, shaped not only by the user’s mental and bodily states but also by their dynamic interaction with the surrounding environment. In this talk, I present a research agenda that brings together cognitive neuroscience, brain-computer interfaces (BCIs), and wearable sensing to inform the design of ubiquitous, adaptive, and unobtrusive interactive systems. Using tools such as mobile EEG, eye-tracking, motion sensors, and environment-aware computing, my work investigates how people perceive, act, and make decisions in natural settings, from high-load operational tasks such as flying a plane to everyday behaviors like walking around a city or eating a meal. This approach moves beyond screen-based interaction to develop systems that respond to users in real time, based on the continuous coupling between brain, body, and environment. By embedding cognitive and contextual awareness into system design, we can move toward calm, seamless technologies that adapt fluidly to the user’s moment-to-moment needs.
Sebastian Stich
Statistics and Data Science
Watch Now Abstract
Communication-Efficient Algorithms for Federated Learning

Federated learning has emerged as an important paradigm in modern distributed machine learning. Unlike traditional centralized learning, where models are trained using large datasets stored on a central server, federated learning keeps the training data distributed across many clients, such as phones, network sensors, hospitals, or other local information sources. In this setting, communication-efficient optimization algorithms are crucial. We provide a brief introduction to local update methods developed for federated optimization and discuss their worst-case complexity. Surprisingly, these methods often perform much better in practice than predicted by theoretical analyses using classical assumptions. Recent years have revealed that their performance can be better described using refined notions that capture the similarity among client objectives. In this talk, we introduce a generic framework based on a distributed proximal point algorithm, which consolidates many of our insights and allows for the adaptation of arbitrary centralized optimization algorithms to the convex federated setting (even with acceleration). Our theoretical analysis shows that the derived methods enjoy faster convergence if the degree of similarity among clients is high. We conclude with a discussion of extensions and open challenges for non-convex objectives and for scaling federated learning to modern large models.
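The local update methods introduced above can be sketched in a few lines. The following is a toy FedAvg-style illustration with quadratic client objectives; all names and hyperparameters here are assumptions for the example, not anything from the talk:

```python
import numpy as np

def fedavg(centers, rounds=50, local_steps=5, lr=0.1):
    """Toy FedAvg: client i holds f_i(w) = 0.5 * ||w - c_i||^2, so the
    global optimum of the average objective is the mean of the centers."""
    w = np.zeros_like(centers[0])                  # server model
    for _ in range(rounds):
        updates = []
        for c in centers:                          # each client starts from the server model
            w_local = w.copy()
            for _ in range(local_steps):
                w_local -= lr * (w_local - c)      # local gradient step on f_i
            updates.append(w_local)
        w = np.mean(updates, axis=0)               # server averages the client models
    return w

centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 3.0])]
print(np.allclose(fedavg(centers), np.mean(centers, axis=0), atol=1e-3))  # True
```

With identical quadratic clients the averaged iterate contracts geometrically toward the mean of the client optima each round; the heterogeneous case, where client objectives differ more sharply, is exactly where the refined similarity notions discussed in the talk become important.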
Egor Shulgin
From AdamW to Muon: Bridging Theory and Practice of Geometry-Aware Optimization for LLMs and Beyond
 November 4, 2025

Egor Shulgin PhD candidate in Computer Science, King Abdullah University of Science and Technology (KAUST)

Hosted by: Prof. Eduard Gorbunov
Statistics and Data Science
Watch Now Abstract
From AdamW to Muon: Bridging Theory and Practice of Geometry-Aware Optimization for LLMs and Beyond

Optimization remains a crucial driver of progress in modern machine learning: it governs whether large models train reliably and how efficiently they use compute. This talk examines Muon, a geometry-aware alternative to AdamW that replaces element-wise adaptation with layer-wise, matrix-aware updates—an opportunity to reimagine optimization for deep learning in a way that better matches practice and respects network structure. In large-scale practice, Muon has begun to displace AdamW, offering stronger performance, better hyperparameter transferability, and lower memory overhead across LLMs, diffusion, and vision models. We aim to advance our understanding of deep learning through the lens of optimization, grounding the analysis in how these methods are actually used. I will present Gluon, a unifying, layer-aware framework together with a more general, geometry-based model that captures the heterogeneous behavior of deep networks across layers and along training trajectories. Gluon reimagines optimization for deep learning by replacing uniform, global assumptions with a per-layer description that tracks training dynamics and respects network structure. Measured during language-model training, this model closely tracks observed smoothness and reveals pronounced variation across layers and blocks—phenomena that classical assumptions miss. The framework yields convergence guarantees under these broader conditions and helps explain when structured, per-layer methods can outperform classical approaches. Building on this lens, I then move from the idealized analysis of Muon to the practical, approximate version used in codebases, where orthogonalization is performed with a few Newton–Schulz iterations rather than an expensive full SVD; unlike prior analyses of the idealized SVD step, we explicitly model this inexact iteration. Our theory predicts that better approximations lead to better performance (faster convergence), and in practice they permit larger learning rates and widen the stability region. Taken together, these results reduce the theory–practice gap for geometry-aware methods.
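For context on the orthogonalization step mentioned above, here is the textbook cubic Newton–Schulz iteration for approximating the orthogonal polar factor of a matrix; actual Muon implementations use a tuned odd-polynomial variant, so this sketch is illustrative only:

```python
import numpy as np

def newton_schulz_orthogonalize(G, steps=30):
    """Approximate the orthogonal polar factor U @ V.T of G with the
    cubic Newton-Schulz iteration X <- 1.5*X - 0.5*X@X.T@X.
    (Muon uses a tuned odd-polynomial variant; this is the textbook form.)"""
    X = G / np.linalg.norm(G)            # Frobenius scaling puts singular values in (0, 1]
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X  # pushes every singular value toward 1
    return X

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4))              # stand-in for a gradient matrix
Q = newton_schulz_orthogonalize(G)
print(np.allclose(Q.T @ Q, np.eye(4), atol=1e-3))  # True: Q is near-orthogonal
```

Each iteration uses only matrix multiplications, which is why a few such steps are far cheaper on accelerators than a full SVD; the trade-off between the number of iterations and the quality of the orthogonal approximation is precisely the inexactness the talk's analysis models.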
Robert Moskovitch
Computational Biology
Watch Now Abstract
Heterogeneous Multivariate Temporal Data Analytics with Time Intervals Related Patterns

Analysis of heterogeneous multivariate time-stamped data is one of the most challenging topics in data science in general, relevant to various problems in real-life longitudinal data in many domains, such as cybersecurity, healthcare, predictive maintenance, sports, and more. Timestamped data can be sampled regularly, commonly by electronic means, but also irregularly, often made manually, common in biomedical data, whether intense as in ICU or sparse as in Electronic Health Records (EHR). Additionally, raw temporal data can represent durations of a continuous or nominal value represented by time intervals. Transforming time point series into meaningful symbolic time intervals using temporal abstraction will be presented to bring all the temporal variables, which have various representations, into a uniform representation. Then, KarmaLego (IEEE ICDM 2015) and TIRPClo (AAAI 2021, DMKD 2023), fast time intervals mining algorithms for the discovery of non-ambiguous Time Intervals Related Patterns (TIRPs) represented by Allen's temporal relations, will be introduced. TIRPs can be used for several purposes: temporal knowledge discovery, or as features for the classification of heterogeneous multivariate temporal data (KAIS 2015), with increased accuracy when using the Temporal Discretization for Classification (TD4C) method (DMKD 2015). In this talk, I will refer to our recent developments and publications in faster TIRPs mining, visualization of TIRPs discovery (JBI 2022, Cell/Patterns, 2025), and the very recent novel use of TIRPs for continuous event prediction (SDM 2024, ML 2025), based on the continuous prediction of a pattern's completion, and more.
Jonas Oppenlaender
From small-scale generative images to global-scale picture of HCI
 November 3, 2025

Jonas Oppenlaender Postdoctoral Researcher in Human-Computer Interaction, University of Oulu

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Watch Now Abstract
From small-scale generative images to global-scale picture of HCI

This talk presents a retrospective on my research into “prompt engineering” for text-to-image (TTI) generation – an example where humans were creatively empowered by generative AI. I trace how online communities were instrumental in shaping the practice of prompting and how challenges persist to this day in the creative use of TTI systems. While TTI generative systems enable anyone to produce digital images and artworks through language, this apparent democratization conceals deeper issues of control, authorship, and alignment. I argue that prompt engineering is not merely a creative technique but a symptom of a broader misalignment between human intent and system behavior. Extending this lens, I discuss how prompting has diffused into the wider research field of Human-Computer Interaction (HCI), where it risks fostering tool-driven novelty at the expense of conceptual progress and meaningful insight. What is harmful is not that prompting fails to translate human intent efficiently, but that it is brittle and encodes a mode of interaction that prioritizes prompt tuning and short-lived prototyping over deeper understanding. I conclude by outlining a vision for reflective and scalable stewardship in HCI research.
Watch Now Abstract
From Splitting to Variance Reduction: A Primal–Dual Perspective on Optimization Algorithms

Convex nonsmooth optimization problems in high-dimensional spaces have become ubiquitous. Primal–dual proximal algorithms are particularly well-suited to solving them: they rely on simple iterative operations that handle the terms of the objective function separately. Their design is grounded in the framework of monotone inclusions, where splitting techniques provide a powerful way to decompose a complex problem involving multiple terms into simpler subproblems that can be solved and combined efficiently. Meanwhile, stochastic algorithms such as Stochastic Gradient Descent (SGD) have been central to the success of machine learning and artificial intelligence. Modern variance-reduced methods enhance these algorithms by counteracting the noise inherent to stochastic updates, enabling convergence to exact solutions rather than oscillation around them. In this talk, I will highlight the deep connections between splitting and variance reduction: the dual variables in primal–dual methods and the control variates in variance-reduced stochastic algorithms play remarkably similar roles, revealing a unifying perspective on these seemingly distinct areas.
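The control-variate idea described above can be made concrete with a minimal SVRG-style sketch (a standard variance-reduced method; the function names, step size, and loop counts here are illustrative assumptions, not taken from the talk). The correction term built from a snapshot point cancels the stochastic noise as the iterate approaches the snapshot, so a constant step size converges to the exact minimizer rather than oscillating around it.

```python
import numpy as np

def svrg(grad_i, full_grad, x0, n, step=0.1, epochs=5, inner=50, seed=0):
    """SVRG-style variance-reduced stochastic gradient descent (sketch).

    grad_i(x, i): stochastic gradient of component i at x.
    full_grad(x): exact gradient of the full objective at x.
    The control variate grad_i(snap, i) - full_grad(snap) is zero-mean,
    so the combined estimate stays unbiased while its variance shrinks
    as x approaches the snapshot.
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    for _ in range(epochs):
        snap, mu = x, full_grad(x)                    # snapshot + full gradient
        for _ in range(inner):
            i = rng.integers(n)                       # sample one component
            g = grad_i(x, i) - grad_i(snap, i) + mu   # variance-reduced gradient
            x = x - step * g
    return x
```

For a toy average-of-quadratics objective f(x) = (1/n) Σ_i (x - a_i)²/2, the correction makes each update exact, which illustrates the variance-cancellation role that the talk relates to dual variables in primal-dual splitting.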
Constantine Dovrolis
Machine Learning
Watch Now Abstract
Toward Neuro-Inspired AI: Sparse Data, Modular Networks, and Stream-Based Continual Learning

How can we design learning systems that resemble the brain—able to adapt continually, learn from streams, and generalize without a flood of labeled data? This talk explores recent advances in sparse and modular neural networks that push machine learning in that direction. By selecting only the most informative experiences from a stream, enforcing sparsity to balance stability and plasticity, and leveraging modular structure to reduce interference and improve efficiency, we can move toward models that learn more like animals and humans. The focus is not on scaling up to larger black boxes, but on rethinking how learning itself happens under constraints. The result is a neuro-inspired agenda for machine learning that emphasizes adaptability, efficiency, and robustness in open-ended environments.
Tiffany Knearem
Human-Computer Interaction
Watch Now Abstract
Human-AI Alignment: Philosophy, Perspectives, and Practice

Curious about how we can design AI systems that truly center human values? This talk introduces Bidirectional Human-AI Alignment, which posits alignment as a dynamic, mutual process that goes beyond simply integrating human goals into AI. By balancing AI-centered and human-centered perspectives, we can preserve human agency, foster critical engagement, and adapt societal approaches to AI that benefit humanity. To ground the discussion, we will look at a case study of how AI is being used to support healthcare decision-making.
Ying Sun
Statistics and Data Science
Watch Now Abstract
Advancing Spatio-Temporal Statistics in Geo-Environmental Data Science through Deep Learning and High Performance Computing

In this talk, I will discuss the contributions and ongoing research of my Environmental Statistics Research Group in the area of spatio-temporal statistics, with a particular focus on leveraging deep learning and high performance computing for spatio-temporal analysis in Geo-Environmental Data Science. I will introduce the developed innovative software tools such as ExaGeoStat, ParallelVecchiaGP, and DeepKriging, which support the analysis of large-scale geostatistical datasets. During this presentation, I will also showcase environmental applications to air quality modeling and prediction.
Marc Genton
High-Performance Statistical Computing: The Case of ExaGeoStat for Large-Scale Spatial Data Science
 October 20, 2025

Marc Genton Al-Khawarizmi Distinguished Professor of Statistics at the King Abdullah University of Science and Technology (KAUST)

Hosted by: Prof. Souhaib Ben Taieb
Statistics and Data Science
Watch Now Abstract
High-Performance Statistical Computing: The Case of ExaGeoStat for Large-Scale Spatial Data Science

The new field of High-Performance Statistical Computing (HPSC) reflects the emergence of a statistical computing community focused on working with large computing platforms and producing software for various applications. For example, spatial data science relies on solving fundamental problems such as: 1) Spatial Gaussian likelihood inference; 2) Spatial kriging; 3) Gaussian random field simulations; 4) Multivariate Gaussian probabilities; and 5) Robust inference for spatial data. These problems develop into very challenging tasks when the number of spatial locations grows large. Moreover, they are the cornerstone of more sophisticated procedures involving non-Gaussian distributions, multivariate random fields, or space-time processes. Parallel computing becomes necessary to avoid the computational and memory restrictions associated with large-scale spatial data science applications. In this talk, I will demonstrate how high-performance computing (HPC) can provide solutions to the aforementioned problems using tile-based linear algebra, tile low-rank approximations, as well as multi- and mixed-precision computational statistics. I will introduce ExaGeoStat, and its R version ExaGeoStatR, a powerful HPSC software that can perform exascale (10^18 flops/s) geostatistics by exploiting the power of existing parallel computing hardware systems, such as shared-memory systems, possibly equipped with GPUs, and distributed-memory systems, i.e., supercomputers. I will then describe how ExaGeoStat can be used to design competitions on spatial statistics for large datasets and to benchmark new methods developed by statisticians and data scientists for large-scale spatial data science. Finally, I will briefly demonstrate how these techniques were used to build an exascale climate emulator that received the prestigious 2024 ACM Gordon Bell Prize in Climate Modeling.
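To see why problem 1) above becomes so demanding, here is a dense reference computation of the spatial Gaussian log-likelihood with an exponential covariance. This is an illustrative O(n³) sketch with assumed parameter names, not the ExaGeoStat API; ExaGeoStat's contribution is replacing this dense Cholesky factorization with tile-based and tile-low-rank parallel kernels so that n can reach millions of locations.

```python
import numpy as np

def gaussian_loglik(y, coords, sigma2=1.0, range_=0.5):
    """Exact Gaussian log-likelihood for spatial data (dense reference).

    y: observations at n locations; coords: (n, d) array of coordinates.
    Uses an exponential covariance sigma2 * exp(-dist / range_); the
    Cholesky factorization below is the O(n^3) bottleneck that HPSC
    software accelerates with tile-based linear algebra.
    """
    n = len(y)
    # pairwise Euclidean distances between all spatial locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-d / range_)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    alpha = np.linalg.solve(L, y)                    # L^{-1} y
    logdet = 2.0 * np.sum(np.log(np.diag(L)))        # log|Sigma| from Cholesky
    return -0.5 * (n * np.log(2 * np.pi) + logdet + alpha @ alpha)
```

Maximizing this quantity over the covariance parameters (here sigma2 and range_) is the likelihood inference step; kriging and simulation reuse the same factorization, which is why all five problems listed above share the same scalability barrier.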
Abstract
AMA - Chip Design, Software Design, and Using AI

This will be a conversational "ask me anything" session.
Yanyuan Qiao
Language Model × Robotics – From Embodied Navigation to AI-Driven Robot Hand Design
 October 16, 2025

Yanyuan Qiao Postdoctoral Research Fellow, École Polytechnique Fédérale de Lausanne (EPFL)

Hosted by: Prof. Yutong Xie
Computer Vision
Watch Now Abstract
Language Model × Robotics – From Embodied Navigation to AI-Driven Robot Hand Design

Recent advances in language models are transforming how robots can perceive, reason, and act. This talk presents a series of works that explore how language models, used both as pretrained representations and interactive reasoning engines, can be applied to develop intelligent embodied agents. The studies span tasks from embodied navigation in 3D environments to automatic design of robot morphologies for manipulation. The first part focuses on embodied navigation. I began by exploring how to improve an agent’s perception of temporal and historical context through multimodal pretraining. Building on this foundation, I then examined how large language models can assist decision-making—by interpreting ambiguous instructions and injecting external knowledge to support generalization. Taking this further, we investigated using language models directly as agents, enabling them to perform navigation in continuous environments without additional training. To systematically understand what these models can and cannot do, we introduced a benchmark that evaluates key embodied capabilities, such as instruction comprehension, spatial reasoning, and alignment between language and action. The second part turns to robot design. I present our recent work on AI-driven robot hand generation, where task descriptions are translated into diverse and functional morphologies. This system leverages language models to capture user intent and guides structural generation through reasoning and feedback. Together, these studies explore a central question: how far can language models take us in embodied robotics? From interpreting instructions to designing physical form, they reveal both the opportunities and current frontiers in this rapidly evolving intersection.
Thang Luong
Towards AI Superhuman Reasoning & the future of knowledge discovery
 October 16, 2025

Thang Luong Principal Scientist and Director of Research at Google DeepMind

Hosted by: Prof. Monojit Choudhury
Natural Language Processing
Watch Now Abstract
Towards AI Superhuman Reasoning & the future of knowledge discovery

In this talk, I will discuss recent advances in AI for Mathematics, from AlphaGeometry and AlphaProof to the recent Gemini Deep Think, which achieved a historic gold-medal level performance at the International Mathematical Olympiad 2025. Through these technological breakthroughs, I will also share my thoughts towards the future of AI for knowledge discovery.
Watch Now Abstract
Navigating Privacy, Data Protection, AI, and IP Laws in AI Development: A Practical Approach

VP - Privacy, Data Protection and AI @ e&. Former Global Head of Privacy @ X. PhD from the University of São Paulo (USP). Fellow at the Oxford Internet Institute (OII). Professor of Law. LL.M from New York University (NYU) and the National University of Singapore (NUS).
Yi Zhou
Human-Computer Interaction
Watch Now Abstract
Human-Centric AI: Learning and Co-Creating Humans in 2D, 3D and 4D.

This talk explores how AI can learn from humans and co-create with humans to capture the richness of human appearance, motion, interactions, and personality. I will present three lines of work: (1) building large-scale 4D datasets such as HUMOTO, which capture human–human and human–object interactions with industry-standard fidelity; (2) developing novel 3D representations and differentiable simulations, including DMesh and Digital Salon, for efficient modeling of complex geometry and dynamics; and (3) designing generative tools that enable intuitive, user-guided creation of digital humans and their interactions and behaviors in scenes. Together, these efforts advance a vision of human-centric generative AI: systems that learn about humans, collaborate with humans, and empower creativity across 2D, 3D, and 4D domains.
Timothy Roscoe
Human-Computer Interaction
Watch Now Abstract
A Formal but Pragmatic Foundation for General-Purpose Operating Systems

The Operating System (OS) is fundamental to the correct working of any non-trivial computer system, and general-purpose OSes like Linux (and Android), Windows, iOS and MacOS are the central component of the infrastructure of modern computing and communications, from mobile phones to cloud providers. Modern AI would not be possible without OS software providing required scaling and communication between distributed tasks. Faults attributable to OS flaws have serious consequences ranging from security breaches to global-scale outages. Despite this, general-purpose OS design and implementation today remains surprisingly ad-hoc, based on a simplistic architecture proposed decades ago for machines designed in the 1970s. Since then, system hardware has changed beyond recognition: computers are complex networks of cores, devices, management engines, and accelerators, all running code ignored by the nominal OS. This broad disconnect between hardware reality and OS structure underlies many security and reliability flaws, and will not go away without a radical change in approach. I'll talk about our attempts to put general-purpose OS development on a solid foundation for the first time, based on a formal framework for capturing the software-visible semantics of all the hardware in complete, real computers. Above this, we are working on tooling to assemble an OS for modern heterogeneous servers and systems-on-chip which can incorporate existing drivers, firmware, and application environments, but nevertheless offer strong, formal platform-wide guarantees of application isolation and security.
Afsaneh Doryab
Ubiquitous AI for Health
 October 9, 2025

Afsaneh Doryab Assistant Professor of Computer Science and Systems Engineering, University of Virginia

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Watch Now Abstract
Ubiquitous AI for Health

Harnessing data streams generated by widely used devices, such as smartphones, wearables, and embedded sensors, allows AI algorithms to continuously model, detect, and predict people's biobehavioural and social states. These algorithms can then use the resulting models to deliver personalized services, recommendations, and interventions. However, this capability also introduces new technical challenges related to data collection, processing, algorithm development, modelling, and interpretation. In this talk, I will discuss my research approaches to address some of these challenges in the context of health and wellness applications. I will demonstrate how we leverage multimodal mobile data streams to model aspects such as circadian rhythm variability. Additionally, I will describe how we integrate biobehavioural models to create innovative strategies, including music melodies designed for personalized health status communication. 
Daniel Huttenlocher
The Age of AI: And Our Human Future
 October 2, 2025

Daniel Huttenlocher Dean, MIT Schwarzman College of Computing

Hosted by: Prof. Timothy Baldwin
Watch Now Abstract
The Age of AI: And Our Human Future

In this talk we look at how AI is changing discovery, knowledge, human interaction, and how we understand the world around us. These changes are becoming more prominent with every passing moment, and this session endeavors to help build insights into the development and deployment of AI for broad benefit. The talk will also present a brief overview of the MIT Schwarzman College of Computing.
Ravi Garg
3D Reconstruction in the era of Machine Learning and Gaussian Splatting
 September 30, 2025

Ravi Garg Future Making Fellow, Australian Institute for Machine Learning, University of Adelaide

Hosted by: Prof. Ian Reid
Computer Vision
Watch Now Abstract
3D Reconstruction in the era of Machine Learning and Gaussian Splatting

The problem of 3D reconstruction from multiple views has traditionally been posed as an inverse problem: estimating structure, appearance, and camera parameters from observed images. Classical approaches emphasised minimal parametrisation, simplified image formation models, and the use of hand-crafted priors to render the optimisation well-posed. This paradigm has recently been challenged by the emergence of overparameterised scene representations—such as Radiance Fields and Gaussian Splatting—and overparameterised camera models. These representations enable efficient inference, rapid novel-view synthesis, and offer greater flexibility in training neural networks for 3D reconstruction. This talk will examine the implications of such overparameterised formulations in recovering scene geometry. I will present recent works demonstrating that while the additional flexibility afforded by overparameterisation can be beneficial, it often necessitates careful geometric regularisation. I will discuss often overlooked considerations in employing these representations by both neural and non-neural 3D reconstruction techniques.
Watch Now Abstract
Towards biological discovery with foundation models: applications in neuroscience

Foundation models offer the potential to transform discovery for the biological sciences, promising novel biomarkers as well as new directions for therapeutic application. Design of such models, however, can be challenging, and their application can be equally difficult. Here, I will discuss our work generating the infrastructure to enable biological discovery robustly, efficiently, and at scale with foundation modelling. Applied specifically to the neurosciences and the study of neurodegenerative conditions like Alzheimer’s and Parkinson’s, we have shown that foundation models can learn complex representations of disease and derive novel biomarkers and therapeutic directions. I will also share our thinking about future directions for frontier AI for treating these major causes of global mortality.
Watch Now Abstract
Exploring the Power of Speech: How Synthetic Voices Shape User Perception and Behavior

Speech-enabled Conversational Agents (CAs), such as Amazon Alexa, Apple Siri, and Google Assistant, are becoming increasingly popular interaction platforms for users to engage with their mobile devices and smart speakers. While CAs have the potential to support users in achieving behavioural change goals, such as increasing physical activity or improving productivity at work, they can also lead to complacent behaviour and a lack of reflection. In the first part of my presentation, I will discuss how different types of synthetic voices that vary in terms of prosodic qualities and method of synthesis can affect users' perception of CAs, and what impact they can have on users' behaviour in decision-making tasks. Specifically, we will analyse how differing voice characteristics can affect user trust and engagement. In the second part, we will explore several research avenues to enable the design and development of proactive conversational agents that can effectively support users while preserving their agency.
Haiyan Huang
Statistics and Data Science
Abstract
Computational and AI-Driven Design of Random Heteropolymers as Protein Mimics

Synthetic random heteropolymers (RHPs), composed of a predefined set of monomers, offer a promising strategy for creating protein mimicking materials with tailored biochemical functions. When designed appropriately, RHPs can replicate protein behavior, enabling applications in drug delivery, therapeutic protein stabilization, biosensing, tissue engineering, and medical diagnostics. However, designing RHPs that achieve specific biological functions in a time- and cost-effective manner remains a major challenge. In this talk, I will review this problem and discuss several successful efforts we have made to address it, using statistical, computational, and AI approaches. These include a generalized semi-hidden Markov model (GSHMM) and a hybrid variational autoencoder (VAE), which we call DeepRHP and implement within a semi-supervised framework. Both methods are designed to capture the structures of critical chemical features as well as individual RHP sequence patterns, but they offer different advantages in terms of interpretability and flexibility. These studies highlight the potential of computational approaches to accelerate the rational design of RHPs for a wide range of biological, medical, and healthcare applications.
Benjamin Guedj
On Generalisation and Learning
 September 24, 2025

Benjamin Guedj Professor of Machine Learning and Foundations of Artificial Intelligence, University College London

Hosted by: Prof. Mladen Kolar
Statistics and Data Science
Watch Now Abstract
On Generalisation and Learning

Generalisation is one of the essential problems in machine learning and foundational AI. The PAC-Bayes theory has emerged in the past two decades as a generic and flexible framework to study and enforce the generalisation abilities of machine learning algorithms. It leverages the power of Bayesian inference and allows one to derive new learning strategies. I will briefly present the key concepts of PAC-Bayes and pinpoint how generalisation-driven principled approaches can help further advance a better mathematical understanding of AI systems, and will highlight a few recent contributions from my group, including connections to information theory, with a particular focus on our AISTATS 2024 paper (https://proceedings.mlr.press/v238/hellstrom24a), in which we present a unifying framework for deriving information-theoretic and PAC-Bayesian generalization bounds based on arbitrary convex comparator functions that quantify the gap between empirical and population loss. References: https://bguedj.github.io/publications/
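For context, a representative PAC-Bayesian bound is the classical McAllester/Maurer form, quoted here from the standard literature rather than from the talk: for any prior \(\pi\) over hypotheses and any \(\delta \in (0,1)\), with probability at least \(1-\delta\) over an i.i.d. sample of size \(n\), simultaneously for all posteriors \(\rho\),

```latex
\mathbb{E}_{h \sim \rho}\!\left[L(h)\right]
  \;\le\;
  \mathbb{E}_{h \sim \rho}\!\left[\hat{L}_n(h)\right]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\left(2\sqrt{n}/\delta\right)}{2n}}
```

where \(L\) is the population risk, \(\hat{L}_n\) the empirical risk, and the Kullback–Leibler term prices how far the learned posterior moves from the prior; the comparator-function framework of the AISTATS 2024 paper generalises the square-root gap on the right-hand side.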
Yanding Zhao
Computational Biology
Watch Now Abstract
Decoding Genome Instability: Regulatory Rewiring in Osteosarcoma and Beyond

Genome instability in cancer spans from small-scale mutations, such as non-coding SNVs that alter transcription factor motifs, to large-scale structural variants (SVs) and extrachromosomal DNA (ecDNA) that reconfigure the 3D genome. Together, these alterations promote tumor growth and remodel the tumor microenvironment. Yet existing technologies remain siloed—each illuminates one layer of the genome, but none can connect structural change to regulatory consequence in a unified way. My work in the TCGA Pan-Cancer 3D Genome Project established integrative computational frameworks to bridge these gaps, linking variants of different scales to enhancer rewiring. Building on this methodological foundation, I applied and refined this framework in osteosarcoma, the most instability-driven pediatric cancer, which provides a natural context for testing it. Using longitudinal and multi-modal profiling, I identified MYC enhancer hijacking linked to chemoresistance and uncovered high-risk instability trajectories associated with poor prognosis. Spatial and single-cell analyses further revealed that these trajectories propagate into distinct stromal and immune states. Together, these studies show how integrative methods can decode regulatory rewiring across multiple levels, from genome architecture to the tumor microenvironment. Looking forward, I aim to extend this platform beyond osteosarcoma by integrating the Emirati Genome Programme with publicly available genomic resources to advance our understanding of instability-driven regulation and therapeutic opportunities.
Gregory S. Chirikjian
From State Estimation on Lie Groups to Robot Imagination
 September 8, 2025

Gregory S. Chirikjian Willis F. Harrington Professor and Department Chair, Mechanical Engineering at the University of Delaware
Ujwal Gadiraju
The Human Quotient for Better AI Systems: Agents, Appropriate Reliance, and Alignment
 September 8, 2025

Ujwal Gadiraju Associate Professor of Software Technology, Delft University of Technology
Edward Boone
Bayesian Monitoring of a Pandemic: A Case Study
 September 4, 2025

Edward Boone Professor of Statistics, Virginia Commonwealth University
Ryad Ghanam
Statistical Inference on Fractional Partial Differential Equations
 September 4, 2025

Ryad Ghanam Professor of Mathematics, Liberal Arts & Sciences, Virginia Commonwealth University
Hongyuan Cao
Testing composite null hypotheses with high-dimensional dependent data
 September 2, 2025

Hongyuan Cao Professor of Statistics at Florida State University
David Ayman Shamma
Building AI Systems for Sustainable Automotive Behaviors
 September 2, 2025

David Ayman Shamma Scientific Advisor, CWI
Yong Zhang
DB+AI: A Paradigm to Stimulate the Value of Data
 August 27, 2025

Yong Zhang Associate Professor of Tsinghua University, deputy dean of Tsinghua JIAIDB institute.
Merritt Moore
Staged Encounters: Dance as a Testbed for Human–Robot Interaction
 August 26, 2025

Merritt Moore Artist-in-Resident and Adjunct Professor at NYU Abu Dhabi

Hosted by: Prof. Ivan Laptev
Computer Vision
Watch Now Abstract
Staged Encounters: Dance as a Testbed for Human–Robot Interaction

Science fiction has long been our window to the future, predicting technological advancements and their societal impacts. Fiction doesn’t just entertain—it prepares us to navigate the moral and emotional complexities yet to come. Extending this inquiry into practice, Dr. Merritt Moore shares how dancing with robots has become a living experiment in future human–robot interactions and relationships. Through staged and improvised duets, she tests how machines function not merely as tools but as partners in expression and creativity, raising questions about authorship, agency, and emotional impact. This talk explores how choreography and robotics can inform one another, shaping both creative practice and future possibilities.
Iryna Gurevych
Natural Language Processing
Watch Now Abstract
Please meet AI, our dear new colleague. In other words: can scientists and machines truly cooperate?

How can AI and LLMs facilitate the work of scientists in different stages of the research process? Can technology even make scientists obsolete? The role of AI and Large Language Models (LLMs) in science as a target application domain has been growing rapidly. This includes assessing the impact of scientific work, facilitating the writing and revising of manuscripts, as well as intelligent support for manuscript quality assessment, peer review, and scientific discussions. The talk will illustrate such methods and models using several tasks from the scientific domain. We argue that while AI and LLMs can effectively support and augment specific steps of the research process, expert-AI collaboration may be a more promising mode for complex research tasks.
Stephanie Milani
Rethinking AI Agents: Human-Centered Reinforcement Learning
 July 10, 2025

Stephanie Milani Final-year Ph.D. Candidate in Machine Learning, Carnegie Mellon University
Reut Tsarfaty
Multilinguality in LLMs with an Eye on Semitic Languages
 June 12, 2025

Reut Tsarfaty Associate Professor at Bar-llan University leading the Open Natural Language Processing research lab (The ONLP Lab), and a Visiting Professor at Google
Yan Gong
Causal Spatial Quantile Regression
 June 10, 2025

Yan Gong Postdoctoral Research Fellow at Harvard T.H. Chan School of Public Health
Liuhua Peng
Enhanced localized conformal prediction with imperfect auxiliary information
 June 2, 2025

Liuhua Peng Senior Lecturer in the School of Mathematics and Statistics at the University of Melbourne
Milad Alshomary
From Argument Generation to Explainable AI: My Research in Natural Language Processing
 May 26, 2025

Milad Alshomary Postdoctoral research scientist, Columbia University
Tiffany Knearem
Bidirectional Human-AI Alignment: A User-Centered Approach to Shaping AI Systems in Practice
 May 20, 2025

Tiffany Knearem User Experience Researcher, formerly of Google and Meta, who holds a PhD in Information Sciences and Technology from Pennsylvania State University
James Landay
“AI For Good” Isn’t Good Enough: A Call for Human-Centered AI
 May 15, 2025

James Landay Professor of Computer Science and the Anand Rajaraman and Venky Harinarayan Professor in the School of Engineering at Stanford University. Co-founder and Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Utkarsh Mall
Visual Discovery for Science
 May 15, 2025

Utkarsh Mall Postdoctoral research scientist in Computer Science at Columbia University
Anees Kazi
Multi-modal data analysis using Graph Deep Learning for applications in healthcare
 May 14, 2025

Anees Kazi Postdoctoral fellow at the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, and Harvard Medical School
Houbing Herbert Song
Neuro-symbolic AI: The Third Wave of AI
 May 14, 2025

Houbing Herbert Song Professor, Founding Director of the NSF Center for Aviation Big Data Analytics, Associate Director for Leadership of the DOT Transportation Cybersecurity Center for Advanced Research and Education, and Director of the Security and Optimization for Networked Globe Laboratory, University of Maryland
Sean Xuefeng Du
Teach AI What It Doesn't Know
 May 14, 2025

Sean Xuefeng Du Final year CS Ph.D. student at University of Wisconsin-Madison
Mathew Magimai Doss
Explainable Speech and Sign Language Processing using Posterior Features
 May 13, 2025

Mathew Magimai Doss Senior Research Scientist at the Idiap Research Institute
Diyi Yang
The Future of Human-AI Interaction: Teaching, Talking & Teaming Up
 May 12, 2025

Diyi Yang Assistant professor in the Computer Science Department at Stanford University
Fabricio A. B. da Silva
Deep Learning in the Brazilian Network for Genomic Surveillance of Multidrug-Resistant Bacteria
 May 8, 2025

Fabricio A. B. da Silva Senior public health researcher at the Oswaldo Cruz Foundation
Laure Berti
Towards Uncertainty-Aware, Multimodal Data-Centric AI Pipelines
 May 5, 2025

Laure Berti Research Director (DR1) at IRD, the French Research Institute for Sustainable Development
Sven Behnke
Towards Conscious Service Robots
 May 5, 2025

Sven Behnke Chair for Autonomous Intelligent Systems at the University of Bonn, Germany, where he heads the Computer Science Institute VI – Intelligent Systems and Robotics
Junyuan Hong
Harmonizing, Understanding, and Deploying Responsible AI
 May 5, 2025

Junyuan Hong Postdoctoral fellow at the Institute for Foundations of Machine Learning (IFML) and the Wireless Networking and Communications Group (WNCG) at UT Austin.
Andrew P. Feinberg
New advances in the epigenetics of common disease
 May 1, 2025

Andrew P. Feinberg Director, Center for Epigenetics; Bloomberg Distinguished Professor, Johns Hopkins University
Mohammad Naseri
Trustworthy Decentralized AI
 May 1, 2025

Mohammad Naseri Research Scientist at Flower Labs
Joyce Chai
Words Meet World: Grounded Language in Embodied AI
 April 30, 2025

Joyce Chai Professor in the Department of Electrical Engineering and Computer Science at the University of Michigan
Shilong Liu
Object-centric Open-world Visual Understanding
 April 30, 2025

Shilong Liu Final-year Ph.D. candidate at Tsinghua University
Ken-ichiro Kamei
Reverse Bioengineering to recreate multicellular animals in vitro
 April 29, 2025

Ken-ichiro Kamei Stem cell engineer and an associate professor specializing in biology and bioengineering at New York University Abu Dhabi
João Paulo Papa
Pattern Recognition with Optimum-Path Forests
 April 28, 2025

João Paulo Papa Professor at the Department of Computer Science, Sao Paulo State University, Brazil
Chong Li
An Introduction to Decentralized AI
 April 23, 2025

Chong Li Founder & CEO of OORT, Adjunct professor in the department of electrical engineering at Columbia University
Deva Ramanan
Cameras as rays: spatial representations for 2D and 3D understanding with foundation models
 April 22, 2025

Deva Ramanan Professor in the Robotics Institute at Carnegie-Mellon University and the former director of the CMU Center for Autonomous Vehicle Research
Prakash Chandra + Rajkumar Saini
Towards Robust Self-supervised Representation Learning
 April 22, 2025

Prakash Chandra + Rajkumar Saini Prakash Chandra Chhipa is an incoming Postdoctoral Researcher at the Machine Learning Group, Luleå University of Technology, Sweden. Rajkumar Saini is an Assistant Professor at Luleå University of Technology (LTU), Sweden
Mattia Soldan
Scalable and Efficient Semantic Search in Videos
 April 21, 2025

Mattia Soldan Ph.D. candidate in Electrical and Computer Engineering at King Abdullah University of Science and Technology (KAUST)
Lizhen Qu
Harnessing Causal Discovery for Robust and Adaptive Natural Language Processing
 April 18, 2025

Lizhen Qu Assistant Professor (Lecturer) in the Faculty of Information Technology at Monash University and a founding member of the AIM Lab
Zhang Jie
Building Trustworthy Text-to-Image Models: Risks, Defenses, and Forensics
 April 16, 2025

Zhang Jie Research scientist and innovation lead at the Center for Frontier AI Research (CFAR)
Jian Kang
Operationalizing Fairness in an Interconnected World
 April 16, 2025

Jian Kang Assistant Professor in the Department of Computer Science at the University of Rochester
Homanga Bharadhwaj
Watch, Predict, Act: Robot Learning Meets Web Videos
 April 16, 2025

Homanga Bharadhwaj Final-year PhD student at Carnegie Mellon University
Amin Beheshti
From Intelligence to Artificial Intelligence: Exploring the Future of Humanity
 April 15, 2025

Amin Beheshti Professor of Data Science at Macquarie University, and an Adjunct Professor of Computer Science at UNSW
Deming Chen
A3C3 – AI Algorithm & Accelerator Co-design, Co-search, and Co-generation
 April 15, 2025

Deming Chen Abel Bliss Professor in the Grainger College of Engineering at the University of Illinois Urbana-Champaign
Vaishnav Kameswaran
Building Equitable Technology Futures: A Relational Access Approach
 April 14, 2025

Vaishnav Kameswaran Postdoctoral Researcher with the Values-Centered AI initiative at the University of Maryland

Hosted by: Prof. Elizabeth Churchill
Human-Computer Interaction
Watch Now Abstract
Building Equitable Technology Futures: A Relational Access Approach

A grand challenge in HCI is understanding how technology-mediated access can enable fuller participation of people with disabilities in society. However, access, framed solely as a feature of technology, can overlook how communities of people with disabilities actively create, share, and sustain access in their everyday lives. In this talk, I show how drawing from disability justice scholarship can broaden the concept of access and open up novel avenues for design. I will share examples from my work where I reconceptualize access as a relational, socio-technical construct: one shaped by social and material conditions, as well as community values. I will show how this perspective also expands the design space for emerging technologies like AI, shifting their roles from simply mitigating impairments to augmenting human abilities. By reframing technology-mediated access as a socio-technical and relational concept, my work offers new pathways toward more equitable technological futures in HCI.
Bashar Alhafni
Controlled Natural Language Generation for Morphologically Rich Languages: The Case of Arabic
 April 14, 2025

Bashar Alhafni Final-year computer science PhD candidate at New York University (NYU) and a graduate research assistant at the Computational Approaches to Modeling Language (CAMeL) lab at NYU Abu Dhabi
Lingqi Yan
Next-generation Photorealistic Rendering
 April 14, 2025

Lingqi Yan Associate Professor of Computer Science at the University of California, Santa Barbara
Dilip K. Prasad
Digital Twin of a living Cell using Physics based Artificial Intelligence
 April 11, 2025

Dilip K. Prasad Professor at the Department of Computer Science, UiT The Arctic University of Norway
Tanmoy Chakraborty
Don't underestimate the power of small language models
 April 10, 2025

Tanmoy Chakraborty Faculty Chair Professor in AI and an Associate Professor in the Dept. of Electrical Engineering and the School of AI at IIT Delhi
Guosheng Hu
Reduce AI’s Carbon Footprint
 April 9, 2025

Guosheng Hu Senior lecturer of AI at University of Bristol
Haritz Puerto
Unpacking Reasoning in LLMs: Input Formats, Generating CoTs, and Fair Evaluation
 April 8, 2025

Haritz Puerto Final-year Ph.D. candidate in Machine Learning & Natural Language Processing at UKP Lab in TU Darmstadt
Andreas Bender
Artificial Intelligence in Drug Discovery and Computational Biology: Current Status, Successes, and Pitfalls
 April 8, 2025

Andreas Bender Professor for Machine Learning in Medicine at the Department of Medicine at Khalifa University, Abu Dhabi
Shaobo Cui
Navigating Uncertainty in Commonsense Causal Reasoning 
 April 8, 2025

Shaobo Cui Final-year Ph.D. candidate in Computer Science at École Polytechnique Fédérale de Lausanne (EPFL)
Eduard Gorbunov
Stochastic First-Order Optimization with Gradient Clipping
 April 7, 2025

Eduard Gorbunov Research scientist in the Machine Learning Department at MBZUAI
Johannes Schöning
The Role of Human-Computer Interaction Perspectives in Advancing AI-Driven Next-Generation Spatial User Interfaces
 April 7, 2025

Johannes Schöning Professor of Computer Science at the University of St. Gallen
Lena Maier-Hein
Failing Forward: Rethinking the Foundations of Medical Imaging AI
 April 3, 2025

Lena Maier-Hein Professor at Heidelberg University, Managing Director of the National Center for Tumor Diseases (NCT) Heidelberg, and Head of the Division of Intelligent Medical Systems (IMSY) at the German Cancer Research Center (DKFZ)
Klaus Maier-Hein
Towards Generalist Medical AI
 April 3, 2025

Klaus Maier-Hein Director of the Division of Medical Image Computing at the German Cancer Research Center (DKFZ)
Yotam Margalit
The Politics of Using AI in Policy Implementation: Evidence from a Field Experiment
 March 24, 2025

Yotam Margalit Brian Mulroney Chair in Government at the School of Political Science and International Affairs at Tel Aviv University and a Professor in the Department of Political Economy at King’s College London
Anthony Lin
Automated Reasoning over Strings and Sequences
 March 24, 2025

Anthony Lin Professor at TU Kaiserslautern (Germany) and a Fellow of Max-Planck Society
Dongxia Wu
Uncertainty Quantification for Scientific Machine Learning
 March 24, 2025

Dongxia Wu Ph.D. student in the Department of Computer Science and Engineering at UC San Diego
Bhat Suma Pallathadka
Towards Enhanced Linguistic Reasoning in Language Models
 March 20, 2025

Bhat Suma Pallathadka Assistant Professor in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign
Jun Wen
Enhancing Computational Precision Medicine with Electronic Health Records
 March 20, 2025

Jun Wen Postdoctoral Research Fellow in the Department of Biomedical Informatics at Harvard Medical School
Raul Astudillo
AI-Assisted Experimentation: Challenges, Advances, and Future Directions
 March 20, 2025

Raul Astudillo Postdoctoral Scholar in the Department of Computing and Mathematical Sciences at Caltech
Joshua Bakita
Moving GPU Systems from “Real-Fast” to “Real-Time”
 March 19, 2025

Joshua Bakita PhD Candidate at The University of North Carolina at Chapel Hill
Marzena Karpinska
Evaluating Long-Context Language Models
 March 17, 2025

Marzena Karpinska Senior researcher at Microsoft
Hao Chung
Mechanism Design for Decentralized Systems
 March 17, 2025

Hao Chung Ph.D. student at Carnegie Mellon University
Jibang Wu
Towards Strategic Alignment in AI: Foundations, Progress and Outlook
 March 13, 2025

Jibang Wu Final year PhD student in Computer Science at University of Chicago
Yomna Abdelrahman
Thermal Imaging For Amplifying Human Perception
 March 12, 2025

Yomna Abdelrahman Post-doctoral researcher at the Armed Forces University in Munich, Germany
Ali Vakilian
Algorithms in the AI Age: Fair and Learning-Augmented
 March 11, 2025

Ali Vakilian Research Assistant Professor at TTIC
Haonan Li
AI Advance Pathway: From Targeted Evaluation to Holistic Intelligence
 March 10, 2025

Haonan Li Postdoctoral Researcher at MBZUAI with a PhD in NLP from the University of Melbourne
Ishtiaque Ahmed
Fear of Small Data: AI’s Blind Spot in Ethics, Lifecycle Assessment, and Policy
 March 10, 2025

Ishtiaque Ahmed Associate Professor of Computer Science at the University of Toronto and the founding director of the 'Third Space' research group
Gustavo Carneiro
Advancing Medical AI: Robust, Interpretable, and Collaborative Solutions
 March 10, 2025

Gustavo Carneiro Professor of AI and Machine Learning at the University of Surrey, UK
Tatsuki Kuribayashi
Next-Word Prediction in Language Models and Humans
 March 4, 2025

Tatsuki Kuribayashi Postdoctoral researcher in the NLP department at MBZUAI
Shmuel Peleg
Speech Enhancement & Video Summarization - Technology Transfer of Academic Research
 March 4, 2025

Shmuel Peleg Professor of Computer Science at the Hebrew University of Jerusalem
Utkarshani Jaimini
Causal Neuro-Symbolic AI: synergy between neuro-symbolic and causal AI
 March 3, 2025

Utkarshani Jaimini Ph.D. candidate at the AI Institute, University of South Carolina
Yannic Noller
Automated Program Repair for Security
 March 3, 2025

Yannic Noller Professor at the Faculty of Computer Science at the Ruhr University Bochum (RUB)
David Basin
Formal Methods for Modern Payment Protocols
 February 24, 2025

David Basin Professor of Computer Science at ETH Zurich
Prem Devanbu
LLMs (for code) sometimes make mistakes. When should I trust them?
 February 21, 2025

Prem Devanbu Researcher in Software Engineering, UC Davis
James Ehrlich
Applying Machine Learning and GenAI to the design and operation of climate-resilient residential infrastructure
 February 21, 2025

James Ehrlich Director of Compassionate Sustainability at the Stanford University School of Medicine CCARE Institute
Nan Lin
Sequential Quantile Estimation for Distributed and Streaming Data
 February 20, 2025

Nan Lin Professor of Statistics and Data Science at Washington University in St. Louis
Gülşen Eryiğit
Multimodal Information Extraction from Unstructured Documents
 February 19, 2025

Gülşen Eryiğit Professor at the Artificial Intelligence and Data Engineering Department, Istanbul Technical University
Yuxia Wang
Towards safe, factual, and empathetic human-AI interaction
 February 19, 2025

Yuxia Wang Postdoctoral researcher in the NLP department at MBZUAI
Junpei Komiyama
Balancing Explore-exploit, or Purely Exploring
 February 18, 2025

Junpei Komiyama Assistant Professor of Technology, Operations and Statistics at the New York University Stern School of Business
Peter Haas
PEaRCE: A Platform for Ethical and Responsible Computing Education in CS Courses
 February 17, 2025

Peter Haas Professor of Information and Computer Sciences and Adjunct Professor of Mechanical and Industrial Engineering at the University of Massachusetts Amherst
Lijie Hu
Towards Usable and Useful Explainable AI
 February 11, 2025

Lijie Hu Ph.D. candidate in the Computer Science program at King Abdullah University of Science and Technology (KAUST)
Yannis Ioannidis
Open Science: A New Paradigm for the Research Lifecycle and the Role of Computing
 February 6, 2025

Yannis Ioannidis President of the Association of Computing Machinery (ACM)
Ali Sarvghad
Towards Responsible Visual Analytics: Fostering Inclusivity, Accessibility and Trustworthiness in the AI Era
 February 5, 2025

Ali Sarvghad Senior visualization researcher and co-director of the HCI-VIS Research Lab in the Manning College of Computer and Information Sciences at the University of Massachusetts Amherst
Carlo Maj
Polygenic Score Modeling to Investigate Genotype-Phenotype Associations
 February 5, 2025

Carlo Maj Principal Investigator at the Center for Human Genetics at the University of Marburg, Germany
Narges Mahyar
Community-Centered Computing for Collective Action and Societal Impact
 February 4, 2025

Narges Mahyar Associate Professor in the Manning College of Information and Computer Sciences at the University of Massachusetts
Umang Bhatt
Trustworthy Machine Learning: Transparency, Collaboration, and Evaluation
 February 4, 2025

Umang Bhatt Assistant Professor & Faculty Fellow at the Center for Data Science at New York University
Justin Hong
Deep generative modeling of sample-level heterogeneity in single-cell genomics
 February 3, 2025

Justin Hong Computer Science Ph.D. candidate at Columbia University
Fatemeh Vafaee
AI-enhanced Personalized Medicine and Therapeutic Development
 January 29, 2025

Fatemeh Vafaee Deputy Director of the UNSW AI Institute and an Associate Professor in the School of Biotechnology and Biomolecular Sciences at the University of New South Wales
Yingyao Hu
The Econometrics of Unobservables: Identification, Estimation, and Empirical Applications
 January 27, 2025

Yingyao Hu Professor of economics and Vice Dean for Social Sciences at Johns Hopkins University
Senthil Arumugam
Cell Biology of Developmental Processes: Imaging Across Scales
 January 23, 2025

Senthil Arumugam EMBL Australia Group Leader at the Monash Biomedicine Discovery Institute
Jihong Kim
Optimizing 3D Flash-Based SSDs through Device-Aware Techniques
 January 23, 2025

Jihong Kim Professor in the Department of Computer Science & Engineering at Seoul National University
Seth Fraden
How to Boot Up a New Engineering Program
 January 22, 2025

Seth Fraden Professor of Physics & Co-Chair of Engineering at Brandeis University
Qi Wu
Human-Computer Conversational Vision-and-Language Navigation
 January 21, 2025

Qi Wu Associate Professor at the University of Adelaide
Zhongyu Wei
From Individual to Society: Social Simulation Driven by LLM-based Agent
 January 20, 2025

Zhongyu Wei Associate Professor at the School of Data Science, Fudan University
Jingshan Li
AI-based Whole-cycle Health Care Management: Problems, Challenges, and Opportunities
 January 17, 2025

Jingshan Li Head and Gavriel Salvendy Chair Professor in the Department of Industrial Engineering, Tsinghua University
Surya Narayanan Hari
Memory representation and retrieval in neuroscience and AI 
 January 15, 2025

Surya Narayanan Hari 3rd year graduate student in the Thomson Lab at the California Institute of Technology
Yu Li
Complex disease modeling and efficient drug discovery with large language models
 January 14, 2025

Yu Li Assistant Professor in the Department of Computer Science and Engineering at CUHK
Ahmed Elhag
Efficiently Approximating Equivariance in Unconstrained Models
 January 13, 2025

Ahmed Elhag PhD student at the Department of Computer Science at the University of Oxford
Youjip Won
Bring an order to the chaos: Order-Preserving IO stack for Modern Flash storage
 January 13, 2025

Youjip Won ICT Endowed Chair Professor at School of Electrical Engineering, KAIST
Joonhyuk Kang
Communication in the Age of AI: AI for Communication and Communication for AI
 December 9, 2024

Joonhyuk Kang Professor in the School of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST)
Masanori Hashimoto
Reliability Exploration of Neural Network Accelerator
 December 5, 2024

Masanori Hashimoto Professor in the Department of Communications and Computer Engineering, Kyoto University
Youngsoo Shin
Chip Design and Manufacturing with AI
 December 5, 2024

Youngsoo Shin Professor with the School of Electrical Engineering at KAIST
Zeke Xie
Golden Noise and Zigzag Sampling of Diffusion Models
 December 4, 2024

Zeke Xie Assistant Professor at Information Hub, Hong Kong University of Science and Technology
Zhiqiang Lin
Security-Enhanced Radio Access Networks for 5G OpenRAN
 November 21, 2024

Zhiqiang Lin Distinguished Professor of Engineering and the Director of the Institute for Cybersecurity and Digital Trust (ICDT) at The Ohio State University
Muhammad Shafique
Energy-Efficient and Secure EdgeAI Systems: From Architectures to Applications
 November 20, 2024

Muhammad Shafique Professor, ECE, New York University
Alexandre Paschoal
Generative Artificial Intelligence in RNA Biology
 November 19, 2024

Alexandre Paschoal Associate Professor at the Federal University of Technology – Parana (UTFPR)
Vicky Kalogeiton
Multimodality for story-level understanding and generation of visual data
 November 13, 2024

Vicky Kalogeiton Assistant Professor at École Polytechnique
Momen Abayazid
Image- and AI-guided robotics for minimally invasive surgery
 November 12, 2024

Momen Abayazid Associate Professor in the Robotics and Mechatronics (RaM) Group at the University of Twente and a visiting Associate professor at Radboud University Medical Centre
Ang Chen
From cloud computing to cloudless computing
 November 11, 2024

Ang Chen Associate professor in the Computer Science & Engineering department, at the University of Michigan
Pascal Fua
Physics-Based Deep Learning for Medical Imaging
 November 4, 2024

Pascal Fua Professor in the School of Computer and Communication Sciences and head of the Computer Vision Lab at EPFL
Weisi Lin
To Make Just-Noticeable Difference (JND) Computable toward Visual Intelligence
 October 31, 2024

Weisi Lin President’s Chair Professor in College of Computing and Data Science, Nanyang Technological University (NTU)
Maha Elgarf
The chameleon effect in education with social AI: can children learn by subconsciously mimicking a social robot?
 October 31, 2024

Maha Elgarf Postdoctoral Associate at the Social Machines and Robotics (SMART) Lab at NYU Abu Dhabi
Nobuyuki Umetani
AI for Engineering Design
 October 25, 2024

Nobuyuki Umetani Associate professor at the University of Tokyo
Santosh Kumar Vipparthi
Integrating Micro-Emotion Recognition with Mental Health Estimation for Improved Well-being
 October 25, 2024

Santosh Kumar Vipparthi Associate Professor at the School of Artificial Intelligence and Data Engineering, Indian Institute of Technology Ropar (IIT Ropar)
Subrahmanyam Murala
Amplifying the Invisible: The Impact of Video Motion Magnification in Healthcare, Engineering, and Beyond
 October 25, 2024

Subrahmanyam Murala Associate Professor at School of Computer Science and Statistics, Trinity College Dublin, Ireland
Joyojeet Pal
Social Media Influencers, Misinformation, and the threat to elections
 October 23, 2024

Joyojeet Pal Associate Professor of Information at the School of Information at the University of Michigan
Yanwei Fu
Unlocking the Potential of Large Models for Vision Related Tasks
 October 16, 2024

Yanwei Fu Professor at the School of Data Science of Fudan University
Marc Pollefeys
Spatial AI to help humans and enable robots
 October 15, 2024

Marc Pollefeys Professor of Computer Science at ETH Zurich and the Director of the Microsoft Spatial AI Lab in Zurich
Dinesh Manocha
Robot Navigation in the Wild
 October 14, 2024

Dinesh Manocha Paul Chrisman-Iribe Chair in Computer Science & ECE and Distinguished University Professor at University of Maryland College Park
Jan Buchmann
NLP for Long, Structured Documents
 October 8, 2024

Jan Buchmann Fourth year PhD student in the Ubiquitous Knowledge Processing (UKP) Lab at TU Darmstadt
Michael Yu Wang
Embodied Robot Skills and Good Old Fashioned Engineering
 September 30, 2024

Michael Yu Wang Chair Professor and the Founding Dean of the School of Engineering of the Great Bay University
Mladen Kolar
Confidence sets for Causal Discovery
 September 25, 2024

Mladen Kolar Department Chair and Visiting Professor of Statistics and Data Science, MBZUAI
Cesare Stefanini
AI, Robotics, and the Living: A Research Journey and Future Perspectives
 September 17, 2024

Cesare Stefanini Director of the Biorobotics Institute at Scuola Superiore Sant’Anna (SSSA) in Pontedera, Italy
Abhinav Dhall
Human-Centric Approaches for Multimodal Deepfakes Analysis
 September 13, 2024

Abhinav Dhall Associate Professor (Reader) of Computer Science at Flinders University, Australia
Eliseo Ferrante
Towards Controllable Swarms: Integrating Artificial Intelligence at Microscopic and Macroscopic Scales
 September 11, 2024

Eliseo Ferrante Researcher working at the interface of swarm robotics, statistical physics, and evolutionary biology
Suranga Nanayakkara
Humanizing Technology with Assistive Augmentations
 September 3, 2024

Suranga Nanayakkara Associate Professor at the School of Computing and the Director of the Centre for Holistic Inquiry into Lifelong Learning (CHILL) at National University of Singapore
Holger Pirk
Bring Your Own Kernel! Constructing High-Performance Data Management Systems from Components
 September 2, 2024

Holger Pirk Associate Professor in the Large-Scale Data and Systems group at Imperial College London
Ramesh Raskar
Unlocking Decentralized AI and Vision: Overcoming Incentive Barriers, Orchestration Challenges, and Data Silos
 August 26, 2024

Ramesh Raskar Associate Director and Associate Professor at MIT Media Lab
Tetsunari Inamura
Integrating Virtual Reality and Robotics: Enhancing Human and Robot Experiences in Assistive Technologies
 August 22, 2024

Tetsunari Inamura Professor at the Advanced Intelligence & Robotics Research Center, Brain Science Institute, Tamagawa University, Japan
Hassan Sajjad
Latent Space Exploration for Safe and Trustworthy AI Models
 August 21, 2024

Hassan Sajjad Associate Professor in the Faculty of Computer Science at Dalhousie University, Canada, and the director of the HyperMatrix lab
Chaoyang Song
Super-aligned Machine Intelligence via a Soft Touch
 August 21, 2024

Chaoyang Song Assistant Professor at the Southern University of Science and Technology (SUSTech) in Shenzhen
Hao Dong
Key Research in Embodied AI
 August 19, 2024

Hao Dong Assistant professor, Peking University
Mykel Kochenderfer
Automated Decision Making for Safety Critical Applications
 July 22, 2024

Mykel Kochenderfer Associate Professor of Aeronautics and Astronautics at Stanford University
Krishna Murthy Jatavallabhula
Structured World Models for Robots
 June 7, 2024

Krishna Murthy Jatavallabhula Postdoc at MIT
Pedro Moreno
Past, Present and Future of Speech Technologies
 May 28, 2024

Pedro Moreno Formerly led the ASR R&D Team at Google
Eduardo da Veiga Beltrame
Enabling precision medicine with single cell omics and decentralized clinical studies
 May 23, 2024

Eduardo da Veiga Beltrame Assistant Professor of Computational Biology, MBZUAI
Amir Goharshady
Martingale-based Verification of Probabilistic Programs
 May 21, 2024

Amir Goharshady Assistant Professor of Computer Science and Mathematics at the Hong Kong University of Science and Technology
Feng Liu
Recent Advance of Two-sample Testing and Its Application in AI Security
 May 16, 2024

Feng Liu Lecturer in Machine Learning at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan
Kimon Fountoulakis
Understanding Machine Learning on Graphs: From Node Classification to Algorithmic Reasoning
 May 14, 2024

Kimon Fountoulakis Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo
Debdeep Mukhopadhyay
Hardware Security through the Lens of Dr ML
 May 10, 2024

Debdeep Mukhopadhyay Visiting Professor in the School of Computer Engineering, NYU Abu Dhabi
Artem Shelmanov
Safety of Deploying NLP Models: Uncertainty Quantification of Generative LLMs
 May 6, 2024

Artem Shelmanov Senior Research Scientist at MBZUAI, in the Natural Language Processing Department
Babak Falsafi
Computing in the Post-Moore Era
 April 2, 2024

Babak Falsafi Professor of Computer and Communication Sciences at EPFL and Founder of EcoCloud
Yann LeCun
Objective-Driven AI: Towards Machines that can Learn, Reason, and Plan
 February 16, 2024

Yann LeCun VP & Chief AI Scientist at Meta
Speaker Series