All Previous AI Speaker Series at MBZUAI
The etiology and evolution of complex amplifications in breast cancer
Hosted by: Prof. Aziz Khan
December 10, 2025
Katie Houlahan
Computational Biology
Abstract
Breast cancer is defined clinically by Estrogen Receptor (ER), Progesterone Receptor (PR), and Human Epidermal Growth Factor Receptor 2 (HER2) status, but subtypes based on these receptors only partially capture its biological diversity. We assembled a meta-cohort of 1,828 breast tumours spanning pre-invasive to metastatic stages with whole-genome and transcriptome sequencing. We show that the mutational rearrangement processes driving a subset of ER⁺ tumours are identical to those in HER2⁺ disease, but instead of amplifying ERBB2, they target alternative oncogenes such as MYC, CCND1, and FGFR1. These complex amplifications arise early, in ductal carcinoma in situ, and persist through metastasis, suggesting they are founding events. Integrating germline and tumour data from 5,870 cases, we find that inherited variation influences which tumours can acquire these complex somatic amplifications. Tumours arising in individuals with high germline epitope burden in these loci show reduced amplification, consistent with immune selection against highly antigenic clones. This germline–somatic interaction shapes subtype development, immune landscape, and patient outcome. Together, these data reveal that breast cancer subtypes emerge through the intersection of shared mutational processes and germline-mediated immune editing, linking inherited variation to the evolutionary trajectories of tumour genomes.
Mobile Computational Action Through a Modern AI Lens
Hosted by: Olivier Oullier
December 10, 2025
Daniel Dobos
Human-Computer Interaction
Abstract
What are the advantages and disadvantages of open-source large language models? Where can they already be used efficiently, and how do they help answer two big global societal questions about AI: "Will AI scale faster than any technology before it?" and "What kind of global AI arms race are we currently in?" Examples will be drawn from the Swiss AI model Apertus and from exchanges with other LLM builders, such as the team behind the Falcon model series from the UAE.
Healthcare Agents: Language Model Agents in Health Prediction and Decision-Making
Hosted by: Jianing Qiu, Assistant Professor of Personalized Medicine
December 8, 2025
Yubin Kim
Abstract
Recent advances in foundation models have enabled powerful general-purpose reasoning systems, yet their application to health remains limited by safety concerns, hallucination, and the inability to operate over long-horizon physiological trajectories. In this talk, I will present a line of research that builds from single-agent systems to multi-agent systems capable of clinical reasoning, wearable understanding, and scientific discovery. Together, these advances outline a path toward the next generation of safe, interpretable, and continuously learning personal health agents.
Chemical Language Models and Reinforcement Learning for Drug Design
Hosted by: Prof. Eduardo Beltrame
November 27, 2025
Morgan Cole Thomas
Computational Biology
Watch Now
Abstract
Chemical language models (CLMs) with reinforcement learning (RL), although relatively simple, remain the most widely adopted and robust generative models for de novo molecular design in industry. In this work, I present advances in the RL learning efficiency of these models that enable the use of more computationally expensive oracles, investigate cooperative agent learning and scaling laws in molecular rediscovery, and introduce inference-time methods to constrain CLMs for practical scaffold elaboration and fragment linking. In addition, I will share successful case studies that led to the discovery of novel binders of the adenosine A2A receptor with an 88% success rate. Lastly, I will compare these approaches with newer generative models that conduct de novo design in 3D, and postulate where the research is going and where it should go.
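To make the CLM-plus-RL setup concrete, here is a deliberately toy sketch of the policy-gradient loop the abstract refers to: a one-step categorical "language model" over a tiny alphabet is fine-tuned with REINFORCE against a made-up scoring oracle that rewards carbon-rich strings. The alphabet, oracle, and hyperparameters are all hypothetical illustrations, not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = list("CNOF")           # toy alphabet standing in for SMILES tokens
logits = np.zeros(len(vocab))  # a minimal categorical "policy"

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def oracle(idx):
    # hypothetical scoring oracle: fraction of 'C' tokens in the string
    return float((idx == 0).mean())

lr, length = 0.5, 8
for _ in range(300):
    p = softmax(logits)
    idx = rng.choice(len(vocab), size=length, p=p)  # sample a "molecule"
    r = oracle(idx)
    # REINFORCE: grad of sum_t log p(idx_t) w.r.t. logits = counts - length * p
    counts = np.bincount(idx, minlength=len(vocab))
    logits += lr * r * (counts - length * p)

p_final = softmax(logits)  # policy now favours the rewarded token 'C'
```

In practice the policy is an autoregressive network over SMILES strings and the oracle is a docking score or property predictor, but the reward-weighted log-likelihood update has the same shape.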
Toward New Directions for an Anthropology‑Informed HCI/HCAI
Hosted by: Prof. Elizabeth Churchill
November 25, 2025
Davide Casciano
Human-Computer Interaction
Abstract
Anthropology has been part of Human–Computer Interaction (HCI) since at least the 1980s, fostering interdisciplinary collaborations that laid the foundations for a productive dialogue that continues today. Yet many of its current applications remain limited, both methodologically and theoretically — two dimensions deeply intertwined in anthropological practice. In an era defined by artificial intelligence and by increasing calls for genuinely human-centered approaches, I argue that contemporary anthropology can reshape the conceptual and ethical coordinates of both HCI and HCAI. By enabling deeper reflection on what it means to be “human” and on how we understand the “contexts” in which technologies are designed and adopted, anthropology provides critical tools for engaging with technological complexity. As artificial intelligence grows increasingly opaque, often eluding even its developers, anthropology offers unique means to explore socio-technical complexity — conceived as an assemblage of relations and dense meanings among humans and non-humans. This perspective supports the development of responsible design and research practices, capable of anticipating innovation’s impacts rather than merely reacting ex post, while rethinking human–machine interaction as co-constitutive relationships in which human and more-than-human layers — consciously or not, visibly or subtly, at every level — shape the global reality we inhabit and co-produce, from Silicon Valley to the smallest towns in Africa. In this sense, not only can HCI and HCAI continue to evolve through anthropological insights, but anthropology itself can be revitalized through new interdisciplinary hybridizations within academic and research environments prepared to address the challenges posed by continuously emerging technologies.
The Limitations of Data, Machine Learning & Us
Hosted by: Prof. Elizabeth Churchill
November 25, 2025
Ricardo Baeza-Yates
Human-Computer Interaction
Watch Now
Abstract
Machine learning (ML), particularly deep learning, is being used everywhere. However, it is not always used well, ethically, or scientifically. In this talk, we first take a deep dive into the limitations of supervised ML and of data, its key input. We cover small data, datafication, all types of biases, predictive-optimization issues, evaluating success instead of harm, and pseudoscience, among other problems. The second part is about our own limitations in using ML, including different types of human incompetence: cognitive biases, unethical applications, lack of administrative competence, misinformation, and the impact on mental health. In the final part, we discuss regulation of the use of AI and responsible-AI principles that can mitigate the problems outlined above.
Integrating Large-Scale Genomics and Artificial Intelligence in Personalized Medicine
Hosted by: Prof. Yulia Medvedeva
November 25, 2025
Alexander Rakitko
Computational Biology
Watch Now
Abstract
Over the past decade, Genotek Ltd. has established the largest genetic testing facility in Eastern Europe, pioneering the integration of large-scale sequencing, artificial intelligence, and clinical bioinformatics. In this talk, we will begin by presenting our progress in developing and applying variable-depth whole-genome sequencing (vdWGS) technology — a novel approach that significantly outperforms microarray-based genotyping in accuracy, coverage, and efficiency. For more than 15 years, our team has been developing computational frameworks for personal DNA testing and the interpretation of individual genetic data. We will discuss advances in polygenic risk scoring, machine learning models for complex disease prediction, population genetics and local ancestry inference, as well as applications in nutrigenetics, sports genetics, and pharmacogenetics. Our unique data collection — encompassing over 500,000 genomes linked with electronic health records and questionnaires — represents an invaluable resource for biomedical research. We will highlight recent studies conducted at Genotek Ltd.: GWAS, oral microbiome analysis for complex diseases (including type 1 and type 2 diabetes), deep learning methods for modeling epistatic effects, graph neural networks for networks of genetic relatives, and more. In addition, we will discuss the implementation of AI technologies in telemedicine and deep learning for MRI image analysis. Genotek’s research has been published in leading journals, including Nature, Nature Genetics, EClinicalMedicine (The Lancet), and Scientific Reports. The company actively participates in international collaborations, such as the COVID-19 Host Genetics Initiative, and maintains research partnerships with academic institutions including the Charité Clinic, the University of Berlin, and the University of Copenhagen.
Finally, we will share our experience in developing bioinformatics educational programs and supervising student research projects based at Genotek.
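Since the abstract leans on polygenic risk scoring, a minimal sketch may help: a polygenic risk score is a weighted sum of a person's effect-allele dosages, with weights (per-allele effect sizes) taken from GWAS summary statistics. The numbers below are invented for illustration and have nothing to do with Genotek's actual pipeline.

```python
import numpy as np

# Hypothetical per-variant effect sizes (e.g. log-odds per effect allele)
betas = np.array([0.12, -0.05, 0.30, 0.08])

# Genotype dosages: copies (0/1/2) of the effect allele per person per variant
dosages = np.array([
    [2, 1, 0, 1],   # person A
    [0, 2, 1, 2],   # person B
])

# PRS = dosage-weighted sum of effect sizes, one score per person
prs = dosages @ betas
print(prs)  # [0.27 0.36]
```

Real pipelines add ancestry-aware weight adjustment and score standardization against a reference population, but the core computation is this dot product.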
Designing Interactions to Empower Thoughtful Human-AI Co-Creation
Hosted by: Prof. Elizabeth Churchill
November 24, 2025
Frederic Gmeiner
Human-Computer Interaction
Abstract
Generative AI (GenAI) promises to transform how we think, create, and solve problems. Yet its current integration into professional practice remains limited. Users frequently face misalignment between outputs and intentions, uncertainty in how to guide the system, and reduced cognitive engagement when tasks are overly delegated to automation. These issues limit GenAI’s impact in precisely the kinds of complex, open-ended domains where human creativity and judgment matter most. My research addresses these challenges by rethinking human-AI interaction: how can we design systems that amplify rather than offload human cognitive work? Drawing on the long-standing HCI vision of augmenting human intellect, I explore interaction techniques that scaffold reflection, sharpen problem formulation, and support deliberate engagement in tasks where human judgment and creativity are essential. I will present examples from recent projects—including SocraBot, a voice-based agent for reflective engagement in mechanical design, and IntentTagger, a patented input technique for steering AI-generated content in PowerPoint—that demonstrate how new forms of interaction can unlock more productive, empowering human-AI co-creation. I will end by outlining a forward-looking agenda for research and education—advancing human-centered AI systems, methods, and curricula that empower people to think more deeply, create more meaningfully, and innovate more responsibly in the age of intelligent machines.
Designing Intelligent Interactions for Public Spaces
Hosted by: Prof. Elizabeth Churchill
November 24, 2025
Callum Parker
Human-Computer Interaction
Watch Now
Abstract
Public spaces, from city streets to virtual worlds, are increasingly shaped by systems that sense, predict, and adapt to how we move, communicate, and experience our surroundings. As these technologies become embedded in everyday environments, a critical question emerges: how can we design interfaces that are intelligent while also being inclusive and responsive to human needs in these shared contexts? In this talk, I will draw on a series of projects exploring how people interact with autonomous and intelligent systems. These include studies on communication between pedestrians and autonomous vehicles, adaptive public displays that respond to behaviour and context, and inclusive environments within the metaverse. I conclude by reflecting on how AI is transforming our collective experience of space, not only through automation and sensing, but also through its capacity to personalise and, at times, fragment the environments we share. As intelligent systems increasingly adapt to individuals, our challenge as designers and researchers is to ensure that AI enhances connection rather than isolation, supporting a future where technology deepens rather than divides our shared public environments.
On estimating and exploiting data intrinsic dimension
Hosted by: Prof. Eric Moulines
November 21, 2025
Antonietta Mira
Statistics and Data Science
Watch Now
Abstract
Real-world datasets often exhibit a high degree of (possibly) non-linear correlations and constraints among their features. Consequently, despite residing in a high-dimensional embedding space, the data typically lie on a manifold with a much lower intrinsic dimension (ID), which—in the presence of noise—may depend on the scale at which the data are analyzed. This situation raises interesting questions: How many variables or combinations thereof are necessary to describe a real-world dataset without significant information loss? What is the appropriate scale at which one should analyze and visualize data? Although these two issues are often considered unrelated, they are in fact strongly entangled and can be addressed within a unified framework. We introduce an approach in which the optimal number of variables and the optimal scale are determined self-consistently, recognizing and bypassing the scale at which the data are affected by noise. To this end, we estimate the data ID in an adaptive manner. Sometimes, within the same dataset, it is possible to identify more than one ID, meaning that different subsets of data points lie on manifolds with different IDs. Identifying these manifolds provides a clustering of the data. Examples of exploiting the data ID will be presented, ranging from gene expression, protein folding, and pandemic evolution to fMRI, financial, and network data. All these real-world applications show how a simple topological feature such as the ID allows us to uncover a rich data structure and improves our insight into subsequent statistical analyses.
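As a hedged illustration of neighbour-based ID estimation (the speaker's adaptive method may differ), the TwoNN-style estimator below infers the intrinsic dimension from the ratio of each point's second- to first-nearest-neighbour distance, whose distribution depends only on the ID.

```python
import numpy as np

def twonn_id(X):
    """Estimate intrinsic dimension from second/first nearest-neighbour
    distance ratios (a TwoNN-style maximum-likelihood sketch)."""
    n = len(X)
    # full pairwise Euclidean distance matrix (fine for small n)
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)   # exclude self-distances
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]        # ratio r2 / r1 per point
    return n / np.log(mu).sum()   # MLE under mu ~ Pareto(1, ID)

rng = np.random.default_rng(1)
# 2-D data linearly embedded in a 5-D ambient space
Z = rng.normal(size=(1000, 2))
A = rng.normal(size=(2, 5))
id_est = twonn_id(Z @ A)
print(id_est)  # close to 2, the manifold dimension, not 5
```

Even though the ambient dimension is 5, the estimator recovers a value near 2; running it on decimated subsamples is one simple way to probe scale dependence.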
Catalyzing computing for brain-computer interfaces
Hosted by: Prof. Abdulrahman Mahmoud
November 21, 2025
Abhishek Bhattacharjee
Computer Science
Abstract
Brain–computer interfaces have the potential to treat debilitating neurological disorders, reveal new insights into brain function, and ultimately redefine the relationship between biological and artificial intelligence. Realizing this vision requires computer systems that carefully balance power, latency, and bandwidth to decode neural activity, stimulate neurons, and control assistive devices with precision. This talk presents my group’s design of a standardized, general-purpose computer architecture for future brain interfaces. Our architecture supports the treatment of multiple neurological conditions—most notably epilepsy and movement disorders—and is built around end-to-end hardware acceleration, spanning from the microarchitectural level to distributed systems. We validate these ideas through custom chip implementations and real-time experiments interfacing our chips with the brains of two human patients in the operating room.
When Agents Trade: Live Multi-Market Benchmarking of LLM-Driven Trading Systems
Hosted by: Prof. Steve Liu
November 20, 2025
Jimin Huang
Watch Now
Abstract
As large language models (LLMs) evolve beyond static reasoning toward dynamic decision-making, their application in real-time trading environments opens a new frontier for financial AI. This talk introduces the Agent Market Arena (AMA), the first real-time, lifelong benchmark for evaluating LLM-driven trading agents across multiple markets. Developed by The Fin AI and collaborators at Columbia, Harvard, and other institutions, AMA compares diverse agent architectures such as InvestorAgent, TradeAgent, HedgeFundAgent, and DeepFundAgent, powered by LLMs including GPT-4.1, Claude-3.5, and Gemini-2.0. Using verified live data from stocks and cryptocurrencies, AMA reveals that profitability depends more on agent architecture and coordination logic than on the LLM backbone itself. The results highlight how memory, debate, and risk-control mechanisms shape financial decision-making, paving the way for more adaptive and cooperative AI traders. Click here for my slides: https://docs.google.com/presentation/d/1VrgSciscCD2UKlp0VXCBX2dqCJPzoBgt/edit?usp=drive_link&ouid=107320101831769930525&rtpof=true&sd=true
On nonparametric estimation of the interaction function in particle system models
Hosted by: Maxim Panov
November 20, 2025
Mark Podolskij
Statistics and Data Science
Watch Now
Abstract
This talk discusses the challenging problem of nonparametric estimation of the interaction function in diffusion-type particle system models. We introduce an estimation method based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors of the proposed procedure, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the estimation error of the interaction function converges to zero at a parametric rate that is minimax optimal. This result is rather surprising given the complexity of the underlying estimation problem and the rather large class of interaction functions for which the parametric rate holds. Furthermore, we investigate convergence rates in the conventional $L^2$-norm and discuss their optimality in some cases. The presentation is based on joint work with D. Belomestny and S.-Y. Zhou (https://arxiv.org/pdf/2402.14419).
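For context, a diffusion-type interacting particle system with interaction function $b$ is commonly written in the following generic (McKean–Vlasov-type) form; this is a standard illustration assumed here, and the linked arXiv paper should be consulted for the exact model studied in the talk:

```latex
\[
  dX_t^i \;=\; \frac{1}{N}\sum_{j=1}^{N} b\!\left(X_t^i - X_t^j\right)\,dt
  \;+\; \sigma\,dW_t^i, \qquad i = 1,\dots,N,
\]
```

where the $W^i$ are independent Brownian motions and the nonparametric estimation target is the unknown function $b$.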
Towards Human-Like Machines: The Journey of Humanoids from Research to Deployment
Hosted by: Prof. Yoshihiko Nakamura
November 20, 2025
Abderrahmane Kheddar
Robotics
Watch Now
Abstract
Humanoid robots have matured from research laboratories into increasingly capable systems that promise to interact, assist, and even collaborate with humans in real-world settings. In this talk, I chart the evolution of humanoid machines, from early research prototypes focused on balance, locomotion and manipulation, to today's multimodal platforms aiming to operate alongside people in factories, homes, healthcare and other services. Drawing on our work in multi-contact locomotion, haptic interaction, embodiment and human-robot teaming, I highlight key enablers such as contact-aware control, vision- and force-based interaction, adaptable posture and locomotion, and thought-based or tele-operated embodiment. At the same time, I cover the critical challenges that remain: AI physical embodiment, safe and reliable deployment in human-centred environments, learning and adaptation in unstructured settings, and the economic pathway from research to fielded machines. Looking ahead, I propose that the next stage will hinge on seamless human-robot symbiosis: humanoids as cyber-physical avatars, physical companions, and general-purpose agents embedded in the digital society. By mapping this trajectory from research to deployment, this talk offers a roadmap for how we might realise truly human-like machines, not in appearance alone, but in purpose, interaction, adaptability and societal integration.
Billion-Parameter Foundation Model for Single-Cell Transcriptomics
Hosted by: Prof. Jin Tian
November 19, 2025
Pengtao Xie
Machine Learning
Watch Now
Abstract
Single-cell RNA sequencing (scRNA-seq) has revolutionized the study of cellular heterogeneity by providing gene expression data at single-cell resolution, uncovering insights into rare cell populations, cell-cell interactions, and gene regulation. Foundation models pretrained on large-scale scRNA-seq datasets have shown great promise in analyzing such data, but existing approaches are often limited to modeling a small subset of highly expressed genes and lack the integration of external gene-specific knowledge. To address these limitations, we present sc-Long, a billion-parameter foundation model pretrained on 48 million cells. sc-Long performs self-attention across the entire set of 28,000 genes in the human genome. This enables the model to capture long-range dependencies between all genes, including lowly expressed ones, which often play critical roles in cellular processes but are typically excluded by existing foundation models. Additionally, sc-Long integrates gene knowledge from the Gene Ontology using a graph convolutional network, enriching its contextual understanding of gene functions and relationships. In extensive evaluations, sc-Long surpasses both state-of-the-art scRNA-seq foundation models and task-specific models across diverse tasks, including predicting transcriptional responses to genetic and chemical perturbations, forecasting cancer drug responses, and inferring gene regulatory networks.
Why Wait for AGI? Artificial Superintelligence is Here and Solving Real Problems
Hosted by: Prof. Preslav Nakov
November 18, 2025
Veselin Stoyanov
Natural Language Processing
Watch Now
Abstract
Research in the AI community remains fixated on achieving Artificial General Intelligence. Whether and when autonomous AGI will arrive is a matter of dispute. At the same time, Artificial Superintelligence (ASI) already exists in narrow but valuable domains and it is amazing. Today's AI systems demonstrate genuinely superhuman capabilities, processing millions of documents in seconds and extracting insights with breadth and speed that humans cannot match. In this talk, I will first demonstrate ASI in action powering Lightfield's AI CRM, which launched just recently. Our system represents Relationship Superintelligence by understanding relationship dynamics across vast interaction histories. Second, I'll share a research project with colleagues at MBZUAI on evidence-based generation. While LLMs can already process vast amounts of text with superhuman capability, they are not always reliable and have limitations on effective input size. To fully enable this ASI potential, models must be able to provide evidence, that is, precise references to where information comes from, as well as process increasingly larger amounts of information at decreasing computational cost. I will discuss how evidence-based generation enables these advances and share some current results.
Toward Interpretable and Inclusive Speech Technology for Healthcare
Hosted by: Prof. Preslav Nakov
November 17, 2025
Zhengjun Yue
Natural Language Processing
Watch Now
Abstract
Speech is a powerful and natural channel for human communication. It reflects not only a person’s linguistic ability, but also their cognitive, neurological, and emotional state. AI-driven speech technology is transforming how people access services, receive care, and engage with information. However, mainstream systems remain largely inaccessible to individuals with speech impairments, particularly those affected by neurological, developmental, or motor disorders. These underrepresented groups often find their speech excluded or misinterpreted. This technological gap not only limits access to digital services, but also impedes the development of reliable tools for health monitoring, clinical decision support, and communicative assistance. My research is centered on interpretable AI-driven speech-oriented multimodal technology for healthcare, with a mission to make voice a clinically useful and socially inclusive biomarker. In this talk, I will present my research and recent progress on automatic detection, recognition and analysis of pathological and atypical speech, highlighting methods that enhance robustness and interpretability. I will also discuss how advances in speech and language modeling can enable context-aware, explainable, and embodied assistive systems, for instance, through social robots that support pathological speakers and other underrepresented user groups.
Towards a True AI Partner: Fusing Learning and Knowledge for Trustworthy Human-AI Synergy
Hosted by: Prof. Preslav Nakov
November 14, 2025
Yaqi Xie
Natural Language Processing
Watch Now
Abstract
To move beyond tools and towards true partners, AI systems must bridge the gap between perception-driven deep learning and knowledge-based symbolic reasoning. Current approaches excel at one or the other, but not both, limiting their reliability and preventing us from fully trusting them. My research addresses this challenge through a principled fusion of learning and reasoning, guided by the principle of building AI that is "Trustworthy by Design." I will first describe work on embedding formal logic into neural networks, creating models that are not only more robust and sample-efficient, but also inherently more transparent. Building on this foundation, I will show how neuro-symbolic integration enables robots to reason about intent, anticipate human needs, and perform task-oriented actions in unstructured environments. Finally, I will present a novel training-free method that leverages generative models for self-correction, tackling the critical problem of hallucination in modern AI. Together, these contributions lay the groundwork for intelligent agents that can be instructed, corrected, and ultimately trusted, agents that learn from human knowledge, adapt to real-world complexity, and collaborate seamlessly with people in everyday environments.
Cellular Foundation Models in Biology - Towards understanding disease and therapeutic targets
Hosted by: Prof. Natasa Przulj
November 13, 2025
Victor Curean
Computational Biology
Watch Now
Abstract
The rapid growth of open-access omics data has enabled large-scale exploration of cellular states across species, tissues, and molecular modalities. Building on these resources, cellular foundation models use self-supervised learning to derive general cell representations that can be adapted to diverse downstream biological tasks, including the prediction of responses to chemical and genetic perturbations. This presentation reviews their use in modeling cellular perturbations, describing common learning frameworks, data requirements, and evaluation practices, as well as key challenges specific to single-cell data. We note emerging gaps between reported results and standardized evaluations, which highlight persistent issues in how performance is quantified across studies and benchmarks. Overall, this presentation provides an overview of the current landscape of single-cell foundation models, emphasizing both their progress and limitations in capturing perturbation-specific responses.
Toward Ubiquitous HCI: Connecting Minds, Bodies, and Environment Through Wearable Sensing
Hosted by: Prof. Elizabeth Churchill
November 11, 2025
Simon Ladouce
Human-Computer Interaction
Watch Now
Abstract
Designing the next generation of human-computer interactions requires a deeper understanding of how cognition unfolds in context, shaped not only by the user’s mental and bodily states but also by their dynamic interaction with the surrounding environment. In this talk, I present a research agenda that brings together cognitive neuroscience, brain-computer interfaces (BCIs), and wearable sensing to inform the design of ubiquitous, adaptive, and unobtrusive interactive systems. Using tools such as mobile EEG, eye-tracking, motion sensors, and environment-aware computing, my work investigates how people perceive, act, and make decisions in natural settings, from high-load operational tasks such as flying a plane to everyday behaviors like walking around a city or eating a meal. This approach moves beyond screen-based interaction to develop systems that respond to users in real time, based on the continuous coupling between brain, body, and environment. By embedding cognitive and contextual awareness into system design, we can move toward calm, seamless technologies that adapt fluidly to the user’s moment-to-moment needs.
Communication-Efficient Algorithms for Federated Learning
Hosted by: Prof. Eduard Gorbunov
November 7, 2025
Sebastian Stich
Statistics and Data Science
Watch Now
Abstract
Federated learning has emerged as an important paradigm in modern distributed machine learning. Unlike traditional centralized learning, where models are trained using large datasets stored on a central server, federated learning keeps the training data distributed across many clients, such as phones, network sensors, hospitals, or other local information sources. In this setting, communication-efficient optimization algorithms are crucial. We provide a brief introduction to local update methods developed for federated optimization and discuss their worst-case complexity. Surprisingly, these methods often perform much better in practice than predicted by theoretical analyses using classical assumptions. Recent years have revealed that their performance can be better described using refined notions that capture the similarity among client objectives. In this talk, we introduce a generic framework based on a distributed proximal point algorithm, which consolidates many of our insights and allows for the adaptation of arbitrary centralized optimization algorithms to the convex federated setting (even with acceleration). Our theoretical analysis shows that the derived methods enjoy faster convergence if the degree of similarity among clients is high. We conclude with a discussion of extensions and open challenges for non-convex objectives and for scaling federated learning to modern large models.
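As a toy illustration of the local-update methods the abstract refers to, the following is a minimal FedAvg-style sketch with hypothetical quadratic client objectives (names like `local_sgd` are my own, not the speaker's framework): each client runs several gradient steps locally, and the server averages the iterates once per communication round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical convex setup: client i holds f_i(x) = 0.5 * ||x - b_i||^2,
# so the global minimizer is the mean of the b_i.
n_clients, dim = 10, 5
b = rng.normal(size=(n_clients, dim))
x_star = b.mean(axis=0)

def local_sgd(rounds=50, local_steps=10, lr=0.1):
    """FedAvg-style local updates: each client takes several gradient
    steps on its own objective; the server then averages the iterates.
    Only one vector per client is communicated per round."""
    x = np.zeros(dim)
    for _ in range(rounds):
        client_iterates = []
        for i in range(n_clients):
            xi = x.copy()
            for _ in range(local_steps):
                grad = xi - b[i]          # gradient of f_i at xi
                xi -= lr * grad
            client_iterates.append(xi)
        x = np.mean(client_iterates, axis=0)  # one communication round
    return x

x_out = local_sgd()
print(np.linalg.norm(x_out - x_star))  # distance to the global optimum
```

With identical quadratics up to shifts, the averaged iterate contracts toward the global minimizer every round; heterogeneity among the `b_i` (client dissimilarity) is exactly what slows such schemes down in the general theory discussed in the talk.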
From AdamW to Muon: Bridging Theory and Practice of Geometry-Aware Optimization for LLMs and Beyond
Hosted by: Prof. Eduard Gorbunov
November 4, 2025
Egor Shulgin
Statistics and Data Science
Watch Now
Abstract
Optimization remains a crucial driver of progress in modern machine learning: it governs whether large models train reliably and how efficiently they use compute. This talk examines Muon, a geometry-aware alternative to AdamW that replaces element-wise adaptation with layer-wise, matrix-aware updates, an opportunity to reimagine optimization for deep learning in a way that better matches practice and respects network structure. In large-scale practice, Muon has begun to displace AdamW, offering stronger performance, better hyperparameter transferability, and lower memory overhead across LLMs, diffusion, and vision models. We aim to advance our understanding of deep learning through the lens of optimization, grounding the analysis in how these methods are actually used. I will present Gluon, a unifying, layer-aware framework together with a more general, geometry-based model that captures the heterogeneous behavior of deep networks across layers and along training trajectories. Gluon replaces uniform, global assumptions with a per-layer description that tracks training dynamics and respects network structure. Measured during language-model training, this model closely tracks observed smoothness and reveals pronounced variation across layers and blocks, phenomena that classical assumptions miss. The framework yields convergence guarantees under these broader conditions and helps explain when structured, per-layer methods can outperform classical approaches. Building on this lens, I then move from the idealized analysis of Muon, which assumes an exact SVD step, to the practical, approximate version used in codebases, where orthogonalization is performed with a few Newton–Schulz iterations rather than an expensive full SVD. Our theory predicts that better approximations lead to better performance (faster convergence), and in practice they permit larger learning rates and widen the stability region. Taken together, these results reduce the theory–practice gap for geometry-aware methods.
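The Newton–Schulz orthogonalization step mentioned above can be sketched as follows. This uses the classical cubic iteration for illustration (production Muon implementations use a tuned quintic polynomial; this is not code from any particular codebase): after normalizing so all singular values lie in (0, 1], the iteration drives every singular value toward 1, i.e. it approximates the orthogonal polar factor of the gradient matrix.

```python
import numpy as np

def newton_schulz_orth(G, steps=30):
    """Approximate the orthogonal polar factor of a square matrix G with
    the cubic Newton-Schulz iteration X <- 1.5*X - 0.5*X X^T X.
    Dividing by the Frobenius norm first puts all singular values in
    (0, 1], inside the iteration's convergence region (0, sqrt(3))."""
    X = G / np.linalg.norm(G)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * X @ X.T @ X
    return X

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4))
Q = newton_schulz_orth(G)
print(np.linalg.norm(Q @ Q.T - np.eye(4)))  # near zero for enough steps
```

Truncating to a few iterations yields only an approximate orthogonalization, which is exactly the inexactness the talk's analysis models in place of the idealized SVD step.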
Heterogeneous Multivariate Temporal Data Analytics with Time Intervals Related Patterns
Hosted by: Prof. Nataša Pržulj
November 4, 2025
Robert Moskovitch
Computational Biology
Watch Now
Abstract
Analysis of heterogeneous multivariate time-stamped data is one of the most challenging topics in data science, relevant to many real-life longitudinal data problems in domains such as cybersecurity, healthcare, predictive maintenance, and sports. Time-stamped data can be sampled regularly, commonly by electronic means, but also irregularly, often recorded manually, as is common in biomedical data, whether dense as in the ICU or sparse as in Electronic Health Records (EHR). Additionally, raw temporal data can represent durations of a continuous or nominal value, represented by time intervals. I will present how time-point series are transformed into meaningful symbolic time intervals using temporal abstraction, bringing all the temporal variables, despite their various representations, into a uniform representation. Then KarmaLego (IEEE ICDM 2015) and TIRPClo (AAAI 2021, DMKD 2023), fast time-interval mining algorithms for the discovery of non-ambiguous Time Intervals Related Patterns (TIRPs) represented by Allen's temporal relations, will be introduced. TIRPs can serve several purposes: temporal knowledge discovery, or as features for the classification of heterogeneous multivariate temporal data (KAIS 2015), with increased accuracy when using the Temporal Discretization for Classification (TD4C) method (DMKD 2015). In this talk, I will cover our recent developments and publications in faster TIRP mining, visualization of TIRP discovery (JBI 2022, Cell/Patterns 2025), and the very recent novel use of TIRPs for continuous event prediction (SDM 2024, ML 2025), based on continuously predicting a pattern's completion, and more.
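For intuition, the pairwise building block of a TIRP is the Allen relation between two symbolic time intervals. The sketch below is a minimal illustration of classifying that relation, not the KarmaLego or TIRPClo implementation; the interval endpoints and the clinical example are hypothetical.

```python
# Illustrative sketch: classify the Allen relation of interval A
# relative to interval B, the pairwise building block of TIRPs.
# Only the seven "forward" relations are enumerated; inverses fall
# through to "other".

def allen_relation(a_start, a_end, b_start, b_end):
    """Return the Allen relation of interval A relative to interval B."""
    if a_end < b_start:
        return "before"
    if a_end == b_start:
        return "meets"
    if a_start == b_start and a_end == b_end:
        return "equals"
    if a_start == b_start and a_end < b_end:
        return "starts"
    if a_start > b_start and a_end == b_end:
        return "finishes"
    if a_start > b_start and a_end < b_end:
        return "during"
    if a_start < b_start and b_start < a_end < b_end:
        return "overlaps"
    return "other"  # inverse relations, not enumerated here

# e.g. a fever episode over days 0-5 and a tachycardia episode over days 3-9:
print(allen_relation(0, 5, 3, 9))  # overlaps
```

A TIRP miner then searches for frequent conjunctions of such relations across many entities, exploiting the transitivity of the relations to prune the search.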
From small-scale generative images to global-scale picture of HCI
Hosted by: Prof. Elizabeth Churchill
November 3, 2025
Jonas Oppenlaender
Human-Computer Interaction
Abstract
This talk presents a retrospective on my research into “prompt engineering” for text-to-image (TTI) generation – an example where humans were creatively empowered by generative AI. I trace how online communities were instrumental in shaping the practice of prompting and how challenges persist to this day in the creative use of TTI systems. While TTI generative systems enable anyone to produce digital images and artworks through language, this apparent democratization conceals deeper issues of control, authorship, and alignment. I argue that prompt engineering is not merely a creative technique but a symptom of a broader misalignment between human intent and system behavior. Extending this lens, I discuss how prompting has diffused into the wider research field of Human-Computer Interaction (HCI), where it risks fostering tool-driven novelty at the expense of conceptual progress and meaningful insight. What is harmful is not that prompting fails to translate human intent efficiently, but that it is brittle and encodes a mode of interaction that prioritizes prompt tuning and short-lived prototyping over deeper understanding. I conclude by outlining a vision for reflective and scalable stewardship in HCI research.
From Splitting to Variance Reduction: A Primal–Dual Perspective on Optimization Algorithms
Hosted by: Prof. Eduard Gorbunov
October 31, 2025
Laurent Condat
Statistics and Data Science
Abstract
Convex nonsmooth optimization problems in high-dimensional spaces have become ubiquitous. Primal–dual proximal algorithms are particularly well-suited to solving them: they rely on simple iterative operations that handle the terms of the objective function separately. Their design is grounded in the framework of monotone inclusions, where splitting techniques provide a powerful way to decompose a complex problem involving multiple terms into simpler subproblems that can be solved and combined efficiently. Meanwhile, stochastic algorithms such as Stochastic Gradient Descent (SGD) have been central to the success of machine learning and artificial intelligence. Modern variance-reduced methods enhance these algorithms by counteracting the noise inherent to stochastic updates, enabling convergence to exact solutions rather than oscillation around them. In this talk, I will highlight the deep connections between splitting and variance reduction: the dual variables in primal–dual methods and the control variates in variance-reduced stochastic algorithms play remarkably similar roles, revealing a unifying perspective on these seemingly distinct areas.
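The control-variate idea behind variance reduction can be made concrete in a few lines. The sketch below is a minimal SVRG-style loop on a small least-squares problem, written as an illustration of the mechanism the talk connects to dual variables; the problem sizes, step size, and epoch count are arbitrary choices, not anything from the talk.

```python
import numpy as np

# Minimal sketch of an SVRG-style variance-reduced stochastic gradient
# method on least squares f(x) = (1/2n) * ||A x - b||^2.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                      # noiseless, so the minimizer is x_true

def grad_i(x, i):                   # stochastic gradient from sample i
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):                   # full-batch gradient
    return A.T @ (A @ x - b) / n

x, step = np.zeros(d), 0.01
for epoch in range(30):
    x_snap = x.copy()               # snapshot point
    g_snap = full_grad(x_snap)      # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # Control variate: grad_i(x_snap, i) - g_snap has zero mean over i,
        # so subtracting it reduces variance without introducing bias.
        g = grad_i(x, i) - grad_i(x_snap, i) + g_snap
        x -= step * g

print("error:", np.linalg.norm(x - x_true))
```

The corrected gradient remains an unbiased estimate of the full gradient, but its variance vanishes as the iterates approach the snapshot, which is exactly the role the dual variables play in primal-dual splitting methods under the unifying view of the talk.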
Toward Neuro-Inspired AI: Sparse Data, Modular Networks, and Stream-Based Continual Learning
Hosted by: Prof. Chih-Jen Lin
October 29, 2025
Constantine Dovrolis
Machine Learning
Abstract
How can we design learning systems that resemble the brain—able to adapt continually, learn from streams, and generalize without a flood of labeled data? This talk explores recent advances in sparse and modular neural networks that push machine learning in that direction. By selecting only the most informative experiences from a stream, enforcing sparsity to balance stability and plasticity, and leveraging modular structure to reduce interference and improve efficiency, we can move toward models that learn more like animals and humans. The focus is not on scaling up to larger black boxes, but on rethinking how learning itself happens under constraints. The result is a neuro-inspired agenda for machine learning that emphasizes adaptability, efficiency, and robustness in open-ended environments.
Human-AI Alignment: Philosophy, Perspectives, and Practice
Hosted by: Prof. Elizabeth Churchill
October 29, 2025
Tiffany Knearem
Human-Computer Interaction
Abstract
Curious about how we can design AI systems that truly center human values? This talk introduces Bidirectional Human-AI Alignment, which posits alignment as a dynamic, mutual process that goes beyond simply integrating human goals into AI. By balancing AI-centered and human-centered perspectives, we can preserve human agency, foster critical engagement, and adapt societal approaches to AI that benefit humanity. To ground the discussion, we will look at a case study of how AI is being used to support healthcare decision-making.
Advancing Spatio-Temporal Statistics in Geo-Environmental Data Science through Deep Learning and High Performance Computing
Hosted by: Prof. Souhaib Ben Taieb
October 22, 2025
Ying Sun
Statistics and Data Science
Abstract
In this talk, I will discuss the contributions and ongoing research of my Environmental Statistics Research Group in the area of spatio-temporal statistics, with a particular focus on leveraging deep learning and high-performance computing for spatio-temporal analysis in Geo-Environmental Data Science. I will introduce innovative software tools we have developed, such as ExaGeoStat, ParallelVecchiaGP, and DeepKriging, which support the analysis of large-scale geostatistical datasets. I will also showcase environmental applications to air quality modeling and prediction.
High-Performance Statistical Computing: The Case of ExaGeoStat for Large-Scale Spatial Data Science
Hosted by: Prof. Souhaib Ben Taieb
October 20, 2025
Marc Genton
Statistics and Data Science
Abstract
The new field of High-Performance Statistical Computing (HPSC) reflects the emergence of a statistical computing community focused on working with large computing platforms and producing software for various applications. Spatial data science, for example, rests on several fundamental problems: 1) spatial Gaussian likelihood inference; 2) spatial kriging; 3) Gaussian random field simulations; 4) multivariate Gaussian probabilities; and 5) robust inference for spatial data. These problems become very challenging when the number of spatial locations grows large. Moreover, they are the cornerstone of more sophisticated procedures involving non-Gaussian distributions, multivariate random fields, or space-time processes. Parallel computing becomes necessary to avoid the computational and memory restrictions associated with large-scale spatial data science applications. In this talk, I will demonstrate how high-performance computing (HPC) can provide solutions to these problems using tile-based linear algebra, tile low-rank approximations, and multi- and mixed-precision computational statistics. I will introduce ExaGeoStat, and its R version ExaGeoStatR, a powerful HPSC software package that can perform exascale (10^18 flops/s) geostatistics by exploiting existing parallel computing hardware, from shared-memory systems, possibly equipped with GPUs, to distributed-memory systems, i.e., supercomputers. I will then describe how ExaGeoStat can be used to design competitions on spatial statistics for large datasets and to benchmark new methods developed by statisticians and data scientists for large-scale spatial data science. Finally, I will briefly demonstrate how these techniques were used to build an exascale climate emulator that received the prestigious 2024 ACM Gordon Bell Prize in Climate Modeling.
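To see why these problems become hard at scale, consider the dense computation underlying problem 2 (spatial kriging). The sketch below is a tiny self-contained NumPy illustration with an assumed exponential covariance and made-up locations; it is not ExaGeoStat, and its O(n^3) solve is precisely the cost that tile-based and low-rank methods are designed to tame.

```python
import numpy as np

# Tiny dense illustration of simple kriging under a Gaussian random field
# with an exponential covariance C(h) = exp(-h / range). The Cholesky
# factorization and linear solve are O(n^3), which is why n in the
# millions requires the HPC machinery described in the talk.
rng = np.random.default_rng(1)
n = 100
locs = rng.uniform(size=(n, 2))                     # observed locations
dists = np.linalg.norm(locs[:, None] - locs[None, :], axis=-1)
C = np.exp(-dists / 0.3) + 1e-10 * np.eye(n)        # covariance + jitter
z = np.linalg.cholesky(C) @ rng.normal(size=n)      # one GP realization

s0 = np.array([0.5, 0.5])                           # prediction location
c0 = np.exp(-np.linalg.norm(locs - s0, axis=1) / 0.3)
w = np.linalg.solve(C, c0)                          # kriging weights
pred = w @ z                                        # kriging predictor
var = 1.0 - w @ c0                                  # kriging variance
print("prediction:", pred, "variance:", var)
```

The same Cholesky factor also gives the Gaussian log-likelihood (problem 1) and the field simulation (problem 3), so all three problems share the dense-linear-algebra bottleneck that ExaGeoStat parallelizes.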
AMA - Chip Design, Software Design, and Using AI
Hosted by: Prof. Abdulrahman Mahmoud
October 16, 2025
Jim Keller
Undergraduate Division
Abstract
This will be a conversational "ask me anything" session.
Language Model × Robotics – From Embodied Navigation to AI-Driven Robot Hand Design
Hosted by: Prof. Yutong Xie
October 16, 2025
Yanyuan Qiao
Computer Vision
Abstract
Recent advances in language models are transforming how robots can perceive, reason, and act. This talk presents a series of works that explore how language models, used both as pretrained representations and interactive reasoning engines, can be applied to develop intelligent embodied agents. The studies span tasks from embodied navigation in 3D environments to automatic design of robot morphologies for manipulation. The first part focuses on embodied navigation. I began by exploring how to improve an agent’s perception of temporal and historical context through multimodal pretraining. Building on this foundation, I then examined how large language models can assist decision-making—by interpreting ambiguous instructions and injecting external knowledge to support generalization. Taking this further, we investigated using language models directly as agents, enabling them to perform navigation in continuous environments without additional training. To systematically understand what these models can and cannot do, we introduced a benchmark that evaluates key embodied capabilities, such as instruction comprehension, spatial reasoning, and alignment between language and action. The second part turns to robot design. I present our recent work on AI-driven robot hand generation, where task descriptions are translated into diverse and functional morphologies. This system leverages language models to capture user intent and guides structural generation through reasoning and feedback. Together, these studies explore a central question: how far can language models take us in embodied robotics? From interpreting instructions to designing physical form, they reveal both the opportunities and current frontiers in this rapidly evolving intersection.
Towards AI Superhuman Reasoning & the future of knowledge discovery
Hosted by: Prof. Monojit Choudhury
October 16, 2025
Thang Luong
Natural Language Processing
Abstract
In this talk, I will discuss recent advances in AI for Mathematics, from AlphaGeometry and AlphaProof to the recent Gemini Deep Think, which achieved a historic gold-medal-level performance at the International Mathematical Olympiad 2025. Through these technological breakthroughs, I will also share my thoughts on the future of AI for knowledge discovery.
Navigating Privacy, Data Protection, AI, and IP Laws in AI Development: A Practical Approach
Hosted by: Prof. Elizabeth Churchill
October 15, 2025
Dr. Renato Leite Monteiro
Human-Computer Interaction
Abstract
VP - Privacy, Data Protection and AI @ e&. Former Global Head of Privacy @ X. PhD from the University of São Paulo (USP). Fellow at the Oxford Internet Institute (OII). Professor of Law. LL.M from New York University (NYU) and the National University of Singapore (NUS).
Human-Centric AI: Learning and Co-Creating Humans in 2D, 3D and 4D.
Hosted by: Prof. Elizabeth Churchill
October 13, 2025
Yi Zhou
Human-Computer Interaction
Abstract
This talk explores how AI can learn from humans and co-create with humans to capture the richness of human appearance, motion, interactions, and personality. I will present three lines of work: (1) building large-scale 4D datasets such as HUMOTO, which capture human–human and human–object interactions with industry-standard fidelity; (2) developing novel 3D representations and differentiable simulations, including DMesh and Digital Salon, for efficient modeling of complex geometry and dynamics; and (3) designing generative tools that enable intuitive, user-guided creation of digital humans and their interactions and behaviors in scenes. Together, these efforts advance a vision of human-centric generative AI: systems that learn about humans, collaborate with humans, and empower creativity across 2D, 3D, and 4D domains.
A Formal but Pragmatic Foundation for General-Purpose Operating Systems
Hosted by: Prof. Elizabeth Churchill
October 9, 2025
Timothy Roscoe
Human-Computer Interaction
Abstract
The Operating System (OS) is fundamental to the correct working of any non-trivial computer system, and general-purpose OSes like Linux (and Android), Windows, iOS, and macOS are the central component of the infrastructure of modern computing and communications, from mobile phones to cloud providers. Modern AI would not be possible without OS software providing the required scaling and communication between distributed tasks. Faults attributable to OS flaws have serious consequences, ranging from security breaches to global-scale outages. Despite this, general-purpose OS design and implementation today remains surprisingly ad hoc, based on a simplistic architecture proposed decades ago for the machines of the 1970s. Since then, system hardware has changed beyond recognition: computers are complex networks of cores, devices, management engines, and accelerators, all running code ignored by the nominal OS. This broad disconnect between hardware reality and OS structure underlies many security and reliability flaws, and will not go away without a radical change in approach. I'll talk about our attempts to put general-purpose OS development on a solid foundation for the first time, based on a formal framework for capturing the software-visible semantics of all the hardware in complete, real computers. Above this, we are working on tooling to assemble an OS for modern heterogeneous servers and systems-on-chip which can incorporate existing drivers, firmware, and application environments, but nevertheless offer strong, formal platform-wide guarantees of application isolation and security.
Ubiquitous AI for Health
Hosted by: Prof. Elizabeth Churchill
October 9, 2025
Afsaneh Doryab
Human-Computer Interaction
Abstract
Harnessing data streams generated by widely used devices, such as smartphones, wearables, and embedded sensors, allows AI algorithms to continuously model, detect, and predict people's biobehavioural and social states. These algorithms can then use the resulting models to deliver personalized services, recommendations, and interventions. However, this capability also introduces new technical challenges related to data collection, processing, algorithm development, modelling, and interpretation. In this talk, I will discuss my research approaches to address some of these challenges in the context of health and wellness applications. I will demonstrate how we leverage multimodal mobile data streams to model aspects such as circadian rhythm variability. Additionally, I will describe how we integrate biobehavioural models to create innovative strategies, including music melodies designed for personalized health status communication.
The Age of AI: And Our Human Future
Hosted by: Prof. Timothy Baldwin
October 2, 2025
Daniel Huttenlocher
Abstract
In this talk we look at how AI is changing discovery, knowledge, human interaction, and how we understand the world around us. These changes are becoming more prominent with every passing moment, and this session endeavors to help build insights into the development and deployment of AI for broad benefit. The talk will also present a brief overview of the MIT Schwarzman College of Computing.
3D Reconstruction in the era of Machine Learning and Gaussian Splatting
Hosted by: Prof. Ian Reid
September 30, 2025
Ravi Garg
Computer Vision
Abstract
The problem of 3D reconstruction from multiple views has traditionally been posed as an inverse problem: estimating structure, appearance, and camera parameters from observed images. Classical approaches emphasised minimal parametrisation, simplified image formation models, and the use of hand-crafted priors to render the optimisation well-posed. This paradigm has recently been challenged by the emergence of overparameterised scene representations, such as Radiance Fields and Gaussian Splatting, as well as overparameterised camera models. These representations enable efficient inference and rapid novel-view synthesis, and offer greater flexibility in training neural networks for 3D reconstruction. This talk will examine the implications of such overparameterised formulations for recovering scene geometry. I will present recent works demonstrating that while the additional flexibility afforded by overparameterisation can be beneficial, it often necessitates careful geometric regularisation. I will discuss often-overlooked considerations in employing these representations in both neural and non-neural 3D reconstruction techniques.
Towards biological discovery with foundation models: applications in neuroscience
Hosted by: Prof. Eduardo Beltrame
September 30, 2025
Ravi Solanki
Computational Biology
Abstract
Foundation models offer the potential to transform discovery in the biological sciences, promising novel biomarkers as well as new directions for therapeutic application. Designing such models, however, can be challenging, and applying them can be equally difficult. Here, I will discuss our work building the infrastructure to enable biological discovery robustly, efficiently, and at scale with foundation modelling. Applied specifically to the neurosciences and the study of neurodegenerative conditions like Alzheimer’s and Parkinson’s, we have shown that foundation models can learn complex representations of disease and derive novel biomarkers and therapeutic directions. I will also share our thinking about future directions for frontier AI in treating these major causes of global mortality.
Exploring the Power of Speech: How Synthetic Voices Shape User Perception and Behavior
Hosted by: Prof. Elizabeth Churchill
September 29, 2025
Mateusz Dubiel
Human-Computer Interaction
Abstract
Speech-enabled Conversational Agents (CAs), such as Amazon Alexa, Apple Siri, and Google Assistant, are becoming increasingly popular platforms for users to engage with their mobile devices and smart speakers. While CAs have the potential to support users in achieving behavioural-change goals, such as increasing physical activity or improving productivity at work, they can also lead to complacent behaviour and a lack of reflection. In the first part of my presentation, I will discuss how synthetic voices that vary in prosodic qualities and method of synthesis can affect users' perception of CAs, and what impact they can have on users' behaviour in decision-making tasks. Specifically, we will analyse how differing voice characteristics affect user trust and engagement. In the second part, we will explore several research avenues to enable the design and development of proactive conversational agents that can effectively support users while preserving their agency.
Computational and AI-Driven Design of Random Heteropolymers as Protein Mimics
Hosted by: Prof. Mladen Kolar
September 29, 2025
Haiyan Huang
Statistics and Data Science
Abstract
Synthetic random heteropolymers (RHPs), composed of a predefined set of monomers, offer a promising strategy for creating protein-mimicking materials with tailored biochemical functions. When designed appropriately, RHPs can replicate protein behavior, enabling applications in drug delivery, therapeutic protein stabilization, biosensing, tissue engineering, and medical diagnostics. However, designing RHPs that achieve specific biological functions in a time- and cost-effective manner remains a major challenge. In this talk, I will review this problem and discuss several successful efforts we have made to address it using statistical, computational, and AI approaches. These include a generalized semi-hidden Markov model (GSHMM) and a hybrid variational autoencoder (VAE), which we call DeepRHP and implement within a semi-supervised framework. Both methods are designed to capture the structures of critical chemical features as well as individual RHP sequence patterns, but they offer different advantages in terms of interpretability and flexibility. These studies highlight the potential of computational approaches to accelerate the rational design of RHPs for a wide range of biological, medical, and healthcare applications.
On Generalisation and Learning
Hosted by: Prof. Mladen Kolar
September 24, 2025
Benjamin Guedj
Statistics and Data Science
Abstract
Generalisation is one of the essential problems in machine learning and foundational AI. The PAC-Bayes theory has emerged in the past two decades as a generic and flexible framework to study and enforce the generalisation abilities of machine learning algorithms. It leverages the power of Bayesian inference and allows one to derive new learning strategies. I will briefly present the key concepts of PAC-Bayes and pinpoint how generalisation-driven, principled approaches can help advance a better mathematical understanding of AI systems. I will also highlight a few recent contributions from my group, including connections to information theory, with a particular focus on our AISTATS 2024 paper (https://proceedings.mlr.press/v238/hellstrom24a), in which we present a unifying framework for deriving information-theoretic and PAC-Bayesian generalisation bounds based on arbitrary convex comparator functions that quantify the gap between empirical and population loss. References: https://bguedj.github.io/publications/
Decoding Genome Instability: Regulatory Rewiring in Osteosarcoma and Beyond
Hosted by: Prof. Eran Segal
September 18, 2025
Yanding Zhao
Computational Biology
Abstract
Genome instability in cancer spans from small-scale mutations, such as non-coding SNVs that alter transcription factor motifs, to large-scale structural variants (SVs) and extrachromosomal DNA (ecDNA) that reconfigure the 3D genome. Together, these alterations promote tumor growth and remodel the tumor microenvironment. Yet existing technologies remain siloed: each illuminates one layer of the genome, but none can connect structural change to regulatory consequence in a unified way. My work in the TCGA Pan-Cancer 3D Genome Project established integrative computational frameworks to bridge these gaps, linking variants of different scales to enhancer rewiring. Building on this methodological foundation, I applied and refined this framework in osteosarcoma, the most instability-driven pediatric cancer and a natural context in which to test it. Using longitudinal and multi-modal profiling, I identified MYC enhancer hijacking linked to chemoresistance and uncovered high-risk instability trajectories associated with poor prognosis. Spatial and single-cell analyses further revealed that these trajectories propagate into distinct stromal and immune states. Together, these studies show how integrative methods can decode regulatory rewiring across multiple levels, from genome architecture to the tumor microenvironment. Looking forward, I aim to extend this platform beyond osteosarcoma by integrating the Emirati Genome Programme with publicly available genomic resources to advance our understanding of instability-driven regulation and therapeutic opportunities.
Bridging Digital and Physical Intelligence: from Generative to Embodied AI and Beyond
September 9, 2025
Yu Zeng
From State Estimation on Lie Groups to Robot Imagination
September 8, 2025
Gregory S. Chirikjian
The Human Quotient for Better AI Systems: Agents, Appropriate Reliance, and Alignment
September 8, 2025
Ujwal Gadiraju
Bayesian Monitoring of a Pandemic: A Case Study
September 4, 2025
Edward Boone
Statistical Inference on Fractional Partial Differential Equations
September 4, 2025
Ryad Ghanam
Testing composite null hypotheses with high-dimensional dependent data
September 2, 2025
Hongyuan Cao
Building AI Systems for Sustainable Automotive Behaviors
September 2, 2025
David Ayman Shamma
DB+AI: A Paradigm to Stimulate the Value of Data
August 27, 2025
Yong Zhang
Staged Encounters: Dance as a Testbed for Human–Robot Interaction
Hosted by: Prof. Ivan Laptev
August 26, 2025
Merritt Moore
Computer Vision
Abstract
Science fiction has long been our window to the future, predicting technological advancements and their societal impacts. Fiction doesn’t just entertain—it prepares us to navigate the moral and emotional complexities yet to come. Extending this inquiry into practice, Dr. Merritt Moore shares how dancing with robots has become a living experiment in future human–robot interactions and relationships. Through staged and improvised duets, she tests how machines function not merely as tools but as partners in expression and creativity, raising questions about authorship, agency, and emotional impact. This talk explores how choreography and robotics can inform one another, shaping both creative practice and future possibilities.
Please meet AI, our dear new colleague. In other words: can scientists and machines truly cooperate?
Hosted by: Prof. Preslav Nakov
August 18, 2025
Iryna Gurevych
Natural Language Processing
Abstract
How can AI and LLMs facilitate the work of scientists at different stages of the research process? Can technology even make scientists obsolete? The role of AI and Large Language Models (LLMs) in science as a target application domain has been growing rapidly. This includes assessing the impact of scientific work, facilitating the writing and revising of manuscripts, and providing intelligent support for manuscript quality assessment, peer review, and scientific discussion. The talk will illustrate such methods and models using several tasks from the scientific domain. We argue that while AI and LLMs can effectively support and augment specific steps of the research process, expert-AI collaboration may be a more promising mode for complex research tasks.
Memorization-to-Generalization in Foundation Model Pretraining: Through the Lens of Pathway Optimization
July 29, 2025
Tianyi Zhou
Rethinking AI Agents: Human-Centered Reinforcement Learning
July 10, 2025
Stephanie Milani
Causal Mediation Analysis Integrating Exposure, Genomic, and Phenotype Data via Tail Likelihood Ratio Method in Epigenome-Wide Studies
July 9, 2025
Haoyu Yang
Multilinguality in LLMs with an Eye on Semitic Languages
June 12, 2025
Reut Tsarfaty
Enhanced localized conformal prediction with imperfect auxiliary information
June 2, 2025
Liuhua Peng
From Argument Generation to Explainable AI: My Research in Natural Language Processing
May 26, 2025
Milad Alshomary
Bidirectional Human-AI Alignment: A User-Centered Approach to Shaping AI Systems in Practice
May 20, 2025
Tiffany Knearem
“AI For Good” Isn’t Good Enough: A Call for Human-Centered AI
May 15, 2025
James Landay
Multi-modal data analysis using Graph Deep Learning for applications in healthcare
May 14, 2025
Anees Kazi
Neuro-symbolic AI: The Third Wave of AI
May 14, 2025
Houbing Herbert Song
Connecting dots between different science fields towards better treatments – Breast Cancer Research - from HTA to performance assessment using real world data and genomics
May 13, 2025
Augusto Guerra
Explainable Speech and Sign Language Processing using Posterior Features
May 13, 2025
Mathew Magimai Doss
The Future of Human-AI Interaction: Teaching, Talking & Teaming Up
May 12, 2025
Diyi Yang
Deep Learning in the Brazilian Network for Genomic Surveillance of Multidrug-Resistant Bacteria
May 8, 2025
Fabricio A. B. da Silva
Towards Uncertainty-Aware, Multimodal Data-Centric AI Pipelines
May 5, 2025
Laure Berti
Harmonizing, Understanding, and Deploying Responsible AI
May 5, 2025
Junyuan Hong
New advances in the epigenetics of common disease
May 1, 2025
Andrew P. Feinberg
Words Meet World: Grounded Language in Embodied AI
April 30, 2025
Joyce Chai
Object-centric Open-world Visual Understanding
April 30, 2025
Shilong Liu
Reverse Bioengineering to recreate multicellular animals in vitro
April 29, 2025
Ken-ichiro Kamei
Pattern Recognition with Optimum-Path Forests
April 28, 2025
João Paulo Papa
Cameras as rays: spatial representations for 2D and 3D understanding with foundation models
April 22, 2025
Deva Ramanan
Towards Robust Self-supervised Representation Learning
April 22, 2025
Prakash Chandra and Rajkumar Saini
Scalable and Efficient Semantic Search in Videos
April 21, 2025
Mattia Soldan
Harnessing Causal Discovery for Robust and Adaptive Natural Language Processing
April 18, 2025
Lizhen Qu
Building Trustworthy Text-to-Image Models: Risks, Defenses, and Forensics
April 16, 2025
Zhang Jie
Operationalizing Fairness in an Interconnected World
April 16, 2025
Jian Kang
Watch, Predict, Act: Robot Learning Meets Web Videos
April 16, 2025
Homanga Bharadwaj
From Intelligence to Artificial Intelligence: Exploring the Future of Humanity
April 15, 2025
Amin Beheshti
A3C3 – AI Algorithm & Accelerator Co-design, Co-search, and Co-generation
April 15, 2025
Deming Chen
Building Equitable Technology Futures: A Relational Access Approach
Hosted by: Prof. Elizabeth Churchill
April 14, 2025
Vaishnav Kameswaran
Human-Computer Interaction
Abstract
A grand challenge in HCI is understanding how technology-mediated access can enable fuller participation of people with disabilities in society. However, access, framed solely as a feature of technology, can overlook how communities of people with disabilities actively create, share, and sustain access in their everyday lives. In this talk, I show how drawing on disability justice scholarship can broaden the concept of access and open up novel avenues for design. I will share examples from my work in which I reconceptualize access as a relational, socio-technical construct: one shaped by social and material conditions, as well as community values. I will show how this perspective also expands the design space for emerging technologies like AI, shifting their roles from simply mitigating impairments to augmenting human abilities. By reframing technology-mediated access as a socio-technical and relational concept, my work offers new pathways toward more equitable technological futures in HCI.
Controlled Natural Language Generation for Morphologically Rich Languages: The Case of Arabic
April 14, 2025
Bashar Alhafni
Next-generation Photorealistic Rendering
April 14, 2025
Lingqi Yan
Digital Twin of a living Cell using Physics based Artificial Intelligence
April 11, 2025
Dilip K. Prasad
Don't underestimate the power of small language models
April 10, 2025
Tanmoy Chakraborty
Unpacking Reasoning in LLMs: Input Formats, Generating CoTs, and Fair Evaluation
April 8, 2025
Haritz Puerto
Artificial Intelligence in Drug Discovery and Computational Biology: Current Status, Successes, and Pitfalls
April 8, 2025
Andreas Bender
Navigating Uncertainty in Commonsense Causal Reasoning
April 8, 2025
Shaobo Cui
Stochastic First-Order Optimization with Gradient Clipping
April 7, 2025
Eduard Gorbunov
The Role of Human-Computer Interaction Perspectives in Advancing AI-Driven Next-Generation Spatial User Interfaces
April 7, 2025
Johannes Schöning
Failing Forward: Rethinking the Foundations of Medical Imaging AI
April 3, 2025
Lena Maier-Hein
The Politics of Using AI in Policy Implementation: Evidence from a Field Experiment
March 24, 2025
Yotam Margalit
Automated Reasoning over Strings and Sequences
March 24, 2025
Anthony Lin
Uncertainty Quantification for Scientific Machine Learning
March 24, 2025
Dongxia Wu
Towards Enhanced Linguistic Reasoning in Language Models
March 20, 2025
Bhat Suma Pallathadka
Enhancing Computational Precision Medicine with Electronic Health Records
March 20, 2025
Jun Wen
AI-Assisted Experimentation: Challenges, Advances, and Future Directions
March 20, 2025
Raul Astudillo
Moving GPU Systems from “Real-Fast” to “Real-Time”
March 19, 2025
Joshua Bakita
Evaluating Long-Context Language Models
March 17, 2025
Marzena Karpinska
Mechanism Design for Decentralized Systems
March 17, 2025
Hao Chung
Towards Strategic Alignment in AI: Foundations, Progress and Outlook
March 13, 2025
Jibang Wu
Thermal Imaging For Amplifying Human Perception
March 12, 2025
Yomna Abdelrahman
Algorithms in the AI Age: Fair and Learning-Augmented
March 11, 2025
Ali Valikian
AI Advance Pathway: From Targeted Evaluation to Holistic Intelligence
March 10, 2025
Haonan Li
Fear of Small Data: AI’s Blind Spot in Ethics, Lifecycle Assessment, and Policy
March 10, 2025
Ishtiaque Ahmed
Advancing Medical AI: Robust, Interpretable, and Collaborative Solutions
March 10, 2025
Gustavo Carneiro
Next-Word Prediction in Language Models and Humans
March 4, 2025
Tatsuki Kuribayashi
Speech Enhancement & Video Summarization - Technology Transfer of Academic Research
March 4, 2025
Shmuel Peleg
Causal Neuro-Symbolic AI: synergy between neuro-symbolic and causal AI
March 3, 2025
Utkarshani Jaimini
Automated Program Repair for Security
March 3, 2025
Yannic Noller
Automated Program Repair for Security
Formal Methods for Modern Payment Protocols
February 24, 2025
David Basin
Formal Methods for Modern Payment Protocols
LLMs (for code) sometimes make mistakes. When should I trust them?
February 21, 2025
Prem Devanbu
LLMs (for code) sometimes make mistakes. When should I trust them?
Applying Machine Learning and GenAI to the design and operation of climate-resilient residential infrastructure
February 21, 2025
James Ehrlich
Applying Machine Learning and GenAI to the design and operation of climate-resilient residential infrastructure
Sequential Quantile Estimation for Distributed and Streaming Data
February 20, 2025
Nan Lin
Sequential Quantile Estimation for Distributed and Streaming Data
Multimodal Information Extraction from Unstructured Documents
February 19, 2025
Gülşen Eryiğit
Multimodal Information Extraction from Unstructured Documents
Towards safe, factual, and empathetic human-AI interaction
February 19, 2025
Yuxia Wang
Towards safe, factual, and empathetic human-AI interaction
Balancing Explore-exploit, or Purely Exploring
February 18, 2025
Junpei Komiyama
Balancing Explore-exploit, or Purely Exploring
PEaRCE: A Platform for Ethical and Responsible Computing Education in CS Courses
February 17, 2025
Peter Haas
PEaRCE: A Platform for Ethical and Responsible Computing Education in CS Courses
Towards Usable and Useful Explainable AI
February 11, 2025
Lijie Hu
Towards Usable and Useful Explainable AI
Open Science: A New Paradigm for the Research Lifecycle and the Role of Computing
February 6, 2025
Yannis Ioannidis
Open Science: A New Paradigm for the Research Lifecycle and the Role of Computing
Towards Responsible Visual Analytics: Fostering Inclusivity, Accessibility and Trustworthiness in the AI Era
February 5, 2025
Ali Sarvghad
Towards Responsible Visual Analytics: Fostering Inclusivity, Accessibility and Trustworthiness in the AI Era
Polygenic Score Modeling to Investigate Genotype-Phenotype Associations
February 5, 2025
Carlo Maj
Polygenic Score Modeling to Investigate Genotype-Phenotype Associations
Community-Centered Computing for Collective Action and Societal Impact
February 4, 2025
Narges Mahyar
Community-Centered Computing for Collective Action and Societal Impact
Trustworthy Machine Learning: Transparency, Collaboration, and Evaluation
February 4, 2025
Umang Bhatt
Trustworthy Machine Learning: Transparency, Collaboration, and Evaluation
Deep generative modeling of sample-level heterogeneity in single-cell genomics
February 3, 2025
Justin Hong
Deep generative modeling of sample-level heterogeneity in single-cell genomics
AI-enhanced Personalized Medicine and Therapeutic Development
January 29, 2025
Fatemeh Vafaee
AI-enhanced Personalized Medicine and Therapeutic Development
The Econometrics of Unobservables: Identification, Estimation, and Empirical Applications
January 27, 2025
Yingyao Hu
The Econometrics of Unobservables: Identification, Estimation, and Empirical Applications
Cell Biology of Developmental Processes: Imaging Across Scales
January 23, 2025
Senthil Arumugam
Cell Biology of Developmental Processes: Imaging Across Scales
Optimizing 3D Flash-Based SSDs through Device-Aware Techniques
January 23, 2025
Jihong Kim
Optimizing 3D Flash-Based SSDs through Device-Aware Techniques
How to Boot Up a New Engineering Program
January 22, 2025
Seth Fraden
How to Boot Up a New Engineering Program
Human-Computer Conversational Vision-and-Language Navigation
January 21, 2025
Qi Wu
Human-Computer Conversational Vision-and-Language Navigation
From Individual to Society: Social Simulation Driven by LLM-based Agent
January 20, 2025
Zhongyu Wei
From Individual to Society: Social Simulation Driven by LLM-based Agent
AI-based Whole-cycle Health Care Management: Problems, Challenges, and Opportunities
January 17, 2025
Jingshan Li
AI-based Whole-cycle Health Care Management: Problems, Challenges, and Opportunities
Memory representation and retrieval in neuroscience and AI
January 15, 2025
Surya Narayanan Hari
Memory representation and retrieval in neuroscience and AI
Complex disease modeling and efficient drug discovery with large language models
January 14, 2025
Yu Li
Complex disease modeling and efficient drug discovery with large language models
Efficiently Approximating Equivariance in Unconstrained Models
January 13, 2025
Ahmed Elhag
Efficiently Approximating Equivariance in Unconstrained Models
Bring an order to the chaos: Order-Preserving IO stack for Modern Flash storage
January 13, 2025
Youjip Won
Bring an order to the chaos: Order-Preserving IO stack for Modern Flash storage
Communication in the Age of AI: AI for Communication and Communication for AI
December 9, 2024
Joonhyuk Kang
Communication in the Age of AI: AI for Communication and Communication for AI
Reliability Exploration of Neural Network Accelerator
December 5, 2024
Masanori Hashimoto
Reliability Exploration of Neural Network Accelerator
Chip Design and Manufacturing with AI
December 5, 2024
Youngsoo Shin
Chip Design and Manufacturing with AI
Golden Noise and Zigzag Sampling of Diffusion Models
December 4, 2024
Zeke Xie
Golden Noise and Zigzag Sampling of Diffusion Models
Many-cell sequencing: machine learning principles and methods for moving beyond single cells to population-scale analysis
November 26, 2024
David Brown
Many-cell sequencing: machine learning principles and methods for moving beyond single cells to population-scale analysis
Security-Enhanced Radio Access Networks for 5G OpenRAN
November 21, 2024
Zhiqiang Lin
Security-Enhanced Radio Access Networks for 5G OpenRAN
Energy-Efficient and Secure EdgeAI Systems: From Architectures to Applications
November 20, 2024
Muhammad Shafique
Energy-Efficient and Secure EdgeAI Systems: From Architectures to Applications
Generative Artificial Intelligence in RNA Biology
November 19, 2024
Alexandre Paschoal
Generative Artificial Intelligence in RNA Biology
Multimodality for story-level understanding and generation of visual data
November 13, 2024
Vicky Kalogeiton
Multimodality for story-level understanding and generation of visual data
Image- and AI-guided robotics for minimally invasive surgery
November 12, 2024
Momen Abayazid
Image- and AI-guided robotics for minimally invasive surgery
From cloud computing to cloudless computing
November 11, 2024
Ang Chen
From cloud computing to cloudless computing
Physics-Based Deep Learning for Medical Imaging
November 4, 2024
Pascal Fua
Physics-Based Deep Learning for Medical Imaging
To Make Just-Noticeable Difference (JND) Computable toward Visual Intelligence
October 31, 2024
Weisi Lin
To Make Just-Noticeable Difference (JND) Computable toward Visual Intelligence
The chameleon effect in education with social AI: can children learn by subconsciously mimicking a social robot?
October 31, 2024
Maha Elgarf
The chameleon effect in education with social AI: can children learn by subconsciously mimicking a social robot?
Integrating Micro-Emotion Recognition with Mental Health Estimation for Improved Well-being
October 25, 2024
Santosh Kumar Vipparthi
Integrating Micro-Emotion Recognition with Mental Health Estimation for Improved Well-being
Amplifying the Invisible: The Impact of Video Motion Magnification in Healthcare, Engineering, and Beyond
October 25, 2024
Subrahmanyam Murala
Amplifying the Invisible: The Impact of Video Motion Magnification in Healthcare, Engineering, and Beyond
Social Media Influencers, Misinformation, and the threat to elections
October 23, 2024
Joyojeet Pal
Social Media Influencers, Misinformation, and the threat to elections
Unlocking the Potential of Large Models for Vision Related Tasks
October 16, 2024
Yanwei Fu
Unlocking the Potential of Large Models for Vision Related Tasks
Spatial AI to help humans and enable robots
October 15, 2024
Marc Pollefeys
Spatial AI to help humans and enable robots
Embodied Robot Skills and Good Old Fashioned Engineering
September 30, 2024
Michael Yu Wang
Embodied Robot Skills and Good Old Fashioned Engineering
Confidence sets for Causal Discovery
September 25, 2024
Mladen Kolar
Confidence sets for Causal Discovery
AI, Robotics, and the Living: A Research Journey and Future Perspectives
September 17, 2024
Cesare Stefanini
AI, Robotics, and the Living: A Research Journey and Future Perspectives
Human-Centric Approaches for Multimodal Deepfakes Analysis
September 13, 2024
Abhinav Dhall
Human-Centric Approaches for Multimodal Deepfakes Analysis
Towards Controllable Swarms: Integrating Artificial Intelligence at Microscopic and Macroscopic Scales
September 11, 2024
Eliseo Ferrante
Towards Controllable Swarms: Integrating Artificial Intelligence at Microscopic and Macroscopic Scales
Humanizing Technology with Assistive Augmentations
September 3, 2024
Suranga Nanayakkara
Humanizing Technology with Assistive Augmentations
Bring Your Own Kernel! Constructing High-Performance Data Management Systems from Components
September 2, 2024
Holger Pirk
Bring Your Own Kernel! Constructing High-Performance Data Management Systems from Components
Unlocking Decentralized AI and Vision: Overcoming Incentive Barriers, Orchestration Challenges, and Data Silos
August 26, 2024
Ramesh Raskar
Unlocking Decentralized AI and Vision: Overcoming Incentive Barriers, Orchestration Challenges, and Data Silos
Integrating Virtual Reality and Robotics: Enhancing Human and Robot Experiences in Assistive Technologies
August 22, 2024
Tetsunari Inamura
Integrating Virtual Reality and Robotics: Enhancing Human and Robot Experiences in Assistive Technologies
Latent Space Exploration for Safe and Trustworthy AI Models
August 21, 2024
Hassan Sajjad
Latent Space Exploration for Safe and Trustworthy AI Models
Super-aligned Machine Intelligence via a Soft Touch
August 21, 2024
Chaoyang Song
Super-aligned Machine Intelligence via a Soft Touch
Automated Decision Making for Safety Critical Applications
July 22, 2024
Mykel Kochenderfer
Automated Decision Making for Safety Critical Applications
Structured World Models for Robots
June 7, 2024
Krishna Murthy Jatavallabhula
Structured World Models for Robots
Past, Present and Future of Speech Technologies
May 28, 2024
Pedro Moreno
Past, Present and Future of Speech Technologies
Enabling precision medicine with single cell omics and decentralized clinical studies
May 23, 2024
Eduardo da Veiga Beltrame
Enabling precision medicine with single cell omics and decentralized clinical studies
Martingale-based Verification of Probabilistic Programs
May 21, 2024
Amir Goharshady
Martingale-based Verification of Probabilistic Programs
Recent Advance of Two-sample Testing and Its Application in AI Security
May 16, 2024
Feng Liu
Recent Advance of Two-sample Testing and Its Application in AI Security
Understanding Machine Learning on Graphs: From Node Classification to Algorithmic Reasoning
May 14, 2024
Kimon Fountoulakis
Understanding Machine Learning on Graphs: From Node Classification to Algorithmic Reasoning
Hardware Security through the Lens of Dr ML
May 10, 2024
Debdeep Mukhopadhyay
Hardware Security through the Lens of Dr ML
Safety of Deploying NLP Models: Uncertainty Quantification of Generative LLMs
May 6, 2024
Artem Shelmanov
Safety of Deploying NLP Models: Uncertainty Quantification of Generative LLMs
Objective-Driven AI: Towards Machines that can Learn, Reason, and Plan
February 16, 2024
Yann LeCun
Objective-Driven AI: Towards Machines that can Learn, Reason, and Plan
