AI Foundations and Frontiers: A CS Seminar Series
Please see the schedule below for the dates, times, and speaker information for the upcoming talks in the CS Seminar Series: AI Foundations and Frontiers. The series begins Wednesday, April 15, 2026, and runs through the Spring 2026 Quarter, concluding on May 27.
Zoom is available for those who prefer to attend remotely. You can join using the following link: https://ucsb.zoom.us/j/
Series Schedule
Wednesday, April 15, 2026 — Xifeng Yan, Professor of Computer Science, UCSB
Wednesday, April 29, 2026 — Misha Sra, Associate Professor of Computer Science, UCSB & Head of Human-AI Experience Lab
Wednesday, May 6, 2026 — Ambuj Singh, Distinguished Professor of Computer Science, UCSB
Wednesday, May 13, 2026 — Yao Qin, Assistant Professor of Electrical and Computer Engineering, UCSB
Wednesday, May 20, 2026 — Grant Schoenebeck, Associate Professor at the School of Information, University of Michigan
Wednesday, May 27, 2026 — Yuheng Bu, Assistant Professor of Computer Science, UCSB
Seminar Information
Adaptive Inference in Large Language Models by Xifeng Yan, Professor of Computer Science, UCSB
Date: Wednesday, April 15, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: Transformer-based large language models (LLMs) have achieved remarkable success, yet many challenges remain. In this talk, I will address a fundamental question: Do all tokens require the same amount of computation within a Transformer? I will share insights into this question and introduce our dynamic layer-skipping algorithm for adaptive inference in pre-trained LLMs, where different tokens are generated using varying numbers of Transformer layers. Our findings show that many layers can be automatically skipped without degrading output quality. These skipped layers reveal a substantial amount of underutilized compute within Transformers, which can be further exploited to enable the generation of multiple tokens using only a subset of layers. We refer to this inference paradigm as Direct Multi-Token Decoding (DMTD). Unlike speculative decoding, our method introduces no additional parameters, requires no auxiliary routines, and needs no post-generation verification. Despite being trained on a limited dataset, it has demonstrated promising results on a fine-tuned Qwen3-4B model, achieving up to a 2x speedup with only minor performance degradation. Scaling analysis suggests further gains with larger training datasets. At the end of the talk, I will also briefly introduce our efforts on leveraging LLMs for knowledge discovery, such as multimodal models applied to the financial domain.
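For readers unfamiliar with the idea, the per-token layer skipping described above can be illustrated with a toy sketch. This is not the speakers' DMTD implementation; the gate heuristic, layer count, and threshold here are all hypothetical, chosen only to show how "easy" tokens might traverse fewer layers than "hard" ones.

```python
# Toy illustration of per-token dynamic layer skipping.
# Hypothetical gate and layer behavior; not the actual algorithm.

class ToyLayer:
    def __init__(self, idx):
        self.idx = idx

    def forward(self, h):
        # A real Transformer layer applies attention + MLP; here we
        # just nudge the hidden state so skipping is observable.
        return h + 0.1

def generate_token(h, layers, gate, gate_threshold=0.5):
    """Run one token through the stack, skipping layers the gate rejects."""
    used = 0
    for layer in layers:
        if gate(h, layer.idx) >= gate_threshold:
            h = layer.forward(h)
            used += 1
    return h, used

layers = [ToyLayer(i) for i in range(8)]

# Hypothetical gate: "easy" tokens (small hidden magnitude) run only the
# lower half of the stack; "hard" tokens run all layers.
gate = lambda h, idx: 1.0 if (idx < 4 or abs(h) > 1.0) else 0.0

_, n_easy = generate_token(0.0, layers, gate)  # skips the upper half
_, n_hard = generate_token(2.0, layers, gate)  # uses every layer
```

In this sketch, the easy token passes through 4 of the 8 layers while the hard token uses all 8, mirroring the intuition that skipped layers expose unused compute.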
Spatial Human-AI Interaction by Misha Sra, Associate Professor of Computer Science, UCSB & Head of Human-AI Experience Lab
Date: Wednesday, April 29, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: Most AI interaction today happens through language in a chat box. But many real-world activities depend on shared context, such as maps, layouts, tools, and physical environments, to ground reasoning and coordinate action. In our work, humans and AI coordinate through such shared context. A chess player reasons with an AI over the physical board. A restaurant designer optimizes a layout with AI on a floor plan or 3D model. A therapist and AI explore a 3D knee model for rehab planning. An artist and AI create together on a shared canvas, or a fencing learner practices lunges with an AI coach in XR. Across these examples, the shared context is the medium for coordination, shaping how humans and AI communicate, act, and make decisions together. We call this Spatial Human–AI Interaction, a paradigm for human–AI coordination grounded in shared context of varying dimensionalities.
To be Announced by Ambuj Singh, Distinguished Professor of Computer Science, UCSB
Date: Wednesday, May 6, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: To be Announced.
To be Announced by Yao Qin, Assistant Professor of Electrical and Computer Engineering, UCSB
Date: Wednesday, May 13, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: To be Announced.
To be Announced by Grant Schoenebeck, Associate Professor at the School of Information, University of Michigan
Date: Wednesday, May 20, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: To be Announced.
Rethinking LLM Watermarking: From Theory to In-Context Methods by Yuheng Bu, Assistant Professor of Computer Science, UCSB
Date: Wednesday, May 27, 2026
Time: 3:30–4:30 p.m.
Location: Harold Frank Hall 1132
Abstract: LLM watermarking has emerged as a promising tool for attributing AI-generated text, but its practical deployment remains limited. In this talk, I will first present a unified theoretical framework that jointly optimizes watermark design and detection, characterizing the fundamental trade-off between detectability, false positive control, and text distortion. While this formulation yields sharp optimality insights, it typically requires the LLM provider to implement the watermark during generation, which has limited its deployment in practice. I will then discuss the reasons behind this gap, emphasizing stakeholder incentives and real-world deployment constraints. Motivated by these challenges, I will present in-context watermarking, a model-agnostic approach that embeds detectable signals through prompts rather than decoding-time access. This perspective suggests that effective watermarking requires not only strong algorithms, but also incentive-aligned designs tailored to practical domains such as peer review and education.
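As background for the talk, the contrast between decoding-time and in-context watermarking can be sketched in a few lines. The "green list" idea below is a common illustration from the watermarking literature, not the speaker's method; the word list, cue wording, and detection threshold are all hypothetical.

```python
# Toy sketch: an "in-context" watermark embeds its signal through the
# prompt (no access to decoding-time logits), and a detector later counts
# occurrences of a secret word list. All specifics here are hypothetical.

GREEN_LIST = {"notably", "moreover", "concretely"}  # hypothetical secret list

def watermark_prompt(task_prompt):
    """Prepend a cue asking the model to favor green-list words."""
    cue = ("When writing, naturally use these words where appropriate: "
           + ", ".join(sorted(GREEN_LIST)) + ". ")
    return cue + task_prompt

def detect(text, threshold=2):
    """Flag text whose green-list hit count meets the threshold."""
    words = (w.strip(".,") for w in text.lower().split())
    hits = sum(w in GREEN_LIST for w in words)
    return hits >= threshold, hits

flagged, hits = detect("Moreover, the result holds. Notably, it scales.")
```

Note how the provider's cooperation is no longer required at decoding time: the watermark travels with the prompt, which is the model-agnostic property the abstract highlights. Real detectors must also control false positives on unwatermarked text, the trade-off the talk's framework formalizes.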