Yu-Xiang Wang's Homepage
Yu-Xiang Wang 王宇翔
Associate Professor, Eugene Aas Chair
Department of Computer Science
Co-Director, Center for Responsible Machine Learning
UC Santa Barbara
Office: Henley Hall 2013
E-mail: yuxiangw AT cs.ucsb.edu
Yu-Xiang is pronounced approximately as ['ju:'ʃi:ʌŋ], namely, y~eu~ee - sh~ih~ah~ng.
Looking for self-motivated students and postdocs.
Welcome
Hello! Welcome to my homepage. I am a faculty member of the Computer Science Department at UC Santa Barbara (UCSB). Prior to joining UCSB, I was a scientist with Amazon AI in Palo Alto. Before that, I was with the Machine Learning Department at Carnegie Mellon University, where I had the pleasure of being jointly advised by Stephen Fienberg, Alex Smola, Ryan Tibshirani, and Jing Lei.
Over the years I have worked on a diverse set of problems in the broad area of statistical machine learning, e.g., trend filtering, differential privacy, subspace clustering, large-scale learning and optimization, and bandits / reinforcement learning. My most recent quests include making differential privacy practical and developing a statistical foundation for off-policy reinforcement learning.
Selected research projects [Publications, Google Scholar profile]
- NSF SCALE MoDL: Adaptivity of Deep Neural Networks
- NSF CAREER: Optimal Algorithms in Differential Privacy
- NSF RI: Offline and Low-Adaptive Reinforcement Learning
- NSF III: Neural COVID-19 Forecasting
Teaching
- (This quarter!) CS165B Machine Learning (2023 Fall): [course website]
- CS165A Artificial Intelligence (2023 Spring): [course website]
- CS291K Machine Learning (2022 Fall): [course website]
- CS165A Artificial Intelligence (2022 Spring): [course website]
- CS291A Differential Privacy (2021 Fall): [course website]
- CS292F Reinforcement Learning (2021 Spring): [course website]
- CS165A Artificial Intelligence (2020 Fall): [course website]
- CS292F Convex Optimization (2020 Spring): [course website]
News
- Jan 2024: Three papers accepted to ICLR'24, covering Watermarking LLMs, Pure Differential Privacy with MCMCs, and Federated Nonlinear Bandit Optimization. Congratulations to S2ML students, alumni and collaborators.
- Dec 2023: Dheeraj Baby and Rachel Redberg successfully defended their PhD Theses. Congratulations to S2ML's newly minted doctors!
- Oct 2023: Yuqing Zhu successfully defended her PhD. Congratulations to Dr. Zhu!
- Sept 2023: Six papers accepted to NeurIPS'23, covering recent advances on distribution shift correction, data valuation, differential privacy, and reinforcement learning. Congratulations to S2ML students, alumni and collaborators.
- Sept 2023: Ming Yin successfully defended his second PhD in Computer Science. Congratulations to Dr./Dr. Yin!
- August 2023: Talk slides (at ICML'23 and KDD'23) on watermarking generative AI available: [slides]
- July 2023: I received tenure. Sincere thanks to the amazing work of my students and collaborators!
- July 2023: Two new manuscripts just dropped! One on "Provably Robust Watermark for AI-Generated Text", the other on how "Deep Neural Networks Adapt to Low-Dimensional Manifolds". Comments / pointers are welcome!
- June 2023: Chong Liu successfully defended his PhD. Congratulations to Dr. Liu!
- May 2023: Kaiqi Zhang successfully defended his dissertation on "Overparameterization in Neural Networks: from Application to Theory". Congratulations to Dr. Zhang for becoming the second PhD graduate from S2ML!
- May 2023: Two papers accepted to UAI'23. Paper 1 presents the first no-regret linear bandits learner under misspecification. Paper 2 (to be presented as an oral) is about how "privacy-preserving prediction" is having an epic comeback! Congrats to students and collaborators!
- May 2023: Five papers accepted to ICML'23. Congratulations to my students and collaborators! Topics include "Global optimization with parametric approximation", "Deep Offline RL", "Scalable private learning for large models", and a "distillation-resilient watermark for large language models".
- March 2023: Ming Yin successfully defended his Statistics PhD dissertation on "Off-Policy Evaluation for Reinforcement Learning". Congratulations to Ming for becoming the first PhD from S2ML! One more doctorate to go.
- Jan 2023: Three papers accepted to ICLR'23. Congratulations to Ming, Dan, Kaiqi (and Prof. Mengdi too!)
- Jan 2023: Four papers accepted to AISTATS'23. Congratulations to Dheeraj, Dan, Jianyu, Rachel, Yuqing!
- Sept 2022: Three papers accepted to NeurIPS'22. Congratulations to Dheeraj, Fuheng, Dan, Rachel, Zhiliang and coauthors!
- May 2022: Paper "Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost" accepted to ICML'22. Congratulations to Dan Qiao and coauthors!
- April 2022: Paper on "Provably Confidential Language Modelling" accepted to NAACL'22 as an "Oral" paper. Congratulations to Xuandong and my amazing colleague Lei Li!
- Mar 2022: Paper on "Mixed Differential Privacy in Computer Vision" accepted to CVPR'22 as an "Oral" paper. Congratulations to Aditya Golatkar, Alessandro Achille and other coauthors!
- Jan 2022: Paper on "Near-optimal Offline Reinforcement Learning with Linear Representation" accepted to ICLR'22. Congratulations to Ming (and Yaqi, Mengdi too)!
- Jan 2022: Five papers accepted to AISTATS'22. Topics include "Optimal privacy accounting", "Data-adaptive Private Top-K selection", "Agnostic Dynamic Pricing" and "Optimal Universal dynamic regret for Strongly convex losses" and for "Nonstochastic control". Congratulations to Yuqing, Jinshuo, Jianyu, Dheeraj, and Peng for the excellent work!
- Dec 2021: Excited to release a technical report on Multivariate Trend Filtering. This substantial article summarizes and extends our NeurIPS'16 / NeurIPS'17 work on "Total Variation beyond 1D".