Dai Zhongxiang
Assistant Professor
Presidential Young Fellow
The Chinese University of Hong Kong, Shenzhen (CUHKSZ)
School of Data Science (SDS)
Email: daizhongxiang [AT] cuhk [DOT] edu [DOT] cn
I'm looking for PhD students, RAs, and visiting students/interns. Feel free to reach out if you're interested in working with me. [School Webpage: Zhongxiang Dai].
I work on both the theory and practice of AI/machine learning.
On the practical side, I'm mostly interested in the inference of large language models (LLMs), including
- prompt optimization,
- in-context learning,
- personalization of LLMs,
- LLM-based agents,
- scaling up test-time computation of LLMs (e.g., MCTS and ToT),
all of which can be studied from the perspective of multi-armed bandits (MAB) and Bayesian optimization (BO). I'm also interested in reinforcement learning from human feedback (RLHF).
On the theoretical side, I'm mainly interested in the theoretical study of MAB and BO.
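As a toy illustration of the MAB perspective mentioned above (a sketch of the standard UCB1 algorithm, not code from any paper listed on this page): candidate prompts can be treated as arms and a response-quality score as the reward, so a UCB-style rule balances trying new prompts against reusing ones that have scored well. The success probabilities below are made up for the example.

```python
import math
import random

def ucb_select(counts, means, t, c=2.0):
    """UCB1: pick the arm maximizing empirical mean + exploration bonus."""
    for arm, n in enumerate(counts):
        if n == 0:  # pull every arm once before using the bonus
            return arm
    return max(range(len(counts)),
               key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]))

def run_bandit(reward_fns, horizon=2000, seed=0):
    """Run UCB1 for `horizon` rounds; track pull counts and running means."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        a = ucb_select(counts, means, t)
        r = reward_fns[a](rng)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # incremental mean update
    return counts, means

# Hypothetical setting: three "prompts" with different success rates.
arms = [lambda rng, p=p: 1.0 if rng.random() < p else 0.0
        for p in (0.1, 0.5, 0.9)]
counts, means = run_bandit(arms)
# Over 2000 rounds the best arm (p=0.9) should accumulate the most pulls.
```

The same arms-and-rewards framing underlies the bandit view of prompt optimization and exemplar selection; BO plays the analogous role when the search space is continuous.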
Google Scholar / Twitter / GitHub
What's New
- Dec 2024: Gave a talk at ByteDance on the topic of Enhancing Inference for Large Language Models!
- Sep 2024: Our two papers on prompt optimization and exemplar selection accepted to NeurIPS 2024!
- Sep 2024: Gave a talk at the HKUST(GZ) AI Thrust Seminar on the topic of "Prompt Optimization for LLMs"!
- Aug 2024: Invited to serve as an Area Chair for ICLR 2025!
- Aug 2024: I've joined CUHKSZ as an Assistant Professor and Presidential Young Fellow!
- Jun 2024: Five of our papers accepted to ICML workshops, with topics spanning prompt optimization for LLMs, bandits, and federated learning!
- May 2024: Our paper on automated prompting accepted to ICML 2024!
- Apr 2024: I'll join the CUHKSZ School of Data Science as an Assistant Professor!
- Mar 2024: Our two contributed chapters to the book Federated Learning: Theory and Practice are online!
- Jan 2024: Our paper on NAS accepted to ICLR 2024!
- Oct 2023: Check out our two LLM pre-prints, on automated prompting and watermarking!
- Sep 2023: Three papers accepted to NeurIPS 2023!
Education and Research Experience
- Massachusetts Institute of Technology (MIT) (Jan 2024 - Jun 2024)
  - Postdoctoral Associate at the Laboratory for Information and Decision Systems (LIDS)
  - Working with Prof. Patrick Jaillet
- National University of Singapore (NUS) (Apr 2021 - Dec 2023)
- National University of Singapore (NUS) (Aug 2017 - Apr 2021)
  - Ph.D. student in Artificial Intelligence, Department of Computer Science
  - Advisors: Bryan Kian Hsiang Low (NUS) & Patrick Jaillet (MIT)
  - Supported by the Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, which made me eligible for co-supervision by an MIT faculty member and a research residency at MIT of up to six months
- National University of Singapore (NUS) (Aug 2011 - Jun 2015)
  - Bachelor of Engineering (Electrical Engineering), First Class Honors
Book Chapters
* denotes equal contribution.
- Federated sequential decision making: Bayesian optimization, reinforcement learning, and beyond.
  Zhongxiang Dai*, Flint Xiaofeng Fan*, Cheston Tan, Trong Nghia Hoang, Kian Hsiang Low and Patrick Jaillet.
  Federated Learning: Theory and Practice, Chapter 14, pages 257-279, Academic Press, 2024.
- Data valuation in federated learning.
  Zhaoxuan Wu, Xinyi Xu, Rachael Hwee Ling Sim, Yao Shu, Xiaoqiang Lin, Lucas Agussurja, Zhongxiang Dai, See-Kiong Ng, Chuan-Sheng Foo, Patrick Jaillet, Trong Nghia Hoang and Kian Hsiang Low.
  Federated Learning: Theory and Practice, Chapter 15, pages 281-296, Academic Press, 2024.
Selected Workshop Papers & Pre-prints
* denotes equal contribution, † denotes corresponding author.
- Prompt Optimization with Human Feedback.
  Xiaoqiang Lin, Zhongxiang Dai†, Arun Verma, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
  ICML 2024, Workshop on Models of Human Feedback for AI Alignment. [arXiv]
  Selected as Oral.
- Neural Dueling Bandits.
  Arun Verma*, Zhongxiang Dai*†, Xiaoqiang Lin, Patrick Jaillet and Kian Hsiang Low.
  ICML 2024, Workshop on Foundations of Reinforcement Learning and Control -- Connections and Perspectives.
- WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data.
  Jingtan Wang*, Xinyang Lu*, Zitong Zhao*, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng and Kian Hsiang Low.
  Pre-print, 2023. [arXiv]
- Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients.
  Yao Shu, Xiaoqiang Lin, Zhongxiang Dai† and Kian Hsiang Low.
  ICML 2024, Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators. [arXiv]
- Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization.
  Shouri Hu, Haowei Wang, Zhongxiang Dai, Kian Hsiang Low and Szu Hui Ng.
  Pre-print, 2022. [arXiv]
Publications
* denotes equal contribution, † denotes corresponding author.
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars.
  Zhaoxuan Wu*, Xiaoqiang Lin*, Zhongxiang Dai†, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
  NeurIPS 2024. Acceptance rate: 25.8%.
  Also presented at ICML 2024, Workshop on In-Context Learning. [arXiv]
- Localized Zeroth-Order Prompt Optimization.
  Wenyang Hu*, Yao Shu*, Zongmin Yu, Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, See-Kiong Ng and Kian Hsiang Low.
  NeurIPS 2024 Spotlight. Acceptance rate: 25.8%.
  Also presented at ICML 2024, Workshop on In-Context Learning. [arXiv]
- Data-Centric AI in the Age of Large Language Models.
  Xinyi Xu, Zhaoxuan Wu, Rui Qiao, Arun Verma, Yao Shu, Jingtan Wang, Xinyuan Niu, Zhenfeng He, Jiangwei Chen, Zijian Zhou, Gregory Kang Ruey Lau, Hieu Dao, Lucas Agussurja, Rachael Hwee Ling Sim, Xiaoqiang Lin, Wenyang Hu, Zhongxiang Dai, Pang Wei Koh and Kian Hsiang Low.
  EMNLP Findings 2024. [arXiv]
- Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers.
  Xiaoqiang Lin*, Zhaoxuan Wu*, Zhongxiang Dai†, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
  ICML 2024. Acceptance rate: 27.5%. [Project page, Code, arXiv]
  Also presented at NeurIPS 2023, Workshop on Instruction Tuning and Instruction Following.
- Robustifying and Boosting Training-Free Neural Architecture Search.
  Zhenfeng He, Yao Shu, Zhongxiang Dai and Bryan Kian Hsiang Low.
  ICLR 2024. Acceptance rate: 31%.
- Quantum Bayesian Optimization.
  Zhongxiang Dai*, Gregory Kang Ruey Lau*, Arun Verma, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2023. Acceptance rate: 26.1%. [Code]
- Batch Bayesian Optimization For Replicable Experimental Design.
  Zhongxiang Dai, Quoc Phong Nguyen, Sebastian Shenghong Tay, Daisuke Urano, Richalynn Leong, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2023. Acceptance rate: 26.1%.
- Exploiting Correlated Auxiliary Feedback in Parameterized Bandits.
  Arun Verma, Zhongxiang Dai, Yao Shu and Kian Hsiang Low.
  NeurIPS 2023. Acceptance rate: 26.1%.
- Training-Free Neural Active Learning with Initialization-Robustness Guarantees.
  Apivich Hemachandra, Zhongxiang Dai†, Jasraj Singh, See-Kiong Ng and Kian Hsiang Low.
  ICML 2023. Acceptance rate: 27.9%.
- Federated Neural Bandits.
  Zhongxiang Dai, Yao Shu, Arun Verma, Flint Xiaofeng Fan, Kian Hsiang Low and Patrick Jaillet.
  ICLR 2023. Acceptance rate: 31.8%.
- Zeroth-Order Optimization with Trajectory-Informed Derivative Estimation.
  Yao Shu*, Zhongxiang Dai*, Weicong Sng, Arun Verma, Patrick Jaillet and Kian Hsiang Low.
  ICLR 2023. Acceptance rate: 31.8%.
- Recursive Reasoning-Based Training-Time Adversarial Machine Learning.
  Yizhou Chen, Zhongxiang Dai, Haibin Yu, Kian Hsiang Low and Teck-Hua Ho.
  In Artificial Intelligence (Special Issue on Risk-Aware Autonomous Systems: Theory and Practice), 2023.
- Sample-Then-Optimize Batch Neural Thompson Sampling.
  Zhongxiang Dai, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2022. Acceptance rate: 25.6%. [arXiv, Code]
- Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search.
  Yao Shu, Zhongxiang Dai†, Zhaoxuan Wu and Kian Hsiang Low.
  NeurIPS 2022. Acceptance rate: 25.6%. [arXiv]
- Bayesian Optimization under Stochastic Delayed Feedback.
  Arun Verma*, Zhongxiang Dai* and Kian Hsiang Low.
  ICML 2022. Acceptance rate: 21.9%.
- On Provably Robust Meta-Bayesian Optimization.
  Zhongxiang Dai, Yizhou Chen, Haibin Yu, Kian Hsiang Low and Patrick Jaillet.
  UAI 2022. Acceptance rate: 32.3%. [OpenReview]
- Neural Ensemble Search via Bayesian Sampling.
  Yao Shu, Yizhou Chen, Zhongxiang Dai and Kian Hsiang Low.
  UAI 2022. Acceptance rate: 32.3%. [OpenReview]
- NASI: Label- and Data-agnostic Neural Architecture Search at Initialization.
  Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi and Kian Hsiang Low.
  ICLR 2022. Acceptance rate: 32.3%. [OpenReview, arXiv]
- Differentially Private Federated Bayesian Optimization with Distributed Exploration.
  Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]
- Optimizing Conditional Value-At-Risk of Black-Box Functions.
  Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]
- Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee.
  Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan and Kian Hsiang Low.
  NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]
- Value-at-Risk Optimization with Gaussian Processes.
  Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
  ICML 2021. Acceptance rate: 21.4%. [Proceedings, Code]
- Federated Bayesian Optimization via Thompson Sampling.
  Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2020. Acceptance rate: 20.1%. [Code, Proceedings]
- R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games.
  Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet and Teck-Hua Ho.
  ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]
- Private Outsourced Bayesian Optimization.
  Dmitrii Kharkovskii, Zhongxiang Dai and Kian Hsiang Low.
  ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]
- Bayesian Optimization Meets Bayesian Optimal Stopping.
  Zhongxiang Dai, Haibin Yu, Kian Hsiang Low and Patrick Jaillet.
  ICML 2019. Acceptance rate: 22.6%. [Code, Proceedings]
- Bayesian Optimization with Binary Auxiliary Information.
  Yehong Zhang, Zhongxiang Dai and Kian Hsiang Low.
  UAI 2019. Acceptance rate: 26.2% (plenary talk). [Code]
- Implicit Posterior Variational Inference for Deep Gaussian Processes.
  Haibin Yu*, Yizhou Chen*, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
  NeurIPS 2019. Acceptance rate: 3% (spotlight). [Code]
Awards and Honors
- Presidential Young Fellow, The Chinese University of Hong Kong, Shenzhen, 2024
- Dean's Graduate Research Excellence Award, NUS School of Computing, 2021
- Research Achievement Award × 2, NUS School of Computing, 2019 & 2020
- Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, Aug 2017
- ST Electronics Prize × 2 (top student in the Electrical Engineering cohort, Years 1 & 2, NUS), Academic Years 2011/2012 & 2012/2013
- Dean's List × 5 (top 5% in Electrical Engineering, NUS), 2011-2015
Professional Services
- Area Chair (AC) for ICLR 2025
- Senior Program Committee (SPC) member for IJCAI 2021
- Conference reviewer for: NeurIPS, ICML, ICLR, UAI, AISTATS, AAAI, CoRL, CVPR, ICCV, AAMAS, IROS, ICRA
- Journal reviewer for: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Operations Research, SIAM Journal on Optimization, Automatica, Transactions on Machine Learning Research (TMLR), Neural Networks, IEEE Robotics and Automation Letters (RA-L)
Academic Talks
- Enhancing Inference for Large Language Models, at ByteDance, Dec 20, 2024.
- Prompt Optimization for Large Language Models, at the HKUST(GZ) AI Thrust Seminar, Sep 11, 2024.
- Optimization in the Real World without Gradients: Theory and Practice, at HKUST, Computer Science and Engineering, Feb 29, 2024.