Dai Zhongxiang

I will join the Chinese University of Hong Kong, Shenzhen (CUHKSZ), School of Data Science as an Assistant Professor in Aug 2024! I'll be looking for PhD students, RAs, postdocs, and visiting students. Feel free to reach out if you're interested in working with me. [School Webpage: Zhongxiang Dai].

I'm currently a Postdoctoral Associate at the MIT Laboratory for Information and Decision Systems (LIDS), advised by Prof. Patrick Jaillet. Previously, I was a Research Fellow in the Department of Computer Science, National University of Singapore, advised by Assoc. Prof. Bryan Kian Hsiang Low.

I work on AI and machine learning. My main research interests include Bayesian optimization (BO) and multi-armed bandits (MAB), as well as other related areas such as active learning and reinforcement learning. The goal of my research is to develop novel BO and MAB algorithms to solve complex real-world optimization problems (e.g., AutoML and AI4Science problems) in a theoretically principled manner (e.g., via regret analysis).
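As a concrete illustration of the regret-analysis framing (this is a generic textbook sketch, not code from any of the papers below), here is a minimal implementation of the classic UCB1 bandit algorithm on hypothetical Bernoulli arms:

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run the classic UCB1 policy on Bernoulli arms with the given
    (hypothetical) success probabilities; return cumulative pseudo-regret
    and the per-arm pull counts."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k        # number of pulls per arm
    sums = [0.0] * k        # total observed reward per arm
    best = max(arm_means)   # mean of the optimal arm
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1     # pull each arm once to initialize
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(range(k),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]  # pseudo-regret of this pull
    return regret, counts
```

Standard analysis shows that UCB1's cumulative regret grows only logarithmically in the horizon; BO algorithms such as GP-UCB extend this explore-exploit principle to continuous domains via Gaussian process surrogates.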

Email  /  Google Scholar  /  Twitter  /  Github

My Research

  • Automating advanced AI algorithms such as large language models (LLMs)
  • AI4Science: solving complex optimization problems in different areas of science
  • Fundamental theoretical problems in Bayesian optimization and multi-armed bandits
What's New
  • May 2024: Our paper on automated prompting accepted to ICML 2024!

  • Apr 2024: I'll join CUHKSZ, School of Data Science as an Assistant Professor!

  • Mar 2024: Our two contributed chapters to the book Federated Learning: Theory and Practice are online!

  • Jan 2024: Our paper on NAS accepted to ICLR 2024!

  • Oct 2023: Check out our two pre-prints on LLMs: Automated Prompting and Watermarking!

  • Sep 2023: 3 papers accepted to NeurIPS 2023!

Education

  • National University of Singapore (NUS)   (Aug 2017 - Apr 2021)
    • Ph.D. student in Artificial Intelligence, Department of Computer Science
    • Advisors: Bryan Kian Hsiang Low (NUS) & Patrick Jaillet (MIT)
    • Supported by the Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, which grants eligibility for co-supervision by an MIT faculty member and a research residency at MIT for up to six months
  • National University of Singapore (NUS)   (Aug 2011 - Jun 2015)
    • Bachelor of Engineering (Electrical Engineering), First Class Honors
Book Chapters
* denotes equal contribution.
Selected Workshop Papers & Pre-prints
* denotes equal contribution.
  1. Prompt Optimization with Human Feedback.
    Xiaoqiang Lin, Zhongxiang Dai, Arun Verma, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
    ICML 2024, Workshop on Models of Human Feedback for AI Alignment. [arXiv]
    Selected as Oral

  2. Neural Dueling Bandits.
    Arun Verma*, Zhongxiang Dai*, Xiaoqiang Lin, Patrick Jaillet and Kian Hsiang Low.
    ICML 2024, Workshop on Foundations of Reinforcement Learning and Control -- Connections and Perspectives.

  3. Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars.
    Zhaoxuan Wu*, Xiaoqiang Lin*, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
    ICML 2024, Workshop on In-Context Learning. [arXiv]

  4. Localized Zeroth-Order Prompt Optimization.
    Wenyang Hu, Yao Shu, Zongmin Yu, Zhaoxuan Wu, Xiaoqiang Lin, Zhongxiang Dai, See-Kiong Ng and Kian Hsiang Low.
    ICML 2024, Workshop on In-Context Learning. [arXiv]

  5. Data-Centric AI in the Age of Large Language Models.
    Xinyi Xu, Zhaoxuan Wu, Rui Qiao, Arun Verma, Yao Shu, Jingtan Wang, Xinyuan Niu, Zhenfeng He, Jiangwei Chen, Zijian Zhou, Gregory Kang Ruey Lau, Hieu Dao, Lucas Agussurja, Rachael Hwee Ling Sim, Xiaoqiang Lin, Wenyang Hu, Zhongxiang Dai, Pang Wei Koh, Kian Hsiang Low.
    Pre-print, 2024. [arXiv]

  6. WASA: WAtermark-based Source Attribution for Large Language Model-Generated Data.
    Jingtan Wang*, Xinyang Lu*, Zitong Zhao*, Zhongxiang Dai, Chuan-Sheng Foo, See-Kiong Ng and Kian Hsiang Low.
    Pre-print, 2023. [arXiv]

  7. Federated Zeroth-Order Optimization using Trajectory-Informed Surrogate Gradients.
    Yao Shu, Xiaoqiang Lin, Zhongxiang Dai and Kian Hsiang Low.
    ICML 2024, Workshop on Differentiable Almost Everything: Differentiable Relaxations, Algorithms, Operators, and Simulators. [arXiv]

  8. Adjusted Expected Improvement for Cumulative Regret Minimization in Noisy Bayesian Optimization.
    Shouri Hu, Haowei Wang, Zhongxiang Dai, Kian Hsiang Low and Szu Hui Ng.
    Pre-print, 2022. [arXiv]

Publications
* denotes equal contribution.
  1. Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers.
    Xiaoqiang Lin*, Zhaoxuan Wu*, Zhongxiang Dai, Wenyang Hu, Yao Shu, See-Kiong Ng, Patrick Jaillet and Kian Hsiang Low.
    ICML 2024. Acceptance rate: 27.5%.
    [Project page, Code, arXiv]
    Also presented at NeurIPS 2023, Workshop on Instruction Tuning and Instruction Following

  2. Robustifying and Boosting Training-Free Neural Architecture Search.
    Zhenfeng He, Yao Shu, Zhongxiang Dai, Bryan Kian Hsiang Low.
    ICLR 2024. Acceptance rate: 31%.

  3. Quantum Bayesian Optimization.
    Zhongxiang Dai*, Gregory Kang Ruey Lau*, Arun Verma, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2023. Acceptance rate: 26.1%. [code]

  4. Batch Bayesian Optimization For Replicable Experimental Design.
    Zhongxiang Dai, Quoc Phong Nguyen, Sebastian Shenghong Tay, Daisuke Urano, Richalynn Leong, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2023. Acceptance rate: 26.1%.

  5. Exploiting Correlated Auxiliary Feedback in Parameterized Bandits.
    Arun Verma, Zhongxiang Dai, Yao Shu and Kian Hsiang Low.
    NeurIPS 2023. Acceptance rate: 26.1%.

  6. Training-Free Neural Active Learning with Initialization-Robustness Guarantees.
    Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng and Kian Hsiang Low.
    ICML 2023. Acceptance rate: 27.9%.

  7. Federated Neural Bandits.
    Zhongxiang Dai, Yao Shu, Arun Verma, Flint Xiaofeng Fan, Kian Hsiang Low and Patrick Jaillet.
    ICLR 2023. Acceptance rate: 31.8%.

  8. Zeroth-Order Optimization with Trajectory-Informed Derivative Estimation.
    Yao Shu*, Zhongxiang Dai*, Weicong Sng, Arun Verma, Patrick Jaillet and Kian Hsiang Low.
    ICLR 2023. Acceptance rate: 31.8%.

  9. Recursive Reasoning-Based Training-Time Adversarial Machine Learning.
    Yizhou Chen, Zhongxiang Dai, Haibin Yu, Kian Hsiang Low and Teck-Hua Ho.
    In Artificial Intelligence (Special Issue on Risk-Aware Autonomous Systems: Theory and Practice), 2023.

  10. Sample-Then-Optimize Batch Neural Thompson Sampling.
    Zhongxiang Dai, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2022. Acceptance rate: 25.6%. [arXiv, Code]

  11. Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search.
    Yao Shu, Zhongxiang Dai, Zhaoxuan Wu and Kian Hsiang Low.
    NeurIPS 2022. Acceptance rate: 25.6%. [arXiv]

  12. Bayesian Optimization under Stochastic Delayed Feedback.
    Arun Verma*, Zhongxiang Dai* and Kian Hsiang Low.
    ICML 2022. Acceptance rate: 21.9%.

  13. On Provably Robust Meta-Bayesian Optimization.
    Zhongxiang Dai, Yizhou Chen, Haibin Yu, Kian Hsiang Low and Patrick Jaillet.
    UAI 2022. Acceptance rate: 32.3%. [OpenReview]

  14. Neural Ensemble Search via Bayesian Sampling.
    Yao Shu, Yizhou Chen, Zhongxiang Dai and Kian Hsiang Low.
    UAI 2022. Acceptance rate: 32.3%. [OpenReview]

  15. NASI: Label- and Data-agnostic Neural Architecture Search at Initialization.
    Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi and Kian Hsiang Low.
    ICLR 2022. Acceptance rate: 32.3%. [OpenReview, arXiv]

  16. Differentially Private Federated Bayesian Optimization with Distributed Exploration.
    Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  17. Optimizing Conditional Value-At-Risk of Black-Box Functions.
    Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  18. Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee.
    Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan and Kian Hsiang Low.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  19. Value-at-Risk Optimization with Gaussian Processes.
    Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    ICML 2021. Acceptance rate: 21.4%. [Proceedings, Code]

  20. Federated Bayesian Optimization via Thompson Sampling.
    Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2020. Acceptance rate: 20.1%. [Code, Proceedings]

  21. R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games.
    Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet and Teck-Hua Ho.
    ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]

  22. Private Outsourced Bayesian Optimization.
    Dmitrii Kharkovskii, Zhongxiang Dai and Kian Hsiang Low.
    ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]

  23. Bayesian Optimization Meets Bayesian Optimal Stopping.
    Zhongxiang Dai, Haibin Yu, Kian Hsiang Low, and Patrick Jaillet.
    ICML 2019. Acceptance rate: 22.6%. [Code, Proceedings]

  24. Bayesian Optimization with Binary Auxiliary Information.
    Yehong Zhang, Zhongxiang Dai, and Kian Hsiang Low.
    UAI 2019. Acceptance rate: 26.2% (plenary talk). [Code]

  25. Implicit Posterior Variational Inference for Deep Gaussian Processes.
    Haibin Yu*, Yizhou Chen*, Zhongxiang Dai, Kian Hsiang Low, and Patrick Jaillet.
    NeurIPS 2019. Acceptance rate: 3% (spotlight). [Code]

Awards and Honors
  • Dean's Graduate Research Excellence Award, NUS, School of Computing, 2021

  • Research Achievement Award × 2, NUS, School of Computing, 2019 & 2020

  • Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, Aug 2017

  • ST Electronics Prize × 2 (the top student in the cohort of Electrical Engineering Year 1 & 2, NUS), Academic Year 2011/2012 & 2012/2013

  • Dean’s List × 5 (top 5% in Electrical Engineering, NUS), 2011-2015

Professional Services
  • Senior Program Committee (SPC) member of IJCAI 2021
  • Conference reviewer for: NeurIPS, ICML, ICLR, UAI, AISTATS, AAAI, CoRL, CVPR, ICCV, AAMAS, IROS, ICRA.
  • Journal reviewer for: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Operations Research, SIAM Journal on Optimization, Automatica, Transactions on Machine Learning Research (TMLR), Neural Networks, IEEE Robotics and Automation Letters (RA-L)
Academic Talks
  • Differentially Private Federated Bayesian Optimization with Distributed Exploration, at N-CRiPT Technical Workshop, Apr 20, 2023.
  • Bayesian Optimization Meets Bayesian Optimal Stopping, at Singapore-MIT Alliance, Future Urban Mobility Symposium 2019, Jan 28, 2019.
Teaching
  • Tutor for CS3244 Machine Learning, NUS School of Computing (Spring 2019)
  • Teaching Assistant for CS1010E Programming Methodology, NUS School of Computing (3 semesters from 2012 to 2014)
Website borrowed from Jon Barron.
