Dai Zhongxiang

I'm on the academic job market for 2024. Please feel free to reach out if you have openings.

I am a Postdoctoral Associate at the MIT Laboratory for Information and Decision Systems (LIDS), advised by Prof. Patrick Jaillet. Previously, I was a Research Fellow in the Department of Computer Science at the National University of Singapore, advised by Assoc. Prof. Bryan Kian Hsiang Low.

I work on AI and machine learning. My main research interests include Bayesian optimization (BO) and multi-armed bandits (MAB), as well as related areas such as active learning and reinforcement learning. The goal of my research is to develop novel BO and MAB algorithms that solve complex real-world optimization problems (e.g., AutoML and AI4Science problems) in a theoretically principled manner (e.g., via regret analysis).

Email  /  Google Scholar  /  Twitter  /  Github

My Research

    Below is a summary of my past research; please see my list of publications for more details.

    [Figure: summary of my works]

    My future research plans include:

  • Automating advanced AI algorithms such as large language models (LLMs)
  • AI4Science: solving complex optimization problems in different areas of science
  • Fundamental theoretical problems in Bayesian optimization and multi-armed bandits
What's New
  • Mar 2024: Our two contributed chapters to the book Federated Learning: Theory and Practice are now online!

  • Jan 2024: Our paper on NAS accepted to ICLR 2024!

  • Oct 2023: Check out our two LLM preprints on Automated Prompting and Watermarking!

  • Sep 2023: 3 papers accepted to NeurIPS 2023!

  • Apr 2023: Our paper on Neural Active Learning accepted to ICML 2023!

  • Apr 2023: Gave a presentation at the N-CRiPT Technical Workshop on our work on Differentially Private Federated Bayesian Optimization

  • Mar 2023: Invited to serve as a reviewer for NeurIPS 2023

  • Jan 2023: 2 papers accepted to ICLR 2023!

Education

  • National University of Singapore (NUS)   (Aug 2017 - Apr 2021)
    • Ph.D. student in Artificial Intelligence, Department of Computer Science
    • Advisors: Bryan Kian Hsiang Low (NUS) & Patrick Jaillet (MIT)
    • Supported by the Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, which provided eligibility for co-supervision by an MIT faculty member and a research residency at MIT of up to six months
  • National University of Singapore (NUS)   (Aug 2011 - Jun 2015)
    • Bachelor of Engineering (Electrical Engineering), First Class Honors
Book Chapters
* denotes equal contribution.
Selected Workshop Papers & Pre-prints
* denotes equal contribution.
Publications
* denotes equal contribution.
  1. Robustifying and Boosting Training-Free Neural Architecture Search.
    Zhenfeng He, Yao Shu, Zhongxiang Dai, Bryan Kian Hsiang Low.
    ICLR 2024. Acceptance rate: 31%.

  2. Quantum Bayesian Optimization.
    Zhongxiang Dai*, Gregory Kang Ruey Lau*, Arun Verma, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2023. Acceptance rate: 26.1%. [code]

  3. Batch Bayesian Optimization For Replicable Experimental Design.
    Zhongxiang Dai, Quoc Phong Nguyen, Sebastian Shenghong Tay, Daisuke Urano, Richalynn Leong, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2023. Acceptance rate: 26.1%.

  4. Exploiting Correlated Auxiliary Feedback in Parameterized Bandits.
    Arun Verma, Zhongxiang Dai, Yao Shu and Kian Hsiang Low.
    NeurIPS 2023. Acceptance rate: 26.1%.

  5. Training-Free Neural Active Learning with Initialization-Robustness Guarantees.
    Apivich Hemachandra, Zhongxiang Dai, Jasraj Singh, See-Kiong Ng and Kian Hsiang Low.
    ICML 2023. Acceptance rate: 27.9%.

  6. Federated Neural Bandits.
    Zhongxiang Dai, Yao Shu, Arun Verma, Flint Xiaofeng Fan, Kian Hsiang Low and Patrick Jaillet.
    ICLR 2023. Acceptance rate: 31.8%.

  7. Zeroth-Order Optimization with Trajectory-Informed Derivative Estimation.
    Yao Shu*, Zhongxiang Dai*, Weicong Sng, Arun Verma, Patrick Jaillet and Kian Hsiang Low.
    ICLR 2023. Acceptance rate: 31.8%.

  8. Recursive Reasoning-Based Training-Time Adversarial Machine Learning.
    Yizhou Chen, Zhongxiang Dai, Haibin Yu, Kian Hsiang Low and Teck-Hua Ho.
    In Artificial Intelligence (Special Issue on Risk-Aware Autonomous Systems: Theory and Practice), 2023.

  9. Sample-Then-Optimize Batch Neural Thompson Sampling.
    Zhongxiang Dai, Yao Shu, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2022. Acceptance rate: 25.6%. [arXiv, Code]

  10. Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search.
    Yao Shu, Zhongxiang Dai, Zhaoxuan Wu and Kian Hsiang Low.
    NeurIPS 2022. Acceptance rate: 25.6%. [arXiv]

  11. Bayesian Optimization under Stochastic Delayed Feedback.
    Arun Verma*, Zhongxiang Dai* and Kian Hsiang Low.
    ICML 2022. Acceptance rate: 21.9%.

  12. On Provably Robust Meta-Bayesian Optimization.
    Zhongxiang Dai, Yizhou Chen, Haibin Yu, Kian Hsiang Low and Patrick Jaillet.
    UAI 2022. Acceptance rate: 32.3%. [OpenReview]

  13. Neural Ensemble Search via Bayesian Sampling.
    Yao Shu, Yizhou Chen, Zhongxiang Dai and Kian Hsiang Low.
    UAI 2022. Acceptance rate: 32.3%. [OpenReview]

  14. NASI: Label- and Data-agnostic Neural Architecture Search at Initialization.
    Yao Shu, Shaofeng Cai, Zhongxiang Dai, Beng Chin Ooi and Kian Hsiang Low.
    ICLR 2022. Acceptance rate: 32.3%. [OpenReview, arXiv]

  15. Differentially Private Federated Bayesian Optimization with Distributed Exploration.
    Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  16. Optimizing Conditional Value-At-Risk of Black-Box Functions.
    Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  17. Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee.
    Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Wei Jing, Cheston Tan and Kian Hsiang Low.
    NeurIPS 2021. Acceptance rate: 26%. [OpenReview, Code]

  18. Value-at-Risk Optimization with Gaussian Processes.
    Quoc Phong Nguyen, Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    ICML 2021. Acceptance rate: 21.4%. [Proceedings, Code]

  19. Federated Bayesian Optimization via Thompson Sampling.
    Zhongxiang Dai, Kian Hsiang Low and Patrick Jaillet.
    NeurIPS 2020. Acceptance rate: 20.1%. [Code, Proceedings]

  20. R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games.
    Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet and Teck-Hua Ho.
    ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]

  21. Private Outsourced Bayesian Optimization.
    Dmitrii Kharkovskii, Zhongxiang Dai and Kian Hsiang Low.
    ICML 2020. Acceptance rate: 21.8%. [Code, Proceedings, Video]

  22. Bayesian Optimization Meets Bayesian Optimal Stopping.
    Zhongxiang Dai, Haibin Yu, Kian Hsiang Low, and Patrick Jaillet.
    ICML 2019. Acceptance rate: 22.6%. [Code, Proceedings]

  23. Bayesian Optimization with Binary Auxiliary Information.
    Yehong Zhang, Zhongxiang Dai, and Kian Hsiang Low.
    UAI 2019. Acceptance rate: 26.2% (plenary talk). [Code]

  24. Implicit Posterior Variational Inference for Deep Gaussian Processes.
    Haibin Yu*, Yizhou Chen*, Zhongxiang Dai, Kian Hsiang Low, and Patrick Jaillet.
    NeurIPS 2019. Acceptance rate: 3% (spotlight). [Code]

Awards and Honors
  • Dean's Graduate Research Excellence Award, NUS, School of Computing, 2021

  • Research Achievement Award × 2, NUS, School of Computing, 2019 & 2020

  • Singapore-MIT Alliance for Research and Technology (SMART) Graduate Fellowship, Aug 2017

  • ST Electronics Prize × 2 (awarded to the top student in the Electrical Engineering cohort, Years 1 & 2, NUS), Academic Years 2011/2012 & 2012/2013

  • Dean’s List × 5 (top 5% in Electrical Engineering, NUS), 2011-2015

Professional Services
  • Senior Program Committee (SPC) member of IJCAI 2021, Program Committee Board member of IJCAI 2022-2024
  • Program Committee (PC) member/reviewer of
    • ICML (2021, 2022, 2023)
    • NeurIPS (2020, 2021, 2022, 2023)
    • ICLR (2021, 2022, 2023)
    • UAI (2023)
    • AISTATS (2023)
    • AAAI (2021, 2022, 2023)
    • CoRL (2020, 2021, 2022)
    • CVPR (2021, 2022)
    • ICCV (2021)
    • AAMAS (2023)
    • IROS (2021)
    • ICRA (2022)
  • Journal reviewer for
    • IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
    • Transactions on Machine Learning Research (TMLR)
    • Neural Networks
    • IEEE Robotics and Automation Letters (RA-L)
Academic Talks
  • Differentially Private Federated Bayesian Optimization with Distributed Exploration, at N-CRiPT Technical Workshop, Apr 20, 2023.
  • Bayesian Optimization Meets Bayesian Optimal Stopping, at Singapore-MIT Alliance, Future Urban Mobility Symposium 2019, Jan 28, 2019.
Teaching
  • Tutor for CS3244 Machine Learning, NUS School of Computing (Spring 2019)
  • Teaching Assistant for CS1010E Programming Methodology, NUS School of Computing (3 semesters from 2012 to 2014)
Website borrowed from Jon Barron.
