Zheng Zhou 周正
Ph.D. Candidate @ BUAA
Former Embedded Software Engineer @ Haier

I am a Ph.D. student at Beihang University. From 2018 to 2023, I worked as an Embedded Software Engineer at Haier Group while completing a Master's degree at Shandong University, supervised by Prof. Ju Liu. I previously studied at TU Ilmenau in Germany and hold a Bachelor's degree from Qingdao University of Science and Technology.

My research focuses on improving the reliability and efficiency of machine learning, with particular emphasis on strengthening model robustness against adversarial examples and improving training efficiency through techniques such as dataset distillation.

I am currently seeking internship opportunities and research collaborations. Please feel free to contact me via email.

Curriculum Vitae

Education
  • Beihang University
    School of Electronic and Information Engineering
    Ph.D. Candidate in Electronic Engineering
    Sep. 2023 - present
  • Shandong University
    M.Eng. in Electronic Engineering
    Sep. 2020 - Jun. 2023
  • TU Ilmenau
    Visiting Student in Electronic Engineering
    Oct. 2016 - Oct. 2018
  • Qingdao University of Science and Technology
    B.Eng. in Mechanical Engineering and Automation
    Sep. 2012 - Jun. 2016
Experience
  • Haier Group
    Embedded Software Engineer
    Oct. 2018 - Jun. 2023
Honors & Awards
  • Top Reviewer at NeurIPS 2024
    Nov. 2024
  • Silver Award at ASCEND Competition for Re-ID
    Oct. 2022
  • Oral Presentation at International Conference on Swarm Intelligence (ICSI 2022)
    Sep. 2022
News
2025
Jun 13: Honored to present at the 2nd International Student Academic Forum, Beihang University
Jun 01: The project page and code repository for ROME are now public, and the paper is coming soon: Project Page, Code
May 01: ROME accepted as a poster at ICML 2025 (Acceptance Rate: 26.9%, 3,260/12,107)
Mar 31: The BEARD black-box library was updated. Access the Project Page, Code, and Paper
Jan 24: Invited as a reviewer for TMLR 2025
2024
Dec 12: Invited as a reviewer for ICML 2025
Nov 06: Honored to be selected as a Top Reviewer for NeurIPS 2024
Oct 01: Invited as a reviewer for AISTATS 2025
Aug 25: Invited as a reviewer for ICLR 2025
Jun 03: Preprint paper on BACON, a new framework for dataset distillation: Project Page, Code, Paper
Services
Conference Reviewer
NeurIPS 2024 (Top Reviewer), NeurIPS 2025, ICLR 2025, ICML 2025, AISTATS 2025
2024-Present
Journal Reviewer
Transactions on Machine Learning Research (TMLR)
2025-Present
Academic Talks
Invited Talk at the Second International Student Academic Forum, Beihang University
June 2025
Selected Publications
ROME is Forged in Adversity: Robust Distilled Datasets via Information Bottleneck

Zheng Zhou, Wenquan Feng, Qiaosheng Zhang, Shuchang Lyu, Qi Zhao, Guangliang Cheng

International Conference on Machine Learning (ICML) 2025 Poster (Acceptance Rate: 26.9%, 3,260/12,107)

We introduce ROME, a method that enhances the adversarial robustness of dataset distillation by leveraging the information bottleneck principle, leading to significant improvements in robustness against both white-box and black-box attacks.

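As background for the term "information bottleneck principle" used above (this is the generic objective, not necessarily ROME's exact loss): the idea is to learn a representation Z of an input X that retains the information needed to predict the label Y while compressing away the rest, i.e.

$$\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y),$$

where I(· ; ·) denotes mutual information and β > 0 trades off compression against predictive sufficiency.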

BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation

Zheng Zhou, Wenquan Feng, Shuchang Lyu, Guangliang Cheng, Xiaowei Huang, Qi Zhao

Under review. 2024

BEARD is a unified benchmark for evaluating the adversarial robustness of dataset distillation methods, providing standardized metrics and tools to support reproducible research.


BACON: Bayesian Optimal Condensation Framework for Dataset Distillation

Zheng Zhou, Hongbo Zhao, Guangliang Cheng, Xiangtai Li, Shuchang Lyu, Wenquan Feng, Qi Zhao

Under review. 2024

This work presents BACON, the first Bayesian framework for dataset distillation, offering a principled theoretical foundation for improving distillation performance.


All publications