I am a Ph.D. student at Beihang University. From 2018 to 2023, I worked as an Embedded Software Engineer at Haier Group while completing a Master's degree at Shandong University, supervised by Prof. Ju Liu. I previously studied at TU Ilmenau in Germany and hold a Bachelor's degree from Qingdao University of Science and Technology.
My research focuses on improving the reliability and efficiency of machine learning, with particular emphasis on strengthening model robustness against adversarial examples and reducing training cost through techniques such as dataset distillation.
I am currently seeking internship opportunities and research collaborations. Please feel free to contact me via email.
Zheng Zhou, Wenquan Feng, Qiaosheng Zhang, Shuchang Lyu, Qi Zhao, Guangliang Cheng
International Conference on Machine Learning (ICML) 2025 Poster (Acceptance Rate: 26.9%, 3,260/12,107)
We introduce ROME, a method that enhances the adversarial robustness of dataset distillation by leveraging the information bottleneck principle, leading to significant improvements in robustness against both white-box and black-box attacks.
Zheng Zhou, Wenquan Feng, Shuchang Lyu, Guangliang Cheng, Xiaowei Huang, Qi Zhao
Under review. 2024
BEARD is a unified benchmark for evaluating the adversarial robustness of dataset distillation methods, providing standardized metrics and tools to support reproducible research.
Zheng Zhou, Hongbo Zhao, Guangliang Cheng, Xiangtai Li, Shuchang Lyu, Wenquan Feng, Qi Zhao
Under review. 2024
This work presents BACON, the first Bayesian framework for dataset distillation, providing a principled theoretical foundation that improves performance.