Huancheng Chen

AI Research Scientist, Accenture Advanced AI Center
Austin, Texas

huanchengch [AT] utexas.edu

Bio

I am currently an AI Research Scientist at the Accenture Advanced AI Center. I received my PhD in Electrical and Computer Engineering from UT Austin in June 2025, advised by Prof. Haris Vikalo, and was a member of the Wireless Networking and Communications Group (WNCG). Before joining UT Austin, I received my B.Eng. degree from the Department of Electrical Engineering and Automation at South China University of Technology.

Research Interests

My PhD research focused on developing scalable, trustworthy, and efficient learning systems and their applications to Large Language Models.
Specifically, I am interested in:
1) federated learning with data-heterogeneous and system-heterogeneous clients;
2) model compression (pruning, quantization, and distillation);
3) continual learning with large pretrained foundation models.

I am currently working on LLM post-training, including SFT, DPO, and RLHF, to better align pretrained models with human preferences and reduce hallucinations.

News

June 2025: Joined Accenture as an AI Research Scientist (Senior Manager).
June 2025: Passed my PhD defense!
November 2024: Passed my PhD progress review.
November 2024: Posted a paper on diffusion-based layout-to-image generation on arXiv.
September 2024: Posted a paper on continual learning with foundation models on arXiv.
September 2024: One paper accepted at NeurIPS 2024.
May 2024: One paper accepted at ICML 2024.
February 2024: One paper accepted at CVPR 2024.
February 2024: Joined the PPML team at Sony AI as a research intern.
January 2024: Invited to review for ICML 2024 and IJCAI 2024.
November 2023: Posted a paper on mixed-precision quantization on arXiv.
September 2023: Posted a paper on client selection on arXiv.
March 2023: One paper accepted at a CVPR 2023 workshop.
January 2023: One paper accepted at ICLR 2023.

Publications

Most recent publications on Google Scholar.
* indicates equal contribution.

Boundary Attention Constrained Zero-Shot Layout-To-Image Generation

Huancheng Chen, Jingtao Li, Weiming Zhuang, Haris Vikalo, Lingjuan Lyu

arXiv preprint

Dual Low-Rank Adaptation for Continual Learning with Pre-Trained Models

Huancheng Chen, Jingtao Li, Weiming Zhuang, Chen Chen, Lingjuan Lyu

arXiv preprint

Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning

Huancheng Chen, Haris Vikalo

NeurIPS'24: Conference on Neural Information Processing Systems (poster)

Recovering Labels from Local Updates in Federated Learning

Huancheng Chen, Haris Vikalo

ICML'24: International Conference on Machine Learning (poster)

Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices

Huancheng Chen, Haris Vikalo

CVPR'24: Conference on Computer Vision and Pattern Recognition (poster)

Federated Learning in Non-IID Settings Aided by Differentially Private Synthetic Data

Huancheng Chen, Haris Vikalo

CVPR'23: Conference on Computer Vision and Pattern Recognition FedVision Workshop (oral)

The Best of Both Worlds: Accurate Global and Personalized Models through Federated Learning with Data-Free Hyper-Knowledge Distillation

Huancheng Chen, Johnny Wang, Haris Vikalo

ICLR'23: International Conference on Learning Representations (poster)

Skeleton-Graph: Long-Term 3D Motion Prediction From 2D Observations Using Deep Spatio-Temporal Graph CNNs

Abduallah Mohamed, Huancheng Chen, Zhangyang Wang, Christian Claudel

ICCV'21: International Conference on Computer Vision Workshop

Vitæ

Full Resume in PDF.

Teaching

TA for CS395T, 2020 Fall: Foundation of Predictive Machine Learning
TA for EE381K, 2021 Spring: Statistical Machine Learning
TA for EE422C, 2021 Summer: Software Design and Implementation II (Java)
TA for EE380L, 2021 Fall: Data Mining
TA for CS395T, 2022 Spring: Convex Optimization
TA for EE351M, 2022 Fall: Digital Signal Processing

Service

Conference reviewer: ICML (2022, 2023, 2024, 2025), NeurIPS (2022, 2023, 2024, 2025), ICLR (2024, 2025), IJCAI (2024, 2025), AAAI (2025), CVPR (2025)
Journal reviewer: IEEE TMC

Skills

Programming Languages: Python, Java, C/C++, SQL, LaTeX
Software: PyTorch, TensorFlow, Transformers, Linux, AWS, Google Cloud, MATLAB, Git, Docker

About Me

Outside of research, I enjoy listening to progressive rock. The most beautiful lyric I have ever heard: "We're just two lost souls swimming in a fish bowl, year after year, running over the same old ground. What have we found? The same old fears." I am trying to find out who I am, driven by the same old fear and curiosity.

Acknowledgement

This website is based on a template by Martin Saveski; thanks to the author for sharing it.