AlignHuman: Improving Motion and Fidelity via Timestep-Segment Preference Optimization for Audio-Driven Human Animation

Chao Liang1*, Jianwen Jiang1*†, Wang Liao1*, Jiaqi Yang1, Zerong Zheng1, Weihong Zeng1, Han Liang1
1Bytedance    *Equal contribution    †Project lead

TL;DR: Recent advances in diffusion models have driven significant progress in human video generation and animation. However, expressive and realistic human animation remains challenging due to the trade-off between motion naturalness and visual fidelity. To address this, we propose AlignHuman, a framework that combines preference optimization as a post-training technique with a divide-and-conquer training strategy to jointly optimize these competing objectives. Our key insight stems from an analysis of the denoising process across timesteps: (1) early denoising timesteps primarily control motion dynamics, while (2) fidelity and human structure can be effectively managed by later timesteps, even if the early steps are skipped. Building on this observation, we propose timestep-segment preference optimization (TPO) and introduce two specialized LoRAs as expert alignment modules, each targeting a specific dimension within its corresponding timestep interval. The LoRAs are trained on their respective preference data and activated in the corresponding intervals during inference to enhance motion naturalness and fidelity. Extensive experiments demonstrate that AlignHuman improves upon strong baselines and reduces the number of NFEs required at inference, achieving a 3.3× speedup (from 100 NFEs to 30 NFEs) with minimal impact on generation quality.
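To make the timestep-segment idea concrete, below is a minimal, self-contained PyTorch sketch of how two LoRA experts could be toggled across denoising-timestep intervals at inference: the motion LoRA is active during the early (high-noise) segment and the fidelity LoRA during the remaining steps. All names (`LoRALinear`, `ToyDenoiser`, `set_active_lora`), the toy sampler update, and the `motion_frac` split point are illustrative assumptions for exposition, not the released implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with two switchable low-rank adapters
    (one for motion, one for fidelity)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.adapters = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(base.in_features, rank, bias=False),
                nn.Linear(rank, base.out_features, bias=False),
            )
            for name in ("motion", "fidelity")
        })
        for adapter in self.adapters.values():
            nn.init.zeros_(adapter[1].weight)   # adapters start as a no-op
        self.active = None                      # "motion", "fidelity", or None

    def forward(self, x):
        out = self.base(x)
        if self.active is not None:
            out = out + self.adapters[self.active](x)
        return out

class ToyDenoiser(nn.Module):
    """Stand-in for the video diffusion backbone (placeholder architecture)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.layers = nn.ModuleList(LoRALinear(nn.Linear(dim, dim)) for _ in range(4))

    def set_active_lora(self, name):
        for layer in self.layers:
            layer.active = name

    def forward(self, x, t):
        h = x
        for layer in self.layers:
            h = torch.relu(layer(h))
        return h  # predicted clean sample (toy stand-in; ignores t)

@torch.no_grad()
def sample(model, x_T, timesteps, motion_frac=0.4):
    """Denoise from x_T over `timesteps` (ordered high noise -> low noise),
    switching from the motion LoRA to the fidelity LoRA at an assumed split."""
    n_motion = int(len(timesteps) * motion_frac)
    x = x_T
    for i, t in enumerate(timesteps):
        model.set_active_lora("motion" if i < n_motion else "fidelity")
        pred = model(x, t)
        x = x + (pred - x) / (len(timesteps) - i)   # toy Euler-style update
    return x

model = ToyDenoiser()
video_latent = sample(model, torch.randn(1, 64), timesteps=torch.linspace(1.0, 0.0, 30))
```

Under this reading, only the LoRA active in a given interval modifies the frozen backbone, so each expert can be preference-trained on its own dimension (motion vs. fidelity) without the two objectives interfering at the same timestep.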

Generated Videos at 30 NFEs

AlignHuman is an audio-driven human animation framework supporting diverse visual styles (cartoons, portraits, full-body, arbitrary aspect ratios, etc.) and audio styles (singing, speaking). It addresses the challenge of balancing motion naturalness and visual fidelity.

30 NFEs vs 100 NFEs

Comparison with Other Methods

Ethics Concerns

This work is intended for research purposes only. The images and audio used in these demos were produced with AIGC tools. If there are any concerns, please contact us and we will remove the relevant content promptly.