Portrait animation from a single source image and a driving video is a long-standing problem. Recent approaches tend to adopt diffusion-based image/video generation models for realistic and expressive animation. However, none of these diffusion models achieves high-fidelity disentangled control over head pose and facial expression, hindering applications such as expression-only or pose-only editing and animation. To address this, we propose DeX-Portrait, a novel approach capable of generating expressive portrait animation driven by disentangled pose and expression signals. Specifically, we represent the pose as an explicit global transformation and the expression as an implicit latent code. First, we design a powerful motion trainer to learn both pose and expression encoders for extracting precise and decomposed driving signals. Then we inject the pose transformation into the diffusion model through a dual-branch conditioning mechanism, and the expression latent through cross-attention. Finally, we design a progressive hybrid classifier-free guidance for more faithful identity consistency. Experiments show that our method outperforms state-of-the-art baselines in both animation quality and disentangled controllability.
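To make the conditioning scheme concrete, below is a minimal PyTorch sketch (not the authors' code) of how the two driving signals could enter a diffusion denoising block: the expression latent is attended to via cross-attention, while the pose transformation (flattened to a vector here) is injected through two parallel branches, one as an additive feature bias and one as a scale/shift modulation. All module names, dimensions, and the exact form of the dual-branch injection are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DisentangledConditioningBlock(nn.Module):
    """Hypothetical denoiser sub-block conditioned on pose and expression."""

    def __init__(self, dim=320, expr_dim=512, pose_dim=12, heads=8):
        super().__init__()
        # Branch A (assumed): pose -> additive bias on the spatial tokens.
        self.pose_to_bias = nn.Sequential(
            nn.Linear(pose_dim, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        # Branch B (assumed): pose -> scale/shift modulation of normalized features.
        self.pose_to_mod = nn.Linear(pose_dim, 2 * dim)
        self.norm_pose = nn.LayerNorm(dim)
        # Expression latent enters through cross-attention, as stated in the abstract.
        self.norm_attn = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(
            dim, heads, kdim=expr_dim, vdim=expr_dim, batch_first=True
        )

    def forward(self, x, expr_latent, pose):
        # x: (B, N, dim) spatial tokens; expr_latent: (B, T, expr_dim); pose: (B, pose_dim).
        bias = self.pose_to_bias(pose).unsqueeze(1)              # branch A
        scale, shift = self.pose_to_mod(pose).chunk(2, dim=-1)   # branch B
        x = x + bias
        x = self.norm_pose(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        attn_out, _ = self.cross_attn(self.norm_attn(x), expr_latent, expr_latent)
        return x + attn_out


# Toy usage with made-up shapes: a flattened 3x4 rigid transform as the pose.
block = DisentangledConditioningBlock()
x = torch.randn(2, 64, 320)
expr = torch.randn(2, 4, 512)
pose = torch.randn(2, 12)
print(block(x, expr, pose).shape)  # torch.Size([2, 64, 320])
```

The progressive hybrid classifier-free guidance mentioned above is not sketched here, since its schedule and guidance terms are specific to the paper.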
Our method realizes expressive and precise control over head pose and facial expression in cross-reenactment scenarios.
@misc{shi2025dexportraitdisentangledexpressiveportrait,
  title={DeX-Portrait: Disentangled and Expressive Portrait Animation via Explicit and Latent Motion Representations},
  author={Yuxiang Shi and Zhe Li and Yanwen Wang and Hao Zhu and Xun Cao and Ligang Liu},
  year={2025},
  eprint={2512.15524},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.15524},
}