Conventional bidirectional DiT (used in INFP, CVPR 2025) generates long-range chunks of motion latents that include future latents for temporal consistency. This design inherently restricts real-time interaction with users, regardless of inference speed. Avatar Forcing instead generates motion latents causally via diffusion forcing, enabling interaction with users through their multimodal audio-visual signals. We introduce blockwise causal look-ahead masks for motion latent generation, which address temporal inconsistency between adjacent motion blocks.
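For illustration, the sketch below constructs one plausible form of a blockwise causal look-ahead attention mask in PyTorch: tokens attend bidirectionally within their own block, causally to all past blocks, and to a small number of future blocks so that adjacent blocks share boundary context. The function name and the `lookahead` parameter are illustrative assumptions; the exact mask used in Avatar Forcing may differ.

```python
import torch

def blockwise_causal_lookahead_mask(num_blocks: int, block_size: int,
                                     lookahead: int = 1) -> torch.Tensor:
    """Boolean attention mask of shape (L, L), with L = num_blocks * block_size.

    True = attention allowed. Tokens attend bidirectionally within their own
    block, causally to all past blocks, and to at most `lookahead` future
    blocks, so adjacent blocks overlap in context at their boundary.
    (Illustrative sketch; not the paper's exact mask.)
    """
    L = num_blocks * block_size
    block_id = torch.arange(L) // block_size   # block index of each token
    q_block = block_id[:, None]                # query token's block
    k_block = block_id[None, :]                # key token's block
    # Allow keys whose block is at most `lookahead` blocks in the future.
    return k_block <= q_block + lookahead

if __name__ == "__main__":
    mask = blockwise_causal_lookahead_mask(num_blocks=4, block_size=2, lookahead=1)
    print(mask.int())
    # The boolean mask can be passed to attention, e.g. via
    # torch.nn.functional.scaled_dot_product_attention(..., attn_mask=mask).
```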
We observe that listening avatar videos show less expressive and less active motions than talking avatar videos. Training listening motion generation on such videos leads to stiff or static motions. This limitation highlights the need to quantify activeness or naturalness, or alternatively to rely on human labels; however, obtaining consistent measurements or labels is difficult.
To tackle this problem, we introduce direct preference optimization (DPO) targeting more engaging and more active motion generation, leveraging synthesized non-active motion latents as less-preferred samples. This learning-from-losing paradigm significantly improves motion activeness and naturalness in a cost-effective way. We reformulate the DPO post-training objective in the context of the diffusion forcing framework.
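For intuition, here is a minimal sketch of a Diffusion-DPO-style preference objective, assuming the standard formulation of Wallace et al. (2024) applied to per-sequence denoising errors aggregated over motion blocks. The function name, error aggregation, and `beta` value are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_loss(err_w_theta: torch.Tensor,
                       err_w_ref: torch.Tensor,
                       err_l_theta: torch.Tensor,
                       err_l_ref: torch.Tensor,
                       beta: float = 500.0) -> torch.Tensor:
    """Diffusion-DPO-style preference loss (sketch).

    Each `err_*` is a per-sample denoising error, e.g. the MSE between the
    model's prediction and the diffusion target, averaged over the motion
    blocks of one sequence. `theta` is the trainable policy, `ref` the frozen
    reference model; `w` marks preferred (active) and `l` less-preferred
    (synthesized non-active) motion latents. `beta` controls regularization
    toward the reference; the value here is illustrative only.
    """
    diff_w = err_w_theta - err_w_ref  # policy vs. reference fit on preferred samples
    diff_l = err_l_theta - err_l_ref  # policy vs. reference fit on less-preferred samples
    # Minimizing this pushes the policy to fit preferred latents better than
    # the reference does, relative to the less-preferred ones.
    return -F.logsigmoid(-beta * (diff_w - diff_l)).mean()
```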
Avatar Forcing outperforms baselines across four metric categories: Reactiveness, Motion Richness, Visual Quality, and Lip Sync. In particular, it achieves superior rPCC scores, which measure the correlation between user and avatar motions. Avatar Forcing also achieves strong performance in a human evaluation study.
We qualitatively compare Avatar Forcing with INFP using INFP's demo video, as its official implementation is not available.
We also reproduce INFP, denoted as INFP*, and compare against it.
We conduct ablation studies on our key components: DPO and the user's motion latent. The proposed DPO strategy significantly improves reactiveness and motion richness. Additionally, incorporating the user's motion into avatar motion generation yields more reactive facial expressions (e.g., mirroring).
We provide video results, including the effect of the blockwise causal look-ahead mask, to better illustrate the ablation study.
We compare our method with talking avatar generation methods on HDTF. Avatar Forcing produces head avatar videos competitive with SOTA methods.
We compare our method with listening avatar generation methods on ViCo. Avatar Forcing produces head avatar videos competitive with SOTA methods.
@article{ki2026avatar,
title={Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation},
author={Ki, Taekyung and Jang, Sangwon and Jo, Jaehyeong and Yoon, Jaehong and Hwang, Sung Ju},
journal={arXiv preprint arXiv:2601.00664},
year={2026}
}
The source images and audio are collected from datasets or generated by Gemini. This page is based on REPA.