Avatar Forcing:
Real-Time Interactive Head Avatar Generation for Natural Conversation

1KAIST    2NTU Singapore    3DeepAuto.ai

*Equal Contribution

ArXiv 2026

** The code will be released soon. **

All videos in this page contain audio.

[TL;DR] We present Avatar Forcing, a diffusion forcing-based head avatar generation model that interacts with users through their audio-visual signals at low latency on a single GPU (H100, 14 GB). We improve interactive motions (e.g., active listening) through preference optimization, using synthesized motion latents as losing samples.


Methods


Causal Motion Generation with Diffusion Forcing

Conventional bidirectional DiT (used in INFP, CVPR 2025) generates long-range motion latent chunks that include future latents for temporal consistency. This design inherently precludes real-time interaction with users, regardless of inference speed. Avatar Forcing instead generates motion latents causally via diffusion forcing, enabling interaction with users through their multimodal audio-visual signals. We introduce blockwise causal look-ahead masks for motion latent generation, which resolve the temporal inconsistency between adjacent motion blocks; a sketch of one possible mask construction is shown below.
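
The following PyTorch sketch shows one plausible way to build such a mask. The function name, the token layout, and the interpretation of "look-ahead" (each block additionally attending to the first few tokens of the next block, to smooth block boundaries) are our assumptions, not the paper's exact implementation.

    import torch

    def blockwise_causal_lookahead_mask(num_tokens: int, block_size: int,
                                        lookahead: int = 0) -> torch.Tensor:
        """Boolean attention mask (True = may attend). Hypothetical sketch.

        Queries attend to all tokens in their own and earlier blocks
        (blockwise causal); with lookahead > 0, they also see the first
        `lookahead` tokens of the next block (our assumption).
        """
        pos = torch.arange(num_tokens)
        block_id = pos // block_size          # block index of each token
        within = pos % block_size             # position inside its block
        # blockwise causal: a query in block i attends to key blocks <= i
        mask = block_id[:, None] >= block_id[None, :]
        # look-ahead window into the immediately following block
        ahead = (block_id[None, :] == block_id[:, None] + 1) & (within[None, :] < lookahead)
        return mask | ahead

The resulting (num_tokens, num_tokens) boolean mask can be passed to a standard attention implementation, e.g., as attn_mask in torch.nn.functional.scaled_dot_product_attention, where True marks positions that may be attended to.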



Expressive & Engaging Interactive Motion Generation with Direct Preference Optimization (DPO)

We observe that listening avatars in real videos exhibit less expressive and less active motions than talking avatars. Training listening motion generation on such videos therefore yields stiff or static motions. Addressing this would require quantifying activeness or naturalness, or collecting human labels; however, consistent measurements or labels of this kind are difficult to obtain.


To tackle this problem, we introduce direct preference optimization (DPO) targeting more engaging and more active motion generation, leveraging synthesized non-active motion latents as less-preferred samples. This learning-from-losing paradigm significantly improves motion activeness and naturalness in a cost-effective way. We reformulate the DPO post-training objective in the context of the diffusion forcing framework.
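
As a concrete reference point, the sketch below implements a Diffusion-DPO-style objective (Wallace et al., 2024) over per-sample denoising errors. Treating this as the basis of the reformulation is our assumption; the actual objective, applied per motion block under diffusion forcing, may differ.

    import torch
    import torch.nn.functional as F

    def diffusion_dpo_loss(err_w_theta: torch.Tensor, err_w_ref: torch.Tensor,
                           err_l_theta: torch.Tensor, err_l_ref: torch.Tensor,
                           beta: float = 500.0) -> torch.Tensor:
        """Diffusion-DPO-style preference loss (hypothetical sketch).

        Each input is a per-sample denoising error ||eps - eps_hat||^2 at a
        shared noise level: `theta` is the trainable model, `ref` a frozen
        reference; `w` marks winning samples (real motion latents) and `l`
        losing samples (synthesized non-active latents).
        """
        diff_w = err_w_theta - err_w_ref   # policy vs. reference error on winners
        diff_l = err_l_theta - err_l_ref   # policy vs. reference error on losers
        # push the model to fit winners better than the reference, and losers worse
        return -F.logsigmoid(-beta * (diff_w - diff_l)).mean()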


Results


Comparison with Baselines

Avatar Forcing outperforms baselines across four metric categories: Reactiveness, Motion Richness, Visual Quality, and Lip Sync. In particular, it achieves superior rPCC scores, which measure the correlation between user and avatar motions. Avatar Forcing also achieves strong performance in a human evaluation study.
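
For reference, here is a minimal NumPy sketch of a PCC-style motion-correlation metric, assuming rPCC correlates each dimension of the user and avatar motion trajectories over time and averages the result; the paper's exact definition may differ.

    import numpy as np

    def motion_pcc(user_motion: np.ndarray, avatar_motion: np.ndarray) -> float:
        """Average Pearson correlation over motion dimensions (sketch).

        Both inputs are (T, D) arrays of per-frame motion features; each of
        the D dimensions is correlated over the T frames, then averaged.
        """
        u = user_motion - user_motion.mean(axis=0)
        a = avatar_motion - avatar_motion.mean(axis=0)
        num = (u * a).sum(axis=0)
        den = np.sqrt((u ** 2).sum(axis=0) * (a ** 2).sum(axis=0)) + 1e-8
        return float((num / den).mean())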


We qualitatively compare Avatar Forcing with INFP using INFP's demo video, since its official implementation is not available.


Ablation Studies

We conduct ablation studies on our key components: DPO and the user's motion latent. Note that the proposed DPO strategy significantly improves reactiveness and motion richness. Additionally, incorporating the user's motion into avatar motion generation produces more reactive facial expressions (e.g., mirroring).



We provide video results, including one for the blockwise causal look-ahead mask, to better illustrate the ablation studies.


Additional Results


Talking Avatar Generation

Listening Avatar Generation

Citation


@article{ki2026avatar,
    title={Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation},
    author={Ki, Taekyung and Jang, Sangwon and Jo, Jaehyeong and Yoon, Jaehong and Hwang, Sung Ju},
    journal={arXiv preprint arXiv:2601.00664},
    year={2026}
}

Acknowledgement


The source images and audio are collected from datasets or generated by Gemini. This page is based on REPA.