Context Forcing: Consistent Autoregressive Video Generation with Long Context
Abstract
Context Forcing resolves the student-teacher mismatch in long video generation by training a long-rollout student with a long-context teacher, using a Slow-Fast Memory architecture to keep long contexts tractable and extend effective context length beyond 20 seconds.
Recent approaches to real-time long video generation typically employ streaming tuning strategies, attempting to train a long-context student using a short-context (memoryless) teacher. In these frameworks, the student performs long rollouts but receives supervision from a teacher limited to short 5-second windows. This structural discrepancy creates a critical student-teacher mismatch: the teacher's inability to access long-term history prevents it from guiding the student on global temporal dependencies, effectively capping the student's context length. To resolve this, we propose Context Forcing, a novel framework that trains a long-context student via a long-context teacher. By ensuring the teacher is aware of the full generation history, we eliminate the supervision mismatch, enabling the robust training of models capable of long-term consistency. To make this computationally feasible for extreme durations (e.g., 2 minutes), we introduce a context management system that restructures the linearly growing context into a Slow-Fast Memory, significantly reducing visual redundancy. Extensive experiments demonstrate that our method enables effective context lengths exceeding 20 seconds -- 2 to 10 times longer than state-of-the-art methods like LongLive and Infinite-RoPE. By leveraging this extended context, Context Forcing preserves superior consistency across long durations, surpassing state-of-the-art baselines on various long video evaluation metrics.
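The abstract only describes the context-management idea at a high level; as a rough illustration (not the paper's implementation), the sketch below keeps a dense buffer of recent latent frames ("fast" memory) and temporally subsamples older frames into a bounded "slow" memory, so the attended context stops growing linearly with video length. All names and parameters (`SlowFastMemory`, `fast_capacity`, `slow_capacity`, `slow_stride`) are hypothetical assumptions.

```python
from collections import deque
from typing import List

class SlowFastMemory:
    """Toy context manager: recent frames are kept verbatim in a dense
    'fast' buffer, while frames evicted from it are temporally subsampled
    into a bounded 'slow' buffer, so the attended context stays roughly
    constant instead of growing linearly with rollout length.
    (Illustrative sketch only; not the authors' implementation.)"""

    def __init__(self, fast_capacity: int = 16,
                 slow_capacity: int = 64, slow_stride: int = 4):
        self.fast_capacity = fast_capacity   # recent frames kept at full density
        self.slow_stride = slow_stride       # keep 1 of every `slow_stride` evicted frames
        self.fast: deque = deque(maxlen=fast_capacity)
        self.slow: deque = deque(maxlen=slow_capacity)
        self._evicted = 0                    # frames pushed out of the fast buffer so far

    def append(self, frame) -> None:
        """Add a newly generated frame; spill the oldest fast frame to slow memory."""
        if len(self.fast) == self.fast_capacity:
            oldest = self.fast[0]            # will be dropped by the deque's maxlen
            if self._evicted % self.slow_stride == 0:
                self.slow.append(oldest)     # subsampling reduces visual redundancy
            self._evicted += 1
        self.fast.append(frame)

    def context(self) -> List:
        """Frames the model would attend over: sparse slow history + dense recent window."""
        return list(self.slow) + list(self.fast)
```

Under these assumptions, a caller would append each newly generated frame and build the attention context from `context()`, keeping the attended sequence bounded by roughly `fast_capacity + slow_capacity` frames regardless of video duration.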