Current frontier video diffusion models have demonstrated remarkable results at generating high-quality
videos. However, they can only generate short clips, typically around 5 seconds or 120 frames, due to
computational constraints during training. In this work, we show that existing models can be naturally
adapted into autoregressive video diffusion models without changing their architectures. Our key idea is to
assign progressively increasing noise levels to the latent frames rather than a single noise level: each
latent can then condition on all the less noisy latents before it and provide conditioning for all the
noisier latents after it. Such progressive video denoising allows our models to generate frames
autoregressively without quality degradation. We present state-of-the-art results on long video generation
at 1-minute length (1440 frames at 24 FPS).
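
As a concrete illustration, here is a minimal PyTorch sketch of the progressive denoising loop described above. The `denoiser` interface, the linear noise-level assignment, and the fixed per-step decrement are hypothetical placeholders rather than the paper's exact formulation; the sketch only shows how cleaner frames at the front of the window condition the noisier frames behind them, and how the window slides to emit frames autoregressively.

```python
import torch

@torch.no_grad()
def generate_autoregressively(denoiser, num_frames, frame_shape, step=0.05):
    """Yield denoised latent frames one at a time with a sliding noise schedule.

    `denoiser(latents, levels)` is a hypothetical interface: one denoising
    step applied to a (num_frames, *frame_shape) latent stack, where
    `levels[i]` is frame i's current noise level (0 = clean, 1 = pure noise).
    """
    # Progressively increasing noise levels: frame 0 is the cleanest, the
    # last frame is pure noise. (In practice the first window would be
    # produced by an ordinary full denoising pass; we start from pure noise
    # here for brevity.)
    levels = torch.linspace(1.0 / num_frames, 1.0, num_frames)
    latents = torch.randn(num_frames, *frame_shape)
    while True:
        # One denoising step: each latent conditions on the less noisy
        # latents before it and provides conditioning for the noisier
        # latents after it.
        latents = denoiser(latents, levels)
        levels = (levels - step).clamp(min=0.0)
        if levels[0].item() == 0.0:
            # Head frame is fully denoised: emit it, drop it from the
            # window, and append a fresh pure-noise frame at the tail.
            yield latents[0]
            latents = torch.cat([latents[1:], torch.randn(1, *frame_shape)])
            levels = torch.cat([levels[1:], torch.ones(1)])
```

Because the denoising window has a fixed length, memory and compute per emitted frame stay constant regardless of the total video length, which is what makes minute-long generation feasible under the same training budget as short clips.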