Abstract
Native 4K (2160$\times$3840) video generation remains a critical challenge: the cost of full attention grows quadratically with spatiotemporal resolution, making it difficult for models to balance efficiency and quality. This paper proposes a novel Transformer retrofit strategy, $\textbf{T3}$ ($\textbf{T}$ransform $\textbf{T}$rained $\textbf{T}$ransformer), which significantly reduces compute requirements by optimizing the forward logic of a full-attention pretrained model without altering its core architecture. Specifically, $\textbf{T3-Video}$ introduces a multi-scale weight-sharing window-attention mechanism and, through hierarchical blocking combined with an axis-preserving full-attention design, can transform the "attention pattern" of a pretrained model using only modest compute and data. Results on 4K-VBench show that $\textbf{T3-Video}$ substantially outperforms existing approaches: while delivering quality improvements (+4.29$\uparrow$ VQA and +0.08$\uparrow$ VTC), it accelerates native 4K video generation by more than 10$\times$. Project page at this https URL
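The window-attention idea underlying the described speedup can be illustrated with a minimal 1-D sketch (hypothetical code, not the paper's implementation; T3-Video uses multi-scale 3-D windows with shared weights and axis-preserving full attention). Restricting attention to non-overlapping windows of size $w$ replaces one $O(n^2)$ score matrix with $n/w$ small $O(w^2)$ ones:

```python
import numpy as np

def window_attention(x, window):
    """Single-head attention restricted to non-overlapping windows.

    x: (seq_len, dim) token features; seq_len must be divisible by window.
    Cost is O(seq_len * window * dim) per layer instead of the
    O(seq_len^2 * dim) of full attention. Q/K/V projections are taken
    as the identity here for brevity (illustrative sketch only).
    """
    n, d = x.shape
    assert n % window == 0, "seq_len must be a multiple of the window size"
    out = np.empty_like(x)
    for s in range(0, n, window):
        q = k = v = x[s:s + window]            # tokens of one window only
        scores = q @ k.T / np.sqrt(d)          # (window, window), never (n, n)
        scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        out[s:s + window] = w @ v
    return out
```

Setting `window` equal to the sequence length recovers ordinary full attention, which is why a pretrained full-attention model is a natural starting point for this kind of retrofit.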
URL
https://arxiv.org/abs/2512.13492