TransPixeler: Advancing Text-to-Video Generation with Transparency

Bibliographic Details
Title: TransPixeler: Advancing Text-to-Video Generation with Transparency
Authors: Wang, Luozhou; Li, Yijun; Chen, Zhifei; Wang, Jui-Hsien; Zhang, Zhifei; Zhang, He; Lin, Zhe; Chen, Yingcong
Publication Year: 2025
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Text-to-video generative models have made significant strides, enabling diverse applications in entertainment, advertising, and education. However, generating RGBA video, which includes alpha channels for transparency, remains a challenge due to limited datasets and the difficulty of adapting existing models. Alpha channels are crucial for visual effects (VFX), allowing transparent elements like smoke and reflections to blend seamlessly into scenes. We introduce TransPixeler, a method to extend pretrained video models for RGBA generation while retaining the original RGB capabilities. TransPixeler leverages a diffusion transformer (DiT) architecture, incorporating alpha-specific tokens and using LoRA-based fine-tuning to jointly generate RGB and alpha channels with high consistency. By optimizing attention mechanisms, TransPixeler preserves the strengths of the original RGB model and achieves strong alignment between RGB and alpha channels despite limited training data. Our approach effectively generates diverse and consistent RGBA videos, advancing the possibilities for VFX and interactive content creation.
Comment: Project page: https://wileewang.github.io/TransPixar/
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2501.03006
Accession Number: edsarx.2501.03006
Database: arXiv
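
The abstract above describes appending alpha-specific tokens to a pretrained DiT's token sequence, letting RGB and alpha tokens attend to each other, and training only LoRA adapters so the pretrained RGB behavior is preserved. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; the class and parameter names (LoRALinear, JointRGBAlphaAttention, rank, alpha_embed) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank (LoRA) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 2.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained RGB weights stay untouched
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # LoRA starts as a no-op
        self.scale = scale

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))


class JointRGBAlphaAttention(nn.Module):
    """Single-head self-attention over concatenated RGB and alpha tokens (sketch)."""

    def __init__(self, dim: int = 64, rank: int = 8):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Only the low-rank adapter on the output projection is trainable here,
        # standing in for LoRA-based fine-tuning of a pretrained video DiT.
        self.out = LoRALinear(nn.Linear(dim, dim), rank=rank)
        # Learned embedding that tags tokens as belonging to the alpha stream.
        self.alpha_embed = nn.Parameter(torch.zeros(1, 1, dim))
        self.scale = dim ** -0.5

    def forward(self, rgb_tokens, alpha_tokens):
        alpha_tokens = alpha_tokens + self.alpha_embed
        # Joint attention over the concatenated sequence lets RGB and alpha
        # tokens attend to each other, which is what keeps the channels aligned.
        x = torch.cat([rgb_tokens, alpha_tokens], dim=1)
        q, k, v = self.q(x), self.k(x), self.v(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = self.out(attn @ v)
        n_rgb = rgb_tokens.shape[1]
        return out[:, :n_rgb], out[:, n_rgb:]


if __name__ == "__main__":
    layer = JointRGBAlphaAttention(dim=64)
    rgb = torch.randn(2, 16, 64)    # batch of 2 sequences of 16 RGB video tokens
    alpha = torch.randn(2, 16, 64)  # matching alpha tokens
    rgb_out, alpha_out = layer(rgb, alpha)
    print(rgb_out.shape, alpha_out.shape)  # torch.Size([2, 16, 64]) twice
```

In this sketch only the LoRA down/up projections and the alpha token embedding receive gradients, which mirrors the abstract's claim of extending the model for RGBA output while retaining the original RGB capabilities; the real method operates inside a full pretrained video DiT rather than a single toy attention layer.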