Title: |
DreamMapping: High-Fidelity Text-to-3D Generation via Variational Distribution Mapping |
Authors: |
Cai, Zeyu; Wang, Duotun; Liang, Yixun; Shao, Zhijing; Chen, Ying-Cong; Zhan, Xiaohang; Wang, Zeyu |
Publication Year: |
2024 |
Collection: |
Computer Science |
Subject Terms: |
Computer Science - Computer Vision and Pattern Recognition, Computer Science - Graphics, I.4.9, I.3.6 |
More Details: |
Score Distillation Sampling (SDS) has emerged as a prevalent technique for text-to-3D generation, enabling 3D content creation by distilling view-dependent information from text-to-2D guidance. However, SDS frequently exhibits shortcomings such as over-saturated colors and excessive smoothness. In this paper, we conduct a thorough analysis of SDS and refine its formulation, finding that its core design is to model the distribution of rendered images. Following this insight, we introduce a novel strategy called Variational Distribution Mapping (VDM), which expedites the distribution modeling process by treating rendered images as degraded instances of diffusion-based generation. This design enables efficient training of the variational distribution by skipping the calculation of Jacobians in the diffusion U-Net. We also introduce timestep-dependent Distribution Coefficient Annealing (DCA) to further improve distillation precision. Leveraging VDM and DCA, we use Gaussian Splatting as the 3D representation and build a text-to-3D generation framework. Extensive experiments and evaluations demonstrate that VDM and DCA generate high-fidelity, realistic assets with improved optimization efficiency. Comment: 15 pages, 14 figures |
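Note: The abstract's key technical point, that an SDS-style objective can be optimized without backpropagating through the diffusion U-Net, is illustrated by the minimal sketch below. This shows the standard SDS Jacobian-skipping trick from the score-distillation literature, not the paper's specific VDM formulation; names such as `render`, `unet`, `alphas_cumprod`, and `w_t` are hypothetical placeholders, not the authors' actual API.

```python
# Hypothetical sketch of one SDS-style update that skips the U-Net Jacobian.
import torch

def sds_grad_step(params, render, unet, text_emb, alphas_cumprod, t, w_t):
    """One SDS-style update on 3D parameters `params` (assumed sketch).

    render(params)            -> image tensor x of shape (B, C, H, W)
    unet(x_t, t, text_emb)    -> predicted noise eps_hat, same shape as x_t
    alphas_cumprod            -> 1-D tensor of cumulative noise-schedule terms
    """
    x = render(params)                        # differentiable rendering
    eps = torch.randn_like(x)                 # forward-diffusion noise
    a_t = alphas_cumprod[t]
    x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps

    with torch.no_grad():                     # no U-Net Jacobian is built
        eps_hat = unet(x_t, t, text_emb)

    # Treat (eps_hat - eps) as a constant residual; gradients then flow only
    # through x, so backprop touches the renderer but never the U-Net.
    grad = w_t * (eps_hat - eps)
    loss = (grad.detach() * x).sum()          # d(loss)/d(params) = grad^T dx/d(params)
    loss.backward()
```

Because the residual is detached, `loss.backward()` reproduces the SDS gradient while the diffusion network is queried in inference mode only; the abstract's VDM goes further by also replacing how the rendered-image distribution itself is modeled.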
Document Type: |
Working Paper |
Access URL: |
http://arxiv.org/abs/2409.05099 |
Accession Number: |
edsarx.2409.05099 |
Database: |
arXiv |