Searching Priors Makes Text-to-Video Synthesis Better

Bibliographic Details
Title: Searching Priors Makes Text-to-Video Synthesis Better
Authors: Cheng, Haoran, Peng, Liang, Xia, Linxuan, Hu, Yuepeng, Li, Hengjia, Lu, Qinglin, He, Xiaofei, Wu, Boxi
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Significant advancements in video diffusion models have brought substantial progress to the field of text-to-video (T2V) synthesis. However, existing T2V synthesis models struggle to accurately generate complex motion dynamics, leading to a reduction in video realism. One possible solution is to collect massive data and train the model on it, but this would be extremely expensive. To alleviate this problem, in this paper, we reformulate the typical T2V generation process as a search-based generation pipeline. Instead of scaling up the model training, we employ existing videos as the motion prior database. Specifically, we divide the T2V generation process into two steps: (i) for a given prompt input, we search existing text-video datasets to find videos with text labels that closely match the prompt motions, using a tailored search algorithm that emphasizes object motion features; (ii) retrieved videos are processed and distilled into motion priors to fine-tune a pre-trained base T2V model, after which the desired videos are generated from the input prompt. By utilizing the priors gleaned from the searched videos, we enhance the motion realism of the generated videos. All operations can be finished on a single NVIDIA RTX 4090 GPU. We validate our method against state-of-the-art T2V models across diverse prompt inputs. The code will be made public. An illustrative sketch of this two-step pipeline follows these details.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.03215
Accession Number: edsarx.2406.03215
Database: arXiv
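As a rough, illustrative sketch of the two-step, search-based pipeline the abstract outlines, the Python snippet below ranks a toy text-video dataset by a simple token-overlap score, standing in for the paper's motion-focused search algorithm, and reports the top matches as the clips whose motion priors would be distilled to fine-tune the base T2V model. All function names, the scoring rule, and the toy data are assumptions for illustration, not the authors' implementation.

# Minimal, self-contained sketch of the two-step pipeline described in the abstract.
# The token-overlap scoring and every name here are illustrative assumptions,
# not the paper's actual search algorithm or training code.
from typing import Dict, List

def motion_similarity(prompt: str, caption: str) -> float:
    """Toy stand-in for motion-aware matching: plain token overlap
    between the prompt and a video's text label."""
    p, c = set(prompt.lower().split()), set(caption.lower().split())
    return len(p & c) / max(len(p | c), 1)

def search_motion_videos(prompt: str, dataset: List[Dict], top_k: int = 2) -> List[Dict]:
    """Step (i): rank existing text-video pairs by how closely their
    labels match the motion in the prompt; keep the top matches."""
    ranked = sorted(dataset, key=lambda v: motion_similarity(prompt, v["caption"]), reverse=True)
    return ranked[:top_k]

def generate_with_priors(prompt: str, dataset: List[Dict]) -> str:
    """Step (ii): in the real pipeline, the retrieved clips would be distilled
    into motion priors used to fine-tune a pre-trained base T2V model before
    sampling; this stub only reports which clips would serve as priors."""
    priors = search_motion_videos(prompt, dataset)
    ids = [v["id"] for v in priors]
    return f"fine-tune base T2V model on motion priors from {ids}, then sample '{prompt}'"

if __name__ == "__main__":
    toy_dataset = [
        {"id": "vid_001", "caption": "a dog running across a grassy field"},
        {"id": "vid_002", "caption": "a person pouring coffee into a cup"},
        {"id": "vid_003", "caption": "a horse running along the beach"},
    ]
    print(generate_with_priors("a cat running through tall grass", toy_dataset))

Running the snippet prints a line naming the retrieved clips (here vid_001 and vid_003, whose captions share the word "running" with the prompt) that would supply motion priors, mirroring the retrieval-then-fine-tune split the abstract describes.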
FullText Text:
  Availability: 0
CustomLinks:
  – Url: http://arxiv.org/abs/2406.03215
    Name: EDS - Arxiv
    Category: fullText
    Text: View this record from Arxiv
    MouseOverText: View this record from Arxiv
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edsarx&genre=article&issn=&ISBN=&volume=&issue=&date=20240605&spage=&pages=&title=Searching Priors Makes Text-to-Video Synthesis Better&atitle=Searching%20Priors%20Makes%20Text-to-Video%20Synthesis%20Better&aulast=Cheng%2C%20Haoran&id=DOI:
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
Header DbId: edsarx
DbLabel: arXiv
An: edsarx.2406.03215
RelevancyScore: 1098
AccessLevel: 3
PubType: Report
PubTypeId: report
PreciseRelevancyScore: 1098.04272460938
IllustrationInfo
Items – Name: Title
  Label: Title
  Group: Ti
  Data: Searching Priors Makes Text-to-Video Synthesis Better
– Name: Author
  Label: Authors
  Group: Au
  Data: <searchLink fieldCode="AR" term="%22Cheng%2C+Haoran%22">Cheng, Haoran</searchLink><br /><searchLink fieldCode="AR" term="%22Peng%2C+Liang%22">Peng, Liang</searchLink><br /><searchLink fieldCode="AR" term="%22Xia%2C+Linxuan%22">Xia, Linxuan</searchLink><br /><searchLink fieldCode="AR" term="%22Hu%2C+Yuepeng%22">Hu, Yuepeng</searchLink><br /><searchLink fieldCode="AR" term="%22Li%2C+Hengjia%22">Li, Hengjia</searchLink><br /><searchLink fieldCode="AR" term="%22Lu%2C+Qinglin%22">Lu, Qinglin</searchLink><br /><searchLink fieldCode="AR" term="%22He%2C+Xiaofei%22">He, Xiaofei</searchLink><br /><searchLink fieldCode="AR" term="%22Wu%2C+Boxi%22">Wu, Boxi</searchLink>
– Name: DatePubCY
  Label: Publication Year
  Group: Date
  Data: 2024
– Name: Subset
  Label: Collection
  Group: HoldingsInfo
  Data: Computer Science
– Name: Subject
  Label: Subject Terms
  Group: Su
  Data: <searchLink fieldCode="DE" term="%22Computer+Science+-+Computer+Vision+and+Pattern+Recognition%22">Computer Science - Computer Vision and Pattern Recognition</searchLink>
– Name: Abstract
  Label: Description
  Group: Ab
  Data: Significant advancements in video diffusion models have brought substantial progress to the field of text-to-video (T2V) synthesis. However, existing T2V synthesis models struggle to accurately generate complex motion dynamics, leading to a reduction in video realism. One possible solution is to collect massive data and train the model on it, but this would be extremely expensive. To alleviate this problem, in this paper, we reformulate the typical T2V generation process as a search-based generation pipeline. Instead of scaling up the model training, we employ existing videos as the motion prior database. Specifically, we divide the T2V generation process into two steps: (i) for a given prompt input, we search existing text-video datasets to find videos with text labels that closely match the prompt motions, using a tailored search algorithm that emphasizes object motion features; (ii) retrieved videos are processed and distilled into motion priors to fine-tune a pre-trained base T2V model, after which the desired videos are generated from the input prompt. By utilizing the priors gleaned from the searched videos, we enhance the motion realism of the generated videos. All operations can be finished on a single NVIDIA RTX 4090 GPU. We validate our method against state-of-the-art T2V models across diverse prompt inputs. The code will be made public.
– Name: TypeDocument
  Label: Document Type
  Group: TypDoc
  Data: Working Paper
– Name: URL
  Label: Access URL
  Group: URL
  Data: http://arxiv.org/abs/2406.03215
– Name: AN
  Label: Accession Number
  Group: ID
  Data: edsarx.2406.03215
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2406.03215
RecordInfo BibRecord:
  BibEntity:
    Subjects:
      – SubjectFull: Computer Science - Computer Vision and Pattern Recognition
        Type: general
    Titles:
      – TitleFull: Searching Priors Makes Text-to-Video Synthesis Better
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Cheng, Haoran
      – PersonEntity:
          Name:
            NameFull: Peng, Liang
      – PersonEntity:
          Name:
            NameFull: Xia, Linxuan
      – PersonEntity:
          Name:
            NameFull: Hu, Yuepeng
      – PersonEntity:
          Name:
            NameFull: Li, Hengjia
      – PersonEntity:
          Name:
            NameFull: Lu, Qinglin
      – PersonEntity:
          Name:
            NameFull: He, Xiaofei
      – PersonEntity:
          Name:
            NameFull: Wu, Boxi
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 05
              M: 06
              Type: published
              Y: 2024
ResultId 1