Training Matting Models without Alpha Labels

Bibliographic Details
Title: Training Matting Models without Alpha Labels
Authors: Liu, Wenze, Ye, Zixuan, Lu, Hao, Cao, Zhiguo, Yue, Xiangyu
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Labelling difficulty has been a longstanding problem in deep image matting. To escape fine labels, this work explores using rough annotations, such as trimaps coarsely indicating the foreground/background, as supervision. We show that the cooperation between semantics learned from the indicated known regions and properly assumed matting rules can help infer alpha values in transition areas. Inspired by the nonlocal principle in traditional image matting, we build a directional distance consistency loss (DDC loss) over each pixel neighborhood to constrain the alpha values conditioned on the input image. The DDC loss forces the distance between similar pairs on the alpha matte and on its corresponding image to be consistent. In this way, alpha values can be propagated from the learned known regions to the unknown transition areas. With only images and trimaps, a matting model can be trained under the supervision of a known loss (on the trimap-indicated regions) and the proposed DDC loss; a hedged code sketch of these losses follows the record details below. Experiments on the AM-2K and P3M-10K datasets show that our paradigm achieves performance comparable to the fine-label-supervised baseline, while sometimes offering even more satisfying results than the human-labelled ground truth. Code is available at https://github.com/poppuppy/alpha-free-matting.
Comment: 12 pages, 12 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.10539
Accession Number: edsarx.2408.10539
Database: arXiv
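
Illustrative sketch (not part of the record): the abstract above names two training signals, a known loss on trimap-indicated regions and the DDC loss over pixel neighborhoods. The PyTorch code below is a rough, hypothetical sketch of how such losses might look; it is not the authors' implementation (see the repository linked in the abstract). The neighborhood size, the color-similarity threshold sim_tau, the squared-difference penalty, and the function names are all assumptions, and the directional weighting of the actual DDC loss is not reproduced here.

```python
import torch
import torch.nn.functional as F


def known_loss(alpha_pred, trimap):
    # Supervise only the trimap-known pixels: trimap == 1 marks definite
    # foreground, trimap == 0 definite background; transition pixels are ignored.
    known_fg = (trimap == 1.0).float()
    known_bg = (trimap == 0.0).float()
    known = known_fg + known_bg
    target = known_fg  # alpha is 1 in known foreground, 0 in known background
    return ((alpha_pred - target).abs() * known).sum() / known.sum().clamp(min=1.0)


def distance_consistency_loss(image, alpha, kernel_size=3, sim_tau=0.1):
    # image: (B, 3, H, W) in [0, 1]; alpha: (B, 1, H, W) in [0, 1].
    b, _, h, w = image.shape
    pad = kernel_size // 2
    k2 = kernel_size * kernel_size

    # Gather each pixel's local neighborhood with unfold: (B, C * k2, H * W).
    img_neigh = F.unfold(image, kernel_size, padding=pad).reshape(b, 3, k2, h * w)
    alp_neigh = F.unfold(alpha, kernel_size, padding=pad).reshape(b, 1, k2, h * w)
    img_center = image.reshape(b, 3, 1, h * w)
    alp_center = alpha.reshape(b, 1, 1, h * w)

    # Center-to-neighbor distances in image space and in alpha space.
    d_img = (img_neigh - img_center).abs().mean(dim=1)   # (B, k2, H*W)
    d_alp = (alp_neigh - alp_center).abs().squeeze(1)    # (B, k2, H*W)

    # Constrain only "similar" pairs, i.e. neighbors close in color.
    similar = (d_img < sim_tau).float()

    # Encourage the alpha-space distance of similar pairs to match their
    # image-space distance, so alpha can propagate along smooth image regions.
    loss = (similar * (d_alp - d_img) ** 2).sum() / similar.sum().clamp(min=1.0)
    return loss
```

A training step would then minimize something like known_loss(pred, trimap) + lambda_ddc * distance_consistency_loss(image, pred), with the weight lambda_ddc treated as a hyperparameter whose value is not given in this record.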
FullText Text:
  Availability: 0
CustomLinks:
  – Url: http://arxiv.org/abs/2408.10539
    Name: EDS - Arxiv
    Category: fullText
    Text: View this record from Arxiv
    MouseOverText: View this record from Arxiv
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edsarx&genre=article&issn=&ISBN=&volume=&issue=&date=20240820&spage=&pages=&title=Training Matting Models without Alpha Labels&atitle=Training%20Matting%20Models%20without%20Alpha%20Labels&aulast=Liu%2C%20Wenze&id=DOI:
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
Header DbId: edsarx
DbLabel: arXiv
An: edsarx.2408.10539
RelevancyScore: 1112
AccessLevel: 3
PubType: Report
PubTypeId: report
PreciseRelevancyScore: 1112.24450683594
IllustrationInfo
Items:
– Name: Title
  Label: Title
  Group: Ti
  Data: Training Matting Models without Alpha Labels
– Name: Author
  Label: Authors
  Group: Au
  Data: Liu, Wenze; Ye, Zixuan; Lu, Hao; Cao, Zhiguo; Yue, Xiangyu
– Name: DatePubCY
  Label: Publication Year
  Group: Date
  Data: 2024
– Name: Subset
  Label: Collection
  Group: HoldingsInfo
  Data: Computer Science
– Name: Subject
  Label: Subject Terms
  Group: Su
  Data: Computer Science - Computer Vision and Pattern Recognition
– Name: Abstract
  Label: Description
  Group: Ab
  Data: Labelling difficulty has been a longstanding problem in deep image matting. To escape fine labels, this work explores using rough annotations, such as trimaps coarsely indicating the foreground/background, as supervision. We show that the cooperation between semantics learned from the indicated known regions and properly assumed matting rules can help infer alpha values in transition areas. Inspired by the nonlocal principle in traditional image matting, we build a directional distance consistency loss (DDC loss) over each pixel neighborhood to constrain the alpha values conditioned on the input image. The DDC loss forces the distance between similar pairs on the alpha matte and on its corresponding image to be consistent. In this way, alpha values can be propagated from the learned known regions to the unknown transition areas. With only images and trimaps, a matting model can be trained under the supervision of a known loss (on the trimap-indicated regions) and the proposed DDC loss. Experiments on the AM-2K and P3M-10K datasets show that our paradigm achieves performance comparable to the fine-label-supervised baseline, while sometimes offering even more satisfying results than the human-labelled ground truth. Code is available at https://github.com/poppuppy/alpha-free-matting. Comment: 12 pages, 12 figures
– Name: TypeDocument
  Label: Document Type
  Group: TypDoc
  Data: Working Paper
– Name: URL
  Label: Access URL
  Group: URL
  Data: http://arxiv.org/abs/2408.10539
– Name: AN
  Label: Accession Number
  Group: ID
  Data: edsarx.2408.10539
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2408.10539
RecordInfo BibRecord:
  BibEntity:
    Subjects:
      – SubjectFull: Computer Science - Computer Vision and Pattern Recognition
        Type: general
    Titles:
      – TitleFull: Training Matting Models without Alpha Labels
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Liu, Wenze
      – PersonEntity:
          Name:
            NameFull: Ye, Zixuan
      – PersonEntity:
          Name:
            NameFull: Lu, Hao
      – PersonEntity:
          Name:
            NameFull: Cao, Zhiguo
      – PersonEntity:
          Name:
            NameFull: Yue, Xiangyu
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 20
              M: 08
              Type: published
              Y: 2024
ResultId 1