Beyond pixel-wise supervision for segmentation: A few global shape descriptors might be surprisingly good!
| Field | Value |
| --- | --- |
| Title | Beyond pixel-wise supervision for segmentation: A few global shape descriptors might be surprisingly good! |
| Authors | Kervadec, Hoel; Bahig, Houda; Letourneau-Guillon, Laurent; Dolz, Jose; Ayed, Ismail Ben |
| Publication Year | 2021 |
| Collection | Computer Science |
| Subject Terms | Computer Science - Computer Vision and Pattern Recognition |
| Description | Standard losses for training deep segmentation networks can be seen as individual classifications of pixels, rather than supervision of the global shape of the predicted segmentations. While effective, they require exact knowledge of the label of each pixel in an image. This study investigates how effective global geometric shape descriptors can be when used on their own as segmentation losses for training deep networks. Beyond the theoretical interest, there are deeper motivations for posing segmentation as the reconstruction of shape descriptors: annotations approximating low-order shape moments could be much less cumbersome to obtain than their full-mask counterparts, and anatomical priors could be readily encoded into invariant shape descriptions, which might alleviate the annotation burden. Also, and most importantly, we hypothesize that, for a given task, certain shape descriptors might be invariant across image acquisition protocols/modalities and subject populations, which might open interesting research avenues for generalization in medical image segmentation. We introduce and formulate a few shape descriptors in the context of deep segmentation, and evaluate their potential as standalone losses on two different challenging tasks. Inspired by recent works in constrained optimization for deep networks, we propose a way to use those descriptors to supervise segmentation without any pixel-level label. Very surprisingly, as few as 4 descriptor values per class can approach the performance of a segmentation mask with 65k individual discrete labels. We also found that shape descriptors can be a valid way to encode anatomical priors about the task, making it possible to leverage expert knowledge without additional annotations. Our implementation is publicly available and can easily be extended to other tasks and descriptors: https://github.com/hkervadec/shape_descriptors Comment: Accepted at Medical Imaging with Deep Learning (MIDL) 2021. (An illustrative sketch of a descriptor-based loss follows this record.) |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2105.00859 |
| Accession Number | edsarx.2105.00859 |
| Database | arXiv |
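The abstract above describes supervising a segmentation network with only a handful of global shape descriptors per class (e.g., low-order shape moments such as class size and centroid) instead of a full pixel-wise mask. Below is a minimal PyTorch sketch of that idea; it is not the authors' released implementation (see the GitHub link in the record). The descriptor choices, function names, and the plain squared-error penalty are illustrative assumptions, whereas the paper itself frames the supervision as a constrained-optimization problem.

```python
import torch


def soft_size(probs: torch.Tensor) -> torch.Tensor:
    """Soft size (area) per class: sum of predicted probabilities over all pixels.

    probs: (B, C, H, W) softmax output of the network.
    Returns a (B, C) tensor of per-class soft pixel counts.
    """
    return probs.sum(dim=(2, 3))


def soft_centroid(probs: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft centroid per class: probability-weighted mean of pixel coordinates.

    Returns a (B, C, 2) tensor of (row, col) centroids.
    """
    _, _, H, W = probs.shape
    ys = torch.arange(H, dtype=probs.dtype, device=probs.device).view(1, 1, H, 1)
    xs = torch.arange(W, dtype=probs.dtype, device=probs.device).view(1, 1, 1, W)
    size = probs.sum(dim=(2, 3)) + eps            # (B, C); eps avoids division by zero
    cy = (probs * ys).sum(dim=(2, 3)) / size       # (B, C) row coordinate
    cx = (probs * xs).sum(dim=(2, 3)) / size       # (B, C) column coordinate
    return torch.stack((cy, cx), dim=-1)


def descriptor_loss(logits: torch.Tensor,
                    target_size: torch.Tensor,      # (B, C) target soft sizes
                    target_centroid: torch.Tensor,  # (B, C, 2) target centroids
                    ) -> torch.Tensor:
    """Penalize deviation of predicted descriptors from a few target values per class.

    A plain squared-error surrogate used here for illustration; the paper instead
    enforces descriptor values through a constrained-optimization formulation.
    """
    probs = torch.softmax(logits, dim=1)
    size_term = ((soft_size(probs) - target_size) ** 2).mean()
    centroid_term = ((soft_centroid(probs) - target_centroid) ** 2).mean()
    return size_term + centroid_term
```

For a 256x256 image (65,536 pixels), the three values per class used here (a size plus a 2-D centroid) are on the order of the "4 descriptor values per class" that the abstract contrasts with the roughly 65k individual pixel labels of a full segmentation mask.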