Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning

Bibliographic Details
Title: Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
Authors: Choi, Moonseok, Lee, Hyungi, Nam, Giung, Lee, Juho
Publication Date: May 24, 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence
More Details: Given the ever-increasing size of modern neural networks, the significance of sparse architectures has surged due to their accelerated inference speeds and minimal memory demands. When it comes to global pruning techniques, Iterative Magnitude Pruning (IMP) still stands as a state-of-the-art algorithm despite its simple nature, particularly in extremely sparse regimes. In light of the recent finding that two successive matching IMP solutions are linearly connected without a loss barrier, we propose Sparse Weight Averaging with Multiple Particles (SWAMP), a straightforward modification of IMP that achieves performance comparable to an ensemble of two IMP solutions. For every iteration, we concurrently train multiple sparse models, referred to as particles, using different batch orders yet the same matching ticket, and then weight-average these models to produce a single mask. We demonstrate that our method consistently outperforms existing baselines across different sparsities through extensive experiments on various data and neural network structures. (A code sketch of the SWAMP loop follows the record fields below.)
Comment: ICLR 2024
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2305.14852
Accession Number: edsarx.2305.14852
Database: arXiv
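
Algorithm Sketch

The abstract describes SWAMP as IMP with multiple weight-averaged particles per pruning round. The snippet below is a minimal, self-contained sketch of that loop on a toy NumPy logistic-regression problem; everything in it (the model, `train_particle`, `PRUNE_FRACTION`, the per-round 20% pruning rate) is an illustrative assumption, not the paper's actual setup or code.

```python
import numpy as np

# Minimal sketch of the SWAMP loop on a toy logistic-regression problem.
# NOT the authors' implementation: the model, the hyperparameters
# (PRUNE_FRACTION, N_PARTICLES, N_ROUNDS), and helper names such as
# train_particle are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 100))
true_w = rng.normal(size=100) * (rng.random(100) < 0.1)  # sparse ground truth
y = (X @ true_w + 0.1 * rng.normal(size=512) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def train_particle(ticket, mask, seed, epochs=20, lr=0.5, batch=64):
    """SGD on the masked model; only the batch order depends on the seed."""
    w = ticket.copy()
    order_rng = np.random.default_rng(seed)
    n = X.shape[0]
    for _ in range(epochs):
        idx = order_rng.permutation(n)        # particle-specific batch order
        for s in range(0, n, batch):
            b = idx[s:s + batch]
            grad = X[b].T @ (sigmoid(X[b] @ (w * mask)) - y[b]) / len(b)
            w -= lr * grad * mask             # update unpruned weights only
    return w

PRUNE_FRACTION = 0.2   # prune 20% of remaining weights per round (assumed)
N_PARTICLES = 4        # particles averaged per round (assumed)
N_ROUNDS = 5

mask = np.ones(100)
ticket = 0.01 * rng.normal(size=100)          # shared "matching ticket"

for t in range(N_ROUNDS):
    # Train several particles from the same ticket and mask, differing
    # only in batch order, then weight-average them into one sparse model.
    particles = [train_particle(ticket, mask, seed=100 * t + k)
                 for k in range(N_PARTICLES)]
    w_avg = np.mean(particles, axis=0)

    # Global magnitude pruning on the averaged weights yields a single mask.
    alive = np.flatnonzero(mask)
    k_prune = int(PRUNE_FRACTION * alive.size)
    prune_idx = alive[np.argsort(np.abs(w_avg[alive]))[:k_prune]]
    mask[prune_idx] = 0.0

    acc = np.mean((sigmoid(X @ (w_avg * mask)) > 0.5) == y)
    print(f"round {t}: sparsity={1 - mask.mean():.2f}, train acc={acc:.3f}")
```

The averaging step leans on the linear-connectivity observation quoted in the abstract: particles trained from a common matching ticket with different batch orders are expected to stay in the same loss basin, so their weight average behaves like an ensemble while costing only a single sparse model at inference.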