Vision Transformer with Attention Map Hallucination and FFN Compaction

Bibliographic Details
Title: Vision Transformer with Attention Map Hallucination and FFN Compaction
Authors: Xu, Haiyang, Zhou, Zhichao, He, Dongliang, Li, Fu, Wang, Jingdong
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Vision Transformer (ViT) is now dominating many vision tasks. The quadratic complexity of its token-wise multi-head self-attention (MHSA) is extensively addressed via either token sparsification or dimension reduction (spatial or channel). However, the redundancy within MHSA itself is usually overlooked, as is the feed-forward network (FFN). To this end, we propose attention map hallucination and FFN compaction to fill this gap. Specifically, we observe that similar attention maps exist in vanilla ViT and propose to hallucinate half of the attention maps from the rest with much cheaper operations, yielding hallucinated-MHSA (hMHSA). As for the FFN, we factorize its hidden-to-output projection matrix and leverage the re-parameterization technique to strengthen its capability, making it a compact FFN (cFFN). With our proposed modules, a 10%-20% reduction in floating point operations (FLOPs) and parameters (Params) is achieved for various ViT-based backbones, including straight (DeiT), hybrid (NextViT) and hierarchical (PVT) structures, while performance remains quite competitive.
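To make the hMHSA idea concrete, the following is a minimal PyTorch sketch in which attention maps are computed for only half of the heads and the remaining maps are "hallucinated" from them with a cheap 1x1 convolution across the head dimension. The mixing operator and the names (HallucinatedMHSA, mix) are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HallucinatedMHSA(nn.Module):
    """Sketch: compute Q/K attention for half the heads, hallucinate the rest
    with a cheap cross-head mix (assumed operator; the paper's may differ)."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        assert num_heads % 2 == 0 and dim % num_heads == 0
        self.num_heads = num_heads
        self.half = num_heads // 2
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Q/K projections only for the "real" half of the heads.
        self.qk = nn.Linear(dim, 2 * self.half * self.head_dim)
        self.v = nn.Linear(dim, dim)
        # Cheap hallucination: 1x1 conv mixing the computed maps across heads,
        # avoiding an extra QK^T for the hallucinated half.
        self.mix = nn.Conv2d(self.half, self.half, kernel_size=1)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, _ = x.shape
        q, k = self.qk(x).reshape(B, N, 2, self.half, self.head_dim).permute(2, 0, 3, 1, 4)
        logits = (q @ k.transpose(-2, -1)) * self.scale          # (B, half, N, N)
        halluc = self.mix(logits)                                # hallucinated logits
        attn = torch.cat([logits, halluc], dim=1).softmax(dim=-1)  # (B, heads, N, N)
        v = self.v(x).reshape(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)
```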
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2306.10875
Accession Number: edsarx.2306.10875
Database: arXiv
FullText Text:
  Availability: 0
CustomLinks:
  – Url: http://arxiv.org/abs/2306.10875
    Name: EDS - Arxiv
    Category: fullText
    Text: View this record from Arxiv
    MouseOverText: View this record from Arxiv
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edsarx&genre=article&issn=&ISBN=&volume=&issue=&date=20230619&spage=&pages=&title=Vision Transformer with Attention Map Hallucination and FFN Compaction&atitle=Vision%20Transformer%20with%20Attention%20Map%20Hallucination%20and%20FFN%20Compaction&aulast=Xu%2C%20Haiyang&id=DOI:
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
Header DbId: edsarx
DbLabel: arXiv
An: edsarx.2306.10875
RelevancyScore: 1057
AccessLevel: 3
PubType: Report
PubTypeId: report
PreciseRelevancyScore: 1057.34448242188
IllustrationInfo
Items – Name: Title
  Label: Title
  Group: Ti
  Data: Vision Transformer with Attention Map Hallucination and FFN Compaction
– Name: Author
  Label: Authors
  Group: Au
  Data: Xu, Haiyang; Zhou, Zhichao; He, Dongliang; Li, Fu; Wang, Jingdong
– Name: DatePubCY
  Label: Publication Year
  Group: Date
  Data: 2023
– Name: Subset
  Label: Collection
  Group: HoldingsInfo
  Data: Computer Science
– Name: Subject
  Label: Subject Terms
  Group: Su
  Data: Computer Science - Computer Vision and Pattern Recognition
– Name: Abstract
  Label: Description
  Group: Ab
  Data: Vision Transformer (ViT) is now dominating many vision tasks. The quadratic complexity of its token-wise multi-head self-attention (MHSA) is extensively addressed via either token sparsification or dimension reduction (spatial or channel). However, the redundancy within MHSA itself is usually overlooked, as is the feed-forward network (FFN). To this end, we propose attention map hallucination and FFN compaction to fill this gap. Specifically, we observe that similar attention maps exist in vanilla ViT and propose to hallucinate half of the attention maps from the rest with much cheaper operations, yielding hallucinated-MHSA (hMHSA). As for the FFN, we factorize its hidden-to-output projection matrix and leverage the re-parameterization technique to strengthen its capability, making it a compact FFN (cFFN). With our proposed modules, a 10%-20% reduction in floating point operations (FLOPs) and parameters (Params) is achieved for various ViT-based backbones, including straight (DeiT), hybrid (NextViT) and hierarchical (PVT) structures, while performance remains quite competitive.
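To illustrate the cFFN side, the sketch below factorizes the hidden-to-output projection into a low-rank pair (hidden -> rank -> dim) and keeps a parallel linear branch during training that is folded into the main weight at inference, as a generic structural re-parameterization. The rank, the branch layout, and the merge scheme are assumptions and may differ from the paper's.

```python
import torch
import torch.nn as nn

class CompactFFN(nn.Module):
    """Sketch of a compact FFN: low-rank hidden-to-output projection plus a
    training-time parallel branch merged at inference (assumed scheme)."""
    def __init__(self, dim, hidden_dim, rank):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.act = nn.GELU()
        # Factorized hidden-to-output projection: W2 (hidden x dim) ~= up @ down.
        self.down = nn.Linear(hidden_dim, rank, bias=False)      # hidden -> rank
        self.down_rep = nn.Linear(hidden_dim, rank, bias=False)  # training-only branch
        self.up = nn.Linear(rank, dim)                           # rank -> dim
        self.merged = False

    @torch.no_grad()
    def merge(self):
        # Fold the parallel branch into `down`; valid by linearity:
        # (A + A_rep) h == A h + A_rep h.
        self.down.weight += self.down_rep.weight
        self.merged = True

    def forward(self, x):
        h = self.act(self.fc1(x))
        z = self.down(h) if self.merged else self.down(h) + self.down_rep(h)
        return self.up(z)
```

Choosing rank well below hidden_dim is what yields the FLOPs/Params reduction claimed in the abstract; calling merge() before deployment removes the extra branch cost.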
– Name: TypeDocument
  Label: Document Type
  Group: TypDoc
  Data: Working Paper
– Name: URL
  Label: Access URL
  Group: URL
  Data: http://arxiv.org/abs/2306.10875
– Name: AN
  Label: Accession Number
  Group: ID
  Data: edsarx.2306.10875
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edsarx&AN=edsarx.2306.10875
RecordInfo BibRecord:
  BibEntity:
    Subjects:
      – SubjectFull: Computer Science - Computer Vision and Pattern Recognition
        Type: general
    Titles:
      – TitleFull: Vision Transformer with Attention Map Hallucination and FFN Compaction
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Xu, Haiyang
      – PersonEntity:
          Name:
            NameFull: Zhou, Zhichao
      – PersonEntity:
          Name:
            NameFull: He, Dongliang
      – PersonEntity:
          Name:
            NameFull: Li, Fu
      – PersonEntity:
          Name:
            NameFull: Wang, Jingdong
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 19
              M: 06
              Type: published
              Y: 2023
ResultId 1