LIPT: Latency-aware Image Processing Transformer

Bibliographic Details
Title: LIPT: Latency-aware Image Processing Transformer
Authors: Qiao, Junbo; Li, Wei; Xie, Haizhen; Chen, Hanting; Zhou, Yunshuai; Tu, Zhijun; Hu, Jie; Lin, Shaohui
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Transformers are leading a trend in the field of image processing. Despite the great success of existing lightweight image processing transformers, they are tailored to reducing FLOPs or parameter counts rather than to practical inference acceleration. In this paper, we present a latency-aware image processing transformer, termed LIPT. We devise the low-latency proportion LIPT block, which substitutes memory-intensive operators with a combination of self-attention and convolutions to achieve practical speedup. Specifically, we propose a novel non-volatile sparse masking self-attention (NVSM-SA) that uses a pre-computed sparse mask to capture contextual information from a larger window with no extra computation overhead. In addition, a high-frequency reparameterization module (HRM) is proposed to make the LIPT block reparameterization-friendly, which improves the model's detail reconstruction capability. Extensive experiments on multiple image processing tasks (e.g., image super-resolution (SR), JPEG artifact reduction, and image denoising) demonstrate the superiority of LIPT in both latency and PSNR. LIPT achieves real-time GPU inference with state-of-the-art performance on multiple image SR benchmarks.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.06075
Accession Number: edsarx.2404.06075
Database: arXiv
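
Illustrative sketch: the abstract above describes NVSM-SA as attention over a larger window whose key/value positions are selected by a sparse mask computed once in advance ("non-volatile"), so the wider context adds no per-forward computation for mask construction. The Python/PyTorch snippet below is a minimal, hypothetical sketch of that idea only; the function names, the random mask construction, and the single-head, projection-free attention are assumptions for illustration, not the authors' implementation.

import torch

# Minimal sketch (assumption, not the paper's code): attention over a large
# window where keys/values are restricted to a fixed, precomputed subset of
# positions, so compute scales with `keep` rather than the full window area.

def make_sparse_mask(large_win: int, keep: int, seed: int = 0) -> torch.Tensor:
    """Precompute ("non-volatile") indices of `keep` token positions out of a
    large_win x large_win window; built once and reused on every forward."""
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(large_win * large_win, generator=g)[:keep]

def sparse_window_attention(x: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """x: (B, N, C) tokens of one large window (N = large_win**2).
    idx: precomputed sparse-mask indices. Single head, no projections."""
    q = x                                      # every token queries...
    kv = x[:, idx, :]                          # ...only the masked subset (B, keep, C)
    attn = (q @ kv.transpose(-2, -1)) * q.shape[-1] ** -0.5  # (B, N, keep)
    attn = attn.softmax(dim=-1)
    return attn @ kv                           # (B, N, C)

# Example: a 12x12 large window, attending to 64 precomputed positions.
idx = make_sparse_mask(large_win=12, keep=64)
x = torch.randn(2, 144, 32)
print(sparse_window_attention(x, idx).shape)   # torch.Size([2, 144, 32])

Because the mask indices are fixed, attention cost grows with the number of kept positions rather than the full large-window size, which is the latency argument the abstract makes; the paper's actual mask design and block layout should be taken from the source at the Access URL above.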