FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder Architectures

Bibliographic Details
Title: FADE: A Task-Agnostic Upsampling Operator for Encoder-Decoder Architectures
Authors: Lu, Hao, Liu, Wenze, Fu, Hongtao, Cao, Zhiguo
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: The goal of this work is to develop a task-agnostic feature upsampling operator for dense prediction, where the operator is required to facilitate not only region-sensitive tasks like semantic segmentation but also detail-sensitive tasks such as image matting. Prior upsampling operators often work well in one type of task, but not both. We argue that task-agnostic upsampling should dynamically trade off between semantic preservation and detail delineation, instead of being biased toward one of the two properties. In this paper, we present FADE, a novel, plug-and-play, lightweight, and task-agnostic upsampling operator that fuses the assets of decoder and encoder features at three levels: i) considering both the encoder and decoder features in upsampling kernel generation; ii) controlling the per-point contribution of the encoder/decoder features in upsampling kernels with an efficient semi-shift convolutional operator; and iii) enabling the selective pass of encoder features with a decoder-dependent gating mechanism that compensates for details. To improve the practicality of FADE, we additionally study parameter- and memory-efficient implementations of semi-shift convolution. We analyze the upsampling behavior of FADE on toy data and show through large-scale experiments that FADE is task-agnostic, yielding consistent performance improvements on a number of dense prediction tasks at little extra cost. For the first time, we successfully demonstrate robust feature upsampling on both region- and detail-sensitive tasks. Code is made available at: https://github.com/poppinace/fade
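The decoder-dependent gating mechanism of level (iii) can be illustrated with a minimal NumPy sketch: a per-pixel gate computed from the upsampled decoder feature decides how much encoder detail passes through. The function names, the nearest-neighbor upsampling, and the per-pixel linear map producing the gate are hypothetical simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbor upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def gated_fusion(encoder, decoder, w, b):
    """Decoder-dependent gating (illustrative sketch):
    gate = sigmoid(w . d_up + b) per pixel,
    output = gate * encoder + (1 - gate) * upsampled decoder.

    encoder: (C, 2H, 2W) high-resolution encoder feature
    decoder: (C, H, W)   low-resolution decoder feature
    w: (C,) weights of a hypothetical 1x1 conv producing a scalar gate
    """
    d_up = nearest_upsample(decoder)                  # (C, 2H, 2W)
    logits = np.tensordot(w, d_up, axes=(0, 0)) + b   # (2H, 2W)
    gate = 1.0 / (1.0 + np.exp(-logits))              # sigmoid in [0, 1]
    # gate near 1 passes encoder detail; near 0 keeps decoder semantics
    return gate * encoder + (1.0 - gate) * d_up

rng = np.random.default_rng(0)
enc = rng.standard_normal((8, 4, 4))
dec = rng.standard_normal((8, 2, 2))
out = gated_fusion(enc, dec, w=rng.standard_normal(8), b=0.0)
print(out.shape)  # (8, 4, 4)
```

In the sketch, a strongly positive gate logit recovers the encoder feature (detail delineation), while a strongly negative one recovers the upsampled decoder feature (semantic preservation); FADE learns this trade-off per point rather than fixing a bias.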
Comment: Accepted to International Journal of Computer Vision. Extended version of ECCV 2022 paper at arXiv:2207.10392
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.13500
Accession Number: edsarx.2407.13500
Database: arXiv