Exploiting the Textual Potential from Vision-Language Pre-training for Text-based Person Search

Bibliographic Details
Title: Exploiting the Textual Potential from Vision-Language Pre-training for Text-based Person Search
Authors: Wang, Guanshuo, Yu, Fufu, Li, Junjie, Jia, Qiong, Ding, Shouhong
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computer Vision and Pattern Recognition
More Details: Text-based Person Search (TPS) aims to retrieve pedestrians that match text descriptions rather than query images. Recent Vision-Language Pre-training (VLP) models can bring transferable knowledge to downstream TPS tasks, yielding performance gains more efficiently. However, existing TPS methods improved by VLP only utilize pre-trained visual encoders, neglecting the corresponding textual representation and breaking the significant modality alignment learned from large-scale pre-training. In this paper, we explore the full utilization of the textual potential of VLP in TPS tasks. We first build a VLP-TPS baseline model, the first TPS model in which both modalities are pre-trained. We then propose Multi-Integrity Description Constraints (MIDC) to enhance the robustness of the textual modality by incorporating different components of the fine-grained corpus during training. Inspired by the prompt approach for zero-shot classification with VLP models, we propose the Dynamic Attribute Prompt (DAP), which provides a unified corpus of fine-grained attributes as language hints for the image modality. Extensive experiments show that our proposed TPS framework achieves state-of-the-art performance, exceeding the previous best method by a clear margin.
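The retrieval setup the abstract describes follows the standard dual-encoder pattern: a text query and a gallery of pedestrian images are embedded into a shared space by the pre-trained VLP encoders, and retrieval ranks gallery images by cosine similarity to the query embedding. The sketch below illustrates only this ranking step with synthetic, pre-computed embeddings standing in for real encoder outputs; the embedding dimension, gallery size, and noise model are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot product = cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)


# Hypothetical pre-computed embeddings: 5 gallery (image) vectors of dim 512,
# standing in for the output of a pre-trained VLP image encoder.
gallery = l2_normalize(rng.normal(size=(5, 512)))

# Simulate a text-query embedding that is close to gallery identity 2
# (as if the text encoder aligned the description with that pedestrian).
query = l2_normalize(gallery[2] + 0.1 * rng.normal(size=512))

# Rank the gallery by cosine similarity to the query (descending).
scores = gallery @ query
ranking = np.argsort(-scores)
print("ranking:", ranking.tolist())  # identity 2 should rank first
```

In a real TPS system, `gallery` and `query` would come from the pre-trained image and text encoders respectively; keeping both encoders from the same VLP model is precisely what preserves the cross-modal alignment the abstract argues for.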
Comment: 10 pages, 6 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2303.04497
Accession Number: edsarx.2303.04497
Database: arXiv