Bibliographic Details
Title: PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation
Authors: Zhong, Qihuang; Ding, Liang; Liu, Juhua; Du, Bo; Tao, Dacheng
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
More Details: Prompt Transfer (PoT) is a recently proposed approach that improves prompt-tuning by initializing the target prompt with an existing prompt trained on a similar source task. However, such vanilla PoT usually achieves sub-optimal performance, as (i) PoT is sensitive to the similarity of the source-target pair and (ii) directly fine-tuning the source-initialized prompt on the target task can cause forgetting of the useful general knowledge learned from the source task. To tackle these issues, we propose a new metric that accurately predicts prompt transferability (addressing (i)) and a novel PoT approach, PANDA, that leverages knowledge distillation to effectively alleviate knowledge forgetting (addressing (ii)). Extensive and systematic experiments on 189 combinations of 21 source and 9 target datasets across 5 scales of PLMs demonstrate that: 1) our proposed metric predicts prompt transferability well; 2) PANDA consistently outperforms vanilla PoT by 2.3% average score (up to 24.1%) across all tasks and model sizes; and 3) with PANDA, prompt-tuning achieves competitive and even better performance than model-tuning at various PLM scales. Our code is publicly available at https://github.com/WHU-ZQH/PANDA (an illustrative sketch of the PANDA idea follows this record). Comment: Accepted by IEEE TKDE
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2208.10160
Accession Number: edsarx.2208.10160
Database: arXiv