Bibliographic Details
Title: How Private are Language Models in Abstractive Summarization?
Authors: Hughes, Anthony; Aletras, Nikolaos; Ma, Ning
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
More Details: Language models (LMs) have shown outstanding performance in text summarization, including in sensitive domains such as medicine and law. In these settings, it is important that personally identifying information (PII) included in the source document does not leak into the summary. Prior efforts have mostly focused on studying how LMs may inadvertently reveal PII from their training data. However, to what extent LMs can provide privacy-preserving summaries given a non-private source document remains under-explored. In this paper, we perform a comprehensive study across two closed- and three open-weight LMs of different sizes and families. We experiment with prompting and fine-tuning strategies for privacy preservation on a range of summarization datasets spanning three domains. Our extensive quantitative and qualitative analysis, including human evaluation, shows that LMs often cannot prevent PII leakage in their summaries and that current widely used metrics cannot capture context-dependent privacy risks.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2412.12040
Accession Number: edsarx.2412.12040
Database: arXiv