Prompt Perturbation in Retrieval-Augmented Generation based Large Language Models

Bibliographic Details
Title: Prompt Perturbation in Retrieval-Augmented Generation based Large Language Models
Authors: Hu, Zhibo; Wang, Chen; Shu, Yanfeng; Paik, Helen; Zhu, Liming
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Information Retrieval, I.2.7, H.3.3
More Details: The robustness of large language models (LLMs) becomes increasingly important as their use grows rapidly across a wide range of domains. Retrieval-Augmented Generation (RAG) is considered a means to improve the trustworthiness of text generation from LLMs. However, how the outputs of RAG-based LLMs are affected by slightly different inputs is not well studied. In this work, we find that inserting even a short prefix into the prompt leads to the generation of outputs far from factually correct answers. We systematically evaluate the effect of such prefixes on RAG by introducing a novel optimization technique called Gradient Guided Prompt Perturbation (GGPP). GGPP achieves a high success rate in steering the outputs of RAG-based LLMs to targeted wrong answers. It can also cope with instructions in the prompt that request ignoring irrelevant context. We further exploit the difference in LLM neuron activations between prompts with and without GGPP perturbations to devise a method that improves the robustness of RAG-based LLMs: a highly effective detector trained on the neuron activations triggered by GGPP-generated prompts. Our evaluation on open-source LLMs demonstrates the effectiveness of our methods.
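The detection idea in the abstract — separating perturbed from clean prompts by their neuron activation patterns — can be illustrated with a toy sketch. This is not the paper's implementation: the activation vectors are synthetic, the mean shift standing in for the GGPP-induced activation difference is an assumption, and a plain logistic-regression probe is used as the detector.

```python
import numpy as np

# Toy sketch (illustrative assumptions throughout): train a linear probe on
# synthetic "neuron activation" vectors to separate clean prompts from
# GGPP-style perturbed ones.
rng = np.random.default_rng(0)
dim, n = 64, 200

# Clean-prompt activations: standard normal. Perturbed-prompt activations
# carry a mean shift, standing in for the activation difference the paper
# observes between prompts with and without GGPP prefixes.
clean = rng.normal(0.0, 1.0, size=(n, dim))
perturbed = rng.normal(0.0, 1.0, size=(n, dim)) + 0.8

X = np.vstack([clean, perturbed])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic-regression detector trained by plain gradient descent.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient of the log loss
    b -= lr * np.mean(p - y)

pred = (X @ w + b) > 0
accuracy = float(np.mean(pred == y))
print(f"probe accuracy: {accuracy:.2f}")
```

Because the synthetic shift is large relative to the noise, the probe separates the two classes almost perfectly; the paper's detector instead operates on real activation vectors extracted from the LLM.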
Comment: 12 pages, 9 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.07179
Accession Number: edsarx.2402.07179
Database: arXiv